

AIAA SciTech 2019 Forum, 7-11 January 2019, San Diego, California
DOI: 10.2514/6.2019-0563

Fifty Years of the Cooper-Harper Scale

David G. Mitchell*

Mitchell Aerospace Research, Long Beach, CA, 90808, USA

The year 2019 marks the fiftieth anniversary of the publication of a seminal document, The
Use of Pilot Rating in the Evaluation of Aircraft Handling Qualities, by George E. Cooper and
Robert P. Harper, Jr. Despite taking a meager 52 pages to tell its story, NASA TN D-5153
manages to capture the essence of piloted evaluation of handling qualities. The Cooper-Harper
handling qualities rating scale has withstood numerous attempts at improvement,
simplification, or modernization, and even total replacement. With its continued use, we have
learned that there are shortcomings in the scale, and that several of the terms used are not
well-defined. Certainly our interpretation of the scale has evolved over the past five decades.
Yet it remains the standard tool for eliciting pilot opinion on aircraft handling qualities. This
paper reviews the evolution of the usage of the scale, discusses its strengths and weaknesses,
and demonstrates the value of the scale to handling-qualities evaluation for the foreseeable
future.

I. Nomenclature

CHPR = Cooper-Harper Pilot Rating
CHR = Cooper-Harper Rating
HQR = Handling Qualities Rating
POR = Pilot Opinion Rating
PR = Pilot Rating

II. Introduction

Despite years of research, the determination of the handling qualities of piloted aircraft is, as it has always been, an
opinion voiced by trained test pilots. Attempts at capturing handling qualities through quantitative measures have
never been successful, and given the myriad variables in human characteristics, probably will never be successful.
Handling qualities are succinctly defined as “those qualities or characteristics of an aircraft that govern the ease
and precision with which a pilot is able to perform the tasks required in support of an aircraft role” (Cooper and Harper
[1]). Prior to publication and adoption of the Cooper-Harper scale (Fig. 1), handling-qualities evaluation involved a
variety of rating scales, examples of which can be found in Refs. [1] and [2]. Many of these scales were quite terse,
consisting at times of only a handful of adjectives to assist the pilot in their application. There was little consistency
between the various rating scales, and hence inter- and intra-pilot scatter in ratings could be excessive.
The two authors of the Cooper-Harper scale (Fig. 2) led separate but parallel efforts to standardize the practice.
George Cooper, a test pilot at NACA (now NASA) Ames Research Center located on the West Coast of the US at
Moffett Field, California, developed a ten-point rating scale in the 1950’s and continued to modify it through the mid-
60’s (referred to variously as the NASA or Cooper rating scale). On the East Coast, Bob Harper, an engineer and test
pilot at Cornell Aero Labs (now Calspan) in Buffalo, New York, was developing his own version of a rating scale, a
scale that came to be known as the CAL scale. In 1966, Cooper and Harper collaborated on a joint scale [3], the final
result of which was published in Ref. [1] as the now-ubiquitous decision-tree scale of Fig. 1.
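The decision-tree flow of the scale can be caricatured in a few lines of code. This is only an illustration: the three yes/no decision points and the rating bands they gate come from the published scale (Fig. 1), while the function and argument names are this sketch's own.

```python
def cooper_harper_band(controllable, adequate_with_tolerable_workload, satisfactory):
    """Walk the three decision points of the Cooper-Harper scale (Fig. 1)
    and return the band of ratings the pilot then refines to a single
    number using the descriptor blocks."""
    if not controllable:
        return (10,)         # improvement mandatory
    if not adequate_with_tolerable_workload:
        return (7, 8, 9)     # deficiencies require improvement
    if not satisfactory:
        return (4, 5, 6)     # deficiencies warrant improvement
    return (1, 2, 3)         # satisfactory without improvement
```

The pilot answers the controllability question first, then adequacy of performance versus workload, then satisfaction without improvement, and only then selects a single rating from the resulting band using the descriptor blocks.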
This paper provides an unofficial, informal retrospective on the uses of the Cooper-Harper scale, including modern
interpretations of some of its wording. Much of this material comes from the author’s experiences in teaching the use
of the scale to students at the US Air Force Test Pilot School (TPS), located at Edwards AFB, California. For the past
six years, this author has taught a ten-hour academic course on Handling Qualities Evaluation that prominently
features the Cooper-Harper scale, and a simulation-based introduction and direct-exposure course featuring two hours
of one-on-one interaction with each TPS student, followed by a one-on-two (one pilot and one non-pilot, with an
instructor observing and advising) simulation session. For the academic course, the students' homework consists of
reading the Cooper-Harper report, and the course exam is a take-home test that covers elements of both the report and
the in-class lectures.

*President, 3780 Kilroy Airport Way, Suite 200. dave@hqresearch.com. Associate Fellow AIAA.

Copyright © 2019 by David G. Mitchell. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.
Fig. 1 The Cooper-Harper handling qualities rating scale [1].

Fig. 2 George Cooper and Bob Harper in 1984.

Besides the TPS assignment, the author has had the opportunity to work with the Cooper-Harper scale, and a
multitude of other scales, since the late 1970’s. This has included occasions to act as an evaluator (the author is not a
test pilot) in several simulation experiments, including one in which the author routinely misapplied the Cooper-
Harper scale [4]. Needless to say, the misapplication ceased once the author read the Cooper-Harper report.

III. How It All Happened

For decades prior to the publication of the Cooper-Harper scale, a considerable amount of work went into the
development of ordinal pilot opinion scales. Handling qualities fall into a class of continua that psychophysicists
define as “metathetic” [2],[5], meaning changes in handling qualities come as a result of substituting stimuli, rather
than adding to them (“prothetic” stimuli, such as mass and color, change by adding or subtracting some known
quantity). Hence there is no identifiable metric, except pilot opinion, by which we can rank handling qualities. And
rating scales are a simple method for documenting a test pilot’s subjective opinion with a single number – whether
rightly or wrongly.
Antecedents to the Cooper-Harper scale can be found in documents from the late 1940’s up until around 1970,
with most of the developmental work in the United States taking place on the West and East coasts.

A. West Coast: NACA, NASA, and George Cooper


At NACA (later NASA) Ames Research Center in Mountain View, California, test pilot and engineer George
Cooper led the effort to establish a rating system. Cooper’s 10-point “pilot opinion rating system for universal use”
(Fig. 3) was first published in 1957 [6]. (The version shown in Fig. 3 was not widely used; a minor modification was
made to the description for the “Unprintable” rating: it was changed to “motions possibly violent enough to prevent
pilot escape.”)
This scale was used at NASA Ames, with only two format changes (wording change for the rating of 10 as
mentioned above; and the adjective rating for 10 was no longer “Unprintable,” but simply left blank), until the
development of the Cooper-Harper scale in 1969.

  OPERATING    ADJECTIVE       NUMERICAL  DESCRIPTION                                PRIMARY MISSION  CAN BE
  CONDITION    RATING          RATING                                                ACCOMPLISHED?    LANDED?
  Normal       Satisfactory     1         Excellent, includes optimum                Yes              Yes
  operation                     2         Good, pleasant to fly                      Yes              Yes
                                3         Satisfactory, but with some mildly         Yes              Yes
                                          unpleasant characteristics
  Emergency    Unsatisfactory   4         Acceptable, but with unpleasant            Yes              Yes
  operation                               characteristics
                                5         Unacceptable for normal operation          Doubtful         Yes
                                6         Acceptable for emergency condition only*   Doubtful         Yes
  No           Unacceptable     7         Unacceptable even for emergency            No               Doubtful
  operation                               condition*
                                8         Unacceptable - dangerous                   No               No
                                9         Unacceptable - uncontrollable              No               No
               Unprintable     10         *!@ƳƳƳ!!                                   What mission?    Did not get back
                                                                                                      to report
  *(Failure of a stability augmenter)

Fig. 3 Cooper’s pilot opinion rating system in 1957 (redrawn from Ref. [6]).

B. East Coast: Cornell Aeronautical Laboratory and Bob Harper


Cornell Aeronautical Laboratory (CAL) of Buffalo, New York, owed its roots to the Curtiss-Wright Aircraft
Corporation. CAL was formed in 1946 as Cornell University acquired testing facilities from Curtiss-Wright [7]. CAL
was a multifaceted testing and research company, heavily involved in wind tunnel, aircraft, and automotive work – as
it still is today, operating as Calspan Corporation, still headquartered in Buffalo.

Rating scales were applied in flight testing at CAL years before Cooper at NACA published his scale. An example
of an early 10-point CAL scale from a report published in 1954 [8] is shown in Fig. 4. The flight test involved a series
of stick raps and rudder doublets; the pilots evaluated the ensuing residual motions. Hence this experiment was not a
classical handling qualities test.
A later pilot rating scale developed at CAL [9] eliminated numbers entirely and substituted letters (Fig. 5). The
pilot could assign one or two letters and a plus or minus to indicate a finer delineation in handling qualities; for
example, a configuration that is better than “Acceptable Good” but not as good as “Optimum” might be assigned a
rating of “AG+.” Elimination of a number effectively removes the apparent oversimplification of reducing the pilot’s
opinion to a single digit between 1 and 10; today, of course, plotting such ratings with a software application
necessitates the assignment of numbers to the letters on the scale (Optimum = 1, etc.), and the stigma still exists.
Fig. 4 CAL rating scale for evaluating Dutch roll motions, from the early 1950’s [8].

Fig. 5 CAL rating scale from 1954 [9].

By 1959, CAL was using a 10-point scale (Fig. 6), similar in format to Cooper’s 1957 scale (Fig. 3), but with far
fewer descriptors. The most notable difference is that the two rightmost columns on the Cooper scale (“Primary
Mission Accomplished” and “Can Be Landed”) do not appear in any of the CAL-developed scales. This can present
obvious conflicts when, for example, the primary mission is landing: consider an airplane that has very poor handling
qualities, but that can still be landed. Ratings of 7, 8, 9, and 10 on the Cooper scale all reflect the fact that the aircraft
cannot be landed (primary mission cannot be accomplished), so the worst possible rating is a 6; by contrast, a
configuration that is still flyable (and landable), but with unacceptable characteristics, could be assigned a rating of 7,
8, 9, or 10 on the CAL scale.
Minor refinements to the CAL scale resulted in the scale shown in Fig. 7 [10]. This was the final version of the
scale before Harper and Cooper collaborated on a single unified rating scale.
Fig. 6 CAL 10-point rating scale from 1959.

Fig. 7 CAL rating scale from 1966 [10].

C. When the Cooper-Harper Scale is Inadvertently Wrong
As a bit of a footnote to this brief history of the Cooper-Harper scale, the reader is advised to carefully scrutinize
all versions of the scale in use. There is a version that is wrong – and it was created and distributed by NASA Ames
Research Center in a historical treatise on the scale in 1998 [11]. Compare the wrong scale in Fig. 8 with the correct
scale in Fig. 1: the block to the right of the second decision point suggests that HQRs 7, 8, and 9 indicate deficiencies
that only warrant, but do not require, improvement; and “Controllability not in question” is missing from the
“Demands on the Pilot” entry for HQR 7.
This scale has been adopted by several organizations, in part because for years it was found on the Web in
Wikipedia’s entry for “Cooper-Harper rating scale.” It might still be there had this author not been tipped off to the
error; Dr. Chris Cotting at the Air Force Test Pilot School was informed of it, and he created and uploaded a corrected
version. The online version of Ref. [11] still shows the wrong scale.
Fig. 8 Cooper-Harper scale with errors, as distributed by NASA Ames Research Center [11].

IV. What Do We Call the Ratings?

A first question to be addressed is, what do we call the ratings? Abbreviations for pilot ratings have varied by era
and by agency since pilot opinion became linked to a number, certainly as early as the late 1940’s. It is inefficient to
report a pilot opinion rating as a “Cooper-Harper Handling Qualities Rating,” but that distinction is unambiguous.
And as long as the documentation that accompanies the ratings is clear as to the applicable scale, there should be no
confusion; it is unfortunately common to extract a graphic or text from a document and report on pilot ratings with no
attribution and no clear identification of the rating scale that was used. Any time historical flying and handling qualities
data are examined, it is critical that the applicable rating scale be clearly identified and, preferably, discussed. As an
example, the Background Information and User Guide (BIUG) [12] for the 1969 US military specification, MIL-F-
8785B [13] was written just as the Cooper-Harper scale was under development, and as a consequence the source data
were based on several different rating scales (though the most common were the NASA and CAL scales). The authors
of the first draft of the current Military Standard encountered the same difficulty in 1982 [14], and arguably more so,
because of the passage of time and the intervening introduction of the Cooper-Harper scale.
Reports abound in the open literature with the abbreviations PR for “pilot rating” and POR for “pilot opinion rating.”
These abbreviations are entirely ambiguous as there is no clear tie-in to any rating scale. More common are CHR or
CHPR, Cooper-Harper rating or Cooper-Harper pilot rating, two unambiguous terms; the former is typically used at
TPS.

A less familiar abbreviation is HQR, for Handling Qualities Rating: once again, there is the potential for confusion,
except this term has become, for the most part, synonymous with CHR or CHPR. Early flight test reports by some
elements of the US military also used HQRS, where the S can represent either “scale” or “system,” and in either case
implies the Cooper-Harper scale.
The best abbreviation is the clearest: CHR or CHPR. HQR is sometimes used as well, but such notations as PR
or POR should be avoided. And in any case, a clear definition of the rating scale should be provided. (There are times
that even an incorrect version of the Cooper-Harper scale has been applied, so it is always best to include a graphic of
the scale in question.)
In the remainder of this paper, the now-common abbreviation HQR will be used.

V. Pilot Opinion and Pilot Rating

Aerospace engineering is a quantitative field. Meticulous attention to precise calculations is required for initial
design, detailed analysis, and thorough reporting. Engineers love numbers.
Test piloting, by contrast, is often a profession composed of impressions, personal experiences, and preferences,
sometimes even biases, as the pilot attempts to translate what is seen and felt into words. Colloquialisms are not
uncommon, as pilots may use phrases that only they really appreciate. For pilot handling qualities assessment, numbers
are rare. It’s opinion that matters.
This is why there is a strong attraction to pilot ratings: a rating is a concise connection between the numbers person
and the opinions person. Of course, it also allows us a shorthand method to report on progress in developmental testing
(“last week this configuration was a 5, but now it’s a 3!”), and it’s a way to succinctly convey test results to high-level
managers who need hard numbers, not opinions.
But to the flight test engineer, to the design engineer, and to the next test pilot in line to evaluate the aircraft,
opinion matters more. An HQR of, say, 5, tells us something about the pilot’s impressions of the aircraft’s adequacy
for a task, its overall characteristics, and its demands on the pilot, but it doesn’t tell us why. Only words can do that.
It is important to recognize that a rating means nothing without amplifying comments. A well-designed comment
card is a necessary tool, more important even than a copy of the Cooper-Harper scale, for handling qualities evaluation.
Cooper and Harper summarized this as follows:

Pilot evaluation still remains the only method of assessing the interactions between pilot-vehicle performance
and total workload… It provides a basic measure of quality and serves as a standard with which pilot-airplane
system theory may be developed, performance measurements may be correlated, and significant airplane
design parameters and characteristics may be determined and correlated. [1]

In this regard, a well-developed pilot comment card is an essential tool for handling qualities evaluation. The
comment card should be tailored for the task and aircraft under consideration; there is no standard, one-size-fits-all
pilot comment card. The card must be specific in its talking points, may allow the pilot to summarize for those
characteristics considered to be acceptable and not needing amplification, and should finish with assignment of an
HQR (and, if appropriate, a PIO Tendency rating).
An example comment card is shown in Fig. 9, taken from Ref. [15]. This card was generated for a study of the
approach and landing handling qualities of very large transports but could be used for any landing study. If the test
pilot considered any attribute described in the questions to be not a factor (words for this might include easy,
satisfactory, desirable, etc.), it is sufficient to say so and move to the next question. Any other adjective requires
amplifying comments. At the end of the card, the pilot assigns two HQRs: one for the approach portion and one for
the flared landing portion. Historically, the landing has proven to be the most critical for handling qualities, hence the
landing rating is usually the more interesting; asking the pilot to rate different segments of a task is valid as long as
there are not too many such segments, and as long as there is a clear division between those segments.
With a well-written and well-applied comment card an experienced engineer may, simply from reading the
comments, be able to deduce the approximate HQR the pilot has assigned. If comments and ratings don’t match up,
either the card did not elicit useful information or the card or the rating scale may have been misused. In any case,
perceived discrepancies between comments and ratings should always be a topic of conversation
in a post-flight debrief.

Fig. 9 A typical pilot comment card [15].

VI. The HQR Scale is Not Just for Pilots

There are numerous tales from days past of the acceptance of an aircraft hinging almost entirely on the
opinion of the chief pilot, usually a company pilot. Senior pilots have been able to influence flight planning, task
definition, test execution, and final reporting – and often with good reason: senior pilots bring a wealth of experience
that cannot be discounted.
It might come as a surprise to the novice flight test engineer that the Cooper-Harper Rating Scale is not just for
pilot use: flight testing has always been a team effort, and as aircraft become more sophisticated and
automation becomes an integral part of modern flying, this is truer now than ever. It behooves the engineer to
become familiar with the Cooper-Harper scale, to understand its strengths and limitations, to properly interpret the
words on the scale, and especially to be ready to assist the pilot in applying the scale.
In testing advanced aircraft, the engineer (or pilot not flying) is critical to a successful handling qualities evaluation.
The observer assists the pilot in judging task performance and prompts for comments, including, if needed, the
assignment of an HQR. Obviously, the pilot is the final authority in assigning a rating, but the non-pilot should feel
comfortable discussing the basis for the rating, if needed. This can be done only if the non-pilot understands the
Cooper-Harper scale as well as the pilot.

VII. Evolution in Handling Qualities Evaluation Methods

There have been critically important changes in our approach to handling qualities evaluations in the past 50 years.
It is arguable whether all the changes have been for the better, however. This section discusses the most significant of
those changes.

A. Short-Look Has Become the Norm


In their report, Cooper and Harper [1] discuss the value of “long-look” evaluations – tasks or flight phases that are
sufficiently long to allow the evaluation pilot to make a fair and reasoned judgment of handling qualities. The length
of such evaluations is not specified, but a prior study by Cornell Aero Lab demonstrated that a long-look technique is
more effective at reducing interpilot variability [16]. In that study, “long-look” meant that the evaluation pilot was
given unlimited time to perform a series of longitudinal maneuvers; by contrast, “short-look” meant a total evaluation
time of four to five minutes per configuration. In a flight of short-look evaluations, typically 16 configurations were
flown and rated; in a long-look flight, the number dropped to five. The authors found reduced variation in ratings for
the good configurations, but no real effect for configurations with poor handling qualities.
A study by McDonnell Aircraft Company more than 20 years later showed similar results [17], and the authors
recommend a procedure to reduce interpilot rating variability through repeated evaluations by different pilots, to a
maximum population of six.
While the long-look method is most effective, and might be attainable in a research study, there are three clear
problems with attempting it in a developmental flight program:
1. Flight time is expensive, especially for modern aircraft, and a rate of five evaluations per flight – perhaps
only one pilot rating per as many as 30 minutes of flight time – is not palatable to schedulers working
within tight budget and time constraints. Indeed, inasmuch as handling qualities flying often comes some
time after first flight, it is likely that the program is already over budget and behind schedule, meaning
the pressure to reduce flight time will be a key driver. So the ability to get, say, one evaluation per 10
minutes is much easier to sell than one evaluation per 30 minutes.
2. A workable test plan requires precision in wording and in flight scheduling. In modern flight testing, a
plan that allows the pilot to perform loosely-defined maneuvers with no clear objectives or limitations will
have difficulty passing review. Flight safety is primary, especially during early development testing, and a
proposal to let the pilot fly “as long as necessary” to make an evaluation is not likely to pass any safety
review board.
3. It is difficult at times to even find three trained test pilots qualified to fly a prototype design through a
defined set of maneuvers, much less six. Attempts to train pilots unfamiliar with the system, just to
increase the database, may increase rather than decrease the spread in pilot ratings.
The push over the past 50 years has been to apply a well-defined set of handling-qualities maneuvers that use the
short-look technique. Variability can be reduced by requiring the pilot to perform the maneuver several times before
giving formal comments and assigning a pilot rating.
A prime example is the precision offset landing task. This task has been used for evaluation of approach and
landing handling qualities since at least the 1970’s [18], has been refined for more general application [19], and is
used extensively at the USAF Test Pilot School as a teaching tool. The convention for this task has become something
like the following: the evaluation pilot takes control of the configuration (or, in a single-pilot prototype, configures the
aircraft) while downwind in level flight, maneuvers to a visual offset, and flies a manual approach and landing.
Typically, ancillary tasks, such as radio communications or distracting tower radio calls, are held to a minimum so the
pilot can concentrate on the flying task. In this scenario, the evaluation pilot may see the configuration for only two
or three minutes, with the most critical portion of the entire run (the flare and landing) limited to only seconds of
exposure. By requiring at least three landings, the intent is to allow the pilot to get a general feel for the configuration
(run one), try for best task performance (run two), then make a formal third run. The pilot should base comments and
ratings on all three landings, but the third landing should be considered to correlate best with the pilot’s opinions.
Application of the long-look technique – especially where the pilot is given “unlimited” time to fly a loosely
defined set of maneuvers or tasks – has become rare, if not extinct.

B. Task Performance is Explicitly Stated


This change has helped compensate for the move away from long-look and toward short-look evaluations. The
Cooper-Harper scale (Fig. 1) prominently mentions “desired performance” and “adequate performance” in multiple
locations, but the Cooper-Harper report [1] contains no explicit discussion about the interpretation of these terms.

Indeed, in Appendix A of Ref. [1], the authors list definitions of some of the terminology found on the scale, but they
addressed the meaning of “performance” in only general terms: “[t]he precision of control with respect to aircraft
movement that a pilot is able to achieve in performing a task. (Pilot-vehicle performance is a measure of handling
performance. Pilot performance is a measure of the manner or efficiency with which a pilot moves the principal
controls in performing a task.)”
A significant improvement in the application of the HQR scale has been the definition of the task itself, especially
what is meant by “desired performance” and “adequate performance.” By controlling the variability in performance
of a task, there is some confidence that the demands on the pilot are sufficiently high that any possible shortcomings
in handling qualities are exposed relatively quickly. As a result, the short-look method should be an effective and more
efficient approach for handling-qualities evaluation.

C. The Task is Designed to Force the Pilot into the Control Loop
Rather than rely on the ingenuity of the test pilot, short-look tasks are composed in such a way as to ensure that
the pilot is tightly involved in closed-loop control from the outset. It is critical that the pilot provide control inputs that
will expose any potential issues – or will confirm the absence of such issues – in a short period of time.
At the Air Force Test Pilot School, this is illustrated to students during a two-hour ground simulation session in
which the student progresses from a poorly-designed task to a well-designed one. In the poorly-designed task (shown
in Fig. 10), the student (who may be a high-time fighter pilot or an engineer with zero stick time) is given control of
a constant-speed model of an F-16 with only pitch and roll control. A target airplane, an image of another F-16, is
visible, starting out co-speed, co-altitude, and co-heading, not maneuvering, with no winds or turbulence. The evaluation
pilot’s task is trivial: keep the target within a large circle (effectively a projected HUD image) for a minute.
As Fig. 10 shows, the visual scene is sparse, with only the target airplane, a fixed boresight, a flight path marker,
and a fixed circle in the pilot’s field of view. The pilot is told that the circle denotes the edge of desired performance
and the entire screen is adequate performance.
This task can be performed without touching the simulator’s control stick, but most students will naturally try to
track the target tightly by centering the boresight on it. As a result, minor pitch bobbles or lateral wallowing will
develop. Occasionally, a pilot will choose to maneuver off the target and rapidly reacquire it: an excellent test of
handling qualities, and the sort of maneuvering one would hope for in a long-look evaluation, when the pilot has time
to devise such a task. In most cases, the student who does this will assign a poorer HQR than will the student who
simply sits and watches. The message is that the absence of a predefined level of aggressiveness, combined with a
much too lenient limit for desired performance, makes this a poor task for determining handling qualities.

Fig. 10 Example of a poorly-designed handling qualities task demonstrated to students at the US Air Force
Test Pilot School.

D. Two Practical Examples
Current practices in task definition are best seen by considering two example evaluation tasks.
1. Up-and-Away Maneuvering Flight
Two important source documents for handling qualities research are those by DiFranco in 1966 [10] and Neal and
Smith in 1970 [20]. Both investigated short-period characteristics in up-and-away maneuvering at differing airspeeds.
In both studies, the test airplane was the USAF variable-stability NT-33A (Fig. 11) operated by CAL/Calspan, a
venerable workhorse for all manner of research and development that now, fittingly, resides in a hangar at the National
Museum of the United States Air Force at Wright-Patterson AFB, OH.
Fig. 11 Variable-Stability NT-33A (US Air Force Photo)


In the 1966 study [10] “the evaluation pilot was asked to perform specific tasks, and any other tasks he thought
appropriate to properly evaluate the configuration.” Tasks consisted of a trim shot; straight and level flight including
pilot-applied disturbances; turns at low and high bank angles (up to 60 deg); symmetrical pull-up and push-over; a
preprogrammed tracking task; and maneuvers with noise injected to the elevator to simulate turbulence. There is no
documentation in the reference of expected maneuver performance in terms of time, precision, or aggressiveness.
Piloting tasks in the 1970 study [20] were similar, with the addition of a simulated ground attack and a brief IFR
evaluation. As in the 1966 study, there is no documented set of performance limits.
Contrast those experiments with up-and-away maneuvering experiments based on heads-up display (HUD)
tracking of a series of discrete pitch and roll command signals. This task has been used in numerous experiments (a
version was included in the 1970 study) as a primary handling qualities and PIO evaluation method [21], [22]. A
pictorial representation of the display for the task is sketched in Fig. 12; tracking error is expressed in terms of time
the moving command bar is maintained within a fixed reticle as follows [19]:
• Desired Performance
±10 mils in pitch 50% of the time.
±10 mils in roll (measured at the end of the command bar) 50% of the time.
• Adequate Performance
±20 mils in pitch 50% of the time.
±20 mils in roll (measured at the end of the command bar) 50% of the time.

Fig. 12 HUD Symbology for Discrete Pitch-and-Roll Tracking Task [22]
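The pass/fail logic of these time-within-tolerance criteria is simple enough to sketch in code. The following is an illustrative scorer only, not part of any referenced test procedure: the function names and the sample error histories are invented, and uniform sampling of the tracking error is assumed.

```python
# Hypothetical scorer for the discrete HUD tracking task: given sampled
# pitch and roll tracking errors (in mils), classify performance against
# the desired/adequate criteria quoted above. Assumes uniform sampling.

def time_within(errors, tol):
    """Fraction of samples with |error| <= tol."""
    return sum(abs(e) <= tol for e in errors) / len(errors)

def tracking_performance(pitch_err, roll_err):
    """Return 'desired', 'adequate', or 'inadequate' per the Fig. 12 task."""
    if time_within(pitch_err, 10) >= 0.5 and time_within(roll_err, 10) >= 0.5:
        return "desired"
    if time_within(pitch_err, 20) >= 0.5 and time_within(roll_err, 20) >= 0.5:
        return "adequate"
    return "inadequate"

# Example: pitch is held tightly, but roll error wanders out past 10 mils
# most of the time, so only the adequate criterion is met.
pitch = [3, -8, 5, 9, -4, 7, 2, -6]
roll = [12, 18, -15, 9, 14, -11, 8, 16]
print(tracking_performance(pitch, roll))  # prints "adequate"
```

Note that the roll criterion is measured at the end of the command bar, so in practice the roll error samples must be converted to mils at that point before scoring.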

2. Approach and Landing Task
For the evaluation of low-speed handling qualities, no task is more appropriate than the final approach and landing.
Early research studies that focused on landing handling qualities typically required the pilot to simply perform the
task with little instruction as to the precision or aggressiveness to be applied.
Cornell Aero Lab (Calspan) began to formalize the requirements for the approach and landing task in the 1970s.
As an example, in the 1978 Landing Approach Higher-Order Systems (LAHOS [23]) flight research program, pilots
performed three approaches to flared landings: an ILS approach under simulated instrument conditions to 200 ft AGL
followed by a landing; and two visual close patterns and landings, with an intentional offset (sidestep) maneuver on
close final in each case. (Pilots also performed the discrete pitch tracking task described above but the primary
emphasis was on the landing task.) Further, an effort at defining a target touchdown zone was included as a 500-ft-
long portion of runway starting 1500 ft from the threshold. Perhaps not surprisingly, the flare and touchdown was the
most critical portion of the task.
The logic of a lateral (sidestep) offset in a pitch control study might be questioned, but there is strong justification.
It has been found (though not clearly documented) that vertical offsets do not work well; an intentional low offset
requires little pilot effort to correct, as a slight reduction in glide slope will serve, and an offset high can become a
safety-of-flight issue if the high sink rate is not arrested sufficiently quickly. The intent of the sidestep is not to inject
lateral upsets into the aircraft; it is instead a way to assure that the approach is not stabilized, forcing pilot inputs to
correct for the sidestep and then for any flight path errors that result.
A graphical example of the contemporary offset approach and landing as flown by Calspan at Niagara Falls, NY,
airport is sketched in Fig. 13, from Ref. [24]. During a typical landing pattern evaluation, the safety pilot configures
the aircraft for landing, including selecting the proper flight control experiment and engaging the variable stability
system, while on downwind. The evaluation pilot takes control on base turn and lines up on final for the offset landing.
The offset correction is initiated on a call from the safety pilot. The evaluation pilot uses the instrument landing
markers as the desired touchdown point. These markers are located 1,000 feet from the threshold. After touchdown,
the safety pilot will take control of the airplane and perform the takeoff and turn to downwind, while the evaluation
pilot provides comments and ratings for that configuration.
Clearly the offset approach and landing is not a task for early flight testing of a prototype airplane. There is a need
to build up gradually to the final conditions of the formal task. It has proven to be an effective task for evaluating
closed-loop control and handling qualities for the landing flight phase, and is an excellent example of the evolution of
task definition over the past 50 years.
Desired Performance
• Approach airspeed maintained within ±5 kts.
• Touchdown within 5 feet of centerline (main wheels on centerline).
• Touchdown within ±250 feet of aimpoint.
• Sink rate – smooth touchdown.
• No PIO.
Adequate Performance
• Approach airspeed maintained within -5 kts/+10 kts.
• Touchdown within 25 feet of centerline.
• Touchdown within ±500 feet of aimpoint.
• No PIO.

Fig. 13 Precision offset landing task requirements and approach to Niagara Falls Airport [24].
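The numeric touchdown criteria in Fig. 13 can likewise be sketched as a simple classifier. This is an illustrative sketch only; the function and its argument names are hypothetical, and the subjective "smooth touchdown" sink-rate criterion is omitted because it has no numeric limit in the task definition.

```python
# Illustrative check of a single landing against the precision offset
# landing criteria (speeds in kts, distances in ft). Thresholds mirror
# the Fig. 13 list; the "smooth touchdown" sink-rate criterion is a
# pilot judgment and is not modeled here.

def landing_performance(speed_dev, lateral_ft, longitudinal_ft, pio):
    """speed_dev = actual minus target approach airspeed (kts);
    lateral_ft = touchdown offset from centerline;
    longitudinal_ft = touchdown offset from aimpoint;
    pio = True if a pilot-induced oscillation occurred."""
    if pio:
        return "inadequate"  # "No PIO" appears in both criteria sets
    if (-5 <= speed_dev <= 5 and abs(lateral_ft) <= 5
            and abs(longitudinal_ft) <= 250):
        return "desired"
    if (-5 <= speed_dev <= 10 and abs(lateral_ft) <= 25
            and abs(longitudinal_ft) <= 500):
        return "adequate"
    return "inadequate"

# Touchdown 2 kts fast, 12 ft left of centerline, 300 ft past aimpoint:
# too far off centerline for desired, but within all adequate limits.
print(landing_performance(2.0, -12.0, 300.0, pio=False))  # prints "adequate"
```

Note the asymmetric airspeed band for adequate performance (-5/+10 kts): a fast approach degrades the touchdown point, but a slow one is the greater safety concern, so the tolerance is tighter on the slow side.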

VIII. Interpreting the Scale

The Cooper-Harper scale is succinct, and the accompanying report is well-written, so how could there be any
confusion over the usage of the scale? In this section we will explore some of the applications and terms that have
occasionally confounded students at the Air Force Test Pilot School as well as seasoned test pilots.

A. A Decision Tree
It cannot be overstressed that the Cooper-Harper scale is a decision tree (Fig. 1). The user must always start in the
lower left corner with “pilot decisions” and move upward. Evidence of pilot rating scatter in past experiments is often
associated with misuse of the decision tree. Even experienced test pilots have been tempted to jump to a number
without answering the questions in the tree and have given questionable ratings as a result.
Proper application of the scale is enforced at the Air Force Test Pilot School by strict adherence to the decision-
tree portion of the scale. By answering the questions along the left side, the user is naturally led to the appropriate
branch for fine-tuning the rating.
Anyone familiar with the scale knows there is a decision process. The struggle is in enforcement of that process.
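The decision process can be made concrete with a short sketch. The three gate questions below come directly from the scale; the function itself is a hypothetical illustration, and because the fine-tuning within each band remains a pilot judgment, only the rating range is returned.

```python
# A minimal sketch of the Cooper-Harper decision tree (Fig. 1) as code.
# The user starts in the lower left corner and moves upward; each answer
# of NO branches off to a band of ratings, and only after the branch is
# selected does the pilot fine-tune to a single number within it.

def hqr_branch(controllable, adequate_with_tolerable_workload,
               satisfactory_without_improvement):
    """Return the (min, max) HQR band selected by the decision tree."""
    if not controllable:
        return (10, 10)  # improvement mandatory
    if not adequate_with_tolerable_workload:
        return (7, 9)    # deficiencies require improvement
    if not satisfactory_without_improvement:
        return (4, 6)    # deficiencies warrant improvement
    return (1, 3)        # satisfactory; fine-tune within 1-3

# Controllable and adequate performance attainable, but not satisfactory
# without improvement: the tree leads to the 4-6 band.
print(hqr_branch(True, True, False))  # prints "(4, 6)"
```

Jumping straight to a number skips these gates; the point of the tree is that the band is decided by the questions, not by the pilot's first impression.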

B. Definitions of Terms
Visitors to the pilots’ office at NASA Ames Research Center will find copies of the HQR scale printed on sturdy
paper. On the back of each copy is a list of definitions of terms, taken directly from Ref. [1], as shown in Fig. 14.

Fig. 14 Definition of Terms from NASA TN D-5153


Astute readers of NASA TN D-5153 may find this list redundant; at the Air Force Test Pilot School, students are
given a copy of the scale on a laminated card suitable for tucking into the pocket of a flight suit, and on the back of
the card is a PIO Tendency Rating Scale: an acknowledgment that the students are expected to grasp the terms in Fig.
14 sufficiently well that a written reminder is indeed redundant. At TPS, the Cooper-Harper report is required reading
and a take-home exam is assigned based on the report. In addition, a 10-hour academic course on Handling Qualities
Evaluation is followed by multiple simulator sessions (described earlier in this paper) and flight exposure to reinforce
proper application of the scale.
1. Workload
Workload is one of the two most misinterpreted words on the scale. Cooper and Harper define it as “integrated physical
and mental effort required to perform a specified piloting task.”

Ref. [1] further amplifies physical workload as “The effort expended by the pilot in moving or imposing forces on
the controls during a specified piloting task.” In handling qualities, every task demands physical workload: there is a
level of workload required to perform the task, regardless of the characteristics of the controlled aircraft. A reason we
employ trained test pilots in handling-qualities evaluation is that they are able to judge the baseline amount of physical
workload required to perform the task, and separate that from the additional effort required due to aircraft deficiencies.
Typical pilot comments from those who don’t understand this definition of workload are something like this: “well,
if I have to flare to land that’s workload, and if there’s workload the airplane isn’t ideal!” (More is written about this
particularly myopic view when we discuss the elusive rating of 1, below.)
Cooper and Harper further note for mental workload that it is “at present not amenable to quantitative analysis by
other than pilot evaluation...” This is still true 50 years later: we have not identified a method to accurately measure
the mental stress, thought processes, or heightened attention that signify mental workload. Lack of control movement
may indicate low mental workload, but it can also reflect a high mental workload. As a practical example, an Air
Force Test Pilot School student project in the mid-1990’s investigated the effect of rate limiting on landing handling
qualities [25]. On multiple runs, student pilots assigned HQRs that were much better than expected for some
configurations. The final report notes that “for many of the configurations flown, the pilots knew that there were
significant problems with the airplane. The CHRs [HQRs] do not reflect how bad the pilots really thought the airplane
was but instead were driven by task performance and perceived workload [emphasis added].” These pilots failed to
recognize that the act of backing out of the control loop to avoid catastrophe is a form of intense mental workload
with little physical workload required.
2. Compensation
As Fig. 14 defines it, compensation is the “additional pilot effort and attention required to maintain a given level
of performance in the face of deficient vehicle characteristics.” As such, it is separate from basic task workload, either
mental or physical, though as Cooper and Harper note, total workload is “comprised of the workload due to
compensation for aircraft deficiencies plus the workload due to the task” and hence workload is a composite measure.
A well-designed airplane may require mental workload to flare and land (“where do I start the flare maneuver?”),
and physical workload (“how much aft stick do I need?”), but it should not require compensation (“how do I make
this land better?”).
The challenge in judging compensation is not in the concept, but in the nuances of the adjectives used: “not a
factor,” “minimal,” “moderate,” “considerable,” “extensive,” “intense,” “maximum tolerable.” An especially difficult
division is between “moderate pilot compensation” and “considerable pilot compensation” as occurs for HQRs of 4
and 5, respectively; more is written about this below.
Here Cooper and Harper don’t help us. We can turn to Merriam-Webster online to get a bit of clarity (selecting
what seems the most relevant definition in each case).

“Minimal:” the least possible
“Moderate:” tending toward the mean or average amount
“Considerable:” large in extent or degree
“Extensive:” having wide or considerable extent
“Intense:” marked by or expressive of great zeal, energy, determination, or concentration
“Maximum:” the greatest quantity or value attainable or attained

Invariably, discussion will break out among the students and staff at TPS about the interpretations of these
adjectives. The first and last terms are fairly straightforward. Ambiguity generally occurs between “moderate” and
“considerable” – but fortunately for us, these terms occur in context of other descriptive words on the scale and do not
have to be judged in isolation. Again, this is described further later in this paper.
3. Aircraft Characteristics
The column addressing deficiencies in the aircraft is sometimes a source of confusion, but fortunately,
Cooper and Harper made the job easier by using the same term (“Major deficiencies”) for four of the 10 entries. And
for good reason: when handling qualities are that bad, it becomes difficult to sort out just what’s going on with the
aircraft, and it is sufficient to say that its characteristics are deficient beyond description.
Ambiguity typically occurs in the different wording for HQRs of 4, 5, and 6. It is possible, and not uncommon, to
find a pilot conflicted between the “Aircraft Characteristics” and “Demands on the Pilot” columns in Fig. 1. Students
at the Air Force Test Pilot School are instructed to weigh the demands column more heavily, since that is usually more
apparent to the pilot. The differences between HQRs of 5 and 6 are subtle; fortunately, if the pilot is debating the
meanings of the words for these ratings, we already know much about our airplane for the given task: it is not
satisfactory without improvement but deficiencies do not require improvement, and desired performance was not
attained. Whether the assigned rating is a 5 or a 6 is of much less importance.
4. Performance
Much has already been written about this word. It is imperative, in our current usage of the HQR scale, that we
define overall performance for the task, and then minimum levels to achieve desired or adequate performance for
application of the scale.
Three lessons to be learned:

1. Performance levels must be based on – but not driven by – the demands of real-world maneuvers and
tasks. Indeed, in order to encourage closed-loop control, those levels must be more stringent than
normally encountered in the real world. This forces the pilot more tightly in the loop, hopefully exposing
deficiencies, and it compensates (at least in part) for other real-world variables that we cannot apply in a
controlled flight test program (for example, flight in turbulence or with crosswinds). For the offset
approach and landing task in Fig. 13, a pilot in operational flight would initiate a go-around, not correct
for the offset and try to land. In flight test, the offset has a specific purpose unrelated to the real world –
except to assure high pilot workload.
2. Dimensions between desired and adequate performance must be sensible. The pilot should have confidence
that violation of a limit on desired performance still leaves a margin from the limit on adequate
performance. There might be a specialized, rare exception where the margins between desired and
adequate performance are small, but in general, adequate performance should be set about twice the
dimensions of desired. An example is the offset landing task sketched in Fig. 13, where landing speed
and touchdown zone are roughly two-to-one. At the Air Force TPS, the concept is taught in a simulation
exercise. The student is given a very large circle, as shown in the photo in Fig. 10, as the limit of desired
performance; an inner circle is later added just inside the outer and becomes desired performance while
the outer circle remains at its initial size and represents adequate performance. Run by run, the diameter
of the inner (desired performance) circle is decreased until it is quite small. At first, the student sees no
difference from the initial case of Fig. 10, but at some point it becomes impossible to attain the very
stringent limit for desired performance. Adequate – the original desired – is trivial to attain. The student’s
ratings will change from very good (sometimes HQR 1 but often 2 or 3) to bad (HQR 5 or 6). The point
is made that the airplane has not changed at all – it is the task, going from much too easy to exceedingly
hard, that has driven HQRs. At the end of this exercise, two concentric circles of reasonable diameter
become the norm for the remainder of the simulation training session (Fig. 15).

Fig. 15 Display in US Air Force Test Pilot School simulation showing concentric circles
denoting desired (inner) and adequate (outer) task performance.

3. Test pilots are not allowed to change performance limits ad hoc. A commonly heard lament in flight
testing and flight research goes something like this: “those performance requirements are just too tight,
and my HQR is based on what they should be, not what they are.” Countering this mistake often requires a
junior flight test engineer to stand up to a senior test pilot. In theory, the parameters of an evaluation task
are set well in advance of the formal test, have been checked on paper and in a simulator, and are approved
by everyone on the test team, including the pilots. It is troublesome to have a pilot decide, real-time, to
ignore a limit because it seems wrong, and to give a rating based on what the task should have been, not
what it was. If there is a problem with the definitions of a task, it means advance homework was not done
well; if a pilot can’t meet the parameters of a task, it may mean there is something deficient in the aircraft,
or the pilot has not been briefed properly. The solution is to revisit the task definition or brief the pilot
more thoroughly, not to redefine the task on the fly.

IX. Misunderstood Locations on the Cooper-Harper Scale

There are several points on the scale that have confused users. We will look at four such points.

A. The Perfect Rating: 1


There are experienced test pilots who claim that they have never given, and will never give, an HQR of 1. Their
argument typically hinges on the perception that a “1” aircraft is perfect. This is certainly a misunderstanding of the
meaning of the rating. An HQR of 1 does not describe a perfect aircraft; it describes an aircraft that has no deficiencies
warranting improvement (the decision tree breakpoint on Fig. 1), and that, for the selected task, has excellent and
highly desirable characteristics and does not require pilot compensation for desired performance. There is nothing
about “perfection” on the scale.
It is entirely possible that a test pilot will never assign an HQR 1 in his or her flying career. The reason should not
be because the pilot misunderstands the meaning of the rating.

B. The Big Confusion: 4 and 5


This is the one area where the scale is really not well-worded: HQR 4 applies to an aircraft that has minor but
annoying deficiencies and 5 is for moderately objectionable deficiencies. If the test pilot is able to discern between
those aircraft characteristics, that’s great; but the words describing demands on the pilot make things confusing, as
two descriptors change at once.
HQR 4: Desired performance requires moderate pilot compensation
HQR 5: Adequate performance requires considerable pilot compensation
Ignoring the questions about what constitutes “moderate” versus “considerable” pilot compensation, the ratings
also indicate a change in achievable performance, from desired to adequate. What happens (as it often does) if desired
is not attainable with maximum compensation, but adequate is achievable with little or no compensation?
The obvious solution is a half-rating: 4-1/2. Cooper and Harper state in their report that such a rating is acceptable,
though they observe that such ratings “generally indicate reluctance of the evaluation pilot to assign either of the
adjacent ratings…” They add that “[a]ny finer breakdown than half ratings is hardly ever justified.”
At the Air Force Test Pilot School, as at many organizations that use the Cooper-Harper scale, half-ratings are not
permitted: the pilot should decide on a single ordinal number. Despite this, the confusion with multiple changing
adjectives between 4 and 5 persists, and it makes sense to accept a rating of 4-1/2 to resolve that confusion.

C. The Decision Point for HQR 7


In applying the scale, once the pilot has determined that the aircraft is controllable, the next decision point asks “is
adequate performance attainable with a tolerable pilot workload?” There are really two questions here: 1) “Is adequate
performance attainable?” If NO, move to the right; if YES, answer 2) “Is pilot workload tolerable?” If NO to this
question, again move to the right. If the answer to both questions is YES, move up to the next decision point.

D. The Uncontrollable Aircraft: 10


While a rating of 10 is (hopefully!) rare in practice, there is an ongoing question of whether 10 really implies loss
of control, or just loss of control within the task. As an example, in a degraded-modes simulation of the V-22 [26], it
was found that certain failures in forward flight made it impossible to fly close to a refueling basket, yet aircraft control
was not in doubt. Is this an area where “control will be lost” (HQR 10) or simply where “adequate performance is not
attainable” (HQR 7, 8, or 9)?
Given that the purpose of the HQR scale is to rate an aircraft in the context of a given task, a rating of 10 is appropriate,
as uncomfortable as it sounds. An argument can be made that simply keeping an aircraft under control while not being
able to perform the assigned task is effectively an out-of-control condition. That one might be able to divert to another
task (for the V-22 example, switch from attempting to refuel to simply flying formation) should not exonerate the
aircraft and prevent the assignment of a rating of 10.

X. Conclusions

As the Cooper-Harper handling qualities rating scale passes its fifty-year anniversary, it continues to be the primary
tool for assessing aircraft handling qualities.
This is not because the scale is perfect, by any means; indeed, as this paper has demonstrated, there are flaws in
the scale. It continues to be the tool of choice because it has proven effective in spite of the flaws; that is, its strengths
greatly outweigh its weaknesses.

Though not discussed here, there have been attempts to revise the scale, or to replace it altogether. Every such
attempt has proven less than fruitful, though there are alternative rating scales that assist in judging such elements as
physical or mental workload. The scale survives because it is effective.
Everyone who works in, or in fields related to, flying and handling qualities, flight research, and flight testing
owes a debt of gratitude to the memories of Messrs. George Cooper and Bob Harper for their collaborative genius 50
years ago.

References

[1] Cooper, G. E., and Harper, R. P., Jr., “The Use of Pilot Rating in the Evaluation of Aircraft Handling Qualities,” NASA
TN D-5153, 1969.
[2] Mitchell, D. G., and Aponso, B. L., “Reassessment and Extensions of Pilot Ratings With New Data,” AIAA Paper 90-2823,
Aug. 1990.
[3] Harper, R. P., Jr., and Cooper, G. E., “A Revised Pilot Rating Scale for the Evaluation of Handling Qualities,” presented at the
AGARD Specialists’ Meeting on Stability and Control, Cambridge, England, Sept. 1966; also CAL Report No. 153, Cornell
Aeronautical Lab, Buffalo, NY, 1966.
[4] Mitchell, D. G., Aponso, B. L., and Hoh, R. H., “Minimum Flying Qualities, Volume I: Piloted Simulation Evaluation of
Multiple Axis Flying Qualities,” WRDC-TR-89-3125, Vol. I, Jan. 1990.
[5] Stevens, S. S., “The Psychophysics of Sensory Function,” Sensory Communication, edited by Walter A. Rosenblith, MIT Press,
Cambridge, MA; Wiley, NY, 1961, pp. 1-33.
[6] Cooper, G. E., “Understanding and Interpreting Pilot Opinion,” Aeronautical Engineering Review, Vol. 16, No. 3, Mar. 1957,
p. 47–51, 56.
[7] https://www.calspan.com/company/history/, accessed 2 Oct. 2018.
[8] Moore, N. B., “Artificial Stability Flight Tests of the XF-88A Airplane,” WADC TR 52-298, July 1954.
[9] Harper, R. P., Jr., “Flight Evaluations of Various Longitudinal Handling Qualities in a Variable-Stability Jet Fighter,” WADC
TR 55-299, July 1955.
[10] DiFranco, D. A., “Flight Investigation of Longitudinal Short Period Frequency Requirements and PIO Tendencies,” AFFDL-
TR-66-163, June 1967.
[11] Borchers, P. F., Franklin, J. A., and Fletcher, J. W., “Flight Research at Ames, 1940-1997,” NASA SP-3300, 1998.
[12] Chalk, C. R., Neal, T. P., Harris, T. M., Pritchard, F. E., and Woodcock, R. J., “Background Information and User Guide for
MIL-F-8785B(ASG), ‘Military Specification – Flying Qualities of Piloted Airplanes’,” AFFDL-TR-69-72, Aug. 1969.
[13] “Military Specification, Flying Qualities of Piloted Airplanes,” MIL-F-8785B(ASG), Aug. 1969.
[14] Hoh, R. H., Mitchell, D. G., Ashkenas, I. L., Klein, R. H., Heffley, R. K., and Hodgkinson, J., “Proposed MIL Standard and
Handbook -- Flying Qualities of Air Vehicles. Volume I: Proposed MIL Standard; Volume II: Proposed MIL Handbook,”
AFWAL-TR-82-3081, Nov. 1982.
[15] Hodgkinson, J., Aircraft Handling Qualities, AIAA Education Series, Reston, VA, 1999.
[16] Kidd, E. A., and Bull, G., “Handling Qualities Requirements as Influenced by Pilot Evaluation Time and Sample Size,” Report
No. TB-1444-F-1, Cornell Aeronautical Laboratory, February 1963.
[17] Wilson, D. J., and Riley, D. R., “Cooper-Harper Pilot Rating Variability,” AIAA 89-3358-CP, AIAA Flight Mechanics
Conference Proceedings, Aug. 1989, pp. 96-105.
[18] Smith, R. E., “Effects of Control System Dynamics on Fighter Approach and Landing Longitudinal Flying Qualities,” AFFDL-
TR-78-122, Mar. 1978.

[19] Klyde, D. H., and Mitchell, D. G., “Handling Quality Demonstration Maneuvers for Fixed-Wing Aircraft, Volume II:
Maneuver Catalog,” WL-TR-97-3100, Oct. 1997.
[20] Neal, T. P., and Smith, R. E., “An In-Flight Investigation to Develop Control System Design Criteria for Fighter Airplanes,”
AFFDL-TR-70-74, Volume I, Dec. 1970.
[21] Randall, P. A., LT, USN, et al., “A Limited Inflight Investigation of the Neal-Smith Criteria,” AFFTC-TLR-94-27, 1994.
[22] Kish, B. A., et al., “A Limited Flight Test Investigation of Pilot-Induced Oscillation due to Elevator Rate Limiting,” AFFTC-
TR-97-12, June 1997.
[23] Smith, R. E., “Effects of Control System Dynamics on Fighter Approach and Landing Longitudinal Flying Qualities,” AFFDL-
TR-78-122, Mar. 1978.
[24] Deppe, P. R., Chalk, C. R., and Shafer, M. F., “Flight Evaluation of an Aircraft with Side and Center Stick Controllers and
Rate-Limited Ailerons,” NASA CR-198055, Nov. 1996.
[25] Peters, P. J., Tom, P. M., Wolf, D. B. B., Evensen, F. A., and Dversdall, D. A., “Limited Investigation of the Effects of Elevator
Rate Limiting and Stick Dynamics on Longitudinal Pilot Induced Oscillations (HAVE GRIP),” AFFTC-TR-96-37, Dec. 1996.
[26] Weakley, J. M., Kleinhesselink, K. M., Mason, D. H., and Mitchell, D. G., “Simulation of V-22 Degraded-Mode Flying
Qualities,” presented at the American Helicopter Society 59th Annual Forum, Phoenix, AZ, May 2003.