
DESIGN OF A SMALL-TURBOJET COMPRESSOR TEST FACILITY

Thesis

Submitted to

The School of Engineering of the

UNIVERSITY OF DAYTON

In Partial Fulfillment of the Requirements for

The Degree of

Master of Science in Mechanical Engineering

By

Justin T. Reinhart

UNIVERSITY OF DAYTON

Dayton, Ohio

August, 2021
DESIGN OF A SMALL-TURBOJET COMPRESSOR TEST FACILITY

Name: Reinhart, Justin T.

APPROVED BY:

Markus P. Rumpfkeil, Ph.D.
Advisory Committee Chairman
Associate Professor, Hans von Ohain Endowed Chair
Department of Mechanical and Aerospace Engineering

Andrew P. Murray, Ph.D.
Committee Member
Professor, Department of Mechanical and Aerospace Engineering

David H. Myszka, Ph.D.


Committee Member
Professor, Department of Mechanical
and Aerospace Engineering

Robert J. Wilkens, Ph.D., P.E.
Associate Dean for Research and Innovation
Professor, School of Engineering

Margaret F. Pinnell, Ph.D.
Interim Dean, School of Engineering

© Copyright by

Justin T. Reinhart

All rights reserved

2021
ABSTRACT

DESIGN OF A SMALL-TURBOJET COMPRESSOR TEST FACILITY

Name: Reinhart, Justin T.


University of Dayton

Advisor: Dr. Markus P. Rumpfkeil

Validation of compressor aerodynamic design can be obtained through in-engine

testing with considerable investment of time and money or in rig testing, wherein a range

of interchangeable designs can be tested without exposure to effects of other engine

components. To more rapidly explore centrifugal and mixed-flow compressor designs for

small, thrust-producing aviation gas-turbine engines, called turbojets, the Air Force

Research Laboratories (AFRL) initiated development of a test apparatus to be integrated

into an existing test facility at Wright-Patterson Air Force Base (WPAFB). Design of the

inlet flow conditioning and measuring system, mechanical systems, exhaust discharge

valve, and exhaust collector is accomplished through sizing calculations and aerodynamic and

mechanical analyses. This thesis project presents the final design of this small-turbojet

compressor test facility, capable of testing virtually the entire operating map of

compressors in the desired size range. Fabrication of the test rig is underway as of the

conclusion of this thesis; component testing, assembly, and initial test article testing are

planned thereafter.

ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. Markus Rumpfkeil, and advisory committee

members, Dr. Andrew Murray and Dr. David Myszka, who not only provided advice and

expertise, but also the flexibility to work primarily under AFRL direction. I felt as though

I had the perfect balance of support and freedom from both sides throughout the project.

I would also like to thank my brother, Jonathon Reinhart. Your meticulous editing

saved me hours of proofreading and provided me with confidence from knowing my

writing had been checked by another experienced engineer. This thesis is better because of

you.

My special thanks to everyone at AFRL and ISSI for the support that was provided

to me every day working on this project; I simply would not have been able to complete it

without you. This includes (in alphabetical order, because I could never begin to qualify

your contributions): Jacob Baranski, Greg Bloch, Matt Boehle, Jesse Coffman, Danny

Gillaugh, Tim Janczewski, Mike List, Chase Nessler, Rolf Sondergaard, Ernest Thompson

and Trevor Tomlin.

DISTRIBUTION A.

Approved for public release: distribution unlimited.

Case number AFRL-2021-2452.

TABLE OF CONTENTS

ABSTRACT....................................................................................................................... iii

ACKNOWLEDGMENTS ................................................................................................. iv

LIST OF FIGURES .......................................................................................................... vii

LIST OF TABLES ........................................................................................................... xiii

CHAPTER 1 INTRODUCTION ........................................................................................ 1

1.1. Compressors .............................................................................................. 1


1.2. Gas Turbines ............................................................................................. 7
1.3. Compressor Testing................................................................................. 11
1.4. Motivation ............................................................................................... 14
CHAPTER 2 FACILITY .................................................................................................. 17

2.1 Capabilities .............................................................................................. 17


2.2 Requirements ........................................................................................... 21
CHAPTER 3 INLET SYSTEM ........................................................................................ 24

3.1 Mechanical Design .................................................................................. 24


3.2 Flow Suppression Effects ........................................................................ 29
3.3 Flow Measurement .................................................................................. 30
CHAPTER 4 MECHANICAL SYSTEMS ....................................................................... 34

4.1 Mechanical Design .................................................................................. 34


4.2 Bearings ................................................................................................... 38
4.3 Shaft ........................................................................................................ 48
4.4 Rotordynamics Analysis ......................................................................... 54
CHAPTER 5 DISCHARGE VALVE ............................................................................... 62

5.1 Operating Principle ................................................................. 62


5.2 Sizing....................................................................................................... 65
5.3 Load Analysis .......................................................................................... 68
5.4 Actuation ................................................................................................. 75

CHAPTER 6 EXHAUST COLLECTOR ......................................................................... 83

6.1 Mechanical Design .................................................................................. 83


6.2 Aerodynamic Analysis ............................................................................ 87
CHAPTER 7 CONCLUSIONS AND FUTURE WORK ................................................. 96

7.1 Inlet System Design ................................................................................ 97


7.2 Mechanical Systems Design.................................................................... 98
7.3 Discharge Valve Design .......................................................................... 99
7.4 Exhaust Collector Design ...................................................................... 100
7.5 Future Work .......................................................................................... 100
BIBLIOGRAPHY ........................................................................................................... 102

LIST OF FIGURES

Figure 1.1: Typical application ranges of compressor types [1]. ........................................ 1

Figure 1.2: A three-stage, single acting reciprocating compressor (courtesy of

Ingersoll-Rand) [1].................................................................................................. 2

Figure 1.3: Ideal compression cycle pressure-volume (PV) diagram [2]. .......................... 3

Figure 1.4: A 14-stage axial-flow rotor (courtesy of Elliot Company) [1]. ........................ 4

Figure 1.5: Pressure-volume diagram of compression processes [3]. ................................ 5

Figure 1.6: An open-cycle gas-turbine engine diagram [3]. ............................................... 8

Figure 1.7: (a) P-v and (b) T-s diagrams for the ideal Brayton cycle [3]. .......................... 9

Figure 1.8: Schematic (left) and image (right) of Whittle turbine [4]. ............................. 11

Figure 1.9: Pratt & Whitney F100-PW-220 engine [6]. ................................................... 11

Figure 1.10: Typical centrifugal compressor performance map [4]. ................................ 14

Figure 2.1: Air Force Prize drive stand. ............................................................................ 17

Figure 2.2: Air Force Prize dynamometer and gearbox, power and torque capabilities

with speed. ............................................................................................................ 18

Figure 2.3: Test cell control room..................................................................................... 18

Figure 2.4: Data acquisition and controls test-cell interface cabinet. ............................... 20

Figure 2.5: Test cell high-temperature exhaust ducting.................................................... 21

Figure 2.6: Compressor power requirement for a given pressure ratio and mass

flowrate. Dry air at typical Dayton, OH inlet conditions (14.3 psi, 59 °F),

isentropic efficiency of 75%. ................................................................................ 22

Figure 2.7: Achievable mass flow and pressure ratio with power draw of 181 hp. Dry

air at standard-day temperature (59 °F), isentropic compression efficiency of

75%. ...................................................................................................................... 23

Figure 3.1: Inlet system: (A) inlet screen holder, (B) screen (5x), (C) butterfly valve,

(D) valve actuator, (E) flow diverter, (F) flow barrel, (G) honeycomb (2x), (H)

fairing, (I) seal/damper, and (J) bellmouth/nozzle. ............................................... 24

Figure 3.2: Geometric chart for components under external pressure or compressive

loadings (for all materials) [18]. ........................................................................... 26

Figure 3.3: Chart for determining shell thickness of components under external

pressure developed for austenitic stainless steel [18]. .......................................... 27

Figure 3.4: Inlet system mounting structure: (K) pipe clamps (2x), (L) trolley system,

and (M) radial alignment jack screws (4x). .......................................................... 28

Figure 3.5: ASME 19.5-2004 low β nozzle [20]. ............................................................. 33

Figure 4.1: Mechanical systems: (A) impeller nut, (B) hex lug, (C) impeller, (D) shaft

insert, (E) retaining ring (3x), (F) honeycomb seal (3x), (G) front bearing

housing, (H) radial air bearing (2x), (I) radial air bearing supply fitting (2x),

(J) button load cell (8x), (K) thrust air bearing (2x), (L) thrust air bearing

supply fitting (2x), (M) shaft insert bolt, (N) thrust piston supply fitting (4x),

(O) thrust piston, (P) thrust piston nut, (Q) spline coupling, (R) bearing

housing flange, (S) aft bearing housing, (T) thrust piston seal spacer, (U)

radial bearing cavity vent (2x), (V) wave spring (8x), (W) dowel pin (16x), (X)

thrust bearing cavity vent, (Y) main shaft, and (Z) front radial bearing vent

plug. ...................................................................................................................... 36

Figure 4.2: Common aerostatic bearing configurations and pressure profiles [26]. ......... 39

Figure 4.3: Radial bearing loaded area. ............................................................................ 41

Figure 4.4: Mechanical systems gas supply system diagram. .......................................... 42

Figure 4.5: Radial bearing air gap shear. .......................................................................... 44

Figure 4.6: Thrust bearing air gap shear. .......................................................................... 47

Figure 4.7: Shaft insert. ..................................................................................................... 50

Figure 4.8: Finite-element analysis of main shaft and shaft insert in the nominal

thermal case. ......................................................................................................... 52

Figure 4.9: Shaft fit with cold clearance and interface temperature. ................................ 53

Figure 4.10: Rotordynamic model of rotor system. .......................................................... 55

Figure 4.11: Mode 1 – Rigid-body pitch mode, 19,745 rpm, 26% strain energy. ............ 56

Figure 4.12: Mode 2 – Rigid-body bounce mode, 42,505 rpm, 20% strain energy. ........ 57

Figure 4.13: Mode 3 – 1st bend mode, 92,257 rpm, 85% strain energy. ........................... 57

Figure 4.14: Critical speed map. ....................................................................................... 58

Figure 4.15: Front bearing load vs. rotational speed. ....................................................... 59

Figure 4.16: Aft bearing load vs. rotational speed. ........................................................... 60

Figure 4.17: Rotor proximity sensor placement. .............................................................. 61

Figure 4.18: Rotor deflections vs. rotational speed. ......................................................... 61

Figure 5.1: Discharge valve flow path. ............................................................................. 62

Figure 5.2: Discharge valve – exploded view: (A) stator ring, (B) gasket, (C) rotating

ring, (D) drive pin, (E) drive link, (F) bearing array, (G) bearing ring, (H)

motor mount, (I) motor shaft seal, (J) stepper motor actuator, and (K) encoder.

............................................................................................................................... 63

Figure 5.3: Discharge valve – full-open position section views: through flow sector

(section A) and through bearing sector (section B). ............................................. 65

Figure 5.4: Single sector of discharge valve sizing model. .............................................. 66

Figure 5.5: Discharge valve flow-area vs. valve position. ................................................ 67

Figure 5.6: CFD sector model mesh of discharge valve. .................................................. 69

Figure 5.7: Discharge valve CFD results of upstream pressure vs. stator-ring gap

analysis. ................................................................................................................. 70

Figure 5.8: Discharge valve CFD results at full-closed position, 0.025 in. stator-

rotating ring gap: flow trajectories colorized with Mach number (A) and

pressure, normalized to upstream value (B). ........................................................ 71

Figure 5.9: Discharge valve CFD results at full-closed position, 0.005 in. stator-

rotating ring gap: flow trajectories colorized with Mach number (A) and

pressure, normalized to upstream value (B). ........................................................ 71

Figure 5.10: Axial load vs. valve open percentage. .......................................................... 72

Figure 5.11: Discharge valve CFD results of pressure-ratios across components vs.

valve position with third-order polynomial fits. ................................................... 73

Figure 5.12: Discharge valve CFD results at 3% open position: flow trajectories

colorized with Mach number (A) and pressure, normalized to upstream value

(B). ........................................................................................................................ 74

Figure 5.13: Discharge valve CFD results at 31% open position: flow trajectories

colorized with Mach number (A) and pressure, normalized to upstream value

(B). ........................................................................................................................ 75

Figure 5.14: Discharge valve CFD results at full-open position: flow trajectories

colorized with Mach number (A) and pressure, normalized to upstream value

(B). ........................................................................................................................ 75

Figure 5.15: Discharge valve actuation, downstream view: (A) full-closed position,

(B) mid-span position, (C) full-open position....................................................... 76

Figure 5.16: Disk friction. [34] ......................................................................................... 77

Figure 5.17: Discharge valve actuation diagram. ............................................................. 79

Figure 5.18: Valve position vs. motor position................................................................. 79

Figure 5.19: Drive-motor torque required to actuate valve with arm and link design

and capabilities of various stepper motor and driver combinations vs. motor

position.................................................................................................................. 80

Figure 5.20: Torque load to rotate valve arm and capabilities of various stepper motor

and driver combinations vs. valve open percentage. ............................................ 81

Figure 5.21: Valve-control resolution with MLA10641 driver. ....................................... 82

Figure 6.1: Exhaust collector: (A) exit transition, (B) main body, (C) exit flange, and

(D) entrance flange (2x)........................................................................................ 84

Figure 6.2: Collector main body geometry with equal-volume lofted channels............... 85

Figure 6.3: Downstream exhaust system: (A) facility exhaust duct, (B) intermediate

duct mount, (C) flex joint, (D) surge relief valve, and (E) surge relief port. ........ 86

Figure 6.4: TurboFlex coupling [36]. ............................................................................... 86

Figure 6.5: Collector-only CFD mesh. ............................................................................. 87

Figure 6.6: Collector-only CFD results: (A) side-view and (B) iso-view of flow

trajectories colorized with pressure (normalized to downstream static value),

and (C) and (D) with Mach number. ..................................................................... 89

Figure 6.7: Collector-only CFD results: upstream view of circumferential pressure-

gradient (normalized to downstream static value). ............................................... 89

Figure 6.8: Full exhaust-system CFD mesh. ..................................................................... 90

Figure 6.9: Full-exhaust CFD results: (A) side-view and (B) iso-view of flow

trajectories colorized with pressure (normalized to downstream static value),

and (C) and (D) with Mach number. ..................................................................... 91

Figure 6.10: Full-exhaust CFD results: BDC splitter interaction asymmetry. ................. 92

Figure 6.11: Full-exhaust CFD results: upstream view of circumferential pressure-

gradient (normalized to downstream static value). ............................................... 93

Figure 6.12: Full-exhaust CFD results: discharge valve downstream-face pressure

gradient (normalized to downstream static value), upstream view....................... 94

Figure 6.13: Discharge valve downstream face pressure distribution (normalized to

downstream static value)....................................................................................... 94

Figure 6.14: Full-exhaust CFD results: upstream view of circumferential pressure-

gradient at compressor exit-plane (normalized to the average value). ................. 95

Figure 7.1: Complete compressor test facility assembly, sectioned top view. ................. 97

LIST OF TABLES

Table 1: Rotordynamics analysis results........................................................................... 56

Table 2: Discharge valve CFD results: component pressure ratios. ................................. 73

CHAPTER 1

INTRODUCTION

1.1. Compressors

Mechanical compression of a gas is useful for a wide variety of applications across

virtually all industries. The mechanisms used to accomplish this task, naturally referred to

as compressors, come in an extensive range of sizes and employ numerous types of

compressible gases, chosen for their specific purpose. Pressure ratio – the ratio of discharge

pressure to supply pressure – and flowrate through the device are the standard parameters

for which a compressor is chosen, and ranges of them are shown for various types of

compressors in Figure 1.1.

Figure 1.1: Typical application ranges of compressor types [1].

Instances found in everyday life include compression of air in internal combustion

engines, refrigerant in air conditioners, and carbon dioxide (CO2) in carbonated drink

production. Compressors function by means of one of two distinct principles: positive

displacement or dynamic compression. Both types are commonly used in multistage

compression, where one stage’s exit is connected to the inlet of the next, increasing the

system’s pressure-rise capability.

Positive displacement compressors increase the pressure of a gas by the forced

reduction of its volume, caused by the movement of some mechanism. A classic example

of this type of machine is the reciprocating compressor, which features a piston and

cylinder arrangement with valves, as shown in Figure 1.2. The example compressor is

considered single-acting because compression takes place on one side of the piston, and

multistage because it features multiple cylinders [1].

Figure 1.2: A three-stage, single acting reciprocating


compressor (courtesy of Ingersoll-Rand) [1].

A pressure-volume (PV) diagram of an ideal compressor cycle is shown in Figure

1.3 to illustrate the operating principles. Starting at state 1, gas in the cylinder is at the

suction pressure (denoted by ps) and the piston is forced upward by the rotation of a shaft;

the volume in the cylinder decreases which results in a pressure and temperature rise. A

valve opens at state 2 when the gas in the cylinder has reached the discharge pressure (pd),

allowing it to escape into the discharge line. The discharge valve closes at state 3 as the

piston approaches its highest point – called top dead center (TDC) – and the piston begins

to fall, increasing the volume in the cylinder and decreasing the pressure and temperature

of the gas trapped inside. At state 4, the suction valve opens to allow a fresh charge of gas

to fill the cylinder. As the piston approaches its lowest point – called bottom dead center

(BDC) – the suction valve closes, sealing the cylinder and restarting the cycle at point 1

[2]. Other types of positive displacement compressors include screw, lobe, scroll, and

diaphragm, which also reduce the volume of a gas to increase its pressure, but do so via

differing mechanisms.

Figure 1.3: Ideal compression cycle pressure-volume (PV)


diagram [2].

Dynamic compressors, also called turbocompressors, are steady-flow machines

which convert the kinetic energy of the gas to static pressure via diffusion. A rotating

impeller draws in gas and accelerates it, causing a rise in kinetic energy. The high-energy

gas is then fed through a static diffuser, where an increase in flow-area decelerates the flow

resulting in a static pressure rise. A housing around both components creates a sealed

volume allowing it to function [2]. The rotor of a large dynamic compressor is shown in

Figure 1.4.

Figure 1.4: A 14-stage axial-flow rotor (courtesy of Elliot Company) [1].

There are two main dynamic compressor types: centrifugal and axial flow.

Centrifugal compressors convert axial incoming flow to a radial exit whereas axial

machines maintain average flow-direction throughout. Centrifugal compressors are applied

in a broad range of industries in all sizes because of their simplicity, relatively large single-

stage pressure-ratio capability, and low vibration [1]. Axial compressors are most often

found in multi-stage configurations in large-aircraft turbine-engines and ground-based-

power turbine-engines. These machines often process significant amounts of flow to satisfy

large power requirements, as shown in Figure 1.1. Dynamic compressors ultimately

achieve the same result as positive displacement – pressure rise of a gas in order to use it

to do work. The difference between the two exists in the thermodynamic process by which

the pressure rise is realized.

Figure 1.5 shows PV diagrams of various compression processes, which covers the

range of thermodynamic possibilities for converting change in volume to pressure rise. The

polytropic index, n, is a constant for a given process used to describe how pressure changes

with volume in the presence of heat transfer, where the relation is:

$PV^n = \text{Constant}$ (1)

As denoted in the diagram, isothermal processes have a polytropic index equal to 1, while

isentropic processes have a polytropic index equal to k, the specific heat ratio (often labeled

as γ). Polytropic processes are a hybrid of the two and therefore have a polytropic index

between 1 and k.

Figure 1.5: Pressure-volume diagram of compression processes [3].

Though not practical in real applications, an isentropic compression process

requires that no heat or gas escape the system and that no friction or other losses occur, and thus it

is adiabatic and reversible. Examples of loss mechanisms in compressors include: heat

transfer through walls, pressure drop across valves due to high velocity, pressure

fluctuations due to upstream and downstream components, flow disruptions due to

geometrical inadequacies, leakage across seals, and leakage through clearances between

rotating and static parts (referred to as tip clearances). Because there are no sources of

energy loss, isentropic compression is the most efficient. Isothermal compression is

accomplished by rejecting heat as the compression process progresses, thus resulting in no

temperature rise. Though energy is lost, isothermal compression minimizes the gas specific

volume throughout the process, which reduces the amount of work required per unit mass

(specific work) [2]. This phenomenon is also observed by integrating the area to the left of

the curve in Figure 1.5 to find the work required. Again, isothermal compression requires

the least amount of work while isentropic requires the most [3].
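
To make this comparison concrete, the short sketch below evaluates the specific flow work for the three processes of Figure 1.5 using air; the pressure ratio, polytropic index, and chosen unit system are illustrative assumptions and are not taken from the rig analysis.

    # Minimal sketch (assumed values): specific work of the compression processes in Figure 1.5.
    import math

    R = 53.35        # ft-lbf/(lbm-R), gas constant for air
    k = 1.4          # specific heat ratio (isentropic index)
    n = 1.2          # assumed polytropic index, between 1 and k
    T1 = 518.67      # R, inlet temperature (59 F)
    PR = 4.0         # assumed pressure ratio
    FT_LBF_PER_BTU = 778.16

    def flow_work(index):
        """Specific flow work (Btu/lbm) for a process following P*V^index = constant."""
        return (index / (index - 1.0)) * R * T1 * (PR ** ((index - 1.0) / index) - 1.0) / FT_LBF_PER_BTU

    w_isothermal = R * T1 * math.log(PR) / FT_LBF_PER_BTU   # limiting case of index -> 1
    w_polytropic = flow_work(n)
    w_isentropic = flow_work(k)

    # Ordering matches the areas to the left of the curves in Figure 1.5:
    # isothermal < polytropic < isentropic (roughly 49, 55, and 60 Btu/lbm here).
    print(w_isothermal, w_polytropic, w_isentropic)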

For most applications, the goal of a compressor is to achieve the desired pressure

rise while using the least amount of power [2]. Though isothermal compression requires

the least amount of specific work, the minimization of specific volume at a constant

volumetric flowrate results in maximized mass flowrate. Because power is the time rate of

work, the power required does not follow the same trend; isothermal compression almost

always requires more power than isentropic, and for that reason, compressor performance

is universally measured by the isentropic efficiency [2] [3]:

$\eta_c = \frac{\text{Isentropic compressor work}}{\text{Actual compressor work}} = \frac{w_s}{w_a}$ (2)

An expression for isentropic compressor work, 𝑤𝑠 , can be derived by first

considering the relationship between pressure and volume, shown in Equation 1, where

again, n is equal to k, the specific heat ratio. From this and the equation of state of an ideal

gas (PV = nRT), we obtain the relationship between the pressure (PR) and temperature

(TR) ratios experienced in an isentropic process [3],


$\frac{P_2}{P_1} = \left(\frac{T_2}{T_1}\right)^{\frac{k}{k-1}} = PR$ (3)

From the first law of thermodynamics and definition of enthalpy [3],

$W = H_2 - H_1 = c_p (T_2 - T_1)$ (4)

Substituting and rearranging gives the following equation for the work required to

compress an ideal gas isentropically per unit mass, 𝑤𝑠 [3],

$w_s = c_p T_1 \left( PR^{\frac{\gamma - 1}{\gamma}} - 1 \right)$ (5)
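
As a brief numerical illustration of Equations 2 and 5 (a minimal sketch; the inlet temperature, pressure ratio, and actual specific work below are assumed example values rather than measured data):

    # Sketch of Equations 2 and 5 with assumed example values.
    cp = 0.240       # Btu/(lbm-R), specific heat of air at constant pressure
    gamma = 1.4      # specific heat ratio
    T1 = 518.67      # R, assumed inlet temperature (59 F)
    PR = 4.0         # assumed pressure ratio

    w_s = cp * T1 * (PR ** ((gamma - 1.0) / gamma) - 1.0)   # Equation 5, ~60.5 Btu/lbm
    w_a = 80.0                                              # assumed actual specific work, Btu/lbm
    eta_c = w_s / w_a                                       # Equation 2, ~0.76

    print(w_s, eta_c)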

Positive displacement and dynamic compressors behave differently with respect to

heat transfer due to the time duration of the process. The slower positive displacement

machines rely on heat rejection to minimize the work requirement per unit mass while

dynamic compressors more efficiently convert the energy to raise the pressure by

minimizing heat loss. The application determines the requirement.

1.2. Gas Turbines

Gas turbines are versatile gas power cycle machines used across numerous

industries to power land, air, and sea vehicles, compressors, pumps, and electrical

generators. Boyce’s Gas Turbine Engineering Handbook defines six categories of gas

turbines: frame type heavy-duty (power generation, 3-480 MW), aircraft-derivative

(aircraft propulsion, 2.5-50 MW), industrial-type (petrochemical compression drive trains,

2.5-15 MW), small gas turbine (0.5-2.5 MW), microturbine (20-350 kW), and vehicular

(225-1120 kW) [4]. Figure 1.6 illustrates how gas turbines function: (1) air is drawn in by

a compressor where it experiences a pressure and temperature rise, then (2) enters a

combustion chamber where fuel is added and burned, then (3) it is expanded across a

turbine, which drives the compressor via a common shaft, and (4) finally is exhausted [3].

Figure 1.6: An open-cycle gas-turbine engine diagram [3].

Gas turbines operate on the Brayton cycle, as illustrated in Figure 1.7 with (a) P-v

diagrams and (b) temperature-entropy (T-s) of the ideal cycle. When the gas entering the

engine is rejected as exhaust, it is referred to as an open cycle, and a closed cycle when the

flow is recirculated; all aerospace turbine engines are open cycle. From state (1) to (2), the

air is compressed. As discussed in section 1.1, dynamic compressors rapidly compress the

air and heat loss is minimal, thus the ideal process is isentropic. Heat is added via

combustion with fuel from states (2) to (3) at constant pressure. Isentropic expansion across

the turbine occurs between states (3) and (4), decreasing the pressure back to that of the

inlet. Heat is then rejected in the hot exhaust gas at constant pressure from state (4) to (1),

completing the cycle [3].

(a) P-v diagram.

(b) T-s diagram.

Figure 1.7: (a) P-v and (b) T-s diagrams for the ideal Brayton cycle [3].

A common application of gas turbines in the aircraft-derivative category is the

turbojet, in which flow exiting the turbine is accelerated through a nozzle in order to

generate force, called thrust. In other common engine architectures, work may be extracted

from the main shaft or a secondary shaft/turbine exposed to the combustor exit flow.

Arrangements that utilize shaft power include turbojets (small amounts for auxiliary

components such as generators, pumps, etc.), turboshafts (to rotate vehicle drivetrains,

electric generators, compressors or pumps, etc.), turboprops (to drive a thrust-producing

propeller), and turbofans (to drive a ducted fan to provide core-bypassing flow for thrust

and cooling). Turbofans are classified as either low or high-bypass, based on the ratio of

air mass flow through the bypass duct to that through the core. Many aerospace gas turbines also

feature afterburners, or large ducts between turbine and nozzle, where additional fuel is

added to the exhaust flow and burned for supplementary thrust [5].

Gas turbines employ dynamic compressors – both centrifugal and axial – in a wide

range of combinations, sizes, numbers of stages, pressure ratios, and flowrates. Figure 1.8

and Figure 1.9 are two examples of turbojets. The former shows (a) a schematic and (b) an

image of the Whittle turbine – the first modern gas-turbine, built in 1930 by English

inventor Frank Whittle. This engine features a single-stage centrifugal compressor and

radial inflow turbine [4]. The latter shows the F100-PW-220 – a low-bypass afterburning

turbofan built by Pratt & Whitney and used in the Boeing F-15 and Lockheed Martin F-16

fighter jets [6]. A significantly more complicated engine than the Whittle turbine, the F100

features a dual-spool axial compressor with 3 fan and 10 compressor stages and 2 low and

2 high-pressure turbine stages [7].

Figure 1.8: Schematic (left) and image (right) of Whittle turbine [4].

Figure 1.9: Pratt & Whitney F100-PW-220 engine [6].

1.3. Compressor Testing

Because of the relatively high speeds at which dynamic compressors operate and

high exit pressures and temperatures involved, rigs for testing come with complexity not

necessarily present with positive displacement compressors. Load management – pressure,

centrifugal, and thermal – as well as rotordynamic considerations must be made to ensure

functionality and integrity of the rig. Compressor test rigs typically feature a significant

amount of instrumentation throughout the flow path, to effectively analyze performance.

As size decreases, the effect that the instrumentation has on the compressor itself becomes

greater. Small compressors additionally require smaller machining tolerances on

components due to the increased effect of tip clearances, which also adds complexity to the

design.

To provide energy to the compressor, a drive system must be employed, capable of

delivering the power required to pressurize the air at a very high speed. Small compressors

found in automotive turbochargers and small gas-turbine engines can reach speeds in

excess of 200,000 rpm. As discussed in Section 1.1, dynamic compression occurs

nearly isentropically, and thus exit temperatures increase with pressure rise. The test rig design

process must include considerations to prevent damage from thermal stress and damage

due to contact from thermal growth. Pressure loads, though not unique to dynamic

compressors, must also be factored into the test rig design. In order to load the compressor,

an area restriction downstream of the exit must be present to provide what is referred to as

back-pressure. Back-pressuring of a compressor test rig is often achieved by

implementation of a valve located downstream of the compressor exit, referred to as a

discharge valve.

Another critical aspect of dynamic compressor testing is management of adverse

conditions which occur at operating limits, called choking and surge. Choking occurs when

flow at a given condition has reached a maximum, and its velocity is sonic at the location of

minimum area. While choking is not necessarily dangerous to the rig, compressor

performance will sharply decrease when this condition is met. Surge is an aerodynamic

instability that arises at low-flow conditions, in which rapid flow and pressure fluctuations

occur as a result of flow separation inside the compressor. Significant vibration and noise

are experienced in surge conditions, which can cause catastrophic damage to the

compressor. Predicting the onset of surge is done only by high-speed data sampling of

pressure in the compressor to identify these fluctuations [2]. Measuring vibration of the rig

can be used to determine when surge is occurring and a valve to vent the pressurized

volume to the exhaust – referred to as a surge relief valve – is a typical method of damage

prevention.

With a functioning test rig, compressor performance can be characterized by

adjusting the impeller speed and downstream flow-area, in order to vary mass flow and

pressure ratio. These two measured values are then used to calculate isentropic efficiency

using Equations 2-5, giving the three parameters most often used to characterize

compressors. Typically, the corrected mass flowrate is presented, which is the amount of

mass that would be processed by the compressor if the inlet were exposed to standard pressure

and temperature. Standard-day sea-level ambient conditions are 14.696 psi and 59 °F,

which are used in Equation 6 [4] to calculate corrected mass flow 𝑚̇√𝜃/𝛿, where 𝑚̇ is the

physical mass flow, 𝜃 is the ratio of the temperature to the standard, and 𝛿 is the ratio of

the pressure to the standard.

$\dot{m}\sqrt{\theta}/\delta = \dot{m}\sqrt{\frac{T}{518.67}} \Big/ \left(\frac{P}{14.696}\right)$ (6)

$N/\sqrt{\theta} = N \Big/ \sqrt{\frac{T}{518.67}}$ (7)
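
For example, a minimal sketch of Equations 6 and 7 is shown below; the physical mass flow, shaft speed, and inlet conditions are assumed example values, while the standard-day constants are those given above.

    # Sketch of Equations 6 and 7: corrected mass flow and corrected (aerodynamic) speed.
    import math

    T_STD = 518.67    # R (59 F)
    P_STD = 14.696    # psi

    m_dot = 1.0       # lbm/s, assumed physical mass flow
    N = 90000.0       # rpm, assumed physical shaft speed
    T_in = 530.0      # R, assumed inlet total temperature
    P_in = 14.3       # psi, assumed inlet total pressure

    theta = T_in / T_STD
    delta = P_in / P_STD

    m_corr = m_dot * math.sqrt(theta) / delta    # Equation 6, ~1.04 lbm/s
    N_corr = N / math.sqrt(theta)                # Equation 7, ~89,000 rpm
    print(m_corr, N_corr)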

Plotting the efficiency as a function of the corrected mass flowrate and pressure ratio

generates what is referred to as a compressor performance map, which can be used to

compare any compressors because of the independence of inlet conditions. A generalized

compressor map is presented in Figure 1.10. As shown in this example, lines of constant

aerodynamic speed (also called corrected speed) are often plotted as well, giving a

comprehensive picture of the operating limits of the compressor. Aerodynamic speed,

similar to corrected flow, is simply the speed at which the machine would rotate at standard

temperature, calculated using Equation 7 [4].

Figure 1.10: Typical centrifugal compressor performance map [4].

1.4. Motivation

Small-turbojet engine performance has been an ongoing field of research at the Air

Force Research Laboratories (AFRL) in the last decade, particularly that of commercial-

off-the-shelf (COTS) hobbyist engines in the hundreds-of-pounds thrust range. Relatively

low cost and respectable performance has driven the interest in these engines. Modified

component integration [8] [9], performance comparison [10] [11], simulated altitude

testing [12] [13], recuperation integration [14] [15], and more have been studied in an effort

to determine the utility of these small engines for the needs of the Air Force. Traditionally,

engine development for military application has been contracted to large aerospace defense

companies. Recently, however, AFRL undertook the in-house development of an engine in

this thrust class with the goals of developing design tools, enabling rapid modification

and improvement, and reducing development and procurement costs.

Because a single-stage compressor typically provides sufficient pressure rise and

flow for engines of this size, centrifugal or mixed-flow architectures are typically used as

opposed to the multi-stage, axial-flow compressors found in larger gas-turbine engines.

Centrifugal compressors in general offer greater pressure-rise per stage than axial machines

due to the inlet/outlet diameter variance and turning of the flow [4]. Though most small-

turbojets studied at AFRL do feature a radial-exit architecture (i.e., centrifugal or mixed

flow), design of this type of compressor in this class had not been done in-house. For this

reason, design tools were developed relatively from scratch.

Engine testing with in-house compressor designs has been performed to collect data

to validate the design process and to provide insight for future iterations. While engine-test

data is valuable for design-tool validation and system characterization, rotating component

design iterations often require disassembly of the majority of the engine, which is a

significant time and financial commitment. Furthermore, instrumentation integration is

complicated in an engine environment due to additional considerations such as heat,

vibration, and non-compressor-component geometry. Alternatively, component testing in

a specialized rig provides a more controlled test environment and enables more rapid

testing of design iterations. While multiple compressor test facilities exist at AFRL’s

primary research facility – Wright-Patterson Air Force Base (WPAFB) – none are on the

scale required to test compressors in the range discussed here. Existing infrastructure at

this site however does feature a drive-system, instrumentation provisions, and exhaust

capability to enable development of such a facility.

CHAPTER 2

FACILITY

2.1 Capabilities

A drive system featuring a liquid-cooled, regenerative AC dynamometer and

gearbox with a 10:1 input/output shaft speed ratio is in place in the Small Engine Research Lab (SERL)

at WPAFB. The drive stand, shown in Figure 2.1, was originally procured for the Air Force

(AF) Prize – a competition to develop and build a small, lightweight, fuel-efficient turbine

engine [16]. It is capable of absorbing or generating up to 325 horsepower and spinning up

to 100,000 rpm (non-simultaneously). The torque curve for the system at the high-speed

shaft is shown in Figure 2.2. The motor/generator is controlled by a variable frequency

drive (VFD) and programmable logic controller (PLC) that is remotely operated inside the

test cell control room, shown in Figure 2.3.

Figure 2.1: Air Force Prize drive stand.

Figure 2.2: Air Force Prize dynamometer and gearbox, power and torque
capabilities with speed.

Figure 2.3: Test cell control room.

The main-spool speed (up to 10,000 rpm) is measured by an encoder on the

opposite-drive end (ODE) of the motor/generator shaft. A torque transducer is located

between the motor/generator and the gearbox coupling, which provides a torque

measurement of the low-speed main shaft. Torque (T, measured in ft-lbf) and speed (N,

measured in rpm) are used to calculate the power being drawn or absorbed using Equation

8,

$P = \frac{TN}{5252}$ (8)
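
A minimal sketch of this calculation follows; the torque and speed are assumed example readings, not facility data.

    # Sketch of Equation 8: shaft power (hp) from torque (ft-lbf) and speed (rpm).
    def shaft_power_hp(torque_ft_lbf, speed_rpm):
        return torque_ft_lbf * speed_rpm / 5252.0

    # Assumed example reading on the low-speed shaft:
    print(shaft_power_hp(torque_ft_lbf=170.0, speed_rpm=10000.0))   # ~323.7 hp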

The low and high-speed gearbox input/output shafts also feature torque transducers to

measure torque at those locations; these readings will only vary from the main-shaft reading by the amount

lost in the gearbox and coupling. However, the high-speed transducer is only rated up to

45,000 rpm and must be removed for applications requiring higher speed. For this reason,

testing is completed to characterize these losses and allow power to be accurately measured

at the high-speed shaft by a correction of the main shaft measurement.

Data acquisition (DAQ) and controls systems in the facility are designed to allow a

variety of measurements to be taken at various sample rates, to enable testing of many types

of engines and test rigs. Current infrastructure and planned expansion features the

following capabilities: 128 pressure channels at a minimum of 25 kHz sample rate, 96

thermocouple channels at 90 Hz, 48 analog input channels at 1 MHz for various auxiliary

sensors, 16 high-speed analog input channels at 60 MHz (used for accelerometers,

proximity sensors, and any other shaft-speed relevant measurements), 16 analog output

channels at 250 kHz, 4 high-speed analog output channels at 2.86 MHz, 224 digital

input/output channels up to 100 MHz, and 24 frequency counter channels at 100 MHz.

Measurements are viewed and controls operated using a LabVIEW program that was designed

and written in house. The system is operated inside the test cell control room, also shown

in Figure 2.3, along with the test-cell monitoring and recording system. This system

features 4 cameras located throughout the test cell to offer visual monitoring of all

equipment. An image of the test-cell interface cabinet, where all instrumentation is

connected to the facility, is shown in Figure 2.4.

Figure 2.4: Data acquisition and controls test-cell interface cabinet.

Compressed shop air is supplied to the test cell at up to 125 psi to be used for

pneumatically-actuated valves, pressurization of engine or rig components, cooling, etc.

Nitrogen (N2) tanks with regulators are located just outside the test cell. These are plumbed

into the test cell to be used for any higher-pressure requirements. Pressure and flow control

systems for the air and nitrogen supplies are implemented as required by the specific test

configuration.

Because the test cell was originally designed for testing turbine engines, a high-

temperature exhaust system is in place. Configurable ducting in the test cell rated for 1400

°F flow exits through the ceiling and extends upward to the roof of the facility, where a

high-temperature industrial fan pulls the gases out at over 9,500 cfm, or 3.4 lbm/s at the

rated temperature. The system is controlled and monitored via the PLC, operated inside the

test cell control room. The test cell ducting configured to extend the exhaust of a test article

is shown in Figure 2.5.

Figure 2.5: Test cell high-temperature exhaust ducting.

2.2 Requirements

The amount of power required by a compressor is a function of the work it is doing

(a function of the inlet temperature, pressure ratio across it, and the efficiency of the

process, as shown in Equation 5) and the mass flowrate through the device (a function of

the inlet density and volumetric flowrate, which is determined by the size of compressor

and speed at which it is operating). As illustrated in Figure 2.2, the motor and gearbox

available to drive the compressor test rig is limited to 325 horsepower between

approximately 30,000 and 60,000 rpm, with the available power decreasing with speed to

181 horsepower at the maximum rated speed of 100,000 rpm. To determine the compressor

operating ranges capable of being tested by this drive system, the power required as a

function of pressure ratio and mass flowrate is plotted in Figure 2.6, assuming an isentropic

compression efficiency of 75% and dry air at typical Dayton, OH inlet conditions (14.3 psi,

59 °F). The power limits at the aforementioned speeds are highlighted to illustrate

operating points that are capable of being tested (to the left of the curve) at the specified

inlet conditions.

Figure 2.6: Compressor power requirement for a given pressure ratio and mass
flowrate. Dry air at typical Dayton, OH inlet conditions (14.3 psi, 59 °F),
isentropic efficiency of 75%.
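
The calculation behind Figure 2.6 can be sketched for a single operating point as follows; the pressure ratio and mass flow are assumed example values, while the inlet conditions and 75% isentropic efficiency match the assumptions stated above.

    # Sketch: power required for one operating point (logic behind Figure 2.6).
    cp = 0.240          # Btu/(lbm-R), air
    gamma = 1.4
    T1 = 518.67         # R (59 F inlet)
    eta_c = 0.75        # assumed isentropic efficiency

    PR = 3.0            # assumed pressure ratio
    m_dot = 1.0         # lbm/s, assumed physical mass flow

    w_s = cp * T1 * (PR ** ((gamma - 1.0) / gamma) - 1.0)   # Equation 5, Btu/lbm
    w_a = w_s / eta_c                                       # actual specific work, Btu/lbm
    power_hp = m_dot * w_a * 778.16 / 550.0                 # Btu/s -> ft-lbf/s -> hp

    print(power_hp)   # ~87 hp; points below the drive-stand power limit are testable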

Because the power required is a function of the physical mass flowrate, which is a

function of the inlet density, inlet suppression (decreasing the inlet pressure by

intentionally restricting the flow entering the rig) offers the ability to reduce the power

requirement. Corrected mass flowrate is used to normalize the data. With standard inlet

temperature of 59 °F and assumed efficiency of 75%, the achievable compressor pressure

ratios and mass flowrates are plotted in Figure 2.7, calculated with the max-speed

horsepower limit of 181 hp and reduced inlet pressures.

Figure 2.7: Achievable mass flow and pressure ratio with power draw of
181 hp. Dry air at standard-day temperature (59 °F), isentropic
compression efficiency of 75%.
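
The benefit of inlet suppression can also be illustrated with a short sketch; the pressure ratio, corrected flow, and suppressed inlet pressures below are assumed example values.

    # Sketch: at constant corrected flow, suppressing the inlet pressure lowers the
    # physical mass flow and therefore the power required.
    cp, gamma, T1, eta_c = 0.240, 1.4, 518.67, 0.75   # air, 59 F inlet, assumed efficiency
    PR = 3.0                                          # assumed pressure ratio
    m_corr = 2.0                                      # lbm/s, assumed corrected mass flow

    w_a = cp * T1 * (PR ** ((gamma - 1.0) / gamma) - 1.0) / eta_c   # Btu/lbm

    for P_in in (14.3, 10.0, 7.0):              # psi: ambient vs. suppressed inlet pressures
        m_phys = m_corr * (P_in / 14.696)       # theta = 1 at standard temperature
        power_hp = m_phys * w_a * 778.16 / 550.0
        print(P_in, round(m_phys, 2), round(power_hp, 1))   # roughly 168, 118, and 82 hp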

CHAPTER 3

INLET SYSTEM

3.1 Mechanical Design

The inlet system design presented here serves three main purposes: to restrict,

condition, and measure flow entering the compressor. A cross-sectional view of the system

design for small-turbojet compressor testing is presented in Figure 3.1. As discussed in

Section 2.2, candidate test articles require more power than the drive stand is capable of

delivering at sea level inlet conditions. To collect data at the upper portions of compressor

maps (i.e., higher corrected mass flowrates and pressure ratios), the physical flowrate

through the system is restricted and the compressor inlet pressure is decreased below

ambient. Pipe sizing is done according to ASME standard B31.3-2020, section 304.1.3

[17].

Figure 3.1: Inlet system: (A) inlet screen holder, (B) screen (5x), (C) butterfly valve, (D)
valve actuator, (E) flow diverter, (F) flow barrel, (G) honeycomb (2x), (H) fairing, (I)
seal/damper, and (J) bellmouth/nozzle.

Implementation of a butterfly valve (C) controlled by an electric positioning

actuator (D) accomplishes the required flow suppression by simply blocking the open area

at the inlet to the inlet system. Upstream of the valve is a plate (A) that holds a coarse

screen (B), which acts as a filter for large particles to prevent damage to the valve and

downstream flow conditioning components. The inlet screen holder features a curved

transition from its forward face to the flow path to reduce separation common with abrupt

forward-facing steps.

Downstream of the suppression valve is a flow diverter (E), made from a capped

pipe with 156 holes drilled along the outer diameter and axial face. The net area through

the holes is approximately equal to that of the incoming pipe. The flow diverter’s purpose

is to radially fill the much larger diameter section, referred to as a flow barrel (F), in a short

axial distance. This design functions as a diffuser, dramatically slowing the flow to reduce

turbulence in preparation for the flow conditioning section, which is made up of a series of

fine screens and honeycomb flow straighteners (G). Screens are constrained between

flanges of the flow barrel, and honeycomb is held between the screens.

After the flow is conditioned, another flow barrel section leads to a nozzle (J),

which is calibrated to measure the flow passing through it to the compressor inlet. A plastic

fairing (H), or curved shroud, is located in the corner of the flow barrel to reduce

recirculation which would otherwise form in this region and distort the flow measurement.

Between the nozzle and flow-barrel flange to which it mounts is a rubber seal (I), which

prevents flow leaking around the nozzle and dampens vibration transmitted from the rig.

All components other than the fairing and seal are made of stainless steel.

To confirm the chosen pipe can withstand the external pressure loading, the A factor

is found using the ASME Boiler and Pressure Vessel Code (BPVC) geometric chart for

components under external or compressive loadings [18], shown in Figure 3.2. This is a

non-dimensional value that is a function of the ratio of a pipe’s outer diameter to its

thickness and the ratio of the unstiffened length of pipe to the outer diameter. The 14.5-inch-

long, 16.0-inch-diameter, schedule 10 pipe has a D_O/t value of 85.1 and an L/D_O of

0.9, resulting in an A factor of 0.0015.

Figure 3.2: Geometric chart for components under external pressure or compressive
loadings (for all materials) [18].

Figure 3.3 is used to determine the B factor, which is a function of the geometrical

A factor and the service temperature of the pipe. Because the inlet system does not

experience any heating other than negligible conduction from the rig, room temperature

conditions are expected, resulting in a B factor of approximately 18.9 ksi (130 MPa). Using

Equation 9 [17], the maximum allowable external pressure, P_a, is determined to be 295.4

psi. With an expected minimum internal pressure of 6 psi and ambient external pressure

(assumed to be 14.3 psi for Dayton, OH), a safety factor of 35.6 is determined.

$P_a = \frac{4B}{3(D_O/t)}$ (9)

Figure 3.3: Chart for determining shell thickness of components under external pressure
developed for austenitic stainless steel [18].
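
A short sketch of this check follows; the wall thickness is an assumed nominal schedule 10 value consistent with the D_O/t of 85.1 quoted above, and small differences from the quoted 295.4 psi result from the rounded B factor.

    # Sketch of the external-pressure check (Equation 9).
    D_o = 16.0       # in, pipe outer diameter
    t = 0.188        # in, assumed schedule 10 wall thickness (gives D_o/t of ~85)
    L = 14.5         # in, unstiffened length

    Do_over_t = D_o / t          # with L/D_o ~ 0.9, Figure 3.2 gives A = 0.0015
    B = 18900.0                  # psi, read from Figure 3.3 at room temperature

    P_a = 4.0 * B / (3.0 * Do_over_t)    # Equation 9, maximum allowable external pressure
    dP = 14.3 - 6.0                      # psi, ambient minus minimum internal pressure
    print(P_a, P_a / dP)                 # ~296 psi, safety factor of ~36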

Because the drive gearbox and compressor rig generate heat and the inlet system

does not, thermal growth must be taken into account and sufficient degrees of freedom

between the two must be allowed. The inlet system is attached to the test stand’s mounting

plate with the structure presented in Figure 3.4. Pipe clamps (K) hold the inlet flow barrel

against tangential struts that are mounted to axial tracks. These tracks rest on top of wheels,

forming a trolley system (L) with over 6 inches of travel, providing the axial degree of

freedom. Axial growth of less than 0.120 inches is expected, based on predicted thermal

conditions. Radial alignment of the trolley system is achieved by four jackscrews (M),

which press against the under-side of the trolley system mounting strut. To allow for radial

growth, large clearance holes in the nozzle mounting flange and the compliance of the

seal/damper permit the nozzle to move slightly within the flow barrel opening. To

determine the expected radial deflection in operation, the distance from the output shaft

axis to the test article mounting plate is measured, the drive system turned on and operated,

then the measurement repeated; deflection of less than 0.001” is determined.

Figure 3.4: Inlet system mounting structure: (K) pipe clamps (2x), (L) trolley system, and
(M) radial alignment jack screws (4x).
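
The order of magnitude of this axial growth can be checked with a simple linear-expansion estimate; the heated length and temperature rise used below are hypothetical placeholders, not the rig's predicted thermal conditions.

    # Back-of-envelope sketch: axial thermal growth, delta_L = alpha * L * delta_T.
    alpha = 9.6e-6    # 1/F, approximate expansion coefficient of stainless steel
    L = 30.0          # in, hypothetical heated length
    dT = 200.0        # F, hypothetical temperature rise

    print(alpha * L * dT)   # ~0.06 in, consistent in magnitude with the expected growth of less than 0.120 in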

3.2 Flow Suppression Effects

While flow suppression is beneficial from a power requirement standpoint, doing

so does have a negative effect on compressor operation. As physical flowrate and pressure

decrease under flow suppression at a constant corrected flowrate, so too does the density.

Because physical flowrate decreases proportionally with the change in density, volumetric

flowrate and therefore velocity remain constant. The effect on compressor performance

can be explained by examining the resulting Reynolds number. Equation 10 [19] shows

that the decrease in density, 𝜌, with constant hydraulic diameter (effective diameter if the

inlet were an open circle), 𝑑ℎ, velocity, u, and viscosity, 𝜇, results in a decreased Reynolds

number:

$Re = \frac{\rho u d_h}{\mu}$ (10)

Because the Reynolds number is a measure of the ratio of inertial forces to viscous

forces in a flow [19], a decrease means that viscous forces play a larger role as the inlet is

suppressed. In regards to compressor operation, this means that larger shear stresses will

occur at the walls (shroud, impeller blades, diffuser vanes, etc.), resulting in stronger wakes

and vortices. Because energy is dissipated in such flow structures, compressor efficiency

is expected to decrease. Additionally, wakes and vortices create effective blockages in the

flow field, which will likely affect flowrates and pressure ratios. Complex 3-dimensional

computational fluid dynamics (CFD) is necessary to fully understand these effects within

the compressor, though such analysis was not conducted for initial testing.
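
The effect described by Equation 10 can be illustrated numerically; the inlet velocity, hydraulic diameter, and viscosity below are assumed example values, with density taken from the ideal gas law.

    # Sketch of Equation 10: Reynolds number falls in proportion to inlet density (and pressure)
    # when velocity, hydraulic diameter, and viscosity are held constant by inlet suppression.
    R_air = 53.35          # ft-lbf/(lbm-R)
    T = 518.67             # R, inlet temperature (59 F)
    mu = 1.2e-5            # lbm/(ft-s), approximate viscosity of air at 59 F
    u = 400.0              # ft/s, assumed inlet velocity (constant at constant corrected flow)
    d_h = 0.25             # ft, assumed hydraulic diameter of the compressor inlet

    for P_psi in (14.3, 7.0):                       # ambient vs. suppressed inlet pressure
        rho = P_psi * 144.0 / (R_air * T)           # lbm/ft^3, ideal gas
        Re = rho * u * d_h / mu                     # Equation 10
        print(P_psi, round(rho, 4), round(Re))      # Re drops from ~6.2e5 to ~3.0e5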

3.3 Flow Measurement

To accurately measure flow, conditions upstream of the metering device must be

fully developed, uniform, and free from swirl [20]. These requirements ensure that the

physical situation closely matches the assumptions made in the theory of typical flow

measuring devices; velocity is everywhere parallel to the flow axis and uniform in

magnitude. Though a perfectly uniform, one-dimensional flow field is not practical in

application, measures can be taken to remove swirl and equally distribute the axial velocity

profile; these are called flow conditioning.

Calibration, in addition to flow conditioning, is commonly used to increase the

measurement accuracy. Calibration involves exposing the test measurement device

(including all flow conditioning and straight piping, upstream and downstream) to a known

flowrate to determine the discharge coefficient. This non-dimensional number is the ratio

of the actual measurement to the theoretical, which is then used to calculate in-situ

flowrates [20].

Flow conditioners come in many forms; commonly used methods include tube

bundles, crossed-axial plates, perforated plates, screens, or some combination of them.

Perforated plates and screens are most effective at reducing non-uniformity and distributing

the velocity profile across the diameter of the pipe. Tube bundles and crossed-axial plates

form channels parallel to the axis of the pipe which straighten the flow and reduce swirl

[21]. Flow straighteners, often called honeycomb, can be readily procured with cells of

various shapes, sizes, and materials. Increased solidity and wetted surface area of

conditioning elements results in greater effectiveness, as well as greater viscous forces

resulting in dynamic pressure loss. Balance of these two effects must be chosen in design,

dependent on the application [20].

Because suppression of inlet flow and pressure are intentionally generated with a

valve at the entrance of the inlet system, pressure loss is of little concern for the design

presented. The large diameter of the flow barrel also contributes to the reduction of losses,

as velocity is very low and so too is dynamic pressure. Screen and honeycomb sizing and

positioning is chosen based on best-practice in wind tunnel design from NASA and The

Royal Aeronautical Society [21] [22]. The flow conditioning design is as follows: three

screens, increasing in solidity in the downstream direction, constrain two honeycomb flow

straightener sections, followed by the final, most-solid screen. Screens are spaced

approximately 0.2 diameters apart, with the length of the honeycomb filling the two

upstream inter-screen areas.

Expected velocity at the inlet to the flow conditioning section is 26.9 ft/s and

dynamic pressure is 0.006 psi, based on maximum achievable flowrates of candidate test

articles. Using models from NASA [21], presented in Equations 11 and 12, losses across

each mesh screen and honeycomb, respectively, are estimated.

K_{Screen} = K_{RN} K_{Mesh} \left(1 - \frac{A_{Flow}}{A}\right) + \left(\frac{A}{A_{Flow}} - 1\right)^2    (11)

K_{Honeycomb} = \lambda \left(3 + \frac{L}{D_h}\right) \left(\frac{A}{A_{Flow}}\right)^2 + \left(\frac{A}{A_{Flow}} - 1\right)^2    (12)

The value K is the total pressure loss coefficient, where ∆𝑃𝑇 is the total pressure drop across

the conditioning element, and 𝑞 is the dynamic pressure.

K = \frac{\Delta P_T}{q}    (13)

The Reynolds number sensitivity factor, K_RN, is equal to 1 for Reynolds numbers in the inlet

system presented here. The mesh screen-type loss parameter, K_Mesh, is equal to 1.3 for

average circular metal wire, 1.0 for new metal wire, and 2.1 for silk thread. The surface

roughness, λ, of the honeycomb channels is assumed to be that of a common sheet metal

finish, 20 μin [23]. Honeycomb geometry is defined by length, L, and hydraulic diameter,

D_h. The area of the flow barrel is denoted as A and the flow area through each element as A_Flow.

The resulting losses across each element are calculated for an inlet pressure of 14.3

psi (as is common in Dayton, OH), and confirmation is given that the very small dynamic

pressure has a negligible effect: 0.0030 psi across the first screen, 0.0003 psi across the

first honeycomb, 0.0066 psi across the second screen, 0.0005 psi across the second

honeycomb, 0.0152 psi across the third screen, and 0.0375 psi across the final screen. The

total pressure loss across the entire flow conditioning section is 0.0630 psi, or 0.44%.
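
As an illustration of how Equations 11 through 13 are applied, the short Python sketch below evaluates the loss coefficient and total-pressure drop for one screen and one honeycomb element. The open-area fractions, friction coefficient, and length-to-diameter ratio are placeholder assumptions made for illustration only, not the as-built flow-conditioning geometry, so the printed values only approximate the losses quoted above.

# Illustrative evaluation of Equations 11-13 for one screen and one honeycomb
# element.  Geometry values below are placeholder assumptions, not the
# as-built flow-conditioning design.
q = 0.006               # dynamic pressure in the flow barrel, psi
K_RN = 1.0              # Reynolds-number sensitivity factor (from text)
K_mesh = 1.3            # average circular metal wire (from text)
A = 1.0                 # barrel flow area (normalized)
A_flow_screen = 0.70    # assumed open-area fraction of one screen
A_flow_honey = 0.95     # assumed open-area fraction of the honeycomb
lam = 0.005             # assumed friction coefficient derived from roughness [21]
L_over_Dh = 8.0         # assumed honeycomb length-to-hydraulic-diameter ratio

# Equations 11 and 12: element loss coefficients
K_screen = K_RN * K_mesh * (1 - A_flow_screen / A) + (A / A_flow_screen - 1) ** 2
K_honeycomb = (lam * (3 + L_over_Dh) * (A / A_flow_honey) ** 2
               + (A / A_flow_honey - 1) ** 2)

# Equation 13 rearranged: total-pressure drop of each element
print(f"Screen:    K = {K_screen:.3f}, dP = {K_screen * q:.4f} psi")
print(f"Honeycomb: K = {K_honeycomb:.3f}, dP = {K_honeycomb * q:.4f} psi")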

Downstream of the flow conditioning section, mass flow is measured by a

calibrated nozzle, designed according to ASME PTC 19.5-2004 [20]. This design, shown

in Figure 3.5, is chosen for its simplicity and minimal axial length. This shape, which is

often referred to as a bellmouth, aims to minimize unrecoverable pressure loss. The

reduction in area through the bellmouth accelerates the flow and causes a decrease in static

pressure. By measuring the pressure differential from upstream of the nozzle to the throat,

∆𝑃, and fluid density entering the meter, 𝜌 (calculated from upstream pressure and

temperature), mass flowrate can be calculated using the general Equation 14 [20], where 𝑛

is a unit conversion factor and 𝜖 is the expansion factor, which corrects for effects of

compressibility.

\dot{m} = n \frac{\pi}{4} d^2 C \epsilon \sqrt{\frac{2\rho(\Delta P) g_c}{1 - \beta^4}}    (14)

Nozzle geometry is input with throat diameter, d, and the ratio of throat to pipe diameter,

β. The discharge coefficient, C, is determined experimentally through calibration.
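
To illustrate the use of Equation 14, the following sketch computes a mass flowrate from a measured nozzle pressure differential. The throat diameter, β, discharge coefficient, expansion factor, density, and differential pressure are assumed example values rather than the calibrated rig quantities.

import math

# Illustrative use of Equation 14 (ASME PTC 19.5 flow nozzle).  All inputs are
# assumed example values, not the calibrated rig quantities.
g_c = 32.174 * 12.0      # unit conversion constant, lbm-in/(lbf-s^2)
d = 6.0                  # assumed throat diameter, in
beta = 0.5               # assumed throat-to-pipe diameter ratio
C = 0.99                 # assumed discharge coefficient (from calibration)
eps = 0.998              # assumed expansion factor
rho = 4.3e-5             # assumed upstream air density, lbm/in^3
dP = 0.5                 # assumed measured pressure differential, psi
n = 1.0                  # unit conversion factor (unity with consistent units)

mdot = (n * math.pi / 4.0 * d**2 * C * eps
        * math.sqrt(2.0 * rho * dP * g_c / (1.0 - beta**4)))   # lbm/s
print(f"Mass flowrate: {mdot:.2f} lbm/s")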

Figure 3.5: ASME PTC 19.5-2004 low-β nozzle [20].

CHAPTER 4

MECHANICAL SYSTEMS

4.1 Mechanical Design

Transfer of power from the dynamometer and gearbox to the impeller is

accomplished with a mechanical system consisting of a shaft and bearings. Bearings allow

the shaft to rotate while maintaining its position to minimize deflection and unbalance (i.e.,

mass offset at a distance, given in units of g-mm or lbm-in), which result in vibration. Many

types of bearings exist, including ball bearings, roller bearings, journal bearings, magnetic

bearings, and fluid bearings. To provide the ability to actively tune bearing stiffness and to

explore a relatively new technology for AFRL, fluid bearings are chosen for this design.

Specifically, air bearings, i.e., bearings lubricated with pressurized air, are employed; these

are discussed in Section 4.2.

Two radial bearings prevent the rotor from deviating from its axis. Meanwhile, two

thrust bearings maintain the axial position of the shaft relative to the static housing, and

react the thrust load acting on the rotor. This load is the result of the pressure differential

across the impeller; leakage around the outer radius causes an elevated pressure acting on

the back-face, while the front of the impeller experiences a pressure gradient with slightly

below-ambient pressures in the inlet. The net sum of these pressures results in a forward-

acting thrust on the impeller and therefore on the entire rotor. To fully utilize the

capabilities of the test rig, the bearing system is designed for speeds and loads exceeding

those for which the dynamometer and gearbox are rated (see Section 2.1). Balancing of the

rotor is required to minimize residual unbalance in components and the assembled system,

which are the result of minor imperfections in fabrication. Available at AFRL facilities are

balancing machines, which measure the unbalance and inform the operator of the location

to remove material. Balancing is performed according to ISO 21940: 2016 [24], to the

recommended grade for compressors of G 2.5; this number is a measure of the rotating

unbalance, relative to the mass of the rotor – 2.5 g-mm/s/g or mm/s, equal to 0.098 in/s.

The mechanical system design is presented in Figure 4.1. The primary static

component in this system is the bearing housing, which rigidly attaches the compressor rig

to the dynamometer gearbox flange, reacts the torque load, and positions and retains the

bearings. For assembly purposes, the housing is broken into separate components (G and

S). To allow the assembled rotor to be balanced without disassembly, the larger-diameter

mounting flange (R) is also separate, which allows the assembly to pass through the

mounting face of the test article. The two bearing housing components encase the thrust

bearings (K) and the thrust disk of the main shaft (Y). Sixteen dowel pins (component W;

eight equally spaced around the axial face of each bearing) position the thrust bearings and

prevent rotation. Wave springs (V) are installed around the dowel pins to provide preload

to the thrust bearings, which ensures contact with the button load cells (J) in the static

condition. These eight load cells, positioned at locations between dowel pins on the front

thrust bearing axial face, provide a measurement of the thrust load on the rotor that is

transferred to the bearing. Two radial bearings (H) are also contained in the bearing

housings, located near the ends of the main shaft.

Figure 4.1: Mechanical systems: (A) impeller nut, (B) hex lug, (C) impeller, (D) shaft insert,
(E) retaining ring (3x), (F) honeycomb seal (3x), (G) front bearing housing, (H) radial air
bearing (2x), (I) radial air bearing supply fitting (2x), (J) button load cell (8x), (K) thrust air
bearing (2x), (L) thrust air bearing supply fitting (2x), (M) shaft insert bolt, (N) thrust piston
supply fitting (4x), (O) thrust piston, (P) thrust piston nut, (Q) spline coupling, (R) bearing
housing flange, (S) aft bearing housing, (T) thrust piston seal spacer, (U) radial bearing cavity
vent (2x), (V) wave spring (8x), (W) dowel pin (16x), (X) thrust bearing cavity vent, (Y) main
shaft, and (Z) front radial bearing vent plug.

To allow for a range of test articles with varying thrust loads and reduce risk for

initial implementation, a thrust piston (O) is attached to the shaft to react the axial load

generated by the pressure differential across the impeller. The opposing force is achieved

by pressurizing the cavity on the forward face of the thrust piston. Seals with honeycomb

shaped elements (F) oriented radially, constrained by retaining rings (E) contact the outer

rim of the thrust piston and shaft insert, which features radial blades to reduce contact area

and friction. These features create a barrier for air leaking from the cavity, as well as a

labyrinth to cause a pressure drop. Sealing and venting is required to ensure the relative

pressure on each side of the air bearings is minimized, which is a requirement for their

functionality. For the same reason, an additional seal is located in front of the forward

radial bearing, to isolate it from the pressure of the impeller back-face cavity. Venting of

the thrust disk cavity is done through the seal elements, radial slot around the seal, radially

drilled hole, and a fitting (U). To vent the cavity between the front bearing and seal, another

slot is located around the seal. An axial hole is drilled and sealed with a plug (Z) to access

another radial hole and vent fitting. The thrust piston is retained on the shaft by a custom

nut (P). A spacer (T) allows axial clearance for radially inserted proximity sensors, which

are focused on the thrust piston nut.

For misalignment capability, the main shaft is not directly coupled to the gearbox

interface, which is a male straight-splined shaft. Instead, a female splined coupling (Q) is

included, which connects the male crowned-spline portion of the main shaft to the gearbox

shaft. On this coupling, a region of smaller diameter acts as an intentional stress

concentration to prevent damage to other, more expensive components (i.e., gearbox shaft,

gears, rig shaft, bearings, etc.). To allow unmodified impellers from various engines to be

tested with this rig, the front portion of the shaft is made to match engine hardware. The

shaft insert (D) is made specific to each test article, and allows the main shaft to be used

for all impellers in the relevant size range. Section 4.3 provides further detail on the design

and analysis of the shaft components.

For performance characterization as well as health monitoring purposes,

instrumentation sampled at a high rate (up to 60 kHz) is mounted to the bearing housing to

actively monitor shaft deflection, vibration, and speed. Sets of two proximity sensors,

positioned 90 degrees apart, provide a measurement of shaft deflection at two axial

locations. Measuring the deflection on perpendicular planes allows for analysis of the shaft

orbit (radial deviation from the axis). Two axial locations are measured to identify the mode

shape of the shaft’s orbit, which requires a large enough span to accurately resolve. To

satisfy this requirement without increasing the length of the shaft, the most accessible

locations are chosen – the central thrust disk and the spline coupling. Vibration is measured

using accelerometers, which are rigidly attached to the outside of the bearing housing.

Speed is measured by another proximity sensor, also positioned on the thrust disk, but at a

different axial location. Here, a small groove is machined into the thrust disk to produce an

obvious change in signal as it passes by the proximity sensor, thus providing a once-per-

revolution signal.

4.2 Bearings

Air bearings are a type of fluid-film lubricated, non-contact bearing which

use a thin film of pressurized gas as the lubricant. Two types of air bearings exist: those

that rely on pressurization of the air surrounding the journal by viscous forces from

rotation, called aerodynamic bearings – see foil bearings [25], which are commonly used

in turbomachinery for example – and those that use an external pressure source, called

aerostatic bearings. Two general types of aerostatic bearings can be found: orifice-fed,

which feature one or multiple orifices of a certain shape that provide the supply of air to

the space between the journal and the bearing face, and porous media. As the name

suggests, these bearings make use of a porous material at the bearing face supplied from a

housing, which acts as many small orifices and provides a uniform pressure profile, as

shown in Figure 4.2.

Figure 4.2: Common aerostatic bearing configurations and pressure profiles [26].

Because the system described here is a ground-based test facility with neither

weight nor spatial restrictions, aerostatic bearings are employed. By doing so, the load

capacity and stiffness of the bearing can be actively varied to tune the system for a range

of load and rotordynamic requirements. An additional benefit of this system is that the need

for a scavenge pump, oil reservoir, heat exchanger, and any other components found in

traditional liquid-lubricated bearing systems is eliminated. Bearing loads are calculated to

properly size the bearings and determine the appropriate supply pressure. Flow is assumed

to be choked at the bearing face, due to the flow area reduction through the porous graphite.

Supply pressure is calculated by simply dividing the load (force) by the loaded area and

multiplying by 1.893, which is the inverse of the choked-flow critical pressure ratio of air,

calculated by Equation 15 [19], where k=1.4.

\frac{p^*}{p_0} = \left(\frac{2}{k+1}\right)^{k/(k-1)}    (15)

From initial test article analysis, a maximum thrust load of 330 lbf is

expected in transient conditions (i.e., when accelerating to a condition, prior to adjusting

the thrust piston supply pressure), which is present only in the front thrust bearing. The

thrust bearing, preloaded with 30 lbf, which has an outer diameter of 2.75 in and inner

diameter of 1.04 in and resulting surface area of 5.09 in², is calculated to require a gap

pressure of 70.7 psig and supply pressure of 133.9 psig. These are gauge pressures relative

to the static pressure surrounding the bearing, which is vented to atmospheric pressure.
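
A minimal sketch of this sizing calculation is given below; it reproduces the critical pressure ratio of Equation 15 and the gap and supply pressures quoted above under the same choked-flow assumption.

import math

# Thrust-bearing supply-pressure sizing (Equation 15), assuming choked flow
# through the porous bearing face.  Values follow the text above.
k = 1.4                                  # ratio of specific heats for air
crit_ratio = (2.0 / (k + 1.0)) ** (k / (k - 1.0))     # p*/p0, approximately 0.528
thrust_load = 330.0 + 30.0               # transient thrust plus preload, lbf
OD, ID = 2.75, 1.04                      # thrust bearing outer/inner diameters, in
area = math.pi / 4.0 * (OD**2 - ID**2)   # loaded face area, approximately 5.09 in^2

gap_pressure = thrust_load / area              # approximately 70.7 psig
supply_pressure = gap_pressure / crit_ratio    # approximately 133.9 psig
print(f"Critical pressure ratio: {crit_ratio:.3f} (inverse {1 / crit_ratio:.3f})")
print(f"Gap pressure:    {gap_pressure:.1f} psig")
print(f"Supply pressure: {supply_pressure:.1f} psig")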

Radial bearing loading is the sum of static load (i.e., the weight of the rotor) and

the dynamic load (i.e., the centrifugal force from the rotor’s unbalance at an angular

velocity). The rotor mass m is estimated from the model to be 4.2 lbm for the initial test

article, which features an aluminum impeller. For conservatism, the rotor mass is increased

to 5.5 lbm to adjust for possible stainless-steel or other higher-density/higher-temperature-

capable material impellers. A speed N of 100 krpm (maximum allowable for gearbox and

dynamometer) is assumed. Using Equation 16 [24], the unbalance is calculated to be

0.00208 oz-in.

U = \frac{G m}{\omega} = \frac{30 G m}{\pi N}    (16)

With the known unbalance, radial dynamic load is calculated using Equation 17 [24], which

is the centrifugal force due to the rotating offset-mass. This is calculated to be 37.0 lbf total,

or 18.5 lbf per bearing.

F = U\omega^2 = U\frac{\pi^2}{900}N^2    (17)

The sum of the static and dynamic loads on each bearing is calculated to be 21.3 lbf. This

load is acting on the projected surface of the bearing, shown in Figure 4.3, which is the

shaft diameter, 1.00 inch times its span in the bearing, 1.25 inches. The load divided by the

resulting area, 1.25 in², is found to require a minimum gap pressure of 17.0 psig and

minimum supply pressure of 32.2 psig. To stiffen the bearing and provide margin for load

capacity, the actual supply pressure will be greater than this minimum value.
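
The radial-bearing sizing above can be summarized in a few lines; the sketch below applies Equation 17 to the unbalance value quoted in the text and recovers the approximate per-bearing load and minimum pressures. The unit handling shown is one straightforward choice made for illustration and is not necessarily the exact worksheet used for the design.

import math

# Radial-bearing load and minimum supply pressure, following Equation 17 and
# the values quoted above.  Unit conversions are illustrative.
g_c = 32.174                        # lbm-ft/(lbf-s^2)
N = 100_000.0                       # shaft speed, rpm
omega = math.pi * N / 30.0          # angular velocity, rad/s
U = 0.00208 / 16.0 / 12.0           # unbalance from text: oz-in converted to lbm-ft

F_dyn_total = U * omega**2 / g_c    # Equation 17 with g_c, approximately 37 lbf
rotor_weight = 5.5                  # conservative rotor weight, lbf
load_per_brg = F_dyn_total / 2.0 + rotor_weight / 2.0   # approximately 21 lbf

proj_area = 1.00 * 1.25             # shaft diameter times bearing span, in^2
gap_p = load_per_brg / proj_area    # minimum gap pressure, approximately 17 psig
supply_p = gap_p * 1.893            # minimum supply pressure, approximately 32 psig
print(f"Dynamic load: {F_dyn_total:.1f} lbf, per-bearing load: {load_per_brg:.1f} lbf")
print(f"Minimum gap / supply pressure: {gap_p:.1f} / {supply_p:.1f} psig")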

Figure 4.3: Radial bearing loaded area.

A simple gas control system is required to regulate the pressure and flow to the

bearings to ensure functionality. As noted in Section 2.1, facility compressed air is offered

in the test cell at 125 psi, and nitrogen (N2) bottles are available to serve as a backup source

for the bearing. The facility system consists of a filter, compressor, tank, dryer, and

pressure regulator. The nitrogen backup system features a K-size gas cylinder (commonly

referred to as a K-bottle), pressure regulator, and adjustable check valve which allows flow

to start only when the supply line to the rig pressure falls below a minimum to allow safe

shutdown. The higher-pressure nitrogen supply will also be used to supply the front thrust

bearing, if it is determined that facility air is insufficient during initial checkout testing.

Because supply piping may have rust, an additional filter is added just upstream of the rig

control system. Downstream of the filter, a manifold supplies five electrically-controlled

pressure regulators to enable individual remote control of each bearing as well as the thrust

piston. A diagram of the system is shown in Figure 4.4:

Figure 4.4: Mechanical systems gas supply system diagram.

An important consideration in all bearing system designs is heat management. High

relative velocities between rotating and static components results in shearing of adjacent

layers of the lubricating fluid – liquid or gas – which irreversibly converts the kinetic

energy into internal energy in the form of heat. This process is called viscous dissipation

and is highly dependent on the velocity of the fluid and its viscosity. The heating that results

can be shown to be the sum of two terms: the rate of viscous dissipation due to the

contribution of the density variation, and the rate of viscous dissipation for an

incompressible fluid. A fluid’s compressibility and coefficient of thermal expansion (CTE)

determine how its density varies with pressure and temperature, and therefore directly

affect the viscous heating [27].

Heat transfer from the fluid to the bearing and rotor results in temperature rise in

the material and subsequent thermal growth. Because fluid bearings rely on a thin film of

working fluid, pressurized in the gap between bearing and rotor, gap thickness is a key

parameter for reliable functionality. Larger thermal growth of rotating components relative

to static ones results in gap closure, which in turn results in a fluid velocity increase and

further viscous dissipation, eventually causing bearing contact and damage [28].

Clearly, the working fluid has a significant effect on the extent of heat generated.

In general, liquids are significantly more viscous than gasses, far less compressible (often

considered incompressible), and have lower coefficients of thermal expansion [27]. These

properties result in liquids experiencing greater viscous dissipation and heating than gasses,

though at high enough speeds, it is still an important consideration for gas-lubricated

bearings. Aerostatic bearings, because they are externally pressurized, allow for maintained

gap thickness by simply increasing the supply pressure. Continuous supply of air to the

bearing and venting allows heat generated by viscous dissipation to be carried away [28].

Further mitigation of gap closure can be accomplished with use of low-expansion

materials.

To estimate the heat generated in the bearing gap and understand the temperatures

therein, analysis is conducted using Equation 18, the first law of thermodynamics. Here,

∆𝑈 is the change in internal energy, W is the work required by the flow to react the load

(equal to pressure times volume, PV), and Q is the heat generated. By definition, viscous

heating is the irreversible conversion of kinetic energy to internal energy [27], therefore it

is equal to the change in kinetic energy, ∆𝐾𝐸, due to the shearing of the air in the gap:

\Delta U = Q + W \rightarrow Q = \Delta KE_{shear} + W_{bearing}    (18)

To simplify calculation of change in kinetic energy due to shear, assumptions are

made to reduce the model to a 2-D analysis. For the radial bearing, fluid velocity in the

gap, with thickness t, will be assumed to be only tangential, neglecting the radial-incoming

and axial-exiting components. Fluid velocity at the outer diameter, d of the shaft (y=0),

rotating at angular velocity ω, is assumed to be equal to the linear velocity, v, and zero at

the bearing surface (y=t), which is illustrated in Figure 4.5.

Figure 4.5: Radial bearing air gap shear.

The shear stress, τ, in a Newtonian fluid (such as air) with dynamic viscosity μ is obtained

from Equation 19 [19], where dv_x/dy is the velocity gradient:

\tau = \mu \frac{dv_x}{dy}    (19)

The resulting velocity gradient and shear stress for the simplified system, substituting

angular velocity for linear velocity, become:

\frac{dv_x}{dy} = \frac{v_x - 0}{t} = \frac{v_x}{t} = \frac{\omega r}{t} = \frac{\omega d}{2t}    (20)

\tau = \mu \frac{\omega d}{2t}    (21)

Equations 22, 23, and 24 are used to calculate the force, torque, and power (or time rate of

change of kinetic energy in the gap) required to shear the air in the gap, respectively.

F = \tau A = \left(\mu \frac{\omega d}{2t}\right)(\pi d L) = \mu \frac{\omega \pi d^2 L}{2t}    (22)

T = F r = \frac{F d}{2} = \mu \frac{\omega \pi d^3 L}{4t}    (23)

P = \omega T = \mu \frac{\omega^2 \pi d^3 L}{4t}    (24)

Knowing the load and flowrate through the bearing (estimated by the supplier to be

23.6 ft³/hr, yielding a mass flowrate of 38.6 lbm/hr at 120 psi and 70 °F inlet), bearing work

can be calculated. With work and change in kinetic energy known, the flow heat transfer

equation [29], Equation 25 can be used to calculate the temperature rise. At the temperature

specified, the specific heat of air, 𝑐𝑝 is 187.2 lbf-ft/lbm-R [3], which is assumed to be

constant.

\dot{Q} = m c_p \frac{dT}{dt} = \dot{m} c_p \Delta T \rightarrow \Delta T = \frac{\dot{Q}}{\dot{m} c_p}    (25)

For three reasons, iterative solving must be done to determine the final temperature

with the assumptions made: 1) as the gap temperature increases, specific volume increases

and along with it the volumetric flowrate, changing the amount of bearing work, 2) thermal

growth in the radial bearing leads to gap closure and increased viscous heating and

therefore change in kinetic energy, and 3) viscosity of the air increases with temperature,

increasing the shear loading. The solving process is as follows: a gap temperature guess is

made, temperature rise computed using the equations above, thermal growth and resulting

gap thickness calculated, updated gap temperature input as the guess, then the process is

repeated until the calculated temperature matches the guess. At the max allowable speed

and initial bearing gap thickness of 0.75 mils (1 mil = 0.001 in), shear work of 52.2 ft-lbf/s

(1 hp = 550 ft-lbf/s, 1 ft-lb/s = 4.626 btu/hr) is calculated, bearing work is 37.2 ft-lbf/s, and

gap temperature of 189.1 °F results.
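
The fixed-point iteration described above can be sketched as follows. The viscosity model, differential-growth coefficient, and flow properties used here are simplified assumptions made for illustration, so the converged values will not reproduce the design-point numbers quoted above exactly; the sketch only shows the structure of the iteration.

import math

# Illustrative fixed-point iteration for radial-bearing gap temperature
# (Equations 18, 24, and 25).  Property and growth models are assumptions.
N = 100_000.0                       # rpm
omega = math.pi * N / 30.0          # rad/s
d, L = 1.00 / 12.0, 1.25 / 12.0     # journal diameter and span, ft
t_cold = 0.00075 / 12.0             # initial gap thickness, ft (0.75 mil)
mdot = 38.6 / 3600.0                # bearing flow from text, lbm/s
cp = 187.2                          # specific heat of air, lbf-ft/(lbm-R)
W_bearing = 37.2                    # bearing (flow) work from text, lbf-ft/s
T_in = 70.0                         # supply temperature, F
closure_per_F = 2.0e-6 / 12.0       # assumed differential thermal growth, ft/F

def viscosity(T_F):
    # Sutherland's law for air (assumed property model), lbf-s/ft^2
    T_R = T_F + 459.67
    return 2.27e-8 * T_R**1.5 / (T_R + 198.6)

T_gap = T_in                        # initial guess
for _ in range(50):
    t = max(t_cold - closure_per_F * (T_gap - T_in), 0.0001 / 12.0)
    P_shear = viscosity(T_gap) * omega**2 * math.pi * d**3 * L / (4.0 * t)  # Eq. 24
    Q_dot = P_shear + W_bearing                                             # Eq. 18
    T_new = T_in + Q_dot / (mdot * cp)                                      # Eq. 25
    if abs(T_new - T_gap) < 0.01:
        break
    T_gap = T_new
print(f"Converged gap temperature: {T_gap:.1f} F, shear work: {P_shear:.1f} lbf-ft/s")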

For the thrust bearing, similar simplification is made for a 2-D analysis by

neglecting the radial gradient of tangential velocity. All radial locations on the face of the thrust

disk are considered to have velocity magnitude vc, which is equal to that of the radial

location at which equal area is located inside and outside, called the centroid and denoted

by diameter dc in Figure 4.6.

Figure 4.6: Thrust bearing air gap shear.

With the change in orientation of shear forces and constant-radial centroid-velocity

assumption, and again assuming a linear gradient from the rotating shaft surface (v(z=0) =

𝑣𝑐 ) to the static bearing surface (v(z=t) = 0), Equations 19, 22, 23, and 24 become:

\tau = \mu \frac{dv_y}{dz} = \mu \frac{v_y}{t} = \mu \frac{v_c}{t} = \mu \frac{\omega d_c}{2t}    (26)

F = \tau A = \left(\mu \frac{\omega d_c}{2t}\right)\left(2\,\frac{\pi d_c^2}{4}\right) = \mu \frac{\omega \pi d_c^3}{4t}    (27)

T_c = F r_c = \frac{F d_c}{2} = \mu \frac{\omega \pi d_c^4}{8t}    (28)

P = \omega T_c = \mu \frac{\omega^2 \pi d_c^4}{8t}    (29)

The bearing gap thickness is assumed to be constant for the thrust bearing, as pressure will

be controlled and thermal growth is allowed via the preload springs. Preload will therefore

change slightly with thermal growth, but this is considered negligible for the thermal

analysis. Without thrust load mitigation, the front thrust bearing load of 360 lbf results in

shear work and bearing work of 169.0 ft-lbf/s and 319.1 ft-lbf/s respectively. Again, using

Equation 25 to calculate temperature rise and iterating, the thrust bearing gap temperature

is found to be 571.8 °F. This is understood as a worst case, as thrust load mitigation will

be actively managed while increasing speed. With thrust load controlled by the thrust

piston to maintain 100 lbf, the shear work and bearing work are 122.4 ft-lbf/s and 61.8 ft-lbf/s,

respectively, resulting in a gap temperature of 259.4 °F.
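
For the thrust bearing, Equations 26 through 29 reduce to a single shear-power evaluation at the centroid diameter, sketched below. The gap thickness and viscosity are assumed illustration values, and because the result scales inversely with the gap, it will not match the quoted design values unless the actual design gap is used.

import math

# Thrust-bearing shear power at the centroid diameter (Equation 29).
# Gap thickness and viscosity are assumed illustration values.
N = 100_000.0
omega = math.pi * N / 30.0            # rad/s
OD, ID = 2.75 / 12.0, 1.04 / 12.0     # bearing face outer/inner diameters, ft
d_c = math.sqrt((OD**2 + ID**2) / 2)  # centroid diameter: equal area inside and outside
t = 0.0012 / 12.0                     # assumed gap thickness, ft
mu = 4.4e-7                           # assumed hot-air viscosity, lbf-s/ft^2

P_shear = mu * omega**2 * math.pi * d_c**4 / (8.0 * t)   # lbf-ft/s
print(f"Centroid diameter: {d_c * 12:.2f} in, shear power: {P_shear:.1f} lbf-ft/s")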

4.3 Shaft

Invar 36 is a low-expansion nickel-iron alloy that has a CTE of

approximately one-tenth that of carbon steel up to 400 °F [30]. This material is chosen

for the shaft to minimize bearing-gap closure due to viscous heating and allow higher-

speed operation. While beneficial at the bearing-shaft interface, low thermal-expansion of

the shaft at the impeller interface must be managed to prevent the impeller from growing

off of the shaft under thermal and centrifugal loads. To ensure concentricity, the centering

features of the components are ideally assembled with an interference fit, meaning the male

feature has a larger outer diameter than the female feature’s inner diameter. Starting with

sufficient interference ensures contact between the two (and concentricity assuming the

circular components grow uniformly) even as thermal and centrifugal loads pull them

outward. Interference fits require either a press (mechanical force is used to press the two

together) or shrink (bore is heated and/or shaft cooled to create clearance for assembly)

operation, which are much more intensive processes than simply assembling by hand as

with transitional or slip fits. For this reason, the male centering feature is chosen to be on

the component with greater CTE, to ensure the fit only becomes tighter with temperature

and thermal growth.

To allow testing of unmodified engine-relevant impellers and still realize the

benefit for bearing system design, a two-part shaft is designed; the bearing section (referred

to as the main shaft; component Y in Figure 4.1) is made of Invar 36 and is common for

all test articles, whereas the impeller section (referred to as the shaft insert; component D

in Figure 4.1) is made from steel and is specific to the test article. The geometry of the

impeller interface portion of the shaft inserts matches that of the specific-engine shaft,

while the main-shaft interface is common for all test articles. As shown in Figure 4.7, the

main-shaft-interface of the shaft insert features a male hex-shaped feature, which

corresponds to a female hex feature on the main shaft and prevents relative rotation.

Radially, the two shafts are aligned (concentricity sustained) by a tight-fitting bore in the

main shaft and corresponding centering feature on the shaft insert. Axially, the two are held

together by a bolt (component M in Figure 4.1).

Figure 4.7: Shaft insert.

To determine the acceptable cold clearance for the shaft-insert centering feature,

finite element analysis (FEA) is conducted, using a built-in solver for SolidWorks. The

modeled gradient in the impeller is produced by prescribing temperatures of three surfaces

along the flow path, increasing generally with the known pressure gradient based on results

from a CFD analysis. At the front of the impeller, the air is at ambient conditions and thus

material temperature is a minimum, whereas at the radial exit, static pressures are equal to

that of the compressor discharge and therefore material temperature is a maximum.

Because the impeller could be made of aluminum, which has a high thermal conductivity

(about 3 times that of steel, 200 times that of liquid water, and 5000 times that of air [3]

[30] [31]), the material temperatures are assumed to be close to the gas temperatures found

in the CFD analysis.

All regions in contact by design – the impeller bore and shaft insert centering

feature, impeller aft face and shaft insert forward face, and shaft insert aft face and main

shaft forward face – are set in the model to be thermally bonded, meaning the surfaces in

contact are at the same temperature. Though the shaft insert and main shaft are not

necessarily in contact, thermal bonding is also imposed there. This is determined to be an

acceptable simplification through experimentation – initially, iterations varying the thermal

resistance of the air between the components were made until the temperatures and

resistance converged. However, because of the small gap size, this method only causes a

change on the order of 1 °F.

Temperatures found in the analysis of the impeller and bearing gaps are input into

the FEA model to determine the temperature at the main shaft-shaft insert interface. The

nominal case does not include elevated transient-thrust-load temperatures, only the 100 lbf

thrust load temperatures presented in Section 4.2. The main shaft bearing journal

temperature is assumed to be equal to the calculated bearing gap temperature for simplicity

of the model. Because the interface is located between the impeller and the front radial and

thrust bearings, no other thermal loads are considered, because it is assumed these are the

dominant contributors. Results of the nominal temperature case are presented in Figure 4.8.

It is evident from this plot that the front bearing temperature has the greatest effect on shaft-

interface temperature, as 80% of the length of the centering feature is within 20 °F of the

prescribed temperature at the surface of 189 °F. The maximum interface temperature is 228

°F and the minimum is 191 °F. The average temperature at the interface is 200 °F on the

inner diameter of the main shaft and 203 °F at the outer diameter of the shaft insert, or 14

°F greater than the bearing temperature.

Figure 4.8: Finite-element analysis of main shaft and shaft insert in the nominal thermal case.

From the thermal model results, it is concluded: 1) small clearance between

components results in the shaft insert outer diameter and main shaft inner diameter reaching

virtually the same temperature, 2) high-speed operation will result in elevated bearing

temperatures due to viscous heating, and therefore provide thermal growth at the shaft

interface, and 3) because the main shaft is made of Invar 36, which has a lower CTE than

the steel shaft insert, the fit between the two will always become tighter with temperature

and therefore with speed. Low-speed operation – when temperature rise is minimal –

requires less interference because centrifugal loads are less, though contact is still desired

for mitigation of rotor unbalance. The fit condition between the parts is plotted in Figure

4.9 as a function of cold clearance and temperature at the interface, assuming the parts are

at the same temperature.
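
The fit behavior plotted in Figure 4.9 follows from a simple differential thermal-expansion calculation, sketched below. The interface diameter and CTE values are representative handbook assumptions rather than certified material data, so the crossover clearance it predicts will differ somewhat from the design value reported for this interface.

# Hot diametral clearance at the main shaft / shaft insert interface as a
# function of cold clearance and interface temperature.  The diameter and CTE
# values are representative handbook assumptions, not certified material data.
d_nom = 1.0            # assumed nominal interface diameter, in
cte_invar = 0.8e-6     # Invar 36 (female main-shaft bore), in/in-F
cte_steel = 6.5e-6     # steel shaft insert (male centering feature), in/in-F
T_ref = 70.0           # assembly (cold) temperature, F

def hot_clearance(cold_clearance_mils, T_interface_F):
    # Negative values indicate interference (contact is ensured).
    dT = T_interface_F - T_ref
    growth_female = d_nom * cte_invar * dT
    growth_male = d_nom * cte_steel * dT
    return cold_clearance_mils / 1000.0 + growth_female - growth_male   # in

for cold in (0.0, 0.25, 0.5):   # cold clearances, mils
    hc = hot_clearance(cold, 228.0) * 1000.0
    print(f"cold clearance {cold:.2f} mil -> hot clearance at 228 F: {hc:+.2f} mil")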

Figure 4.9: Shaft fit with cold clearance and interface temperature.

The black line denotes the conditions at which the two components have the same

diameter (referred to as line-to-line). To the left of this line, hot clearance values are

negative, meaning there is interference and contact is ensured. To the right, hot clearance

values are positive, meaning there is clearance and complete circumferential contact is not

present. The maximum interface temperature is plotted and maximum cold clearance at

which contact is present is determined to be 0.28 mils. Until bearing viscous heating and

conduction through the impeller begin to heat the interface, cold clearance will remain, so

little margin exists for conditions less than the maximum. For this reason, a nominal cold

line-to-line fit is required to ensure contact at low speeds or transient conditions.

4.4 Rotordynamics Analysis

Rotordynamics is the study of rotating machinery dynamics and is a critical aspect

to design. Operating at speeds where natural dynamics are excited can lead to excessive

displacements, vibration, and critical failures. Proper placement of bearings with

appropriate stiffness is critical to minimizing the effect of rotordynamics. Predetermining

critical speeds and the level at which the rotor will respond is also important, so that

dwelling at those speeds can be avoided [32]. Commonly, in turbomachinery design and

operation, critical speeds within the operating range are intentionally “placed” at a speed

that can be avoided, such as one at the low end of the operating range, through which the rotor

can be quickly accelerated. Analysis of the mechanical systems design presented in Section 4.1 is

conducted by AFRL’s Dr. Daniel Gillaugh using DyRoBeS (Dynamics of Rotor-Bearing

Systems) rotordynamic software, created by Rodyn Vibration Analysis, Inc. The model

input into the software is shown in Figure 4.10, which also includes the gearbox high-speed

output shaft and the initial candidate impeller. This impeller is made of aluminum and has

a design speed of approximately 60,000 rpm.

Figure 4.10: Rotordynamic model of rotor system.

Results of the rotordynamic analysis are presented in Table 1. Three rotordynamic

modes are found within the operating capability of the drive stand. The critical rotating

speeds at which these modes are excited are listed along with the strain energy, which is a

measure of the potential energy stored in the material as it elastically deforms. Strain

energy of the mechanical system is distributed between shafts, bearings, and support

structures. Traditionally, strain energy of a component is presented in terms of a percentage

of the total in the system. Mode shapes are plotted in Figure 4.11, Figure 4.12, and Figure

4.13, classified by the amount of strain energy and where it is focused. A common criterion

to ensure an acceptable design from a rotordynamic perspective is to keep the strain energy

in the shaft of the system below a specified level for any given mode in the operating

range. If the strain energy is above this specified percentage, then it is classified as a

bending mode. If the majority of the strain energy is found at the bearing stations, with

limited strain energy in the shaft, then it is classified as a rigid body mode.

Table 1: Rotordynamics analysis results.

Figure 4.11: Mode 1 – Rigid-body pitch mode, 19,745 rpm, 26% strain energy.

Figure 4.12: Mode 2 – Rigid-body bounce mode, 42,505 rpm, 20% strain energy.

Figure 4.13: Mode 3 – 1st bend mode, 92,257 rpm, 85% strain energy.

Because of the damping that bearings provide, critical speeds of a rotor vary with

the stiffness, or the amount of force required to radially deflect the bearing. In this way,

modes can be intentionally set for a certain speed to be avoided [33]. Implementation of

air bearings provides the ability to vary this value by simply adjusting the pressure being

supplied. Bearing stiffness of the particular air bearings being used at 60 psi supply

pressure is provided by the supplier, which is input to the rotordynamic model. The planned

bearing pressure is twice this value; however, data is not available for the bearing stiffness

for this elevated supply pressure and will require experimentation to validate the model. A

useful tool for testing is the critical speed map, which gives the critical speeds of the three

modes as a function of bearing stiffness and is presented in Figure 4.14. Typically, critical

speeds are desired to be 10-20% outside of the operating range [33]. However, the severity

of the mode is investigated with a forced response analysis to determine actual bearing

loads, rotor deflections, and stability. These parameters are compared to component

capabilities and clearances to determine whether a critical speed is detrimental to the

system or is acceptable.

Figure 4.14: Critical speed map.

Bearing loads are plotted vs. rotational speed for the front bearing and aft bearing

in Figure 4.15 and Figure 4.16, respectively. A maximum load of 11.09 lbf is found in the

front bearing and 3.85 lbf in the aft bearing, both excited near the 1st bend mode frequency

at 94,400 rpm and 97,900 rpm, respectively. At the design supply pressure, this gives a

factor of safety of 5.9 for radial load capacity of the bearings. At all speeds within the

operating range of the initial candidate test article, the factor of safety is a minimum of

24.0.

Figure 4.15: Front bearing load vs. rotational speed.

Figure 4.16: Aft bearing load vs. rotational speed.

Radial loads can be difficult to measure accurately due to the lack of space to mount

accelerometers directly to the bearing. Radial deflections, however, can be measured with

implementation of proximity sensors focused on the rotor, as discussed in Section 4.1.

Multiple axial locations, measuring at perpendicular planes, provide indication of the orbit

of the rotor, which can be used to validate the rotordynamic model. Locations shown in

Figure 4.17, at the front of the thrust disk and at the thrust piston nut are chosen to provide

an axial span between sensors, which is required to resolve mode shapes, while managing

spatial limitations due to the relatively small size of the rig. These locations are expected

to provide sufficient indication of rotordynamic modes based on the deflections plotted in

Figure 4.18 vs. rotational speed. Peak deflections due to the 1st bend mode at 93,100 and

94,000 rpm of 0.21 mils and 0.13 mils are found at the spline coupling near the aft

proximity sensor, and at the central thrust disk at the front proximity sensor, respectively.

Figure 4.17: Rotor proximity sensor placement.

Figure 4.18: Rotor deflections vs. rotational speed.

CHAPTER 5

DISCHARGE VALVE

5.1 Operating Principle

A valve is designed to adapt to a range of sizes of candidate test articles to restrict

downstream flow area and provide back-pressure for the compressor. Due to facility layout

and candidate test-article flow architecture, flow exiting the compressor is required to be

directed away from the drive-stand gearbox, and to provide radial clearance for variable

test-article diameters. The compressed air coming from the annular exit of the compressor

enters a test-article-specific transition duct, where it is directed radially outward, then

turned once more toward the inlet of the compressor, where it enters the discharge valve.

A cross-sectional view of this flow path is shown in Figure 5.1.

Figure 5.1: Discharge valve flow path.

The annular flow path of the valve is chosen in order to couple it as close to the

compressor exit as possible. This design allows the pressurized volume to be minimized,

reducing the effects on the compressor upstream. Goals of the valve design include: linear

area-variation, high resolution of area variation, and minimization of axial length. The

design used to satisfy each of these goals is presented in Figure 5.2:

Figure 5.2: Discharge valve – exploded view: (A) stator ring, (B) gasket,
(C) rotating ring, (D) drive pin, (E) drive link, (F) bearing array, (G)
bearing ring, (H) motor mount, (I) motor shaft seal, (J) stepper motor
actuator, and (K) encoder.

The valve features three annular rings with radially oriented flow slots, equally spaced

around the faces, which are perpendicular to the direction of flow. Two rings are stationary

and one rotates; flow slots are coincident for all three rings in the full-open position. As

the valve is actuated, the open areas of the static rings are covered by the closed area of the

rotating ring, creating a blockage and reducing the total flow area. Because the valve is

exposed to compressor-exit flow conditions, the rings are made from stainless steel, which

maintains the required strength at expected maximum temperatures.

At the inlet to the valve, the stator ring (A) shields the rotating ring (C) from

incoming flow, reducing the area of high-pressure, and therefore the axial force acting on

it. Downstream of the rotating ring, the bearing ring (G) houses the graphite bearings (F),

which provide a low-friction surface to react the axial force, and are capable of operating

temperatures up to 800 °F. A graphite gasket (B) is positioned between the stator and

bearing rings to seal the valve. An arm is welded to the rotating ring, which is used to

actuate the valve. At the end of the arm, a machined slot interfaces with a dowel pin (D),

which is pressed into the drive link (E). The link is driven by a dual-end-shaft stepper motor

(J), which forces the ring to rotate when actuated. The motor is attached to the bearing ring

via a motor mount (H), which can have water-cooling channels added if testing determines

the motor requires it. To seal the motor shaft/mount interface, a shaft seal (I) is included.

To track the position of the valve, an encoder (K) is attached to the opposite-drive-end of

the motor shaft. Figure 5.3 presents cross-sectional views of the valve in its full-open

position; section-A is cut through the flow opening, and section-B is cut through the

bearing:

Figure 5.3: Discharge valve – full-open position section views: through
flow sector (section A) and through bearing sector (section B).

5.2 Sizing

Flow area downstream of the compressor is a critical parameter for operation limits

and the capability of the test rig. Sizing of the flow holes is done based on compressor map

data of candidate test designs, which is obtained from a CFD analysis. Maximum flow area

(the full-open area) is set by assuming an arbitrary, very low Mach number through the

valve. This is done to allow the maximum Mach number at the compressor discharge,

resulting in the compressor choke condition. Geometrical minimum flow area is set to zero,

i.e., the open area of the flow slots is completely covered in the full-closed position.

However, leakage flow between the stator and rotating rings is accepted and controlled by

shimming the stator ring away from the rotating ring, which sets the minimum effective

flow area. The minimum flowrate is experienced when the valve is fully closed, which

corresponds to the stall condition of the compressor, and may be accompanied by

undesirable surge.

Figure 5.4: Single sector of discharge valve sizing model.

Geometry of a single sector of the valve is shown in Figure 5.4, with critical

parameters labeled. To fully block the open area in the closed position, the angular span of

the strut (i.e., the portion of the sector not occupied by an opening or a bearing) θstrut is set

equal to that of the flow hole, θflow. The remainder of the angular span of the sector is

occupied by the bearing and the wall that holds it, θB+W (B+W denoting bearing + wall).

Radial length LB and width WB of the bearing are chosen based on available material and

acceptable machining tolerance capability, which is used to find the bearing-plus-wall

angular span. Because the inner diameter of the valve must fit outside of the outer diameter

of the test article, the innermost radial position of the flow hole is chosen based on a

relatively large candidate test article, while incorporating appropriate wall thicknesses.

Wall thickness is set between the ring and the open area, which determines the inner radius

of the flow hole 𝑅𝐹𝑖 . With the inner radius, span, and maximum total flow area 𝐴𝐹 known,

the outer radius of the flow hole 𝑅𝐹𝑜 is calculated using Equation 30, where NS is the

number of sectors and 𝑐𝑓 is a constant used to correct for flow area lost with fillets (rounded

corners), which is determined experimentally from the CAD model:

R_{Fo} = \sqrt{\frac{2 A_F c_f}{\pi - \frac{N_S \theta_{B+W}}{2}} + R_{Fi}^2}    (30)
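
A sketch of the sizing relation in Equation 30 is given below; the sector count, bearing-plus-wall span, fillet constant, target flow area, and inner radius are placeholder values chosen for illustration, not the as-designed valve geometry.

import math

# Flow-hole outer radius from Equation 30.  All inputs are placeholder
# illustration values, not the as-designed valve geometry.
A_F = 12.0                        # assumed full-open flow area, in^2
c_f = 1.05                        # assumed fillet-loss correction constant
N_S = 8                           # assumed number of sectors
theta_BW = math.radians(12.0)     # assumed bearing-plus-wall span per sector, rad
R_Fi = 3.5                        # assumed inner radius of the flow hole, in

theta_flow = math.pi / N_S - theta_BW / 2.0      # flow-hole span (equal to strut span)
R_Fo = math.sqrt(2.0 * A_F * c_f / (math.pi - N_S * theta_BW / 2.0) + R_Fi**2)
print(f"Flow-hole span per sector: {math.degrees(theta_flow):.1f} deg")
print(f"Required outer radius:     {R_Fo:.2f} in")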

Because the area of the flow opening is proportional to its angular span (neglecting

the effect of the fillets), variation in area has a linear dependency on open position. Flow

area is measured using the CAD model and plotted in black in Figure 5.5 as a function of

valve angle, along with a second-order polynomial fit. A linear variation is plotted in red

to compare. It is observed that the rounded corners of the flow holes cause a slight deviation

from the desired linear relationship, which is a maximum at mid angular-span, but

otherwise the results are favorable.

Figure 5.5: Discharge valve flow-area vs. valve position.

5.3 Load Analysis

The primary function of the discharge valve is to produce a pressure differential

from the compressor-exit volume to the facility exhaust system. As a result, the valve itself

experiences an axial force, equal to the pressure differential across it multiplied by the

wetted area. The normal force acting on the rotating ring is transferred to the bearings,

resulting in friction that must be overcome by the valve actuator. In order to size the

actuator and bearings, the load, and therefore the pressure differential must be determined.

A CFD analysis of the valve is conducted, using SolidWorks Flow Simulation, to quantify

component pressure drops as a function of valve position. This analysis also informs the

spacing between stator and rotating rings, which sets the leakage rate through the valve and

the compressor stall area. Flow conditions are set to those corresponding to the expected

maximum-pressure operating-point for candidate test articles.

A simplified model of the valve is created by reducing it to an axisymmetric sector

section, which allows finer meshing without substantial computational requirements. A

mesh consisting of approximately 300,000 fluid cells, encompassing two flow passages, is

presented in Figure 5.6. Upstream of the valve, a short duct is added to simulate the

transition duct components and capture recirculation effects there. Downstream, a long

duct is added to allow the flow exiting the valve to fully develop. Though not representative

of actual test rig conditions, straight inlet and exit ducts allow for analysis of the valve

components separate from the upstream and downstream components. At the location of

the valve, the mesh is refined in order to capture the flow physics occurring within the

channels and between the rings. The mesh is relatively coarse and accuracy is sacrificed to

reduce computational time. This allows analysis of many combinations of valve positions

and stator ring gaps to fully characterize the valve, as global quantities of pressure ratio

across components are the goal of the simulation.

Figure 5.6: CFD sector model mesh of discharge valve.

Gap thickness between stator and rotating rings tS-R is arbitrarily set and the valve

is positioned fully closed (θValve=0), corresponding to the condition of maximum upstream

pressure. The flow simulation is run and pressures are measured at the upstream face of the

stator ring, PUS, upstream face of the rotating ring, PUS,R, downstream face of the rotating

ring, PDS,R, and downstream face of the bearing ring, PDS. These values are then used to

determine the pressure ratios for each component: across the stator, PRStat, across the

rotating ring, PRRot, across the stator and rotating ring, PRS+R, across the bearing ring,

PRBear, and across the entire valve, PRValve. The stator ring gap is then varied and the

simulation rerun. A nominal design gap thickness of 0.012 in. is chosen based on the results

presented in Figure 5.7. A 3rd order polynomial fit line is plotted to find the gap which will

produce approximately 25% greater upstream pressure than the maximum expected with

candidate compressors and the drive stand limitations, PMax,Design. This result provides

confirmation that the valve is capable of providing the pressure differential required, with

margin for error in analysis. Additional margin for error is provided in the ability to shim

the gap in assembly to vary the minimum area and therefore, the maximum upstream

pressure.

Figure 5.7: Discharge valve CFD results of upstream pressure vs. stator-
ring gap analysis.

Flow trajectories colorized with Mach number and pressure, normalized to the

maximum design upstream pressure are presented in Figure 5.8 and Figure 5.9 for the 0.025

in. and 0.005 in. gap cases, respectively. It is evident from these plots that the choke point

shifts from the rotating ring-bearing ring gap to the stator ring-rotating ring gap as it closes,

resulting in the bulk of the pressure drop occurring at that location. As the pressure drop

occurs upstream of the rotating ring, it experiences a larger pressure differential across it

and therefore a greater axial force.

Figure 5.8: Discharge valve CFD results at full-closed position, 0.025 in. stator-rotating
ring gap: flow trajectories colorized with Mach number (A) and pressure, normalized to
upstream value (B).

Figure 5.9: Discharge valve CFD results at full-closed position, 0.005 in. stator-rotating
ring gap: flow trajectories colorized with Mach number (A) and pressure, normalized to
upstream value (B).

With the ring gap held constant, valve position is varied from full-closed to full-

open. Results of the study are presented in Table 2 and plotted in Figure 5.11. Third-order

polynomial fits are applied to the pressure ratio results, with a coefficient of determination

(R2) of greater than 0.997 for all components. In general, pressure ratios increase with valve

position to a limit of 1.00 for the rotating and bearing rings, meaning the pressure

downstream of each component and of the valve itself approaches the upstream pressure

as the area increases. The stator, stator-plus-rotating ring, and entire-valve pressure ratios

approach a limit of approximately 0.94, meaning a 6% pressure drop across the valve is

the minimum when it is in the full-open position. With fitted models of the component

pressure ratios, the normal force acting on the rotating ring and associated bearing loads

can be estimated at all valve positions, as shown in Figure 5.10. Bearing load capacity is

calculated based on the minimum compressive strength, 4,500 psi [30] and surface area,

which is determined to have a factor of safety of 13.2 with the maximum computed normal

load of 1,163 lbf, considering drive stand limitations, denoted in red in Figure 5.10.

Figure 5.10: Axial load vs. valve open percentage.

Table 2: Discharge valve CFD results: component pressure ratios.

ID  N_Cells,F  θ_Valve [°]  %θ_Valve  t_S-R [in]  P_US/P_Max,Design  PR_Stat  PR_Rot  PR_S+R  PR_Bear  PR_Valve
1 298,688 0.00 0.0% 0.005 2.31 0.67 0.17 0.11 0.81 0.09
2 297,078 0.00 0.0% 0.010 1.41 0.71 0.26 0.19 0.82 0.15
3 289,548 0.00 0.0% 0.012 1.24 0.72 0.30 0.22 0.79 0.17
4 287,938 0.00 0.0% 0.015 1.05 0.74 0.35 0.26 0.79 0.20
5 289,887 0.00 0.0% 0.020 0.81 0.77 0.44 0.34 0.76 0.26
6 295,868 0.00 0.0% 0.025 0.72 0.81 0.48 0.39 0.75 0.29
7 289,485 0.10 3.1% 0.012 1.03 0.72 0.36 0.26 0.79 0.20
8 289,342 0.15 4.6% 0.012 0.94 0.73 0.39 0.28 0.80 0.23
9 289,187 0.20 6.1% 0.012 0.86 0.73 0.42 0.30 0.82 0.25
10 289,976 0.25 7.7% 0.012 0.79 0.73 0.45 0.33 0.82 0.27
11 285,645 0.50 15.3% 0.012 0.54 0.77 0.59 0.45 0.87 0.39
12 284,248 1.00 30.6% 0.012 0.34 0.82 0.80 0.66 0.94 0.62
13 282,545 1.50 45.9% 0.012 0.28 0.88 0.91 0.80 0.96 0.77
14 281,366 2.00 61.2% 0.012 0.25 0.91 0.96 0.88 0.98 0.86
15 279,801 2.50 76.5% 0.012 0.24 0.93 0.99 0.92 0.99 0.91
16 275,917 3.27 100.0% 0.012 0.23 0.94 1.00 0.94 1.00 0.94

Figure 5.11: Discharge valve CFD results of pressure ratios across components vs. valve
position with third-order polynomial fits.

Flow trajectories are again plotted, colorized with Mach number and pressure

normalized to the maximum design upstream pressure. Valve positions of 3%, 31%, and

100% (full-open) are presented in Figure 5.12, Figure 5.13, and Figure 5.14, respectively.

As the valve opens and flow area increases, the pressure equalizes across it. At 3% open,

the upstream pressure is approximately equal to the maximum expected capability of the

candidate test articles and drive stand. As the valve is closed past this position, compressor

stall is anticipated and surge is likely.

Figure 5.12: Discharge valve CFD results at 3% open position: flow trajectories colorized
with Mach number (A) and pressure, normalized to upstream value (B).

Figure 5.13: Discharge valve CFD results at 31% open position: flow trajectories
colorized with Mach number (A) and pressure, normalized to upstream value (B).

Figure 5.14: Discharge valve CFD results at full-open position: flow trajectories
colorized with Mach number (A) and pressure, normalized to upstream value (B).

5.4 Actuation

Actuation of the valve is achieved by rotating the angular position of the rotating

ring via the attached arm. The drive link, driven by the stepper motor, is connected to the

arm through a pin located in a slot at the end of the arm. As the link rotates, the pin is

pressed against the wall of the slot, imposing a force on it. A mechanical advantage is

achieved by the differences in radii of the link and the arm, resulting in much less torque

required to rotate the link to generate sufficient force to rotate the rotating ring and

overcome the friction-torque load. The link’s rotational origin is positioned so that the

tangential component of the force acting on the arm – and therefore the torque available to

overcome bearing friction – is maximized at the full-closed position, where the load is also

a maximum. As the link rotates, the direction of force acting on the arm deviates from

tangential, eventually becoming almost entirely radial. This concept is evident in Figure

5.15: a downstream view of the actuation mechanism as the valve progresses from open to

close.

Figure 5.15: Discharge valve actuation, downstream view: (A) full-closed position,
(B) mid-span position, (C) full-open position.

To determine the torque T required to overcome bearing friction on the rotating

ring, Equation 31 [34] is used, where 𝜇𝑆 is the coefficient of static friction between the

bearing and rotating ring face, 𝐹𝑁 is the normal force acting on the ring, ro is the outer radial

location of bearing, and ri is the inner radial location of bearing. This equation is derived

from the definition of coefficient of friction (𝜇 = 𝐹𝐹𝑟𝑖𝑐𝑡𝑖𝑜𝑛 /𝐹𝑁𝑜𝑟𝑚𝑎𝑙 ) applied to a rotating

area enclosed by concentric circles:

T = \frac{2}{3}\mu_S F_N \left(\frac{r_o^3 - r_i^3}{r_o^2 - r_i^2}\right)_{Bearing}    (31)

This derivation is necessary because a constant force acting at varying radii results in a

moment gradient. As shown in Figure 5.16, a constant normal force – and the constant resulting

friction force per unit area – produces a torque contribution that increases with radius, which

must be integrated over the disk area to obtain the total friction torque.

Figure 5.16: Disk friction. [34]

With the friction torque load known, the drive-torque requirement can be calculated

by determining the amount of force delivered to the arm by the drive link, 𝐹𝐴𝑟𝑚 , which is

a function of the radius of the link, 𝑅𝐿𝑖𝑛𝑘 , radius of the arm, 𝑅0,𝐴𝑟𝑚 , and their position.

Figure 5.17 is a model of the geometry used to determine the effective force and resulting

drive-torque requirement. Link and arm origins, (x0,Link, y0,Link) and (x0,Arm, y0,Arm) are related

by:

x_{0,Arm} = x_{0,Link} + \frac{R_{Link}}{2}    (32)

y_{0,Arm} = y_{0,Link} - \sqrt{\left(R_{0,Arm} - R_{Link}\right)^2 - \left(\frac{R_{Link}}{2}\right)^2}    (33)

Position of the pin is known by the drive link angle θLink and radius RLink:

x_{Pin} = x_{0,Arm} - \frac{R_{Link}}{2} + R_{Link}\sin\theta_{Link}    (34)

y_{Pin} = y_{0,Arm} + \sqrt{\left(R_{0,Arm} - R_{Link}\right)^2 - \left(\frac{R_{Link}}{2}\right)^2} + R_{Link}\cos\theta_{Link}    (35)

With the relative position of the pin known, the resulting angle of the arm θArm and effective

arm radius at which the pin force is acting 𝑅𝐴𝑟𝑚,𝐸𝑓𝑓 can be determined:

\theta_{Arm} = \tan^{-1}\left(\frac{x_{Pin} - x_{0,Arm}}{y_{Pin} - y_{0,Arm}}\right)    (36)

R_{Arm,Eff} = \sqrt{\left(x_{Pin} - x_{0,Arm}\right)^2 + \left(y_{Pin} - y_{0,Arm}\right)^2}    (37)

The position of the valve as a function of link position is presented in Figure 5.18. The

moment acting on the arm MArm can then be calculated as a function of link position, with

a known motor drive torque TDrive:

M_{Arm} = F_{Arm} R_{Arm,Eff} = T_{Drive}\frac{R_{Arm,Eff}}{R_{Link}}\cos\left(\theta_{Link} - \theta_{Arm}\right)    (38)
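
The sketch below strings Equation 31 and Equations 34 through 38 together to estimate the drive torque required at a given link angle. The friction coefficient, link radius, and arm radius follow the text, while the bearing dimensions, normal load, and link angle are example inputs standing in for the values read from Figures 5.19 and 5.20.

import math

# Drive-torque estimate from Equations 31 and 34-38.  Bearing geometry, normal
# load, and link angle are example inputs, not the exact design-point numbers.
mu_s = 0.10                      # static friction, stainless steel on graphite (from text)
F_N = 1000.0                     # assumed normal load on the rotating ring, lbf
r_o, r_i = 1.9, 1.4              # assumed bearing outer/inner radii, in

# Equation 31: friction torque resisting rotation of the ring
T_friction = (2.0 / 3.0) * mu_s * F_N * (r_o**3 - r_i**3) / (r_o**2 - r_i**2)

# Equations 34-38: moment delivered to the arm per unit drive torque
R_link = 0.582                   # link radius, in (from text)
R0_arm = 10.800                  # arm radius at full-closed, in (from text)
x0_arm, y0_arm = 0.0, 0.0        # arm origin taken as the reference point
theta_link = math.radians(10.0)  # assumed link angle for this example

x_pin = x0_arm - R_link / 2.0 + R_link * math.sin(theta_link)                # Eq. 34
y_pin = (y0_arm + math.sqrt((R0_arm - R_link)**2 - (R_link / 2.0)**2)
         + R_link * math.cos(theta_link))                                    # Eq. 35
theta_arm = math.atan2(x_pin - x0_arm, y_pin - y0_arm)                       # Eq. 36
R_arm_eff = math.hypot(x_pin - x0_arm, y_pin - y0_arm)                       # Eq. 37

# Equation 38 rearranged: drive torque needed so that M_Arm equals T_friction
T_drive = T_friction * R_link / (R_arm_eff * math.cos(theta_link - theta_arm))
print(f"Friction torque on ring: {T_friction:.1f} lbf-in")
print(f"Required drive torque:   {T_drive:.2f} lbf-in at {math.degrees(theta_link):.0f} deg link angle")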

Figure 5.17: Discharge valve actuation diagram.

Figure 5.18: Valve position vs. motor position.

Link and arm radii are chosen based on available space and motor shaft diameter

options. A link radius of 0.582 in. and arm radius (radius to the pin location in the slot at

the full-closed position) of 10.800 in. are used to calculate the required drive torque to

rotate the valve with the calculated friction loads. A coefficient of static friction of 0.10

between the stainless-steel rotating ring and graphite bearing is used [35]. Figure 5.19 plots

the load as a function of motor position, determined from CFD results, along with the

torque output of three prospective motor-driver combinations. Figure 5.20 plots the

moment acting on the arm as a function of valve position, as well as the effective torque

available with the same three motor-driver combinations. The maximum expected load

condition, corresponding to a motor position of 4% open, is denoted by the red-dashed

vertical line.

Figure 5.19: Drive-motor torque required to actuate valve with arm and
link design and capabilities of various stepper motor and driver
combinations vs. motor position.

Figure 5.20: Torque load to rotate valve arm and capabilities of various
stepper motor and driver combinations vs. valve open percentage.

The two DC motor-driver combinations – 34Y207 and 34Y214 motors with

MBC12101 driver – are found to provide sufficient torque. However, to provide margin

for the design, the 34Y214 motor with AC-powered MLA10641 driver is chosen. The

ability in testing to decrease compressor inlet pressure, which also decreases the pressure

upstream of the valve, provides additional margin for the selected motor/driver to supply

sufficient torque. Pressure upstream of the valve can also be decreased by reducing the

speed of the compressor in order to reduce the load and actuate the valve if necessary.

The resolution of the positioning capability of the valve is a function of the

actuation mechanism geometry and the resolution of the stepper-motor driver. The

MLA10641 driver is capable of up to 12,800 steps per revolution, or 3,316 steps over the

3.26° span of the designed valve. Figure 5.21 shows the resolution of control of the valve

position over that entire span. The system is found to provide extremely fine control,

ranging from 4.15×10⁻⁶ °/step at the full-closed position to 0.17×10⁻⁶ °/step at the full-open

position.
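As a first-order sketch (not the exact curve of Figure 5.21), differentiating the pin-position relations, Eqs. (34) through (37), gives the valve rotation per motor step as approximately

$$ \frac{\Delta\theta_{Valve}}{\mathrm{step}} \approx \frac{R_{Link}}{R_{Arm,Eff}}\cos\left(\theta_{Link} - \theta_{Arm}\right)\cdot\frac{360^{\circ}}{12{,}800} $$

where $360^{\circ}/12{,}800$ is the motor rotation per microstep of the MLA10641 driver.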

Figure 5.21: Valve-control resolution with MLA10641 driver.

CHAPTER 6

EXHAUST COLLECTOR

6.1 Mechanical Design

The primary purpose of the exhaust collector is to receive the flow exiting the

throttle, and direct it away from the rig to connect with the facility exhaust system. The

geometry of the collector is designed to accomplish these tasks; an annular channel at the

entrance plane (exit plane of the discharge valve) transitions to a tangential direction, then

to a circular duct, exiting radially. The design is presented in Figure 6.1, which denotes the

two primary components – (A) the exit transition and (B) main body – and the flanges (D)

and (C), which are used to attach upstream and downstream components, respectively. The

primary goals of the design are mitigation of effects on upstream components (i.e., the discharge valve and compressor) and simplicity of manufacturing. Because the exhaust collector is located downstream of the discharge valve, its aerodynamic performance is of little concern beyond circumferential uniformity, which could have an upstream effect. Net pressure loss across the device affects only the valve position required to achieve a given compressor pressure ratio and the minimum upstream-pressure capability when the valve is fully opened, both of which were shown to have significant margin in the valve analysis.

Figure 6.1: Exhaust collector: (A) exit transition, (B) main body, (C) exit flange, and (D)
entrance flange (2x).

The main body is made from a billet of stainless steel, which is machined to produce

the transitioning duct which turns the flow from annular to tangential. The internal

geometry (the fluid volume which is machined out of the billet) is generated with lofted

channels of equal volume, which transition from the annular entrance plane – containing

the cross-sectional area of the entire annulus exiting the discharge valve – to a single

location at the exit plane. By holding the volume of each lofted channel equal, the flow is allowed to exit uniformly, without a circumferential pressure-gradient arising from changes in cross-sectional area. The equal-volume lofted channels are shown in Figure 6.2

for one-half of the collector main body, which is then mirrored to create the whole

geometry. Upstream of the transitioning section of the main body, a straight length of the

full annular cross-section is added to allow the flow exiting the discharge valve to develop

before turning.
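As a rough illustration of the equal-volume constraint (a sketch under assumed dimensions, not the as-designed loft geometry), holding the channel volumes equal ties each channel's mean cross-sectional area to its centerline path length, so channels with shorter paths to the exit must be proportionally larger in mean cross-section:

import math

# Sketch of the equal-volume lofted-channel constraint. The channel count,
# mean radius, and annulus area are placeholder assumptions, not design values.
N_CHANNELS = 12    # lofted channels per collector half (assumed)
R_MEAN = 4.0       # in., assumed mean radius of the annular entrance
A_HALF = 6.0       # in^2, assumed annulus area feeding one collector half

# Each channel receives an equal share of the entrance annulus...
a_inlet = A_HALF / N_CHANNELS

# ...but travels a different circumferential distance to the exit plane:
# channels originating near BDC travel roughly half the circumference,
# while channels near TDC have a nearly direct path. Index 0 ~ TDC.
path_lengths = [R_MEAN * math.radians(180.0 * (i + 0.5) / N_CHANNELS)
                for i in range(N_CHANNELS)]

# Holding the channel volumes equal fixes the mean cross-sectional area
# each channel must maintain along its path: A_mean = V_target / L.
v_target = a_inlet * max(path_lengths)   # common channel volume (illustrative)
mean_areas = [v_target / L for L in path_lengths]

for i, (L, A) in enumerate(zip(path_lengths, mean_areas)):
    print(f"channel {i:2d}: path {L:6.2f} in., mean area {A:6.2f} in^2")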

Figure 6.2: Collector main body geometry with equal-volume lofted channels.

At the exit of the main body, the exit transition receives the tangential flows from

the two half annuli and combines them in a single, radial, circular duct. The exit-transition

component features complex geometry, including a splitter vane to reduce turbulence as

the two halves rejoin. Because of the complexity, additive manufacturing – specifically 3D

printing of stainless steel via direct metal printing (DMP) – is used to manufacture this

component. The two sections are welded together, along with the mounting flanges, which

are also made of stainless steel.

The exhaust system downstream of the collector is presented in Figure 6.3. As

shown, an intermediate duct transfers the exhaust to the facility exhaust system (A)

described in Section 2.1. The intermediate duct features a flexible joint (C), shown in Figure 6.4, to accommodate axial and radial thermal growth of the rig; this is required because the duct is rigidly attached to the test-article mounting plate with a mount (B) constructed from channel strut. The intermediate duct has a port to accept flows bypassing the discharge

valve via the surge relief port (E) and surge relief valve (D), as well as air vented from the

mechanical systems, presented in Section 4.1.

Figure 6.3: Downstream exhaust system: (A) facility exhaust duct, (B) intermediate duct
mount, (C) flex joint, (D) surge relief valve, and (E) surge relief port.

Figure 6.4: TurboFlex coupling [36].

6.2 Aerodynamic Analysis

A CFD analysis is conducted to ensure the collector design does not generate a

significant circumferential pressure-gradient, which could cause unwanted upstream

effects, such as nonuniform loading of the discharge valve ring. The baseline model, with a mesh of 448,879 cells, is shown in Figure 6.5; it consists of one half of the collector and a straight duct extended from the exit plane:

Figure 6.5: Collector-only CFD mesh.

Circumferential locations are labeled for discussion: the location furthest from the exit port

is referred to as bottom dead center (BDC), and the location closest to the exit duct is

referred to as top dead center (TDC). Regions of varying geometry (i.e., throughout the collector and exit transition), denoted by red cells, are refined to the smallest cell size in the model in order to capture the flow physics as the flow is turned therein. This model

assumes purely axial, uniform flow entering the collector. Mass flow is set at the inlet plane

of the collector according to a particular candidate test-article compressor map and the

available drive stand power. Static pressure is set at the end of the exit duct to slightly

above-ambient pressure, which is done to simulate the pressure drop across the facility

exhaust system.

Flow trajectories colorized with pressure, normalized to downstream static value,

and Mach number determined from the collector-only analysis are shown in Figure 6.6.

The flow is observed to traverse the collector geometry uniformly, without separation or recirculation. Flow entering the collector is observed to be at a very low Mach

number, which is the result of the lack of upstream area restriction in the model and the

prescribed uniform inlet plane. Figure 6.7 presents the circumferential pressure-gradient, which is found to vary linearly from 1.16 times the exit pressure at BDC down to the exit pressure at TDC. This trend is expected: flow entering at BDC must travel farther,

resulting in greater viscous losses as it turns against the wall of the collector. BDC-

originating flow also experiences greater effective blockage from flow entering at other

circumferential locations, whereas TDC-originating flow has a relatively direct,

unimpeded path to the collector exit.

Figure 6.6: Collector-only CFD results: (A) side-view and (B) iso-view of flow
trajectories colorized with pressure (normalized to downstream static value), and (C) and
(D) with Mach number.

Figure 6.7: Collector-only CFD results: upstream view of circumferential pressure-gradient (normalized to downstream static value).

Flow through a model of the entire rig exhaust system – transition duct, discharge

valve, and collector with extended exit-duct – is simulated to explore the effect of upstream

components on the collector-inlet circumferential pressure-gradient. The full-annular

model is used to capture tangential-flow effects of the discharge valve and transition duct,

though not of the compressor itself. The mesh, consisting of 1,829,040 cells, is presented

in Figure 6.8. Again, red cells are the most refined at the location of the discharge valve in

order to capture the flow physics occurring in between rings and to generate the turbulent

conditions entering the collector. Green cells represent the next level of refinement (coarser

than red), which is applied everywhere in the collector. Turquoise cells are coarser than green, denoting refinement in the transition duct entering the discharge valve, and blue

are the coarsest cells, downstream of the collector exit plane. BDC and TDC

circumferential locations again refer to the positions farthest from and closest to the exit

plane, respectively.

Figure 6.8: Full exhaust-system CFD mesh.

Results of the full-system CFD analysis are presented in Figure 6.9. Key differences

from the collector-only model are observed in the flow trajectories: 1) significantly higher

Mach numbers are found at the inlet plane of the collector, 2) recirculation zones are found

at the BDC splitter vane, and 3) asymmetry is observed in the two halves of the collector,

with most of the recirculation occurring in the right half. The drastic decrease in flow area

from inclusion of the discharge valve increases the velocity of the incoming flow. As it

interacts with the axial-to-tangential transition occurring in the geometry at BDC, vorticity (local rotation of the fluid) is generated in the flow, resulting in recirculation.

Figure 6.9: Full-exhaust CFD results: (A) side-view and (B) iso-view of flow trajectories
colorized with pressure (normalized to downstream static value), and (C) and (D) with
Mach number.

Because of the relative position of the open-area slots of the discharge valve, which

are shifted toward the right in Figure 6.9, flow exiting the valve on that side of the BDC

splitter vane is forced toward the lower pressure void near the vane. As a result, stronger

vorticity ensues at this location, as shown in greater detail in Figure 6.10. To the left of the

splitter, flow exits the valve nearly tangential to the splitter, which is advantageous for

following the profile of the collector. The pronounced recirculation bubble on the right side

of the splitter acts as a blockage for the incoming flow, increasing the pressure at that

location. The asymmetric pressure-gradient at the inlet plane of the collector can be

observed in Figure 6.11. The average total-pressure at the inlet plane of the collector is

determined to be 1.069 times that at the exit plane, meaning there is a 6.9% total-pressure

loss across it.

Figure 6.10: Full-exhaust CFD results: BDC splitter interaction asymmetry.

Figure 6.11: Full-exhaust CFD results: upstream view of circumferential pressure-
gradient (normalized to downstream static value).

Minimization of an asymmetric effect of the collector on the discharge valve and

on the compressor is one of the primary goals of the design. To examine the gradient on

the downstream face of the rotating ring of the discharge valve, pressure is plotted in Figure

6.12, normalized to the downstream static value. Results show approximately constant

radial pressure-distribution, so the pressure is measured at the centroid and plotted in

Figure 6.13 to observe the circumferential distribution. Sector averages are plotted, along

with sector averages normalized by the overall average pressure along the centroid. A

maximum normalized sector average of 1.055 is observed at sector 24, and a minimum of

0.949 at sector 2. Figure 6.14 shows the corresponding circumferential pressure-gradient at the compressor exit-plane:

Figure 6.12: Full-exhaust CFD results: discharge valve downstream-face pressure
gradient (normalized to downstream static value), upstream view.

Figure 6.13: Discharge valve downstream-face pressure distribution (normalized to downstream static value).

Figure 6.14: Full-exhaust CFD results: upstream view of circumferential pressure-
gradient at compressor exit-plane (normalized to the average value).
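For reference, the sector-average normalization plotted in Figure 6.13 can be reproduced from exported centroid-circle pressure data with a short post-processing script along the following lines; the CSV file name and column names are hypothetical, and the 24-sector binning is inferred from the sector indices quoted above.

import csv
from collections import defaultdict

# Sketch of the sector-averaging used for Figure 6.13, assuming the CFD
# pressures along the ring-face centroid circle are exported to a CSV with
# hypothetical columns "theta_deg" (circumferential angle) and "p_static"
# (static pressure). Sectors are indexed 0-23 here (1-24 in the text).
N_SECTORS = 24

def sector_averages(csv_path):
    sums = defaultdict(float)
    counts = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            theta = float(row["theta_deg"]) % 360.0
            sector = int(theta // (360.0 / N_SECTORS))
            sums[sector] += float(row["p_static"])
            counts[sector] += 1

    # Sector averages, and the same averages normalized by the overall
    # average pressure along the centroid circle.
    averages = {s: sums[s] / counts[s] for s in sums}
    overall = sum(sums.values()) / sum(counts.values())
    normalized = {s: averages[s] / overall for s in averages}
    return averages, normalized

# Example usage (file name is a placeholder):
# avg, norm = sector_averages("centroid_pressures.csv")
# print(max(norm.values()), min(norm.values()))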

CHAPTER 7

CONCLUSIONS AND FUTURE WORK

This thesis presented the initial design of a small-turbojet compressor test facility:

the inlet system was designed to provide conditioned and measured flow, as well as

optional flow suppression; this feature provides the ability to test elevated corrected-mass-

flowrate and pressure ratio conditions within the power capability of the drive stand. The

mechanical system of the rig was designed to easily adapt to a multitude of test articles; air

bearings were chosen to provide adaptive load and damping capabilities, and the shaft and

mounting interfaces were designed with features to adapt to test articles of varying sizes

with minimal requirement for test-article-specific components. To throttle and back-pressure the compressor, a close-coupled discharge valve was designed. Downstream of the discharge valve, a collector, designed to minimize upstream pressure-distortion, accepts the flow and directs it to the facility exhaust system. Figure 7.1 presents the

complete compressor test facility assembly.

Sizing calculations, CAD model generation, and analyses – mechanical and

aerodynamic – were conducted in order to properly design and integrate the rig within

existing facility capabilities. A dynamometer, procured for testing small turboshaft

engines, will be used to drive the test article. The drive stand features an encoder for speed

measurement, and a torque transducer, which are used in concert to measure power

input/output. Ancillary facility provisions include a video monitoring system and data

acquisition and control system.

Figure 7.1: Complete compressor test facility assembly, sectioned top view.

7.1 Inlet System Design

Flow through the compressor test rig is regulated by a valve at the entrance of the

inlet system, resulting in sub-ambient pressures within. The flow-barrel wall thickness was determined to be over 35 times that required for the resulting external pressure

loading. Upstream of the compressor, flow is measured through the calibrated nozzle,

designed according to the well-understood and tested ASME standard. To ensure

repeatability and accuracy of the measurement, multiple stages of flow conditioning were

implemented. Because flow suppression is employed and pressure drop intentionally

produced, losses across the flow conditioning section were considered inconsequential,

though the estimated value of less than 1% is relatively low, regardless. Sufficient degrees

of freedom were supplied by the mounting system, which is unconstrained in the axial

direction and features a seal that is compliant in the radial direction for the minimal

expected growth.

7.2 Mechanical Systems Design

To provide adaptive damping and load capability, air bearings were employed,

which eliminated the need for a traditional liquid-lubrication system. A common shaft

interface was designed to reduce the number of test-article-specific components. The shaft,

made of low-expansion material Invar 36, and test-article-specific shaft insert were

designed to maintain contact at their interface in spite of thermal growth. Radial-bearing load capacity at the planned supply pressure offers nearly 6 times the maximum load expected from the rotordynamic forced-response analysis. Thrust loading resulting from the pressure

differential across the impeller is not expected to exceed the capability of the thrust bearing

with the available supply pressure. To further mitigate risk, thrust load was minimized by

implementation of a thrust piston. A pneumatic system was designed to supply and regulate

the bearing and thrust piston pressure, utilizing existing facility capabilities.

The designed rotor system transiently passes through two rigid body modes (one

pitch and one bounce mode), as depicted by their mode shapes in Section 4.4. The 1st bend

mode of the system occurs at 48% above the targeted max speed of the initial candidate

test article and 20% above expected future candidate-designs, satisfying the design criteria.

Predicted bearing loads were determined to be far less than the bearing capability, shaft

deflections were predicted to be free of any damaging rotor, shaft, or bearing rubs, and the

system was predicted to exhibit stable operation throughout the speed range. Therefore, the

system satisfies all rotordynamic criteria for a stable operating system.

7.3 Discharge Valve Design

The throttling discharge valve was designed with a rotating ring to provide

sufficient open area for testing the compressor-choke condition, and closed area for testing

the compressor-stall condition. Bearing blocks, which allow the rotating ring to rotate

relative to the stator, were sized to provide sufficient surface area to withstand the axial

load. The size of the bearings was determined to be more than sufficient, based on the load-

capacity factor of safety of greater than 13. The temperature capability of the graphite

bearings was also determined to be sufficient for the maximum expected thermal loading. The

rotating ring geometry was sized to minimize load while providing sufficient flow area.

The valve’s open area as a function of its position was determined to be nearly linear, which

is desired for setting the area during testing.

The minimum flow area through the valve, set by the rotating ring-stator ring gap, was determined via CFD analyses. Sufficient margin was provided in clearances of the

components to allow adjustment of the gap via shims, in order to vary the minimum

flowrate capability of the valve, if it is determined to be necessary during component

testing. The CFD analysis also quantified the pressure ratios across the valve components, which were used to determine the axial load acting on the bearings from the

rotating ring. The determined load was then used to calculate the necessary torque required

to rotate the valve. An actuation mechanism was designed to provide a mechanical

advantage and require less torque to drive the valve when fully loaded. A stepper motor

was chosen to actuate the valve via a drive link and arm attached to the rotating ring. The

motor’s torque capability was determined to be sufficient for the maximum expected

loading, as calculated from the pressure differential across the rotating ring. The valve-position versus motor-position profile and the control resolution were determined to inform development of the control logic for operating the valve during testing.

7.4 Exhaust Collector Design

Though aerodynamic optimization was not conducted, the presented collector

design showed acceptable performance in allowing the flow to exit the rig, without having

a significant upstream effect. The circumferential pressure-gradient resulting from the

collector geometry and its interaction with the discharge valve was determined to be

sufficiently small to avoid adverse loading to the discharge valve, with variation along the

circumference from -5.1% to +5.5% of the average. Total pressure loss of 6.9% across the

collector was deemed acceptable because it is located downstream of the throttle. Based on

the results, optimization of valve clocking and splitter angle would offer the ability to

reduce the circumferential pressure-gradient and provide more uniform loading of the

discharge valve. More intensive aerodynamic analysis can also be done to inform design

iterations to more efficiently exhaust the air, though the effect of doing so on performance

of the compressor is likely negligible.

7.5 Future Work

Component fabrication will be finalized and assembly of subsystems will be

completed, including implementation of instrumentation. Mechanical systems assembly

will include component and subassembly balancing. Prior to installation, calibration and

checkout testing of the inlet system will be done in a separate facility at WPAFB with

greater flow capacity. Component testing of the mechanical subsystem attached to the drive

stand will be conducted to experiment with air bearing supply controls and settings.

Discharge valve checkout testing will be conducted to confirm functionality of the

actuation system and to experiment with stepper motor controls. No component testing is

planned for the exhaust collector. Any modifications determined from component testing

will be made, and assembly of the rig and initial test article will ensue. Upon completion

of rig assembly and integration to the facility, preliminary testing to confirm controls,

instrumentation, data acquisition, and emergency shutdown procedures will begin.

Following component and system confirmations, compressor mapping will commence.

BIBLIOGRAPHY

[1] R. N. Brown, Compressors: Selection and Sizing, 3rd Edition, Houston, TX: Gulf Publishing Company, 2005.

[2] P. C. Hanlon, Compressor Handbook, New York, NY: McGraw-Hill, 2001.

[3] Y. A. Cengel and M. A. Boles, Thermodynamics: An Engineering Approach, 5th Edition, McGraw-Hill, 2006.

[4] M. Boyce, Gas Turbine Engineering Handbook, Butterworth-Heinemann, 2011.

[5] G. Oates, Aircraft Propulsion Systems Technology and Design, Washington, DC: American Institute of Aeronautics and Astronautics, Inc., 1989.

[6] D. Kane, "U.S. Fighter," 8 October 2001. [Online]. Available: https://usfighter.tripod.com/F100-PW-220.htm. [Accessed 23 May 2021].

[7] F. Camm, "The Development of the F100-PW-220 and F110-GE-100 Engine: A Case Study of Risk Assessment and Risk Management," RAND, Santa Monica, CA, 1993.

[8] J. R. Holden, T. M. Caley, B. Heberling, C. Cantor, E. Wesseling, A. A. Hamed, M. G. Turner, P. J. Litke and N. D. Grannan, "Novel Design and Fabrication of JetCat P90 Diffuser using Parametric Design and Optimization Tools," in 54th AIAA Aerospace Sciences Meeting, San Diego, CA, 2016.

[9] A. Bauer, F. R. Schauer, G. Walker, D. Gillaugh, R. Kemnitz, B. T. Bohan, A. T. Holley and J. Hoke, "Design, Analysis, and Testing of a Low-Cost, Additively-Manufactured, Single-Use Compressor," in AIAA Scitech 2020 Forum, Orlando, FL, 2020.

[10] N. D. Grannan, M. J. McClearn, P. J. Litke, J. Hoke and F. Schauer, "Trends in JetCAT Microturbojet-Compressor Efficiency," in 55th AIAA Aerospace Sciences Meeting, Grapevine, TX, 2017.

[11] R. DePaola, F. R. Schauer, M. D. Polanka and N. D. Grannan, "Micro-Turbine Performance Study," in AIAA Scitech 2020 Forum, Orlando, FL, 2020.

[12] N. D. Grannan, K. J. Moosmann, J. L. Hoke, M. J. McClearn and F. R. Schauer, "Small Turbojet Altitude Test Facility with Two Stage Turbocharger Inlet Air Cooling," in 2018 AIAA Aerospace Sciences Meeting, Kissimmee, FL, 2018.

[13] N. D. Grannan, A. M. Knisely, K. Y. Cho, J. Hoke, R. Huff and A. T. Holley, "Small Turbojet Altitude Test Facility," in AIAA Scitech 2020 Forum, Orlando, FL, 2020.

[14] K. Moosmann, N. D. Grannan, J. Hoke and F. Schauer, "Recuperator Integration with Small Turbine Engine," in AIAA Scitech 2019 Forum, San Diego, CA, 2019.

[15] K. Moosmann, J. Reinhart, J. Hoke and A. T. Holley, "Small Engine Recuperator Testbed," in AIAA Scitech 2020 Forum, Orlando, FL, 2020.

[16] Wright-Patterson Air Force Base Public Affairs, "$2M Air Force Prize for development of a small, efficient turboshaft engine," 6 May 2015. [Online]. Available: https://www.af.mil/News/Article-Display/Article/587765/2m-air-force-prize-for-development-of-a-small-efficient-turboshaft-engine/.

[17] The American Society of Mechanical Engineers, Process Piping, ASME B31.3-2020, New York, 2020.

[18] The American Society of Mechanical Engineers, ASME Boiler and Pressure Vessel Code, Section II: Materials, Part D: Properties, New York, 2019.

[19] B. R. Munson, D. F. Young, T. H. Okiishi and W. W. Huebsch, Fundamentals of Fluid Mechanics, 6th Edition, Wiley, 2009.

[20] The American Society of Mechanical Engineers, Flow Measurement, ASME 19.5-2004, New York, 2005.

[21] W. Eckert, K. Mort and J. Jope, "Aerodynamic design guidelines and computer program for estimation of subsonic wind tunnel performance," National Aeronautics and Space Administration, Washington, D.C., 1976.

[22] R. D. Mehta and P. Bradshaw, "Design Rules for Small Low Speed Wind Tunnels," The Aeronautical Journal of the Royal Aeronautical Society, vol. 83, no. 826, pp. 443-453, 1979.

[23] "www.worldstainless.org," International Stainless Steel Forum, 2014. [Online]. Available: https://www.worldstainless.org/Files/issf/non-image-files/PDF/Euro_Inox/RoughnessMeasurement_EN.pdf. [Accessed 17 June 2021].

[24] International Organization for Standardization, "ISO 21940 Mechanical vibration - Rotor balancing," Vernier, Switzerland, 2019.

[25] P. Samanta, N. Murmu and M. Khonsari, "The evolution of foil bearing technology," Tribology International, vol. 135, pp. 305-323, 2019.

[26] D. Devitt, "New Way Air Bearings," [Online]. Available: https://www.newwayairbearings.com/technology/technical-resources/new-way-techincal-reports/technical-report-1-orifice-vs-porous-media-air-bearings/. [Accessed 30 April 2021].

[27] G. L. Morini, "Viscous Dissipation, Viscous Heating," in Encyclopedia of Microfluidics and Nanofluidics, Boston, MA, Springer, 2013, pp. 2155-2169.

[28] D. Devitt, "Air Bearings for High-Power Turbomachinery," [Online]. Available: https://www.newwayairbearings.com/technology/technical-resources/new-way-techincal-reports/technical-report-6-air-bearings-for-high-power-turbomachinery/. [Accessed 2021].

[29] J. H. Lienhard IV and J. H. Lienhard V, A Heat Transfer Textbook, 5th Edition, Cambridge, MA: Phlogiston Press, 2019.

[30] "MatWeb," 1996. [Online]. Available: http://www.matweb.com/. [Accessed 2021].

[31] M. L. V. Ramires, C. A. N. d. Castro, Y. Nagasaka, A. Nagashima, M. J. Assael and W. A. Wakeham, "Standard Reference Data for the Thermal Conductivity of Water," Journal of Physical and Chemical Reference Data, vol. 24, no. 3, pp. 1377-1381, 1995.

[32] B. Murphy, F. Y. Zeidan and J. M. Vance, Machinery Vibration and Rotordynamics, Wiley, 2010.

[33] E. J. Gunter, Introduction to Rotor Dynamics - Critical Speed and Unbalance Response Analysis, Charlottesville, VA: Rodyn Vibration Analysis, Inc., 2001.

[34] D. Baker, "9.7 Disc Friction," in Engineering Statics, 2020.

[35] "Engineering Toolbox," 2001. [Online]. Available: https://www.engineeringtoolbox.com/friction-coefficients-d_778.html. [Accessed 13 June 2021].

[36] "TurboFlex Couplings with Interlock Liner," Vibrant Performance, 2016. [Online]. Available: https://vibrantperformance.com/catalog/index.php?cPath=1527_1064_1114. [Accessed 20 June 2021].
