Probabilistic Theory of Structures
Second Edition

Isaac Elishakoff
Professor of Mechanical Engineering
Florida Atlantic University

DOVER PUBLICATIONS, INC.
Mineola, New York

Copyright
Copyright © 1983, 1999 by Isaac Elishakoff
All rights reserved under Pan American and International Copyright Conventions.
Published in Canada by General Publishing Company, Ltd., 30 Lesmill Road, Don Mills, Toronto, Ontario. Published in the United Kingdom by Constable and Company, Ltd., 3 The Lanchesters, 162-164 Fulham Palace Road, London W6 9ER.

Bibliographical Note
This Dover edition, first published in 1999, contains the unabridged, slightly corrected contents of the text entitled Probabilistic Methods in the Theory of Structures, originally published in 1983 by John Wiley & Sons. A new preface by the author and a listing of additional references that have appeared since the 1983 edition have been specially prepared for this edition, and several typographical errors have been corrected.

Library of Congress Cataloging-in-Publication Data
Elishakoff, Isaac.
Probabilistic theory of structures / Isaac Elishakoff.—2nd ed.
p. cm.
Slightly rev. and updated ed. of: Probabilistic methods in the theory of structures. New York: Wiley, 1983.
Includes bibliographical references and index.
ISBN 0-486-40691-1 (pbk.)
1. Structural analysis (Engineering) 2. Probabilities. I. Elishakoff, Isaac. Probabilistic methods in the theory of structures. II. Title.
TA646.E44 1999
624.1'71—dc21 99-22871 CIP

Manufactured in the United States of America
Dover Publications, Inc., 31 East 2nd Street, Mineola, N.Y. 11501

To my father, Ben Zion Elishakoff, z"l
To my mother, Leah (Margaret) Elishakoff, z"l

Preface to the Second Edition

I am delighted that Dover Publications is presenting to readers Probabilistic Theory of Structures, the second, revised edition of my first book, Probabilistic Methods in the Theory of Structures.
It is both an exciting and humbling event to have a book published in a series that also includes works by Lord Rayleigh and A. E. H. Love, amongst other classics, which have contributed to present-day theoretical and applied mechanics. Perhaps this is the proper occasion for telling the story of how the first edition of this book came to be written, some twenty years ago. The idea for a book on probabilistic methods originated with the late Professor Alexander Kornecki, my then colleague at the Technion-Israel Institute of Technology. It was the fall of 1972, the year I emigrated from the former Soviet Union and was hired by the Technion as a Lecturer. Professor Kornecki, who was spending a sabbatical at Princeton University, wrote me expressing his opinion that existing books on probabilistic methods all seemed to start somewhere in the middle, reviewing the probability theory and random processes briefly and with such speed that no taste was left in the mouth of the hungry reader; or else they dispensed with introductions and started with the beef itself. My senior colleague urged me to write a book that could be not only read but also understood. I was greatly flattered by this suggestion, but regarded it as impracticable. As a brand-new hireling in a new-ancient country, I knew that one must first publish extensively (without perishing) and only then, given the desire and the energy, embark on such a monumental undertaking. Six years later, having in the meantime mastered Hebrew, my new old language, and taught, along with numerous undergraduate courses, two specialized graduate courses—"Introduction to Probabilistic Methods" and "Random Vibrations"—I became entitled to a sabbatical leave.
This custom of taking an academic "time out" may seem a superfluous luxury at first glance, but, as my experience bears out, it provides a valuable opportunity for renewal and fruitful interaction with other environments and scientists. At any rate, following the Biblical concept of the sabbatical year, and availing myself of the generosity of the Technion, I started to look for an institution that would be interested in my research. Fortunately, several universities in North America and Europe offered guest appointments for the 1979/80 academic year. We chose to go to Europe; I had always liked manageable distances, especially since my driving experience was limited to tiny Israel. Of the European institutions that were interested, I chose the Delft University of Technology, mainly because of my desire to conduct specific research on initial imperfection sensitivity, whose deterministic aspects had been so vividly uncovered at Delft some 35 years earlier by Professor Dr. Ir. Warner Tjardus Koiter in his famous Ph.D. dissertation, and then had been pursued analytically, numerically and experimentally—with great success—by Professor Dr. Johann Arbocz. I was dreaming of combining my recently developed probabilistic techniques with the numerical codes developed at Delft. Upon my arrival, I found that Delft was to pose two unexpected challenges. First, Professor Arbocz himself told me that he did not believe in probability, though he said I was welcome to try to prove my own belief. Second, to my astonishment, Ir. Dijkshoorn appeared in my office during my first week there and said: "You will be returning to your home university in a year, but we would like you to leave some scientific trace here. Please prepare written lecture notes on probabilistic methods, so that future generations of students and researchers at our university can use them." I was not enthusiastic about such a task.
Not only had I been unaware of this traditional Delft requirement, but preparing the lecture notes would disrupt my plan to devote all of my time to research. However, thanks to several elements unique to Delft I was able to meet this challenge. These elements included the extremely friendly atmosphere created by Johann Arbocz, his wife Margot, and the entire "Vakgroep C" of the Aerospace Department. The warm hospitality of the Arbocz, Koiter, van der Neut, Schijve, van Geer and other families is unforgettable. The extremely fast and skillful typing of Marijke Schillemans and the beautiful artwork of draftsman Willem Spee were additional blessings. The exceptional situation these elements yielded would eventually produce active scientific cooperations between myself and J. Arbocz, A. Scheurkogel, J. Kalker, W. Verduyn, J. van Geer, T. van Baten, S. van Manen, P. Vermeulen, and in later years with W. T. Koiter. Another beautiful Delft tradition I had not known about was the distribution of faculty members' memoranda to faculty in other departments. This, too, was to take me in a direction I had not anticipated. In March 1980 a conference organized by W. T. Koiter took place at the Mechanical Engineering Department. Professor Koiter, as its chair, met every participant at the "Centraal Hotel." He asked me if I would accompany him home, once all the participants had arrived. So we waited together until nearly midnight, and we had several glasses of jonge jenever. At that time, Professor Koiter told me that he had read my memorandum on probabilistic methods, which had been distributed to his department. He complimented me on its explanatory style and contents, and urged me to base a book on these materials. I was very excited. With this blessing from Professor Koiter (who was characterized by Bernard Budiansky—himself a champion of American mechanics—as "the sage from Delft"), my energies received a tremendous boost.
I decided to devote the remaining months of my sabbatical to advancing this project as far as I could. Within three years, the manuscript was completed at the Technion and had been accepted by Wiley. Upon the book's initial publication I refrained from mentioning the fact that Professor Koiter's enthusiastic encouragement was the main driving force behind the book. As I later explained to him, I had felt that I should not use his approval to promote my work, that the work must stand on its own feet. Fortunately, the book was well received by the reviewers as well as my colleagues. I especially cherish the reviews of Frank Kozin, Jim Yao, and John Robson, and the letters of Steve Crandall, Masanobu Shinozuka, Niels Lind, Günther Natke, and many others. The first edition was adopted as a textbook at, among many other places in the U.S. and elsewhere, Columbia University, where Professor Alfred Freudenthal had developed probabilistic approaches in mechanics, following his pioneering work at the Technion. Credit is due to this last institution which, with its unique system of teaching and research, of offering seminars in which almost every possible criticism is heard and the best challenges are posed to the presenter of new ideas, I consider to have been my true academic launcher. This environment helped me to search out the clearest presentation—one that, following the advice of the late Professor Kornecki, would enable both students and faculty to understand the material. During the 1985/86 academic year, while serving as Frank Freimann Visiting Chair Professor at the University of Notre Dame, I found the time to prepare a detailed solutions manual for the text. (If you are an instructor who has adopted this text for your course and you would like to receive this manual, please contact me at the email address: ielishak@me.fau.edu.) Thanks are extended to Dr.
Gabriel Cederbaum of the University of Beer-Sheva in the Negev who during this period provided a number of the solutions used in the manual, and painstakingly checked most of them; to Professor Michael Dimentberg who, while using the text at the Worcester Polytechnic, brought some misprints to my attention. I am also indebted to Professors Yehuda Stavsky and Menachem Baruch of the Technion-Israel Institute of Technology, for discussions on a number of probabilistic topics over the years. I appreciate the cordial cooperation of John Grafton at Dover in the preparation of this second edition of the book. Heartfelt gratitude goes to Eliezer Goldberg, formerly of the Technion, for providing unmatched editorial assistance with 100% reliability. I am forever indebted to my great teacher Academician V. V. Bolotin of Moscow for his inspirations over the entire nine-year period that I spent at the Moscow Power Engineering Institute and State University, and for providing a superb model of the deepest forms of critical thinking. I also want to thankfully mention the exceptional educational system in Georgia, former Soviet Union, and especially my elementary school teacher Aneta Zekvava, who taught her students to love science and to be kind. I have been very fortunate in having wonderful family, teachers, and friends on many continents. I recall our insightful secretary at the Technion, Mrs. Dvora Zirkin, asking me in the final stages of preparing the manuscript: "Why do you dedicate the book to your parents, rather than to your wife and children?" My response had been quick: "Because I met my parents earlier! I'll dedicate my second book to my wife and children." My parents taught me to admire appropriateness, humility and truth, while my wife and children provided love and tolerance without which no one could dream of completing a big project.
Last but not least, thanks to my students at the Technion, at Delft University of Technology, at the University of Notre Dame, at the Naval Postgraduate School, and at Florida Atlantic University, who taught me how to teach them ever better.

Isaac Elishakoff
Boca Raton, FL
September 1998

Preface to the First Edition

This book is written both to serve as a first-course text on probabilistic methods in the theory of structures and to provide a more advanced treatment of random vibration and buckling. It is intended in particular for the student in aeronautical engineering, mechanical engineering, or theoretical and applied mechanics, and may also be used by practicing engineers and research workers as a reference. In fact, it combines the features of a textbook and a monograph. Probability theory and random functions are playing an ever more prominent role in structural mechanics due to the growing realization that many mechanical phenomena can be satisfactorily described by probabilistic means only. In the last 25 years, much work has been done and many studies have been published on this subject. However, despite significant advances, the probabilistic approach to the theory of structures has not yet found its proper place in engineering education. Chapter 1 introduces the role of probabilistic methods in the theory of structures. Chapters 2 through 4 deal exclusively with elements of the theory of probability for a single random variable. This apparent preoccupation with the single random variable stems from my own feeling that it would be unfair to offer the reader a mere taste of the theory of probability and then immediately confront him or her with a wide range of applications. Chapter 5 is devoted to the reliability of structures described by a single random variable.
Chapter 6 discusses elements of the theory of probability of two or more random variables, while Chapter 7 examines the reliability of such multivariable structures. Chapter 8 introduces the theory of random functions. Chapter 9 deals with random vibration of single- and multidegree-of-freedom structures, and Chapter 10 with random vibration of continuous systems. These chapters concentrate on the role of modal cross correlations in random vibration analysis, usually overlooked in the literature, as well as treat point-driven structures, and random vibration and flutter. These chapters constitute, among others, a prerequisite to study of the fatigue life of structures—a topic which is outside the scope of this book. The reader interested in this subject is referred to other sources where it is adequately treated. Finally, Chapter 11 is devoted to the Monte Carlo method for treating problems incapable of exact solution. Special emphasis is placed on buckling of nonlinear structures, where random imperfections may be responsible for drastic reduction of the buckling loads. Ample examples are included in the book, because it is my experience that much of the material in question may be taught most effectively by this means. An additional purpose of the examples is to examine the validity of some widely accepted simplifying assumptions concerning the probabilistic nature of the output quantities and to observe the errors that these assumptions may cause. Numerous exercises are provided with each chapter, to deepen the reader's grasp of the subject and widen his or her perspectives. The material in Chapters 1 to 5, together with Sections 11.1-11.3, is suitable for a one-semester, first-level course at the junior or senior level. Prerequisite courses for this part are calculus, differential equations, and mechanics of solids.
For departments whose curriculum requires a course in the theory of probability, the first four chapters may be rapidly recapitulated, in which case the one-semester course may also include Chapters 6 and 7, as well as Sections 11.4 and 11.5. The material in Chapters 8 through 11 is open to both the analytically minded senior and the graduate student and may form an advanced course on random vibration and buckling. The additional prerequisite for this part is knowledge of matrix theory and the basics of vibration and buckling of structures, although the necessary material is reviewed at the beginning of each chapter. It is my agreeable duty to thank the Department of Aerospace Engineering of Delft University of Technology for their invitation to present a series of lectures (from which this text grew) to their students and scientific staff during my sabbatical leave in the academic year 1979-1980—an experience of endless Dutch courtesy and good will. My sincere thanks are due to the Dean, Prof. Ir. Jaap A. van Ghesel Grothe, and to Professor of Aircraft Structures Dr. Johann Arbocz for their constant encouragement and help. Appreciation is expressed to the staff members and the students of Delft, and especially to Ir. Johannes van Geer, Ir. Willie Koppens, and Ir. Kees Venselaar for their able assistance in a number of calculations and constructive suggestions concerning the lecture notes. I acknowledge the help of Ir. J. K. Vrijling of Delft in writing Sec. 4.18. I also thank the Department of Aeronautical Engineering, Technion-Israel Institute of Technology, in whose encouraging atmosphere I was able to bring this work to completion. I am also most indebted to Eliezer Goldberg of Technion, for his kind help in editing the text, to Marijke Schillemans and Dvora Zirkin for typing much of the original manuscript, to Alice Aronson and Bernice Hirsch for typing Chapter 9, and to Willem Spee and Irith Nizan for preparing the drawings.
Technion City, Haifa
Isaac Elishakoff
January 1983

Contents

1. Introduction 1
2. Probability Axioms 8
2.1. Random Event, 8
2.2. Sample Space, 12
2.3. Probability Axioms, 17
2.4. Equiprobable Events, 21
2.5. Probability and Relative Frequency, 23
2.6. Conditional Probability, 25
2.7. Independent Events, 28
2.8. Reliability of Statically Determinate Truss, 31
2.9. Overall Probability and Bayes' Formula, 34
Problems, 36
3. Single Random Variable 39
3.1. Random Variable, 39
3.2. Distribution Function, 40
3.3. Properties of the Distribution Function, 42
3.4. Mathematical Expectation, 49
3.5. Moments of Random Variable; Variance, 52
3.6. Characteristic Function, 58
3.7. Conditional Probability Distribution and Density Functions, 61
3.8. Inequalities of Bienaymé and Tchebycheff, 64
Problems, 65
4. Examples of Probability Distribution and Density Functions. Functions of a Single Random Variable 68
4.1. Causal Distribution, 68
4.2. Discrete Uniform Distribution, 69
4.3. Binomial or Bernoulli Distribution, 70
4.4. Poisson Distribution, 72
4.5. Rayleigh Distribution, 74
4.6. Exponential Distribution, 75
4.7. χ² (Chi-Square) Distribution with m Degrees of Freedom, 76
4.8. Gamma Distribution, 76
4.9. Weibull Distribution, 77
4.10. Normal or Gaussian Distribution, 78
4.11. Truncated Normal Distribution, 83
4.12. Function of a Random Variable, 84
4.13. Moments of a Function of a Random Variable, 85
4.14. Distribution and Density Functions of a Function of a Random Variable (Special Case), 86
4.15. Linear Function of a Random Variable, 87
4.16. Exponents and Logarithms of a Random Variable, 88
4.17. Distribution and Density Functions of a Function of a Random Variable (General Case), 90
4.18. Example of Application of the Probabilistic Approach in an Engineering Decision Problem, 97
Problems, 101
5. Reliability of Structures Described by a Single Random Variable 104
5.1. A Bar under Random Force, 104
5.2. A Bar with Random Strength, 113
5.3. A Bar with a Random Cross-Sectional Area, 114
5.4. A Beam under a Random Distributed Force, 115
5.5. Static Imperfection-Sensitivity of a Nonlinear Model Structure, 120
5.6. Dynamic Imperfection-Sensitivity of a Nonlinear Model Structure, 133
5.7. Axial Impact of a Bar with Random Initial Imperfections, 145
Problems, 160
6. Two or More Random Variables 174
6.1. Joint Distribution Function of Two Random Variables, 174
6.2. Joint Density Function of Two Random Variables, 179
6.3. Conditional Probability Distribution and Density Functions, 183
6.4. Multidimensional Random Vector, 186
6.5. Functions of Random Variables, 187
6.6. Expected Values, Moments, Covariance, 198
6.7. Approximate Evaluation of Moments of Functions, 209
6.8. Joint Characteristic Function, 211
6.9. Pair of Jointly Normal Random Variables, 215
6.10. Several Jointly Normal Random Variables, 220
6.11. Functions of Random Variables, 224
6.12. Complex Random Variables, 231
Problems, 233
7. Reliability of Structures Described by Several Random Variables 236
7.1. Fundamental Case, 236
7.2. Bending of Beams under Several Random Concentrated Forces, 251
7.3. Bending of Beams under Several Random Concentrated Moments, 259
7.4. The Central Limit Theorem and Reliability Estimate, 262
Problems, 266
8. Elements of the Theory of Random Functions 271
8.1. Definition of a Random Function, 271
8.2. First- and Second-order Distribution Functions, 273
8.3. Moment Functions, 275
8.4. Properties of the Autocovariance Function, 276
8.5. Probability Density Function, 277
8.6. Normal Random Function, 278
8.7. Joint Distribution of Random Functions, 279
8.8. Complex Random Functions, 281
8.9. Stationary Random Functions, 284
8.10. Spectral Density of a Stationary Random Function, 288
8.11. Differentiation of a Random Function, 297
8.12. Integration of a Random Function, 304
8.13. Ergodicity of Random Functions, 306
Problems, 314
9. Random Vibration of Discrete Systems 317
9.1. Response of a Linear System Subjected to Deterministic Excitation, 317
9.2. Response of a Linear System Subjected to Random Excitation, 324
9.3. Random Vibration of a Multidegree-of-Freedom System, 349
9.4. Illustration of the Role of Modal Cross Correlations, 361
Problems, 378
10. Random Vibration of Continuous Structures 384
10.1. Random Fields, 384
10.2. Normal Mode Method, 389
10.3. Determination of Joint and Cross Acceptances, 402
10.4. Case Capable of Closed-Form Solution, 406
10.5. Crandall's Problem, 408
10.6. Random Vibration Due to Boundary-Layer Turbulence, 416
10.7. Analytic Approximations for Pressure Fluctuations in a Turbulent Boundary Layer, 419
10.8. Flutter and Random Vibration of Beams—Approximate Solution, 421
Problems, 429
11. Monte Carlo Method 433
11.1. Description of the Method, 433
11.2. Generation of Random Numbers, 436
11.3. Simulation of Continuous Random Variables, 438
11.4. Simulation of Random Vectors, 441
11.5. Method of Linear Transformation, 442
11.6. Simulation of Random Functions, 446
11.7. Buckling of a Bar on a Nonlinear Foundation, 450
Problems, 466
Appendix A. Evaluation of Integrals (4.15) and (4.22) 469
Appendix B. Table of the Error Function 471
Appendix C. Calculation of the Mean Square Response of a Class of Linear Systems 472
Appendix D. Some Autocorrelation Functions and Associated Spectral Densities 477
Appendix E. Galerkin Method 477
Additional References (1984-1998) 483
Index 486

chapter 1
Introduction

For an adequate description of structural behavior, probabilistic methods must be resorted to. Properly speaking, an element of probability is embodied even in the deterministic approach, which claims to "simplify" the structure by eliminating all aspects of uncertainty.
Under the deterministic approach, external loading and the properties of the structure are represented as though they were fully determined, and available (often highly sophisticated) tools yield, with sufficient accuracy, the strains and stresses in systems with complex configurations. At the same time, these stresses are compared with allowable ones obtained by dividing their ultimate levels by a "safety factor," so as to yield a level below that of failure, a practice that recognizes the uncertain, and random, features of the stress distribution in the material. This is how a probabilistic consideration is admitted "via the back door"; indeed, the safety factor has often been referred to as the "ignorance factor." The quality of "randomness" is characteristic both of loads borne by structures and of the properties of the structures themselves. No two structures, even if they have been produced by the same manufacturing process, have identical properties. Thin-walled structures are often sensitive to imperfections—deviations from their prescribed geometry—in the sense that the buckling load of an imperfect structure may be lower than that of its ideal counterpart by several tens of percent. The shape and magnitude of these initial imperfections vary widely from case to case, since differences are inherent in any manufacturing process, which is itself subject (by its very nature) to a large number of random influences. These and other examples clearly indicate that it is impossible to investigate structural behavior without resorting to probabilistic methods. The need for a probabilistic approach does not obviate the classical treatment of the behavior of an ideal structure with given properties, subjected to given loading. In fact, the solution to a deterministic problem may very often prove useful in a probabilistic setting.
For example, assume that the properties of a structure are fully determined, while the external forces or moments are random. We begin by constructing explicit equations of motion (or equilibrium) in terms of these forces and moments, which are then used as input in determining the probabilistic characteristics of the response (output). Where the exact relationship between input and output is unavailable, or its application proves too cumbersome, statistical simulation (such as with the Monte Carlo method) is the logical remedy, now that high-speed digital computers are so readily available. The first step of this method consists in simulating the random variable; the second step is numerical solution of the problem for each realization of the random variable; the third and last is statistical analysis (computation of the characteristics of the output by averaging over the ensemble). Thus, one of the cornerstones of the Monte Carlo method is the solution of a deterministic problem. The deterministic and probabilistic approaches to design differ in principle. Deterministic design is based on total "discounting" of the contingency of failure. The designer is trained in the doctrine that with the relevant quantities properly chosen, admissible levels would never be exceeded; it is postulated that, as it were, the structure is immune to failure and will survive indefinitely. This approach dates back to antiquity, when design analysis and control were unknown and everything centered on the personal responsibility of the artisan. Its earliest written record is probably Hammurabi's Code, according to which, if a house collapses and the householder is killed, the builder is liable to the death penalty. Deterministic design has now reached a very high level of sophistication, and modern computation techniques make it possible to determine stresses, strains, and displacements in highly complex structures.
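The three Monte Carlo steps mentioned above (simulate the random input, solve the deterministic problem for each realization, average over the ensemble) can be sketched in a few lines. The following is a minimal illustration only, not a procedure from the book: the cantilever deflection formula serves as the deterministic "solver," and all numerical values are assumed for the sake of the example.

```python
import random
import statistics

def monte_carlo(sample_input, solve, n=20_000):
    """Three-step Monte Carlo scheme: (1) simulate the random variable,
    (2) solve the deterministic problem for each realization,
    (3) statistical analysis of the ensemble of outputs."""
    outputs = [solve(sample_input()) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Hypothetical deterministic problem: tip deflection of a cantilever,
# delta = P * L^3 / (3 * E * I), with the tip force P random.
L_, E_, I_ = 2.0, 2.1e11, 1.0e-6            # m, Pa, m^4 (assumed values)
solve = lambda P: P * L_**3 / (3.0 * E_ * I_)

random.seed(0)
sample_P = lambda: random.gauss(1000.0, 100.0)  # P ~ N(1000, 100^2) newtons

mean_d, std_d = monte_carlo(sample_P, solve)
# Because delta is linear in P, the exact moments are available as a check:
# E[delta] = 1000 * L^3 / (3 E I) ~ 0.0127 m, sigma_delta ~ 0.00127 m.
```

Here the input-output relationship is simple enough to be treated exactly; the value of the method lies in the fact that the same three steps apply unchanged when `solve` is a large numerical code with no closed-form input-output relation.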
However, problems of structural design always involve an element of uncertainty, unpredictability, or randomness: No matter how much is known about the phenomenon, the behavior of a structure is incapable of precise prediction. In these circumstances there always exists some likelihood of failure, that is, of an unfavorable state of the structure setting in. Even with safety factors—empirical reserve margins—failures did and still do occur. There can in principle be no "never-fail" structure; it is a question only of a higher or lower probability of failure. Accordingly, probabilistic design is concerned with the probability of failure or, preferably, of nonfailure performance, the probability that the structure will realize the function assigned to it—in other words, with reliability. The McGraw-Hill Dictionary of Scientific and Technical Terms gives the following definition of this basic concept: "Reliability—the probability that a component part, equipment, or system will satisfactorily perform its intended function under given circumstances, such as environmental conditions, limitations as to operating time, and frequency and thoroughness of maintenance, for a specified period of time." The reliability approach was initiated by Maier and Khozialov and carried on by Freudenthal, Johnson, Pugsley, Rzhanitsyn, Shinozuka, Streletskii, Tye, and Weibull. The contributions of Ang and Tang, Augusti, Baratta and Casciati, Benjamin and Cornell, Bolotin, Ferry Borges and Castanheta, Ditlevsen, Haugen, Kogan, Lind, Moses, Murzewski, Rackwitz, Rosenblueth, Schuëller, and Veneziano should also be mentioned.

Fig. 1.1. Section of Hammurabi's stela at high magnification (photoassembly by courtesy of J. Kogan)
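The idea that a safety factor leaves a finite probability of failure can be made concrete with a short computation. This sketch is not from the book, and all the numbers in it are assumptions: if the strength of a member and the load-induced stress are independent normal random variables, the safety margin M (strength minus load) is itself normal, and the reliability is R = Phi(mu_M / sigma_M), where Phi is the standard normal distribution function.

```python
import math

def reliability(mu_str, sd_str, mu_load, sd_load):
    """R = P(strength > load) for independent normal strength and load.
    The margin M = strength - load is normal with mean mu_str - mu_load
    and standard deviation sqrt(sd_str**2 + sd_load**2), so
    R = Phi(mu_M / sd_M), evaluated here via the error function."""
    mu_m = mu_str - mu_load
    sd_m = math.hypot(sd_str, sd_load)
    return 0.5 * (1.0 + math.erf(mu_m / (sd_m * math.sqrt(2.0))))

# Assumed numbers: strength ~ N(300, 30^2) MPa, stress ~ N(200, 20^2) MPa.
# The central safety factor is 300/200 = 1.5, yet the failure
# probability 1 - R remains finite rather than zero.
R = reliability(300.0, 30.0, 200.0, 20.0)   # about 0.9972
```

The same safety factor of 1.5 would give quite a different failure probability if the scatter (the standard deviations) were larger, which is precisely why the deterministic factor alone does not characterize safety.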
The development of high-power rocket jet engines and supersonic transport since the 1950s has brought out new problems of mechanical and structural vibration, namely the response of panel-like structures to aerodynamic noise and to a turbulent boundary layer, with the attendant aspects of acoustic fatigue and interior noise, all of which are incapable of deterministic solution. The probabilistic methods for these and other problems are embodied in a new discipline called "Random Vibration," dealt with by numerous research centers and their offshoots, which have come into being throughout the world in the last 20 years. Of these, the teams of Caughey (Caltech), Crandall (M.I.T.), Lin (University of Illinois, Urbana-Champaign), and Shinozuka (Columbia University) in the United States; of Bolotin (Moscow Energetics Institute) and Pal'mov (Leningrad Polytechnic) in the Soviet Union; of Clarkson (Southampton) and Robson (Glasgow) in the United Kingdom; of Ariaratnam (Waterloo) in Canada; and of Parkus (Vienna) in Austria are perhaps the most well known. The probabilistic approach proved extremely useful in analysis of flexible buildings subjected to earthquakes (Cornell; Newmark and Rosenblueth; Vanmarcke) or wind (Cermak); offshore structures subjected to random wave loading (BOSS conference); ships in rough seas (Ekimov; Price and Bishop); structures undergoing fatigue failure (Bogdanoff, Freudenthal, Gumbel, Payne, Weibull); structures subjected to environmental temperatures (Heller); structurally inhomogeneous media (Beran, Kröner, Lomakin, Shermergor, Volkov); stability of stochastic systems (Khas'minskii, Kozin, Kushner); probabilistic identification of structures (Hart, Ibrahim, Masri and Caughey); and other fascinating problems.

GENERAL REFERENCES*

Ang, A. H.-S., and Tang, W. H., Probability Concepts in Engineering Planning and Design, Vol. 1, Basic Principles, John Wiley & Sons, New York, 1975.
Ariaratnam, S.
T., "Dynamic Stability of Column under Random Loading," in G. Herrmann, Ed., Dynamic Stability of Structures, Pergamon, New York, 1967, pp. 255-266.
———, "Stability of Mechanical Systems under Stochastic Parametric Excitations," in R. F. Curtain, Ed., Lecture Notes in Mathematics, No. 294, Springer-Verlag, Berlin, 1972, pp. 291-302.
Augusti, G., Baratta, A., and Casciati, F., Probabilistic Methods in Structural Engineering, Chapman and Hall, London, in press.
Benjamin, J. R., and Cornell, C. A., Probability, Statistics and Decision for Civil Engineers, McGraw-Hill, New York, 1970.
Beran, M. J., Statistical Continuum Theories (Monographs in Statistical Physics and Thermodynamics, Vol. 9), Wiley-Interscience, New York, 1968.
Bogdanoff, J. L., "A New Cumulative Damage Model," Part 1, ASME J. Appl. Mech., 100, 246-250 (1978).
Bolotin, V. V., Statistical Methods in Structural Mechanics, State Publ. House for Building, Architecture and Building Materials, Moscow, 1961 (2nd ed., 1965; translated into English by S. Aroni, Holden-Day, San Francisco, 1969).
———, Application of the Methods of the Theory of Probability and the Theory of Reliability to Analysis of Structures, State Publ. House for Buildings, Moscow, 1971 (English translation FTD-MT-24-771-73, Foreign Technol. Div., Wright-Patterson AFB, Ohio, 1974).
———, Random Vibrations of Elastic Bodies, "Nauka" Publ. House, Moscow, 1979.
———, "Reliability of Structures," in J. F. Besseling and A. M. A. Van der Heijden, Eds., Trends in Solid Mechanics (Proc. Symp. Dedicated to the 65th Birthday of W. T. Koiter), Delft Univ. Press, Sijthoff and Noordhoff Intern. Publ., 1979, pp. 79-91.
BOSS 1976, Proceedings of an International Conference on the Behavior of Offshore Structures, Norwegian Inst. Technol., Trondheim, 1976.

*Many highly interesting studies simply could not be mentioned here, since the subject is much too vast.
Since a complete bibliography on probabilistic methods in mechanics could fill a hefty volume by itself, I have confined myself mostly to books and reviews, so as to give some idea of what has been done. A list of cited references and of recommended further reading is given at the end of each chapter.

Caughey, T. K., "Response of a Nonlinear String to Random Loading," ASME J. Appl. Mech., 26, 341-349 (1959).
____, "Derivation and Application of the Fokker-Planck Equation to Discrete Nonlinear Dynamic Systems Subjected to White Random Excitation," J. Acoust. Soc. Am., 35 (11), 1683-1692 (1963).
____, "Equivalent Linearization Techniques," ibid., 35 (11), 1706-1711 (1963).
____, "Nonlinear Theory of Random Vibrations," Advan. Appl. Mech., 11, 209-253 (1971).
Cermak, J. E., "Applications of Fluid Mechanics to Wind Engineering—A Freeman Scholar Lecture," J. Fluids Eng., 97, 9-38 (1975).
Clarkson, B. L., "Stresses in Skin Panels Subjected to Random Acoustic Loading," J. Roy. Aeronaut. Soc., 72, 1000-1010 (1968).
____, and Ford, R. D., "The Response of a Typical Aircraft Structure to Jet Noise," ibid., 66, 31-40 (1962).
____, and Mead, D. J., "High Frequency Vibration of Aircraft Structures," J. Sound Vibration, 28 (3), 487-504 (1973).
____, Ed., Stochastic Problems in Dynamics, Pitman, London, 1977.
Cornell, C. A., "Probabilistic Analysis of Damage to Structures under Seismic Loads," in D. A. Howells et al., Eds., Dynamic Waves in Civil Engineering, John Wiley & Sons, London, 1971, Chap. 27.
Crandall, S. H., Ed., Random Vibration, Vol. 1, Technology Press, Cambridge, MA, 1958; Vol. 2, M.I.T. Press, Cambridge, MA, 1963.
____, and Mark, W. D., Random Vibration in Mechanical Systems, Academic Press, New York, 1963.
____, Wide-band Random Vibration of Structures (Proc. Seventh U.S. Nat. Congr. Appl. Mech.), ASME, New York, 1974, pp. 131-138.
____, Random Vibration of Vehicles and Structures (Proc. Seventh Can. Congr. Appl. Mech.), Sherbrooke, Ont., 1979, pp. 1-12.
Ditlevsen, O., Uncertainty Modeling, McGraw-Hill, New York, 1981.
Ekimov, V. V., Probabilistic Methods in the Structural Mechanics of Ships, "Sudostroenie" Publ. House, Leningrad, 1966.
Ferry Borges, J., and Castanheta, M., Structural Safety, 2nd ed., National Civil Eng. Lab., Lisbon, Portugal, 1971.
Freudenthal, A. M., "Safety of Structures," Trans. ASCE, 112, 125-180 (1947).
____, "Safety and Probability of Structural Failure," Trans. ASCE, 121, 1337-1375 (1956), Proc. Paper 2843.
____, and Gumbel, E. J., "Physical and Statistical Aspects of Fatigue," Advan. Appl. Mech., 4, 117-158 (1956).
____, "Statistical Approach to Brittle Fracture," in H. Liebowitz, Ed., Fracture—An Advanced Treatise, Vol. 2, Academic Press, New York, 1968, pp. 592-621.
Gumbel, E. J., Statistics of Extremes, Columbia Univ. Press, New York, 1958.
Hart, G. C., Ed., Dynamic Response of Structures: Instrumentation, Testing Methods and System Identification, ASCE/EMD Specialty Conference, UCLA, 1976.
Haugen, E. B., Probabilistic Approaches to Design, John Wiley & Sons, London, 1968.
____, Probabilistic Mechanical Design, John Wiley & Sons, New York, 1980.
Heller, R. A., "Temperature Response of an Initially Thick Slab to Random Surface Temperatures," Mechanics Research Communications, 3, No. 5, 379-385 (1976).
____, "Thermal Stresses as a Narrow-Band Random Load," J. Eng. Mech. Div., ASCE, EM5, No. 12450, 787-805 (1976).
Holand, I., Kavlie, D., Moe, G., and Sigbjörnsson, R., Eds., Safety of Structures under Dynamic Loading, papers presented at Intern. Res. Seminar, June 1977, Norweg. Inst. Technol., Tapir Press, Trondheim, 1978.
Ibrahim, S. R., "Random Decrement Technique for Modal Identification of Structures," J. Spacecraft and Rockets, 14, 696-700 (1977).
____, "Modal Confidence Factor in Vibration Testing," ibid., 15, 313-316 (1978).
Johnson, A. I., Strength, Safety and Economical Dimensions of Structures, National Swedish Institute for Building Research, Document D7, 1971 (1st ed., 1953).
Khas'minskii, R.
Z., Stability of Systems of Differential Equations with Random Parametric Excitation, "Nauka" Publ. House, Moscow, 1969.
Khozialov, N. F., "Safety Factors," Building Ind., 10, 840-844 (1929).
Kogan, J., Crane Design, John Wiley & Sons, Jerusalem, 1976.
Kozin, F., "A Survey of Stability of Stochastic Systems," Automatica, 5, 95-112 (1969).
____, "Stability of Linear Stochastic Systems," in R. F. Curtain, Ed., Lecture Notes in Mathematics, No. 294, Springer-Verlag, New York, 1972, pp. 186-229.
Kröner, E., Statistical Continuum Mechanics, Intern. Centre for Mechanical Sciences, Udine, Italy, Course No. 92, Springer-Verlag, Vienna, 1973.
Kushner, H., Stochastic Stability and Control, Academic Press, New York, 1967.
Lin, Y. K., Probabilistic Theory of Structural Dynamics, McGraw-Hill, New York, 1967.
____, "Response of Linear and Nonlinear Continuous Structures Subject to Random Excitation and the Problem of High-level Excursions," in A. M. Freudenthal, Ed., International Conference in Structural Safety and Reliability, Pergamon Press, 1972, pp. 117-130.
____, "Random Vibrations of Periodic and Almost Periodic Structures," Mechanics Today, 3, 125 (1976).
____, "Structural Response under Turbulent Flow Excitations," in H. Parkus, Ed., Random Excitation of Structures by Earthquakes and Atmospheric Turbulence, Springer-Verlag, Vienna, 1977, pp. 238-307.
Lind, N., Ed., "Structural Reliability and Codified Design," SM Study No. 2, Solid Mech. Div., Univ. Waterloo Press, 1970.
____, "Mechanics, Reliability and Society," Proc. Seventh Can. Congr. Appl. Mech., Sherbrooke, 1979, pp. 13-23.
Lomakin, V. A., Statistical Problems of the Mechanics of Solid Deformable Bodies, "Nauka" Publ. House, Moscow, 1970.
Maier, M., Die Sicherheit der Bauwerke und ihre Berechnung nach Grenzkräften anstatt nach zulässigen Spannungen, Springer-Verlag, Berlin, 1926.
Masri, S. F., and Caughey, T. K., "A Nonparametric Identification Technique for Nonlinear Dynamic Problems," ASME J. Appl. Mech., 46, 433-447 (1979).
Moses, F., "Design for Reliability Concepts and Applications," in R. H. Gallagher and O. C. Zienkiewicz, Eds., Optimum Structural Design, John Wiley & Sons, New York, 1973, pp. 241-265.
Murzewski, J., Bezpieczenstwo Konstrukcji Budowlanych, "Arkady" Publ. House, Warsaw, 1970.
Newmark, N. M., and Rosenblueth, E., Fundamentals of Earthquake Engineering, Prentice-Hall, Englewood Cliffs, NJ, 1971.
Pal'mov, V. A., "Thin Shells Acted by Broadband Random Loads," PMM J. Appl. Math. Mech., 29, 905-913 (1965).
____, Vibrations of Elasto-Plastic Bodies, "Nauka" Publ. House, Moscow, 1978.
Parkus, H., Random Processes in Mechanical Sciences, Intern. Centre Mech. Sci., Udine, Italy, Course No. 9, Springer-Verlag, Vienna, 1969.
____, Ed., Random Excitation of Structures by Earthquakes and Atmospheric Turbulence, Intern. Centre Mech. Sci., Udine, Italy, Course No. 225, Springer-Verlag, Vienna, 1977.
Payne, A. O., "The Fatigue of Aircraft Structures," in H. Liebowitz, Ed., Progress in Fatigue and Fracture, Pergamon Press, Oxford, 1976, pp. 157-203.
Pugsley, A., "A Philosophy of Airplane Strength Factors," Brit. A.R.C. R & M, 1906, 1942.
____, The Safety of Structures, Edward Arnold Publ., London, 1966.
Price, W. G., and Bishop, R. E. D., Probabilistic Theory of Ship Dynamics, Chapman and Hall, London, 1974.
Rackwitz, R., "Non-Linear Combination for Extreme Loadings," Technical University of Munich, Rept. No. 29, 1978.
Robson, J. D., An Introduction to Random Vibration, Edinburgh at the University Press, 1963.
____, Dodds, C. J., Macvean, D. B., and Paling, V. R., Random Vibrations, Intern. Centre Mech. Sci., Udine, Italy, Course No. 115, Springer-Verlag, Vienna, 1971.
Rosenblueth, E., "The Role of Research and Education in Risk Control of Structures," in T. Moan and M. Shinozuka, Eds., Structural Safety and Reliability, Elsevier, Amsterdam, 1981, pp. 1-18.
Rzhanitsyn, A. R., Calculation of Structures with Materials Plasticity Properties Taken into Account, State Publ.
House for Buildings, Moscow, 1954.
____, Theory of Reliability Analysis in Building Constructions, State Publ. House for Buildings, Moscow, 1978.
Shermergor, T. D., Theory of Elasticity of Micro-Inhomogeneous Media, "Nauka" Publ. House, Moscow, 1977.
Shinozuka, M., "Safety, Safety Factors and Reliability of Mechanical Systems," Proc. 1st Symp. Eng. Appl. Random Function Theory Probability, Purdue Univ., Lafayette, IN, 1962, J. L. Bogdanoff and F. Kozin, Eds., John Wiley & Sons, New York, 1963, pp. 130-162.
____, "Methods of Safety and Reliability Analysis," in A. M. Freudenthal, Ed., International Conference in Structural Safety and Reliability, Pergamon Press, New York, 1972, pp. 11-45.
____, "Time and Space Domain Analysis on Structural Reliability Assessment," in H. Kupfer, M. Shinozuka, and G. I. Schuëller, Eds., Second Intern. Conf. Structural Safety and Reliability, Werner-Verlag, Düsseldorf, 1977, pp. 9-28.
____, "Application of Digital Simulation of Gaussian Random Processes," in H. Parkus, Ed., Random Excitation of Structures by Earthquakes and Atmospheric Turbulence, Springer-Verlag, Vienna, 1977, pp. 201-237.
Schuëller, G. I., Einführung in die Sicherheit und Zuverlässigkeit von Tragwerken, W. Ernst und Sohn, Berlin, in press.
Streletskii, N. S., Foundations of Statistical Account of Factor of Safety of Structural Strength, State Publ. House for Buildings, Moscow, 1947.
Tye, W., "Factors of Safety—or of Habit?," J. Roy. Aeronaut. Soc., 48, 487-494 (1944).
____, "Basic Safety Concepts," Aeronaut. J., 81, 271-275 (1977).
Vanmarcke, E. H., "Structural Response to Earthquakes," in C. Lomnitz and E. Rosenblueth, Eds., Seismic Risk and Engineering Decisions, Elsevier, Amsterdam, 1976, Chap. 8, pp. 287-337.
____, "Seismic Safety Assessment," in H. Parkus, Ed., Random Excitation of Structures by Earthquakes and Atmospheric Turbulence, Intern. Centre Mech. Sci., Udine, Italy, Course No. 225, Springer-Verlag, Vienna, 1977.
Veneziano, D., "Synopsis of Some Recent Research on Reliability Formats," in I. Holand, Ed., Safety of Structures under Dynamic Loading, Norwegian Institute of Technology (Trondheim) Press, 1978, pp. 595-605.
Volkov, S. D., Statistical Strength Theory (translated from Russian), Gordon and Breach, New York, 1962.
Weibull, W., "A Statistical Theory of the Strength of Materials," Proc. Roy. Swedish Inst. Eng. Res., Stockholm, 151, 1-45 (1939).
____, "Scatter of Fatigue Life and Fatigue Strength of Aircraft Structural Materials and Parts," Proc. Intern. Conf. Fatigue Aircraft Structures, Columbia Univ., New York, 1956.

chapter 2

Probability Axioms

2.1 RANDOM EVENT

We will associate mechanical phenomena with a complex of conditions under which they may proceed, assuming that this complex is realizable (or rather reproducible, at least conceptually) an arbitrarily large number of times in essentially identical circumstances, with an observation or a measurement taken at each such realization. Such a process of observation or measurement will be referred to as a trial or an experiment. In this sense an experiment may consist in checking whether stresses in a structure exceed some specified value, or in determining the profile of imperfections of its surface, or else (in modern supersonic aircraft) in determining the noise level. We define an event as an outcome, or a collection of outcomes, of a given experiment (a positive or a negative conclusion; readings of the scanning mechanism; the final result of a highly complex calculation). The outcome of a deterministic phenomenon is totally predictable and is, or can be, known in advance: deterministic phenomena are either certain or impossible, depending on whether, inevitably, they do or do not occur in the course of the given experiment.
For example, consider a perfectly elastic beam with symmetric uniform cross section (section modulus S), subject to given constraints under a given transverse load resulting in a maximal bending moment M_max ("complex of conditions," Fig. 2.1a and 2.1b). The maximum bending stress, according to the theory of strength of materials, is then given by

σ_max = M_max / S    (2.1)

Another example is a perfectly cylindrical shell made of perfectly elastic material, with radius R, length l, thickness h, Young's modulus E, and Poisson's ratio ν, under uniform axial compression with ends simply supported ("complex of conditions," Fig. 2.1c). For shells that are not too short, the buckling stress is given by

σ_cl = Eh / [R√(3(1 − ν²))]    (2.2)

Fig. 2.1. (a) Elastic beam simply supported at its ends. (b) Bending moment diagram. (c) Elastic cylindrical shell under uniform axial compression.

If the conditions specified in both examples are realized, the maximal stress in the beam and the buckling stress in the shell will be determined by Eqs. (2.1) and (2.2), respectively. The statement of impossibility of some event under a given complex of conditions reduces readily to one of certainty of the opposite event. An event that is neither certain nor impossible is referred to as random, signifying that it may or may not occur under given essentially identical conditions; in other words, the outcome of the experiment is not known in advance, before it has taken place. Consider as an example, in more detail, a cylindrical shell manufactured by electroplating from pure copper, and tested on a controlled end-displacement-type compression testing machine ("complex of conditions," Fig. 2.2) of which

Fig. 2.2. Compression testing machine for buckling tests on thin-walled structures. (Designed by W. D. Verduyn, Delft University of Technology.)
Fig. 2.3. (a) Testing machine and data acquisition equipment. (b) Cylindrical shell testing configuration. (Courtesy of J. Arbocz.)

a suitable stock may be visualized as available. Due to the very nature of the manufacturing process, each realization of the shell will have a different initial shape that cannot be predicted in advance. The imperfections (deviations of the initial shape from the ideal circular cylinder), amounting to a fraction of the wall thickness, can be picked up and recorded by the special experimental setup (Fig. 2.3) developed by Arbocz at the California Institute of Technology. The scanning device, moving in both the axial and circumferential directions, yields a complete surface map of the shell. As illustrated by the examples in Figs. 2.4 and 2.5, two shells produced by the same manufacturing process have totally different imperfection profiles, and it is intuitively obvious that even when tested on the same machine, they would generally have different buckling loads. These differ considerably from the classical buckling load of the perfect cylindrical shell as per Eq. (2.2): 0.736σ_cl for shell A9, and 0.673σ_cl for shell A12.*

2.2 SAMPLE SPACE

Although not absolutely essential, some mathematical preliminaries are given below. The axiomatic foundation of the theory of probability was laid by Kolmogorov, according to whom the primary notion is not the random event, but rather the sample space. When an experiment or phenomenon gives rise to one of a totality of mutually exclusive events, we denote each such outcome by the Greek letter ω and refer to it as an elementary event, an elementary outcome, or, finally, a sample point. The totality of all possible sample points is denoted by Ω and referred to as the sample space, of which the sample points are the elements.
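As a numerical aside, the closed-form stresses (2.1) and (2.2) of the preceding section are easily evaluated. The sketch below is illustrative only; the dimensions and material constants are assumed values, not data from the text.

```python
import math

def max_bending_stress(m_max, s):
    # Eq. (2.1): sigma_max = M_max / S, with S the section modulus
    return m_max / s

def classical_buckling_stress(e, h, r, nu):
    # Eq. (2.2): sigma_cl = E h / (R sqrt(3 (1 - nu^2)))
    return e * h / (r * math.sqrt(3.0 * (1.0 - nu ** 2)))

# Assumed, illustrative values for a thin copper shell (SI units)
E, h, R, nu = 1.1e11, 0.2e-3, 0.1, 0.3
sigma_cl = classical_buckling_stress(E, h, R, nu)

# Imperfect shells buckle below the classical value, e.g. at 0.736 sigma_cl
print(sigma_cl, 0.736 * sigma_cl)
```

For ν = 0.3 the factor 1/√(3(1 − ν²)) is about 0.605, recovering the familiar estimate σ_cl ≈ 0.605 Eh/R.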
A sample point is indivisible, in that it embodies no distinguishable outcomes. Sample spaces are usually classified according to the number of elements they contain. If such a space contains a countable number of elements, that is, a finite number or a denumerable infinity, so that its elements can be put in one-to-one correspondence with the positive integers, it is referred to as discrete; otherwise, it is said to be continuous. We will say that an event A is associated with the experiment if, for each elementary outcome ω, we know precisely whether A does or does not take place. Denote by the same letter A the totality (or set) of all ω's as a result of which A takes place. Obviously, A takes place when and only when one of these ω's does; in other words, instead of speaking of A, we may speak of the occurrence of an elementary outcome ω belonging to A. Events are thus simply subsets of the sample space Ω.

*This effect of small imperfections considerably reducing the buckling load of a cylindrical shell was pointed out by Koiter in his pioneering doctoral thesis, an achievement, in terms of its impact on the course of the general theory of structural stability, that any scientist, conceited or modest, should dream of.

Fig. 2.4. Realization of the imperfection profile (shell A9). (Courtesy of J. Arbocz.)

A certain or sure event A, which takes place as a result of any outcome of the experiment under consideration, is formally identified with the whole sample space Ω, while an impossible event (denoted by ∅) is treated as an empty set, not containing any of the ω's. If ω is an elementary outcome belonging to [included in] A, we write ω ∈ A; if ω is not an element of A, we write ω ∉ A.
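Since events are simply subsets of Ω, the event algebra developed below (union, intersection, difference, complement) can be mirrored directly with Python sets; the die-based events in this sketch are hypothetical illustrations.

```python
# Hypothetical discrete sample space: the six faces of a die
omega = {1, 2, 3, 4, 5, 6}
A1 = {1, 2, 3}        # "at most three spots"
A2 = {2, 4, 6}        # "even number of spots"

def complement(a, space=omega):
    # The complement of A with respect to omega: omega \ A
    return space - a

union = A1 | A2               # A1 + A2
intersection = A1 & A2        # A1 A2
difference = A1 - A2          # A1 \ A2

# De Morgan's laws: complements swap union and intersection
assert complement(A1 | A2) == complement(A1) & complement(A2)
assert complement(A1 & A2) == complement(A1) | complement(A2)

# A partition of omega: pairwise disjoint events whose sum is omega
parts = [{1, 2}, {3, 4}, {5, 6}]
assert set().union(*parts) == omega
assert all(p.isdisjoint(q) for p in parts for q in parts if p is not q)
print(union, intersection, difference)
```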
Two events A and B are said to be equal, A = B, if and only if [denoted iff] every element of A is also an element of B and vice versa—every element of B is also an element of A; that is, iff A ⊂ B and B ⊂ A. (Read: "A is contained in B" and "B is contained in A.") Events A and B are referred to as mutually exclusive (or disjoint) if they have no sample points in common, that is, if they cannot take place simultaneously. The union or sum of two events A₁ and A₂ is defined as an event A signifying realization of at least one of the events A₁, A₂:

A = A₁ ∪ A₂    (A = A₁ + A₂)

where ∪ is the special symbol of union. That is, A's elements are all the elements of A₁, or A₂, or both. A union of multiple events A₁, A₂, ..., Aₙ is defined in an analogous manner and denoted by A = ∪ᵢ₌₁ⁿ Aᵢ or A = Σᵢ₌₁ⁿ Aᵢ. Sample spaces and events are conveniently represented by Venn diagrams. The sample space Ω is represented by a rectangle, whereas the events are represented by a region (or part of one) within the rectangle. Many relationships involving events can be demonstrated by this means (see Fig. 2.6). The intersection or product of two events A₁ and A₂ is an event A signifying realization of both A₁ and A₂: A = A₁ ∩ A₂ (or A₁A₂), where ∩ is the special symbol of intersection. The product of multiple events A₁, A₂, ..., Aₙ is defined in an analogous manner and denoted by A = ∩ᵢ₌₁ⁿ Aᵢ (or A = Πᵢ₌₁ⁿ Aᵢ). The difference A of events A₁ and A₂ is an event signifying realization of A₁ but not of A₂: A = A₁ \ A₂ (or A = A₁ − A₂). The complement of an event A with respect to the sample space Ω, denoted by Ā or A′, is an event signifying that A does not take place: Ā = Ω \ A or Ā = Ω − A. It is readily shown that

If A = A₁ + A₂, then Ā = Ā₁Ā₂

and

If A = A₁A₂, then Ā = Ā₁ + Ā₂

Fig. 2.6. (a) Ω shaded. (b) A₁ and A₂ are mutually exclusive events. (c) A₁ is contained in A₂. (d) A₂ is contained in A₁. (e) A₁ and A₂ are equal. (f) Sum A₁ + A₂ shaded. (g) Sum A₁ + A₂ shaded. (h) Product A₁A₂ shaded.
(i) Product A₁A₂ of mutually exclusive events is an impossible event. (j) Difference A₁ − A₂ shaded. (k) A₁ is a complement of A₂.

Also,

If A₁ ⊂ A₂, then Ā₁ ⊃ Ā₂

the symbol ⊃ signifying "contains." These are known as De Morgan's laws, signifying that if there is some link between given events, then the link obtained from the original one by transfer to the complementary events, by formally replacing the symbols of union ∪, intersection ∩, and inclusion ⊂ by ∩, ∪, and ⊃, respectively, is likewise valid.

Observe also that

Ω̄ = ∅,  ∅̄ = Ω

and therefore the interlinkage will also be preserved if, in addition to the above, the following formal substitution is resorted to:

Ω → ∅,  ∅ → Ω

A collection of events A₁, A₂, ..., Aₙ is said to partition the sample space Ω iff they are pairwise mutually exclusive and their sum equals the sample space, that is,

AᵢAⱼ = ∅,  i ≠ j  and  Σᵢ₌₁ⁿ Aᵢ = Ω

2.3 PROBABILITY AXIOMS

Consider now the discrete sample space Ω, denoting as before the set of all possible outcomes of a random experiment. We now formulate axioms defining the concept of probability.

Axiom 1 ("Nonnegativity" Axiom). To each event A there can be assigned a nonnegative real number P(A) ≥ 0, called its probability.

Axiom 2 ("Normalization" Axiom). The probability of a certain event equals unity:

P(Ω) = 1

Axiom 3 ("Additivity" Axiom). If A₁, A₂, A₃, ... is a countable sequence of mutually exclusive events of Ω, then

P(A₁ + A₂ + A₃ + ···) = P(A₁) + P(A₂) + P(A₃) + ···

From these axioms, the following conclusions can be drawn immediately:

1. From the obvious equality

Ω = Ω + ∅

and Axiom 3, we conclude

P(Ω) = P(Ω) + P(∅)

so that

P(∅) = 0

that is, the probability of the impossible event is zero.

2.
For any event A,

P(Ā) = 1 − P(A)    (2.3)

To prove this, we note that an event and its complement are mutually exclusive,

AĀ = ∅

and that the sum of an event and its complement represents the sample space,

A + Ā = Ω

Then,

P(A + Ā) = P(Ω)

By Axiom 3 we have

P(A + Ā) = P(A) + P(Ā)

On the other hand, by Axiom 2 we have

P(A + Ā) = P(Ω) = 1

which together yield

P(A) + P(Ā) = 1  or  P(Ā) = 1 − P(A)

3. For any pair of events A₁ and A₂ in a sample space Ω,

P(A₁ − A₂) = P(A₁) − P(A₁A₂)
P(A₂ − A₁) = P(A₂) − P(A₁A₂)

For proof, we note that each of the events A₁ and A₂ can be represented as

A₁ = (A₁ − A₂) + (A₁A₂)
A₂ = (A₂ − A₁) + (A₁A₂)

where the events A₁ − A₂, A₁A₂, and A₂ − A₁ are mutually exclusive. Then by Axiom 3 we have

P(A₁) = P(A₁ − A₂) + P(A₁A₂)
P(A₂) = P(A₂ − A₁) + P(A₁A₂)

Furthermore, we conclude that if A₁ ⊂ A₂, then A₁A₂ = A₁ and

P(A₁) = P(A₂) − P(A₂ − A₁) ≤ P(A₂)

that is, if event A₁ is contained in A₂, then

P(A₁) ≤ P(A₂)

4. The sum A₁ + A₂ of events A₁ and A₂ can be represented as the sum of the following mutually exclusive events:

A₁ + A₂ = (A₁ − A₂) + (A₂ − A₁) + (A₁A₂)

and therefore

P(A₁ + A₂) = P(A₁ − A₂) + P(A₂ − A₁) + P(A₁A₂)
= [P(A₁) − P(A₁A₂)] + [P(A₂) − P(A₁A₂)] + P(A₁A₂)
= P(A₁) + P(A₂) − P(A₁A₂)    (2.4)

If A₁ and A₂ are mutually exclusive, that is, if A₁A₂ = ∅, then P(A₁A₂) = 0 and we are back with Axiom 3. Due to the nonnegativity of P(A₁A₂), we also conclude from (2.4) that

P(A₁ + A₂) ≤ P(A₁) + P(A₂)

5. Let A₁, A₂, ..., Aₙ be events in sample space Ω. We seek to calculate the probability of their sum. Applying Eq. (2.4) to the pair of events A₁ + A₂ + ··· + Aₙ₋₁ and Aₙ,

P(A₁ + ··· + Aₙ) = P(A₁ + ··· + Aₙ₋₁) + P(Aₙ) − P[(A₁ + ··· + Aₙ₋₁)Aₙ]    (2.5)

and proceeding by induction on n, we arrive at the inclusion-exclusion formula

P(A₁ + A₂ + ··· + Aₙ) = Σᵢ₌₁ⁿ P(Aᵢ) − ΣΣ(i<j) P(AᵢAⱼ) + ΣΣΣ(i<j<k) P(AᵢAⱼAₖ) − ··· + (−1)ⁿ⁻¹ P(A₁A₂ ··· Aₙ)    (2.6)

If the elementary outcomes E₁, E₂, ..., Eₙ are mutually exclusive and equiprobable, so that P(Eᵢ) = 1/n, and the event A consists of m of them, then

P(A) = P(E₁ + E₂ + ··· + Eₘ) = m(1/n) = m/n

Consequently, if a random experiment can result in n mutually exclusive and equiprobable outcomes, and if in m of these outcomes the event A occurs, then P(A) is given by the fraction

P(A) = m/n    (2.7)

and these m outcomes are "favorable" to A.
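Formula (2.6) can be verified by brute-force enumeration on a small discrete sample space; the events in this sketch are arbitrary, assumed subsets of twelve equiprobable points.

```python
from itertools import combinations

omega = set(range(12))                 # 12 equiprobable sample points
prob = lambda a: len(a) / len(omega)   # Eq. (2.7): P(A) = m/n

events = [{0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6, 7}, {0, 7, 8, 9}]

# Left side: probability of the union, computed directly
lhs = prob(set().union(*events))

# Right side: the inclusion-exclusion sum of Eq. (2.6)
rhs = 0.0
for k in range(1, len(events) + 1):
    sign = (-1) ** (k - 1)
    for combo in combinations(events, k):
        rhs += sign * prob(set.intersection(*combo))

assert abs(lhs - rhs) < 1e-12
print(lhs)
```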
The above cannot be used as such to define probability, since the procedure would be circular. The classical definition of probability (Laplace, 1812) is, however, very similar to (2.7). It states: If a random experiment can result in n mutually exclusive and equally likely outcomes, of which m are favorable to A, then the probability of A equals the ratio m/n of the favorable outcomes to the total number of outcomes n. This definition reduces the notion of probability to that of "equilikelihood." Axiom 1 is satisfied, since the fraction m/n cannot be negative; so is Axiom 2, since n outcomes are favorable to the sample space and P(Ω) = n/n = 1; and so is Axiom 3. Indeed, assume that m₁ elementary events are favorable to an event A₁, and m₂ elementary events to an event A₂. If A₁ and A₂ are mutually exclusive, the events Eᵢ favorable to one of them are different from those favorable to the other. Thus there are m₁ + m₂ events Eᵢ favorable to one of the events A₁ or A₂, that is, favorable to the event A₁ + A₂ = A. Consequently,

P(A) = (m₁ + m₂)/n = m₁/n + m₂/n = P(A₁) + P(A₂)    Q.E.D.

Equation (2.7) for the probability of an event composed of equiprobable events has many useful applications wherever symmetry considerations are involved. The probabilities of a homogeneous, balanced die, properly thrown, turning up any particular face are equal, P(Eᵢ) = 1/6. The probabilities of an "honest" coin, properly tossed, turning up heads or tails are the same, P(Eᵢ) = 1/2, and the probability of any particular card being drawn from a properly shuffled deck is P(Eᵢ) = 1/52. In these cases, 6, 2, and 52 are, respectively, the total numbers of outcomes.

Example 2.3

The number of heads turned up in one toss of a pair of coins equals 2, 1, or 0, and we seek the probability of each event. The three events are not equiprobable, although they partition the sample space. However, in order to invoke the classical definition of probability, the sample space must be represented as the sum of elementary, equiprobable events. We first tabulate the possible pairs:

First Coin    Second Coin
heads         heads
heads         tails
tails         heads
tails         tails

The four outcomes are natural equiprobables, being mutually exclusive and covering the sample space. Considering only the number of heads in each pair, we invoke the classical definition of probability, obtaining:

Probability of two "heads" equals 1/4
Probability of one "heads" equals 2/4 = 1/2
Probability of no "heads" equals 1/4

2.5 PROBABILITY AND RELATIVE FREQUENCY

Consider a sequence of n identical experiments in each of which the occurrence or nonoccurrence of some event A is recorded. A natural characteristic of A appears to be the relative frequency of its occurrence, defined as the ratio of the number of its occurrences to the total number of trials. Denote by P(A) the relative frequency of A. We have

P(A) = n(A)/n    (2.8)

where n(A) is the number of occurrences of event A in n trials. Note that the relative frequency is bounded between zero and unity:

0 ≤ n(A)/n ≤ 1

since the number of times n(A) the event A occurs in n trials is bounded between zero and n. If A₁ and A₂ are mutually exclusive, and if in n experiments A₁ occurred n(A₁) times and A₂ occurred n(A₂) times, then the union A₁ + A₂ occurred n(A₁) + n(A₂) times, and its relative frequency is given by

P(A₁ + A₂) = (1/n)[n(A₁) + n(A₂)]

However, the relative frequencies of A₁ and A₂ are n(A₁)/n and n(A₂)/n, respectively, and the last equation can be rewritten as

P(A₁ + A₂) = P(A₁) + P(A₂)

Past experience has shown remarkable conformity, which imparted a deep significance to the probability notion. It turned out that in different series of experiments the corresponding relative frequencies n(A)/n practically coincide at large values of n and are concentrated in the vicinity of some number.
For example, if a die is made of a homogeneous material and represents a perfect cube (an "honest" die), then the relative frequencies of 1, 2, 3, 4, 5, or 6 turning up oscillate in the vicinity of 1/6. Table 2.1 lists the relative frequencies of a tossed coin turning up tails in 10,000 experiments, grouped in discrete series of 100 and 1000, respectively. It is seen that the relative frequencies n(A)/n in the "1000" series differ surprisingly little from the probability P(A) = 1/2 (the relative frequency in the overall series of 10,000 experiments is 0.4979). This stability of the relative frequency could be interpreted as a manifestation of an objective property of the random event, namely, the existence of a definite degree of its possibility. Formally, Eq. (2.8) should be understood in the following way:

P(A) = lim(n→∞) n(A)/n    (2.9)

Realization of an infinite number of trials is only feasible conceptually, whereas in a physical experiment the number n may be large but always remains finite. Accordingly, definition (2.9) presumes the existence of a limit.

TABLE 2.1

Relative Frequencies in Series of 100 Experiments              | Relative Frequency in Series of 1000 Experiments
0.54 0.46 0.53 0.55 0.46 0.54 0.41 0.48 0.51 0.53 | 0.501
0.48 0.46 0.40 0.55 0.49 0.49 0.48 0.54 0.53 0.45 | 0.485
0.43 0.52 0.58 0.51 0.51 0.50 0.52 0.50 0.53 0.49 | 0.509
0.58 0.60 0.54 0.55 0.50 0.48 0.47 0.57 0.52 0.55 | 0.536
0.48 0.51 0.51 0.49 0.44 0.52 0.50 0.46 0.53 0.41 | 0.485
0.49 0.50 0.45 0.52 0.52 0.48 0.47 0.47 0.47 0.51 | 0.488
0.45 0.47 0.41 0.51 0.49 0.59 0.60 0.55 0.53 0.50 | 0.500
0.53 0.52 0.46 0.52 0.44 0.51 0.48 0.51 0.46 0.54 | 0.497
0.45 0.47 0.46 0.52 0.47 0.48 0.59 0.57 0.45 0.48 | 0.494
0.47 0.41 0.51 0.59 0.51 0.52 0.55 0.39 0.41 0.48 | 0.484

For large n, however, (2.9), or more properly (2.8), may be used as an estimate of probability.
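The stabilization of relative frequencies illustrated in Table 2.1 is easy to reproduce by simulation, and the exact values of Example 2.3 can be recovered by enumeration. A sketch follows; the seed and trial counts are arbitrary choices.

```python
import random
from itertools import product

# Exact probabilities for the number of heads in a toss of two coins
# (Example 2.3): enumerate the four equiprobable outcomes.
outcomes = list(product(["heads", "tails"], repeat=2))
counts = {k: 0 for k in (0, 1, 2)}
for pair in outcomes:
    counts[pair.count("heads")] += 1
exact = {k: c / len(outcomes) for k, c in counts.items()}
assert exact == {0: 0.25, 1: 0.5, 2: 0.25}

# Relative frequency of "tails" in simulated toss series, Eq. (2.8):
# n(A)/n should hover near P(A) = 1/2 for large n, as in Table 2.1.
rng = random.Random(42)
for n in (100, 1000, 10_000):
    n_tails = sum(rng.random() < 0.5 for _ in range(n))
    print(n, n_tails / n)
```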
Although Kolmogorov's axiomatic method is superior, the relative frequency definition (due to von Mises) is suitable for physical applications and is by no means incompatible with Kolmogorov's axiomatics. In these circumstances, results obtained in terms of relative frequency are often generalized to the appropriate probabilities.

2.6 CONDITIONAL PROBABILITY

When analyzing some phenomenon, the observer is often concerned with how the occurrence of an event A is influenced by that of another event B. The simplest modes of interrelation of such a pair of events are (1) occurrence of B necessarily results in that of A, or, on the contrary, (2) occurrence of B eliminates that of A. In the theory of probability, this interrelation is characterized by the conditional probability P(A|B) of event A, it being known that B (whose own probability is positive) actually took place:

P(A|B) = P(AB)/P(B)    (2.10)

We shall illustrate this with the example of an experiment with a finite number of equiprobable outcomes ω. Let n be the total number of outcomes; n(B), the number favorable to B; and n(AB), the number favorable to both A and B. Then,

Fig. 2.7. Conditional probability: probability of event A given that event B has taken place: P(A|B) = P(AB)/P(B).

the conditional probability is

P(A|B) = n(AB)/n(B) = [n(AB)/n]/[n(B)/n] = P(AB)/P(B)    (2.11)

where n(B) is the number of all elementary outcomes ω for which B occurs and n(AB) the number of those also favorable to A. Recalling (2.8), Eq. (2.11) determines the probability of A under the new conditions which arise when B occurs. Conditional probability retains all the features of ordinary probability. Axiom 1 is satisfied in an obvious manner, since for each of the events A and B the nonnegative function P(A|B) is defined according to (2.10). If A equals B, then according to the definition,

P(B|B) = P(BB)/P(B) = P(B)/P(B) = 1

and, therefore,

0 ≤ P(A|B) ≤ 1

If the occurrence of B eliminates that of A, then P(AB) = 0 and therefore P(A|B) = 0.
If occurrence of B necessarily results in that of A (B ⊂ A), then AB = B and P(AB) = P(B), which means P(A|B) = 1. If A is a union of mutually exclusive events A₁, A₂, ..., Aₙ, then the product AB represents a union of the mutually exclusive events A₁B, A₂B, ..., AₙB, and according to Axiom 3

P(AB) = Σᵢ₌₁ⁿ P(AᵢB)

and P(A|B) equals

P(A|B) = P(AB)/P(B) = Σᵢ₌₁ⁿ P(AᵢB)/P(B) = Σᵢ₌₁ⁿ P(Aᵢ|B)

Example 2.4

A pair of ordinary dice is thrown. What is the probability of the sum of spots on the upward-landing faces being 7 (event A), given that this sum is odd (event B)?

The sample space is composed of 36 outcomes:

Ω = {(1,1), (1,2), (1,3), (1,4), (1,5), (1,6),
(2,1), (2,2), (2,3), (2,4), (2,5), (2,6),
(3,1), (3,2), (3,3), (3,4), (3,5), (3,6),
(4,1), (4,2), (4,3), (4,4), (4,5), (4,6),
(5,1), (5,2), (5,3), (5,4), (5,5), (5,6),
(6,1), (6,2), (6,3), (6,4), (6,5), (6,6)}

The number of outcomes favorable to A is 6, and hence the unconditional probability is

P(A) = 6/36 = 1/6

If B has taken place, then one of 18 outcomes took place (a "new" sample space with 18 points), and the conditional probability is

P(A|B) = 6/18 = 1/3

The probability of event B is

P(B) = 18/36 = 1/2

and P(A|B) is also obtained from the general formula (2.10):

P(A|B) = P(AB)/P(B) = (1/6)/(1/2) = 1/3

Note that the definition of conditional probability enables us to find the probability of a product. From Eq. (2.10) it follows immediately that

P(AB) = P(A|B)P(B)    (2.12)

That is, the probability of the product AB equals the product of the conditional probability of A given B and the (unconditional) probability of B. On the other hand,

P(AB) = P(B|A)P(A)

and

P(AB) = P(A|B)P(B) = P(B|A)P(A)    (2.13)

That is, the probability of the product of two events equals the product of the conditional probability of one of these events, given the occurrence of the other, and the (unconditional) probability of the latter event.
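Example 2.4 lends itself to direct enumeration; the sketch below rebuilds the 36-point sample space and checks both the counting argument and the general formula (2.10).

```python
from fractions import Fraction
from itertools import product

# The 36 equiprobable outcomes of a throw of two dice
omega = list(product(range(1, 7), repeat=2))

A = [w for w in omega if sum(w) == 7]        # sum of spots is 7
B = [w for w in omega if sum(w) % 2 == 1]    # sum of spots is odd
AB = [w for w in A if w in B]                # both (here AB == A, since 7 is odd)

P = lambda event: Fraction(len(event), len(omega))

# Counting in the "new" 18-point sample space ...
p_cond_count = Fraction(len(AB), len(B))
# ... agrees with the general formula (2.10)
p_cond_formula = P(AB) / P(B)

assert P(A) == Fraction(1, 6)
assert P(B) == Fraction(1, 2)
assert p_cond_count == p_cond_formula == Fraction(1, 3)
# Product rule (2.12): P(AB) = P(A|B) P(B)
assert P(AB) == p_cond_formula * P(B)
print(p_cond_count)
```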
Formula (2.12) is readily extended by induction to n events Aj, Agy.005 Ani P(A\Ag As ++ Ay 1Ay)= P(AilAgAs *** Ay) P(AZAS © > An) = P(A\|AQA3 +++ Ay) P(A2|A3 * ++ An) XP(As +++ A,) = P(A\|A2A3 --+ Ay) P(Aal Ag *** Ay) XP(Ag|4g «++ Ay) ++ P(A, 114) P(A,) (2-14) Equation (2.14) is known as the multiplication rule. 2.7 INDEPENDENT EVENTS. Events A and B are called independent if P(A|B) = P(A) (2.15) that is, if occurrence of B does not affect the probability of A. Note that mutually exclusive events are dependent. In fact, if AB = @, then P(A|B) = P(@)=0 * P(A) unless A = ©. If event A is independent of B, then, according to Eqs. (2.13) and (2.15) we have P(A)P(BIA) = P(B)P(A|B) = P(B)P(A) Thus, P(B|A) = P(B), implying that event B is equally independent of A or, in other words, that the property of independence is mutual. The probability of a product of independent events is readily calculated: P(AB) = P(A|B)P(B) = P(A) P(B) This is often used as a definition of independence. INDEPENDENT EVENTS 29 nevents, A,, A2,..., A, are individually independent if and only if P(A,A,) = P(A,)P(Aj), 1S P(A,AjAy) = P(A;)P(A))P( Ag), EHD KEK (14) =T1P(4) (2.16) Thus, pairwise independence is not sufficient for the component events to be individually independent. This is illustrated by the following example, due to N. S. Bernstein. Example 2.5 Given events A, B, C such that P(A) = P(B) = P(C) =4 and P(AB) = P(AC) = P(BC) = P(ABC) =} These events are pairwise independent: P(AB) = P(A) P(B) P(AC) = P(A) P(C) P(BC) = P(B)P(C) but not individually independent, since P(ABC) + P(A)P(B)P(C). Note: A, B, and C have a “physical meaning” —as in the case of an “honest” tetrahedron, with one face red (event A), another green (event B), the third blue (event C), and the fourth in all three colors. Example 2.6. Series System A device consists of n components (Fig. 
2.8a) connected in series, that is, all components are so interrelated that failure of any one of them implies failure of the entire system. These component failures are taken as independent random events Ai. Denoting the reliability — the probability of nonfailure performance — of element Ei by Ri, the reliability of the entire system is then

R = P(Ā1Ā2 ··· Ān) = P(Ā1)P(Ā2) ··· P(Ān) = R1R2 ··· Rn = Π_{i=1}^{n} Ri   (2.17)

Fig. 2.8. (a) Series system. (b) Parallel system.

As reliability in principle does not exceed unity, multiplication of the component reliabilities makes for a decrease of the overall system reliability R as the number of components increases. In fact, R cannot exceed the reliability of the weakest component:

R ≤ min(R1, R2, ..., Rn)

Example 2.7. Parallel System

Consider now the same elements as above, this time connected in parallel (see Fig. 2.8b). In this case the system fails when all components fail; the probability of failure is given by

P(A1A2 ··· An) = P(A1)P(A2) ··· P(An) = (1 − R1)(1 − R2) ··· (1 − Rn)

and the reliability of the entire system is

R = 1 − (1 − R1)(1 − R2) ··· (1 − Rn) = 1 − Π_{i=1}^{n} (1 − Ri)   (2.18)

Note that series or parallel systems do not refer to physical series or parallel connection between the components. For example, let the components (n = 2) in Fig. 2.8 be valves, whose probability of proper performance is 0.9. If proper performance of the entire system is defined as allowing flow through it, then the reliability of the physical series system is 0.9² = 0.81, and of a physical parallel system is 1 − (1 − 0.9)² = 0.99; that is, in this case the physical parallel system is the more reliable. However, if proper performance is defined as preventing flow through the entire system, then the reliability of the physical series system is 0.99, whereas that of the physical parallel system is 0.81. In this case the physical series system is the more reliable.
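Equations (2.17) and (2.18), together with the valve discussion above, condense into a few lines of Python. This is an illustrative sketch of mine, not from the book:

```python
from math import prod

def series_reliability(rs):
    # Eq. (2.17): the system survives only if every component survives.
    return prod(rs)

def parallel_reliability(rs):
    # Eq. (2.18): the system fails only if all components fail.
    return 1.0 - prod(1.0 - r for r in rs)

# Two valves, each with reliability 0.9 (the flow example in the text).
rs = [0.9, 0.9]
assert abs(series_reliability(rs) - 0.81) < 1e-12    # "allow flow", physical series
assert abs(parallel_reliability(rs) - 0.99) < 1e-12  # "allow flow", physical parallel

# A series system is never better than its weakest component.
assert series_reliability([0.95, 0.9, 0.99]) <= min(0.95, 0.9, 0.99)
```

Redefining "proper performance" from allowing flow to preventing it simply swaps which formula applies to which physical layout, reproducing the 0.81/0.99 reversal described in the text.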
2.8 RELIABILITY OF STATICALLY DETERMINATE TRUSS Consider a statically determinate truss consisting of n bars under given specified deterministic loading (Fig. 2.9a,b), with failure (overloading beyond the yield-stress level) of a single bar implying failure of the entire truss. Assuming that the failures of component bars are random events, the reliabil- ity of the truss is given by ~ Prob (any bar failing) 1 — Prob (A, + A, +++ + A,) n non nnn =1- = P(A) + XY P(4.4)) - EEE PAL) fee (2.19) where P(A,A,) = P(A,|A;) P(A,) P(A,AjAy) = P(Aj\A Ag) P( Aj Ag) P( Ag): ete For example, for the case n = 2 (Fig. 2.9c): R=1- [P(A,) + P(A2) — P(A,42)] = 1 - [P(A,) + P(A.) — P(A|A2) P(A2)] (2.20) In order to find the reliability, we must know the conditional probability. 32° PROBABILITY AXIOMS. Lt 1 2 3 (b) n Pp Pp a |p P tc) (d) 2PcosB 2Pcos ¥ P r P (el (n Fig. 2.9. Truss systems: (a,b) With different unconditional and conditional probabilities of bar failure (c) n = 2. (d) n = 3. (e, f) Constituent bars have the same probabilities of failure. RELIABILITY OF STATICALLY DETERMINATE TRUSS 33 For n = 3 (Fig. 2.9d), we have R=1-[P(A,) + P(A2) + P(A3) — P(A, 42) — P(A, 43) —P(A,A,) + P(A,4243)] = 1—[P(A,) + P(Az) + P(A) — P(Ay|A2) P(Az) — P(Ay1A3) P(A3) —P(Aq|A3) P( As) + P(AyA243) P(Ap|A3) P(A5)] (2.21) In this case we must know the conditional probabilities of failure P(A\|A2), P(A,|A3), P(A2|A3), and P(A,|A,A,). Two extreme cases are possible: (1) Bar failures represent individually independent random variables. (The bars can be visualized as manufactured at different plants, using different processes.) In this case it suffices to know the reliabilities of individual bars: P(A,A,) = P(A;)P(4;) = (1 — R,)(1 = R,) P(A,A,Ay) = (1 — R,)(1- Ry). — Ry) and Eq. (2.19) becomes R=1-S(1-R) + Ey (1-2) -R,) i=l l,..., A,. 
Then B= BQ = B(A, + A, +++: +A,), and P(B) = ¥ P(BA,) i=l OVERALL PROBABILITY AND BAYES' FORMULA 35, Using the multiplication rule for each of P(BA,), we arrive at n P(B) = ¥ P(BIA,) P(A,) (2.25) imt which is known as the formula of overall probability. Now, provided P(B) + 0, P(A,|B) can be expressed as P(A,B) __P( BIA) P(A) P(AB) = ; PCB) p(Bia,) P(A) i=l (2.26) This formula is due to Bayes. The unconditional probabilities P(A,) are called a priori probabilities, and the conditional probabilities P(A;|B) are a posteriori probabilities. Example 2.8 Given two boxes, the first containing a white and b black balls and the second c¢ white and d black balls. One ball is removed at random from the first box and placed in the second, after which one ball is removed from the latter. What is the probability of this ball being white? Denote the following events: A = white ball removed from second box, H, = white ball placed in second box, H, = black ball placed in second box: a b PUN) = a5 PU) aE e+] ¢ PAID = Tega PAM) = Tae According to the formula of overall probability, we have P(A) = P(A|H,) P(H)) + P(ALH,) PH) a e+] 5 ¢ Stas) eaee a eat cea i) In the particular case of both boxes containing equal numbers of white and black balls (c = a, d = b), we have a at+l b a a OA) ecg) apet) aeeae ma aes indicating that the probability of a white ball being removed from the second box is unaffected by adding the ball from the first box. Example 2.9 The structure under a time-dependent load, P = P(t), consists of two compo- nents with reliability [defined as nonfailure performance in the time interval 36 © PROBABILITY AXIOMS (0, T)] R, and Rg, respectively; nonfailure of both is required for nonfailure of the structure. The structure was inspected at the end of time interval T and found to have failed. Find the probability of only the first component having failed, the second not. 
Before the experiment, the four following hypotheses were possible: Hy: both components do not fail H;: first component fails, second does not H,: first component does not fail, second does H,: both components fail The respective probabilities of these hypotheses are P(Hy) = R,R, P(A,) = (1 — Ry) Ro P(H,)= RL - R2) PCH) = (1 — Ri — R2) Event A has taken place, the structure failed, hence P(A|Hy) = 0, P(A|H,) = P(A|H3) = P(A|H3) = 1 and Bayes’ formula yields (1= R)Ry = Ri)Ry —R)R,+ (1-R)Ri+(1-R)I- Bz) T= RR, P( HA) = Gi PROBLEMS 2.1. Present a Venn diagram for C, where C = B\ A. 2.2. Verify by means of a Venn diagram that a union and an intersection of random events are distributive, that is, (AUB)NC=(ANC)U(BNC) (AN B)UC=(AUC)N (BUC) 2.3. A telephone relay satellite is known to have five malfunctioning chan- nels out of 500 available. If a customer gets one of the malfunctioning channels on first dialing, what is the probability of his hitting on another malfunctioning channel on dialing again? 2.4, 2.5. 2.6. 2.7, 2.8. 2.9. 2.10. PROBLEMS 37 Let m items be chosen at random from a lot containing n > m items of which p(m < p 1, {w: X(w) < x) = Q = (fail, survive) Consequently, for each x the set (w: X(w) < x) is an event, and X(w) is a random variable. 39 40 = SINGLE RANDOM VARIABLE Example 3.2 Consider a three-dice experiment. The sample space Q contains 6° = 216 points ((i, j,k); i, j,k = 1,2,..., 6}. Let X denote the sum of spots on the upward-landing faces; then X(w) =it+j+k if w = (i, j,k). X is a random variable; the counterdomain of X(w) is an ensemble of positive numbers between 3 and 18. 3.2 DISTRIBUTION FUNCTION The (cumulative) distribution function F,(x) of the random variable X is defined as Fy(x) = P(X < x) = P[(w © Q: X(w) < x}] (3.1) for every real number x. Example 3.3 Consider the random variable described in Example 3.1. The probability distribution is as follows: If x < 0, Fy(x) = 0, since (X < x) is an impossible event. 
Ifx> 1, Fy(x) = 1, since {X(w) < x} is a certain event, because X(fail) = 1, X(survive) = 0 < x. If0 x, X(survive)= 0 heads, survive — tails, p > 4 Example 3.5 Consider the experiment of throwing a single “honest” die. Let X denote the ial of spots on the upward. landing face. X(w,) = i, i = 1, > 6. P(w;) = 4; Fy(x) =0 for x <1, since the event (X < 1}= , and thus Fy(x) = P(®) = 0. For 1 < x <2 we have Fy(x) = P(X < x) = Plo) = 4 For 2 < x < 3, the event (X < x) equals (,,,), and due to their mutual exclusiveness, we have F(x) = P(w,, 0) = Thus, 0, x 0 The last property follows from the definition of F(x), as P(X < x). It follows from the properties that the distribution function is bounded between zero and unity. Rewriting Eq. (3.3) for a — 0, we have F,(b) — Fy(a — 0) = Prob(a —- 0 < X a + 0, we have Fy(a + 0) — Fy(a — 0) = Prob(a — 0 < X 0) and F(x) is a nondecreasing function. Figure 3.6 shows a probability density function of Example 3.5. The functions (x — x;)x' and 8(x — x,) are indicated in Figs. 3.5 and 3.6 by a spike with an open arrowhead. SHEAR FORCE DIAGRAM DISTRIBUTED FORCE DIAGRAM Ra (x- 1 Ray Fig. 3.5. Beam under concentrated forces: Distributed force diagram representing analogue of probability density of discrete random variable. PROPERTIES OF THE DISTRIBUTION FUNCTION 47, fyb Fig. 3.6. Probability density function of number of spots on upward-landing face in o 1 2 3 4 5 6 single “honest” die-throwing experiment. A random variable X is called mixed if it is neither purely discrete nor purely continuous, in other words, if it possesses the properties of both discreteness and continuity: It contains jumps, equal pj, p2,..., Py» Tespec- tively, at points x,, x2,..., x, but is continuous between them. Subtracting, then, the sum zr piU(x — x;) i=l from the distribution function, we obtain a continuous function. 
We can thus express the probability distribution function of a mixed random variable as F(x) = F(x) + ¥ p(x - x,) (3.16) m1 where F*(x) is a continuous function, and p; (i = 1,2,..., 2) are the jumps of Fy(x) at points x; (i = 1,2,...,”). If F*(x) is absolutely continuous and differentiable at every x, then the probability density function of X may be expressed as fulx) = f(x) +E palx ~ x) @.17) i=t where f*(x) = ) Unlike a continuous random variable, a mixed random variable has a countable number of possible values, which it takes on with nonzero probabil- ity. An “analogue” of the probability density of a mixed random variable is represented by the following example. The beam, simply supported at its ends, 48 SINGLE RANDOM VARIABLE is subjected to a concentrated moment m (see Fig. 3.7). The bending moment will then be written as M,(x) = ~ Fx + m(x — a)? and the shear force as V,(x) =F — m(x ~ a)g! This expression comprises a continuous part m/L and a singular, discontinu- ous part —m(x — a), ', whose probability counterparts are obviously posi- tive. For clearer understanding of the notions of “discrete,” “continuous,” and “mixed” distributions, the following analogy is useful. Let us visualize a string with a mass distribution such that the entire mass equals unity, and its density is given by the probability density function. The discrete case corresponds to the entire mass being lumped at certain points x,, x,,..., x,; the continuous case, to a distributed (uniformly, or otherwise) mass without concentrations; and the mixed case, to a combination of continuities and concentrations. This m L m m ct cB FREE BODY DIAGRAM Mle BENDING MOMENT DIAGRAM SHEAR FORCE DIAGRAM 3.7. Beam under concentrated moment: Shear force diagram representing analogue of probability density of mixed random variable. MATHEMATICAL EXPECTATION 49 analogy will also be useful in further analysis of random variables. 
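Equation (3.16) is straightforward to evaluate numerically. Below is a small sketch of my own, with an arbitrarily chosen mixture: a continuous part of weight 0.7, uniform on [0, 1], plus a single jump of probability 0.3 at x = 0.5. (A right-continuous step convention is used for definiteness; the book's F_X(x) = P(X < x) differs from it only at the jump points themselves.)

```python
def unit_step(x):
    # U(x) = 1 for x >= 0, else 0 (right-continuous convention).
    return 1.0 if x >= 0 else 0.0

def mixed_cdf(x):
    # Eq. (3.16): continuous part F*(x) plus jumps p_i at points x_i.
    # Hypothetical example: 0.7 * uniform(0, 1) plus an atom of
    # probability 0.3 at x = 0.5.
    f_star = 0.7 * min(max(x, 0.0), 1.0)  # CDF of the continuous part
    atoms = [(0.5, 0.3)]
    return f_star + sum(p * unit_step(x - xi) for xi, p in atoms)

assert mixed_cdf(-1.0) == 0.0                 # no mass to the left
assert abs(mixed_cdf(2.0) - 1.0) < 1e-12      # total mass is unity
# The distribution function jumps by exactly p = 0.3 at the atom:
assert abs(mixed_cdf(0.5) - mixed_cdf(0.4999999) - 0.3) < 1e-6
```

In the string analogy of the text, the smooth term is the distributed mass and the step terms are the lumped masses.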
We actually assumed that the distribution function of a continuous random variable has only a countable number of points for which the derivative does not exist. At such a point we assign any positive value to f,(x) so that it becomes defined for all points x. 3.4 MATHEMATICAL EXPECTATION While the distribution function provides a complete characterization of the random variable, it is sometimes possible to make do with a simpler, albeit incomplete, characterization based on few numbers. Suppose that we are concerned with a series of trials whose possible outcomes are x¥,i = 1,2,..., n; the simplest characteristic of the discrete random variable X in the given series would be the arithmetical mean (3.18) If some of the n, values of xf, x3,..., x taken on by the random variable X in n experiments coincide, the coefficient of the common x} value will be n,. Denoting the possible values by x,, x2,..., X,,, and their relative frequencies by n,/n, Eq. (3.18) becomes i (3.19) x1 tl a|= * inl We regard a discrete random variable X as having a mathematical expectation, if xy lxJP(X = x;) < 00 (3.20) ino the expectation being given by the expression oo E(X)= YO x,P(X=x;) (3.21) i= 00 with summation over a finite or countable number of possible values x,, 0 1, the kth moment: m, = E(X*) = f x" fy(x) dx (3.29) The mathematical expectation is thus a first moment of a random variable. The mathematical expectation of [X — E(X)]* is defined as the kth central moment: = E(LX = BOON) = f° [x BO fe(x) de (830) Obviously, the zeroth central moment equals unity, and the first central moment, zero. The second central moment of a random variable is called the variance (provided the integral in (3.30) is absolutely convergent) and denoted by Var(X): Var(X) = B(LX ~ EOP) =f" [x EOP (2) &x 3.31) The mathematical expectation is a measure of the “average” of the values taken on by the random variable, whereas the variance is one of spread. 
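As a concrete check of these definitions, the honest die of Example 3.5 can be worked out in exact arithmetic. The sketch below is mine, not the book's:

```python
from fractions import Fraction

# Honest die: values 1..6, each taken on with probability 1/6.
values = range(1, 7)
p = Fraction(1, 6)

mean = sum(x * p for x in values)               # Eq. (3.21)
second_moment = sum(x * x * p for x in values)  # E(X^2)
var = sum((x - mean) ** 2 * p for x in values)  # mean-square spread about E(X)

assert mean == Fraction(7, 2)             # E(X) = 3.5
assert var == Fraction(35, 12)            # 2 11/12
assert var == second_moment - mean ** 2   # Var(X) = E(X^2) - [E(X)]^2
```

The last identity is the discrete counterpart of the parallel-axis (Steiner) theorem invoked for variance later in this section.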
The mean is the center of gravity of density, and variance is the moment of inertia of the same density about an axis through the center of gravity. For a discrete random variable, variance is defined by var(X)= YS [x,— E(x) PP(x= x) (3.32) if the sum on the right is finite. It follows from the definitions (3.31) and (3.32) that Var( X) = wa = {LX - £(X)]’} m, ~ 2[E(X)) + [E(X)P = m, — [E(X)P ne fate MOMENTS OF RANDOM VARIABLE; VARIANCE 53 so that Var( X) = E(X?) - [E(X)]? (3.33) This is equivalent to the parallel-axis theorem or Steiner’s theorem in statics, which relates the respective moments of inertia of a body about an arbitrary axis and about the central axis parallel to it. The standard deviation of X is defined as /Var( X) , and denoted by oy; that is, oy = Var( X) (3.34) For a constant c, the following properties of variance are readily established from the definition (3.31): 1. Var(c) = 0 (3.35) 2. Var(cX) = c?Var(X) (3.36) Example 3.10 Consider the experiment of throwing an “honest” die, letting X denote the number of spots on the upward-landing face: 6 Var(X) = © [x,- E(X)P P(X = x) i=l = (1-35) - $4 (2-3.5)?-£4+(3 - 3.5)? -24(4— 3.5) +4 +(5- 3.5)? $+ (6 — 3.5)? $= 21 Example 3.11 It is readily shown that for a random variable with Cauchy distribution, no variance is defined (see also Example 3.8). Example 3.12 The variance of a random variable X with Laplace distribution is (see Example 3.9) 2 Var(X) -{" Gxt de = 0 a Example 3.13 If the probability density function of a continuous random variable X is given 54 SINGLE RANDOM VARIABLE by Hee b flx)={ boa’ 7S*S 0, otherwise that is, if it represents a rectangular “pulse,” we say that X is uniformly distributed in the interval (a, b). Using the definition of a distribution function, Eq. (3.7), we obtain 0, x med(X) ed E()X — mea X)) + 2 f(x = a) x(x) a, a < med(X) E(\X — al) = The qth quantile of a continuous random variable X is defined as a smallest root of the equation Fy(€,)=9, ford0 0 E(x) x Fig. 
3.10. Examples of probability density functions with positive, zero, and negative coefficients of kurtosis. 3.6 CHARACTERISTIC FUNCTION* The characteristic function of a random variable X, denoted by M,(@), is defined as the mathematical expectation of the complex variable, exp(i@X), treated as a function of 6: M,(0) = Ele®*] = el *f,(x) de (3.40) in other words, the characteristic function is the Fourier transform of the probability density function. Since |e'**| = 1 for every real @ and x, due to a property of the probability density function (3.11), the characteristic function does not exceed unity in its absolute value and equals unity at @ = 0. Since fx(*) is nonnegative, the probability density function can be found as the inverse Fourier transform of the characteristic function 1 0 10x fel) = 35 f Mx(a)e ox dg (3.41) Example 3.14 The characteristic function of a random variable, distributed uniformly in the interval (a, b), equals M,(0) = fret dx = 106 _ pia b-adl, te el coe i0(b — a) Example 3.15 The characteristic function of a random variable with Laplace distribution equals My(0) = $f ee dx = 0 2 *This section may be omitted at first reading. a’ +6? CHARACTERISTIC FUNCTION 59 Example 3.16 The characteristic function M,(@) of a random variable Y = aX + b, where a and b are real numbers and X is a continuous random variable, is found as My(8) = M,(a0)e'*? (3.42) since M,(0) = E(eY) = El eax] = E[eiaXei9>] — My(ad) ei” Here property (3.26) of the mathematical expectation was used. If a random variable X has an absolute moment of k th order, its character- istic function is differentiable k times. Conversely, the value of the kth derivative of the characteristic function My(@) at 6 = 0 determines the kth moments m, of a random variable X. 
Indeed, after k-fold differentiation of M,(@) we obtain d‘'M,(6 00 oe ) = af xkel®*#, (x) dx This derivative can be estimated by its absolute value d'M,( do* oe) [J testes) asl < J betel x)[de = J lathes) ax Since the latter integral is finite by assumption, the derivative d*M,(0)/d0* exists. Equation (3.42) yields x*fy (x) dx = ikm, aa Hence ik m, = 4] mele (3.43) The mean of a random variable equals, then, m, = E(X) => 1a = (3.44) 60 SINGLE RANDOM VARIABLE If all derivatives of the characteristic function M,(@) exist at @ = 0, the function can be represented by a Maclaurin series expansion d‘My(8) of do* Joo al M,(0)= > al k=0 or, with Eq. (3.43) taken into account, we obtain & (i)* M,(6)=14+ » am (3.45) The principal value of the logarithm of a characteristic function is called a log-characteristic function: ¥x(9) = In My(8) (3.46) Let us differentiate y(@) and put 6 = 0. We have dy,(9) aM x( 9] i | dO Jono My( wml a meee dy (8) __ 1 {a?M,(@) _ [@Mx(8) 2? a aoe ol de? M,(0) | a fe = —m,+m? = —Var(X) We used here the fact that My(0) = 1 and Eq. (3.43). As a result we obtain E(X)= - |) | (3.47) Var( X) = - [fu t(| - - [See] (3.48) Further differentiation of p,(8) yields [fe _ my ~ 3mymy + 2m} de? |g~0 2 [| i oe) em, 6=0 164 — 4mm, — 3m} + 12m?m, — 6mt CONDITIONAL PROBABILITY DISTRIBUTION AND DENSITY FUNCTIONS 61 For the coefficients of skewness and kurtosis, we obtain, respectively, VEO) te THOr? ce (IV), 0 = TOF os The value i*d*)) ,(6)/d0* is called the kth cumulant or semiinvariant of X. With semiinvariants known, the various moments of a random variable are readily obtainable. It should be noted that moments (if they exist) are uniquely determined through the probability density function (or the characteristic function, or the log-characteristic function). 
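The relation m_k = i^{-k} d^k M_X(θ)/dθ^k at θ = 0, Eq. (3.43), is easy to probe with finite differences. The sketch below is my own; it uses the exponential density a·e^{−ax}, x ≥ 0, whose characteristic function a/(a − iθ) appears in Section 4.6:

```python
a = 2.0

def M(theta):
    # Characteristic function of the exponential density a*exp(-a*x), x >= 0.
    return a / (a - 1j * theta)

h = 1e-4  # step for central finite differences

# Eq. (3.43): m_k = i^{-k} * d^k M_X / d theta^k at theta = 0.
m1 = ((M(h) - M(-h)) / (2 * h) / 1j).real               # first moment
m2 = ((M(h) - 2 * M(0.0) + M(-h)) / h**2 / 1j**2).real  # second moment

variance = m2 - m1**2

assert abs(m1 - 1 / a) < 1e-6           # E(X)   = 1/a
assert abs(m2 - 2 / a**2) < 1e-4        # m2     = 2/a^2
assert abs(variance - 1 / a**2) < 1e-4  # Var(X) = 1/a^2
```

The same two derivatives, taken of the log-characteristic function instead, give E(X) and Var(X) directly, as in Eqs. (3.47) and (3.48).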
Accordingly, the question arises whether the set of moments determines uniquely the probability density function of a random variable, and the answer is generally no; this is known as the problem of moments and will not be discussed here. (The reader may consult, for example, Kendall and Stuart.) 3.7. CONDITIONAL PROBABILITY DISTRIBUTION AND DENSITY FUNCTIONS First we recall Eq. (2.10), which states that if B is an event with nonzero probability, then the conditional probability of an event A, knowing that B has taken place, is P(AB) P(B) P(A\B) = The conditional probability distribution function of the random variable X under condition B is defined as the probability of the event {X < x): P(X a, then the event (X < x|X < a) is a certain one and Fy(x|X a)< EQ) (3.56) Indeed, if f(y) is a probability density function of a continuous random variable Y E(Y) = [toe = Loto) = Lot) dy + Ph) a > [tro & > af fy) a =aP(Y> a) which was to be proved. Here the fact was used that /y(y) vanishes for negative values of y. Consider now the random variable Y = |X — aj", where n is a positive integer. This random variable takes on only nonnegative values irrespective of the sign of X. Therefore for a = e” (e > 0), El|xX = a)" P(\X-al">e")< ; that is, El|xX = a\"] P(\X—a|>e)< 7 (3.57) This inequality, named after Bienaymé, signifies that P(X —a| > e) < min “UX 40"] If a random variable has a finite variance a, then we may put a = E(X), n = 2, and e = koy in the inequality to get P(|X- E(X)| > koy) <4 B (3.58) PROBLEMS 65, This inequality, named after Tchebycheff, signifies also that 1 P(X - E(X)| < koy) > 1- a For example, for k = 2 we obtain P{E(X) - 2oy < X < E(X) + 20y}) >} for any random variable X with finite variance. For k = 3, P[E(X) - 30, < X < E(X) + 30y] >§ For any random variable X with finite variance, the latter inequality signifies that the probability of X falling within three standard deviations of its mean is at least §. 
This bound is independent of the distribution of X, provided it has a finite variance. PROBLEMS 3.1. A spring-mass system is subjected to harmonic sinusoidal excitation with specified frequency w. Suppose the spring coefficient is a random variable with given probability distribution Fy (k), where the mass m is a specified quantity. What is the probability of no resonance occurring in a system picked up at random? 3.2. A random variable X is said to have a triangular distribution if its density is i) a rer (Al a} a t). (b) Show that the hazard function at time ¢ equals the probability density of the time of failure divided by the reliability R(7), the probability of the system surviving up to ft: frlt) A(t)= R() felt) = Flt), R(t) = 1 Felt) (c) Verify that R(t) = [1 ~ FrlO)]exo] - f'a(0) a] fale) = [1 ~ Fe(a)]a(sere| ~ f'n(0) at] where F;(0) is the probability of failure at t = 0. Remark Since 0 < R < 1, h(t) > f;(t). By analogy, the probability of a specimen subjected to a fatigue test with sufficiently high amplitude of the repeated load fracturing between 10° cycles and 10° + 10 cycles [corresponding to f(t) dt] is very small. The probability of fracture in the same interval, provided the specimen survived up to 10° cycles [corresponding to h(t) dt] is much higher. Find the probability density of the time of failure f;(t), if the hazard function is constant h(t) = a, and the probability of initial failure is zero, F,(0) = 0. Show that a is the reciprocal of the mean time of failure, E(T). RECOMMENDED FURTHER READING 67, 3.6. Find the conditional probability of a system failing in the time interval (t), t2), assuming that it did not fail prior to time t,; h(t) = a. 3.7. Find the conditional probability of a system surviving in the time interval (t,t) assuming that it survived up to time 1, [probability of prolongation by an additional time interval At = t, — t)]; A(t) = a. 3.8. 
Verify that if X is uniformly distributed in the interval (a, b), the probability of X < a + p(b — a), where 0 < p < 1, equals p. 3.9. Following the steps used in the text, prove the inequalities of Bienaymé and Tchebycheff for a discrete random variable. 3.10. X is uniformly distributed in the interval [8, 12]. Calculate the probabil- ity P(E(X) — oy < X < E(X) + oy) and compare it with the upper bound furnished by Tchebycheff’s inequality. 3.11. Show, using Tchebycheff’s inequality, that if E[(X — a)*] = 0, where a is a deterministic constant, then X = a with unity probability. CITED REFERENCES Crandall, S, H., Dahl, N. C., and Lardner, T. J., Eds., An Introduction to the Mechanics of Solids, 2nd ed., McGraw-Hill, New York, 1969, pp. 164-169. Kendall, M. G., and Stuart, A., The Advanced Theory of Statistics, Vol. 1, Distribution Theory, Charles Griffin, London, 1958, pp. 109-115. Popov, E., Introduction to Mechanics of Solids, SI ed., McGraw-Hill, New York, 1975, pp. 47-53 RECOMMENDED FURTHER READING Gelfand, I. M., and Shilow, G. E., Generalized Functions, Vol. 1, Academic Press, New York, 1964. Chap. 1: Definition and Simplest Properties of Generalized Functions, pp. 1-18; Chap. 2: Differentiation and Integration of Generalized Functions, pp. 18-44. Gnedenko, B. N., Theory of Probability, Chelsea Publ. Co., New York, 1962. Chap. 4: Random Variables and Distribution Functions, pp. 155-200; Chap. 5: Numerical Characteristics of Random Variables, pp. 201-235; Chap. 6: The Law of Large Numbers, pp. 236-265; Chap. 7; Characteristic Functions, pp. 266-301. Leighthill, M. J., Introduction to Fourier Analysis and Generalised Functions, Students’ ed., Cam- bridge at the University Press, 1962. Chap. 1: Introduction, pp. 1-14; Chap. 2: The Theory of Generalised Functions and Their Fourier Transforms, pp. 15-29. Papoulis, A., Probability, Random Variables, and Stochastic Processes, Intemational Student ed., McGraw-Hill, Kogakusha, Tokyo, 1965, Chap. 
4: The Concept of a Random Variable, pp. 83-115; Chap. 5: Function of One Random Variable, Sec. 5. 5-4, 5-5, pp. 138-164. chapter Examples of Probability Distribution and Density Functions. Functions of a Single Random Variable In this chapter we present some widely used discrete and continuous probabil- ity distribution and density functions. 4.1 CAUSAL DISTRIBUTION In order to represent the constant ¢ probabilistically, we use a causally distributed random variable with a probability density function (Fig. 4.1) f(x) = 8(x - €) (4.1) that is, the random variable X takes on the value ¢ with probability unity. The distribution function is readily obtained by integration of (4.1): F(x) = U(x ~ e) The mathematical expectation is obviously E(X)=c and the variance Var( X) = 0 DISCRETE UNIFORM DISTRIBUTION 69 fy xd Fig. 4.1. Probability density of causally dis- tributed random variable represented by Dirac’s 0 ce x delta function. The characteristic function is M,(9) = E(e**) 4.2, DISCRETE UNIFORM DISTRIBUTION A random variable X has a discrete uniform distribution, if its probability density function reads (Fig. 4.2) felx) = 4S 8(x 4) (42) int that is, X can take on only values ¢,, c,..., ¢,, each with probability 1/n. The distribution function is 1¥ u(x- 4) i=l Fy (x Fig. 4.2. Probability density of random vari- able with discrete uniform distribution rep- resented by combination of Dirac’s delta functions. 70 FUNCTIONS OF A SINGLE RANDOM VARIABLE The mathematical expectation is n E(X) = Vie P(X= 4) i=l For the particular case c, = i, we get for the mathematical expectation el +1 E(X)= Yin =" 5 and for the variance n ie Var( x) = B(x?) - [BYP = D2 - (254) im] a(n + I(Q2nt)) (n+) 6n 4 4.3 BINOMIAL OR BERNOULLI DISTRIBUTION Independent trials, each of which involves an event A, occurring with positive probability p = P(A), are called Bernoulli trials. 
The event itself is referred to as a “success,” and the complementary event 4, which occurs in each of the trials with probability g = 1 — p, asa “failure.” In other words, p = P(success), q = P(failure). If n trials are considered, then each elementary outcome w can be de- scribed by a mixed sequence of “successes” and “failures,” for example, (s, 8, f, f, f, 5,-.-,f, 8). The probability P(w) of each elementary outcome w, at which there are exactly m successes and n — m failures, is given, in view of the mutual independence of the outcomes, by P(w) = prgm As can be seen, the elementary outcomes are equiprobable if p = q = }. Consider now the random variable X, which denotes the number of suc- cesses in n Bernoulli trials. X(w) =m if an elementary outcome indicates exactly m successes. The number of different outcomes w resulting in m successes in any sequence equals that of combinations of m s’s and n — mf’s, which in turn equals that of combinations of m objects drawn from an ensemble of n objects: eG BINOMIAL OR BERNOULLI DISTRIBUTION 71 All these outcomes have the same probability P(w), hence the event X = m has the probability P(X=m)=(f)p™g"™, m=0,1,2,...,7 (4.3) The probability density function of X is "in felx) = % (M)pmgnn3(x = m) (4.4) m=0 Note that since all possible mutually exclusive outcomes of n trials consist in a success occurring 0 times, once, twice,..., 1 times, it is obvious that ¥ P(xX=m)=1 m=0 which also follows from the equality n x (n)era™=(p+q)"=1 m=0 The mathematical expectation of X is B(x)= ¥ mm) Pan” = x m( jn)" m=0 m= Since, however, we have 1 E(X)=m > (moh mig mal a-l eae - wr ("; ) ta" ‘k= np k=0 72 FUNCTIONS OF A SINGLE RANDOM VARIABLE The variance is Var(X) = E(X?) — [E(X)/, so that E(X?) n " x m?( 2) prgn-™ =m m( 1 am tgt tm m= m=t n-1 np (k + (” k 1) phar k=0 n=l n=1 wy «("; ") phge +m lee 1) ptgt*! k=0 k=0 np((n — 1)p] + mp (4.5) It was taken into account that the first sum in Eq. 
(4.5) represents the mathematical expectation of X inn — 1 trials, and the second sum equals unity as the probability of a certain event. As a result, Var( X) = np(n — 1)p + np ~ (mp)” = np(1 - p) = mpq Example 4.1 A device consists of five components, the reliability of which equals p and the failures of which are mutually independent. Then the probability of none of the components failing equals P, = p', that of at least one of the components failing equals 1 — p*. The probability of exactly one component failing is 5 P= (2) % = 5p'q that of two components failing is Pa G)ee = 10p°q? That of at least two components failing is 1-P)- P,=1-p? ~ 5p%q 4.4 POISSON DISTRIBUTION A random variable X is said to have a Poisson distribution if it takes on values 0,1, , with probabilities P(X =m) = e*, m= 041,... (4.6) where a is a positive constant. POISSON DISTRIBUTION 73. The probability density function is co gm Sx(x) = e7* YF 8(x — m) m! m=0 The characteristic and the log-characteristic functions are, respectively, oo m © (gei0)™ = a a (ae’ My(8) = E(e*) = e7" Yelm = es YA m=0 m=0 = en tee = paella and wy(0) = ae —a The mathematical expectation and variance are obtainable, for example, from Egs. (3.47) and (3.48), respectively, E(X) = -i¥(0) =a Var(X) = —¥%(0) = a The Poisson distribution may be regarded as a limiting case of the binomial distribution, where the number of trials is large and the probability of success very small but the mean number of successes a = np not too small, in which case it can be shown that Indeed, as is known i al din(1-4) -¢ and since p = a/n, we obtain from Eq. (4.3) P(X=0)=4"=(1-p)"=(1-4 ~e and moreover, P(X =m) -™~o(m—1p_ a m=1,2,... P(X=m-1) mq m 74 FUNCTIONS OF A SINGLE RANDOM VARIABLE when n > oo. Therefore, P(X=1) ~ $P(X=0)~ fen* P(X = 2) ~ GP(X = l)~ P(X = m)~ 2 P(X =m — y-= a which is the Poisson distribution. 
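The limiting argument above can be checked numerically. In the sketch below (my illustration, not the book's), the binomial probabilities (4.3) with n = 10,000 and p = a/n are compared with the Poisson probabilities (4.6) for a = 2:

```python
from math import comb, exp, factorial

def binom_pmf(m, n, p):
    # Eq. (4.3): P(X = m) = C(n, m) p^m q^(n-m), q = 1 - p.
    return comb(n, m) * p**m * (1 - p) ** (n - m)

def poisson_pmf(m, a):
    # Eq. (4.6): P(X = m) = (a^m / m!) e^(-a).
    return a**m / factorial(m) * exp(-a)

a, n = 2.0, 10_000
p = a / n  # many trials, small success probability, mean a = n*p

for m in range(8):
    # For large n the two probabilities agree to within O(1/n).
    assert abs(binom_pmf(m, n, p) - poisson_pmf(m, a)) < 1e-4
```

Each Poisson probability also satisfies the recursion P(X = m) = (a/m) P(X = m − 1) used in the derivation.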
4.5 RAYLEIGH DISTRIBUTION

A continuous random variable $X$ is said to have a Rayleigh distribution if its probability density function is given by (Fig. 4.3)

$$f_X(x) = \begin{cases} 0, & x < 0 \\ \dfrac{x}{a^2} \exp\left(-\dfrac{x^2}{2a^2}\right), & x \ge 0 \end{cases} \quad (4.7)$$

with parameter $a^2$. The distribution function is

$$F_X(x) = \left[1 - \exp\left(-\frac{x^2}{2a^2}\right)\right] U(x)$$

Fig. 4.3. Rayleigh probability density function.

The mathematical expectation and the variance are, respectively,

$$E(X) = a\sqrt{\frac{\pi}{2}} \simeq 1.25a \qquad \operatorname{Var}(X) = \left(2 - \frac{\pi}{2}\right) a^2 \simeq 0.43 a^2$$

The Rayleigh distribution is an example of a nonsymmetric distribution. The third moment $m_3$ is

$$m_3 = \frac{1}{a^2} \int_0^{\infty} x^4 \exp\left(-\frac{x^2}{2a^2}\right) dx = 3a^3 \sqrt{\frac{\pi}{2}}$$

and the third central moment is

$$\mu_3 = m_3 - 3m_1\sigma_X^2 - m_1^3 = (\pi - 3)\sqrt{\frac{\pi}{2}}\, a^3$$

The coefficient of skewness is

$$\gamma_1 = \frac{\mu_3}{\sigma_X^3} = \frac{(\pi - 3)\sqrt{\pi/2}}{(2 - \pi/2)^{3/2}} \simeq 0.63$$

4.6 EXPONENTIAL DISTRIBUTION

A random variable $X$ is said to have an exponential distribution if its density $f_X(x)$ is given by

$$f_X(x) = \begin{cases} 0, & x < 0 \\ a e^{-ax}, & x \ge 0,\ a > 0 \end{cases} \quad (4.8)$$

the distribution function being

$$F_X(x) = \left[1 - e^{-ax}\right] U(x)$$

and the characteristic function

$$M_X(\theta) = \frac{a}{a - i\theta}$$

The mathematical expectation and the variance are, respectively,

$$E(X) = \frac{1}{a} \qquad \operatorname{Var}(X) = \frac{1}{a^2}$$

4.7 $\chi^2$ (CHI-SQUARE) DISTRIBUTION WITH m DEGREES OF FREEDOM

A random variable is said to have a chi-square distribution if its density is given by

$$f_X(x) = \begin{cases} 0, & x < 0 \\ \dfrac{x^{\lambda - 1} e^{-x/2}}{2^{\lambda}\, \Gamma(\lambda)}, & x > 0,\ \lambda = m/2 \end{cases} \quad (4.9)$$

where $m$ is the number of degrees of freedom, and $\Gamma(x)$ is a gamma function defined as

$$\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\, dt, \qquad x > 0$$

For $x > 1$ we obtain, after integration by parts,

$$\Gamma(x) = (x - 1)\Gamma(x - 1)$$

Moreover, $\Gamma(1) = 1$; hence, for $x$ a positive integer, we have $\Gamma(x) = (x - 1)!$.

The mathematical expectation and the variance are, respectively, $E(X) = m$ and $\operatorname{Var}(X) = 2m$.

4.8 GAMMA DISTRIBUTION

A random variable $X$ is said to have a gamma distribution if its density is given by (Fig. 4.4)

$$f_X(x) = \begin{cases} 0, & x < 0 \\ \dfrac{x^{\alpha} e^{-x/\beta}}{\beta^{\alpha+1}\, \Gamma(\alpha + 1)}, & x \ge 0 \end{cases} \quad (4.10)$$

where $\alpha > -1$ and $\beta > 0$ are constants. For positive integer $\alpha$ and $x \ge 0$, Eq. (4.10) takes the form

$$f_X(x) = \frac{x^{\alpha} e^{-x/\beta}}{\alpha!\, \beta^{\alpha+1}}\, U(x)$$

The characteristic function reads

$$M_X(\theta) = \frac{1}{(1 - i\theta\beta)^{\alpha+1}}$$

Fig. 4.4.
Gamma probability density function: (1) $\alpha = -1/2$, $\beta = 1$; (2) $\alpha = 1$, $\beta = 1/2$; (3) $\alpha = 10$, $\beta = 1/5$.

The mathematical expectation and the variance are, respectively,

$$E(X) = (\alpha + 1)\beta \qquad \operatorname{Var}(X) = (\alpha + 1)\beta^2$$

Note that for $\alpha = 0$ a gamma distribution reduces to an exponential one with $a = 1/\beta$; for $\beta = 2$, $\alpha = m/2 - 1$, it reduces to a $\chi^2$ distribution with $m$ degrees of freedom.

4.9 WEIBULL DISTRIBUTION

A random variable $X$ is said to have a Weibull distribution if its density is given by

$$f_X(x) = \begin{cases} 0, & x < 0 \\ \alpha\beta x^{\beta - 1} \exp\left(-\alpha x^{\beta}\right), & x \ge 0 \end{cases} \quad (4.11)$$

where $\alpha$ and $\beta$ are positive constants. The mathematical expectation and the variance are expressible in terms of the gamma function:

$$E(X) = \alpha^{-1/\beta}\, \Gamma\!\left(1 + \frac{1}{\beta}\right) \qquad \operatorname{Var}(X) = \alpha^{-2/\beta}\left[\Gamma\!\left(1 + \frac{2}{\beta}\right) - \Gamma^2\!\left(1 + \frac{1}{\beta}\right)\right]$$

4.10 NORMAL OR GAUSSIAN DISTRIBUTION

A random variable $X$ is said to have a normal (or Gaussian) distribution if its density is given by

$$f_X(x) = \frac{1}{\sigma_X \sqrt{2\pi}} \exp\left[-\frac{(x - a)^2}{2\sigma_X^2}\right] \quad (4.12)$$

The relevant distribution function (inexpressible in terms of elementary functions: polynomial, trigonometric, or exponential) is

$$F_X(x) = \int_{-\infty}^{x} f_X(x)\, dx = \frac{1}{2} + \operatorname{erf}\left(\frac{x - a}{\sigma_X}\right) \quad (4.13)$$

where $\operatorname{erf}(x)$ is the error function, defined as

$$\operatorname{erf}(x) = \frac{1}{\sqrt{2\pi}} \int_0^{x} e^{-y^2/2}\, dy \quad (4.14)$$

Note that this function is odd:

$$\operatorname{erf}(-x) = \frac{1}{\sqrt{2\pi}} \int_0^{-x} e^{-y^2/2}\, dy = -\frac{1}{\sqrt{2\pi}} \int_0^{x} e^{-y^2/2}\, dy = -\operatorname{erf}(x)$$

In view of the familiar integral (see Appendix A)

$$\int_{-\infty}^{\infty} e^{-y^2/2}\, dy = \sqrt{2\pi} \quad (4.15)$$

it can be shown that

$$\operatorname{erf}(\infty) = \frac{1}{\sqrt{2\pi}} \int_0^{\infty} e^{-y^2/2}\, dy = \frac{1}{2}$$

Comprehensive tables of $\operatorname{erf}(x)$ are readily available; some values are listed in Appendix B. Note that the error function is often (e.g., in computer subroutines) defined in a different way, namely,

$$\operatorname{erf}^*(x) = \frac{2}{\sqrt{\pi}} \int_0^{x} e^{-y^2}\, dy \quad (4.16)$$

The connection between $\operatorname{erf}(x)$ and $\operatorname{erf}^*(x)$ is as follows:

$$\operatorname{erf}(x) = \frac{1}{2} \operatorname{erf}^*\!\left(\frac{x}{\sqrt{2}}\right) \quad (4.17)$$

A normal distribution is often denoted by $N(a, \sigma_X^2)$, and the parameters $a$ and $\sigma_X^2$ can be shown to represent the mathematical expectation and variance of $X$. Indeed,

$$E(X) = \int_{-\infty}^{\infty} x f_X(x)\, dx = \frac{1}{\sigma_X\sqrt{2\pi}} \int_{-\infty}^{\infty} x \exp\left[-\frac{1}{2}\left(\frac{x - a}{\sigma_X}\right)^2\right] dx$$

or, introducing a new variable $\xi = (x - a)/\sigma_X$,

$$E(X) = \frac{\sigma_X}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \xi \exp\left(-\frac{1}{2}\xi^2\right) d\xi + \frac{a}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left(-\frac{1}{2}\xi^2\right) d\xi$$

The first integral vanishes (the integrand being odd), and the second equals $\sqrt{2\pi}$ by Eq. (4.15). Thus,

$$E(X) = a \quad (4.18)$$

This also follows immediately from the fact that a normal density (4.12) is symmetric about $x = a$.
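The moment formulas stated above for the gamma density (4.10), and those of Sec. 4.5 for the Rayleigh density (4.7), can be confirmed by direct numerical integration. The sketch below (plain Python, crude trapezoidal quadrature; the parameter values are arbitrary) is merely a check and not part of the text:

```python
from math import exp, pi, sqrt, gamma as Gamma

def moments(pdf, lo, hi, n=100_000):
    """Trapezoidal estimates of (area, E(X), Var(X)) for a density on [lo, hi]."""
    h = (hi - lo) / n
    xs = [lo + i * h for i in range(n + 1)]
    fs = [pdf(x) for x in xs]
    fs[0] *= 0.5
    fs[-1] *= 0.5                    # trapezoidal end weights
    m0 = h * sum(fs)
    m1 = h * sum(x * f for x, f in zip(xs, fs))
    m2 = h * sum(x * x * f for x, f in zip(xs, fs))
    return m0, m1, m2 - m1 * m1

a = 2.0                              # Rayleigh parameter in Eq. (4.7)
ray = lambda x: (x / a**2) * exp(-x**2 / (2 * a**2))
area, mean, var = moments(ray, 0.0, 12 * a)
# expected: area = 1, mean = a*sqrt(pi/2) ~ 1.25a, var = (2 - pi/2)*a^2 ~ 0.43a^2

alpha, beta = 2.0, 1.5               # gamma density, Eq. (4.10)
gam = lambda x: x**alpha * exp(-x / beta) / (beta**(alpha + 1) * Gamma(alpha + 1))
g_area, g_mean, g_var = moments(gam, 0.0, 60.0)
# expected: g_mean = (alpha+1)*beta, g_var = (alpha+1)*beta^2
```

The integration limits are chosen so that the neglected tails are far below the quadrature error.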
For the variance we obtain

$$\operatorname{Var}(X) = \int_{-\infty}^{\infty} (x - a)^2 f_X(x)\, dx = \frac{1}{\sigma_X\sqrt{2\pi}} \int_{-\infty}^{\infty} (x - a)^2 \exp\left[-\frac{(x - a)^2}{2\sigma_X^2}\right] dx = \frac{\sigma_X^2}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \xi^2 \exp\left(-\frac{1}{2}\xi^2\right) d\xi = \sigma_X^2 \quad (4.19)$$

thus justifying use of the symbol $\sigma_X^2$ for this parameter.

The graph of a normal density function is shown in Fig. 4.5. The curve is symmetric about $a$ and descends steeply as $|x - a|$ increases. At $x = a$ the curve has a maximum equal to $1/\sigma_X\sqrt{2\pi}$; this ordinate increases as $\sigma_X$ decreases.

A normal random variable with zero mean and unit variance, $N(0, 1)$, is called standard or normalized, and its density and distribution functions are, respectively,

$$\varphi(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}x^2\right) \quad (4.20a)$$

$$\Phi(x) = \int_{-\infty}^{x} \varphi(x)\, dx = \frac{1}{2} + \operatorname{erf}(x) \quad (4.20b)$$

Fig. 4.5. Probability density function of normal distribution with (1) $a = 0$, $\sigma_X = 3$; (2) $a = 0$, $\sigma_X = 1$; (3) $a = 3$, $\sigma_X = 1$.

For $\Phi(x)$ the following approximate formula holds (see Abramowitz and Stegun):

$$\Phi(x) \simeq 1 - \varphi(x)\left(b_1 t + b_2 t^2 + b_3 t^3 + b_4 t^4 + b_5 t^5\right), \qquad t = \frac{1}{1 + px}$$

where $p = 0.2316419$, $b_1 = 0.319381530$, $b_2 = -0.356563782$, $b_3 = 1.781477937$, $b_4 = -1.821255978$, and $b_5 = 1.330274429$.

The central $k$th moment of a normally distributed random variable is given by

$$\mu_k = \int_{-\infty}^{\infty} (x - a)^k f_X(x)\, dx = \frac{\sigma_X^k}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \xi^k \exp\left(-\frac{1}{2}\xi^2\right) d\xi$$

Integration by parts yields the recurrence

$$\mu_k = (k - 1)\,\sigma_X^2\, \mu_{k-2}$$

Since $\mu_1 = 0$ and $\mu_2 = \sigma_X^2$, all odd central moments vanish, while the even ones are

$$\mu_{2k} = (2k - 1)!!\, \sigma_X^{2k}, \qquad k = 1, 2, \dots \quad (4.21)$$

where $(2k - 1)!! = 1 \cdot 3 \cdot 5 \cdots (2k - 1)$. The absolute $k$th moments $m_k$ as per Eq. (3.37) are

$$m_k = E\left(|X|^k\right) = \begin{cases} (2r - 1)!!\, \sigma_X^{2r}, & k = 2r \\ \sqrt{\dfrac{2}{\pi}}\; 2^r r!\, \sigma_X^{2r+1}, & k = 2r + 1 \end{cases}$$

Note that for a normal random variable the central moments $\mu_k$, and for $a = 0$ also the absolute moments $m_k$, are functions of the variance alone. The characteristic function of a normally distributed random variable is

$$M_X(\theta) = \frac{1}{\sigma_X\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{i\theta x} \exp\left[-\frac{(x - a)^2}{2\sigma_X^2}\right] dx$$

Recalling the equality (see Appendix A)

$$\int_{-\infty}^{\infty} e^{-pt^2 + qt}\, dt = \sqrt{\frac{\pi}{p}}\, e^{q^2/4p} \quad (4.22)$$

we have

$$M_X(\theta) = \exp\left(i\theta a - \tfrac{1}{2}\sigma_X^2\theta^2\right) \qquad \psi_X(\theta) = i\theta a - \tfrac{1}{2}\sigma_X^2\theta^2 \quad (4.23)$$

These formulas can also be used for the moments of $X$.
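Both the Abramowitz-Stegun approximation for $\Phi(x)$ quoted above and the even-moment formula (4.21) can be verified numerically. The sketch below (plain Python; the test values are arbitrary) uses `math.erf`, which implements the $\operatorname{erf}^*$ of Eq. (4.16), so the conversion (4.17) appears explicitly:

```python
from math import erf, exp, pi, sqrt

def phi(x):
    """Standard normal density, Eq. (4.20a)."""
    return exp(-0.5 * x * x) / sqrt(2 * pi)

def Phi_exact(x):
    # math.erf is erf* of Eq. (4.16); Eq. (4.17) converts it to Phi
    return 0.5 * (1.0 + erf(x / sqrt(2)))

def Phi_AS(x):
    """Abramowitz-Stegun 26.2.17 polynomial quoted in the text (x >= 0)."""
    p = 0.2316419
    b = (0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429)
    t = 1.0 / (1.0 + p * x)
    poly = sum(bk * t**(k + 1) for k, bk in enumerate(b))
    return 1.0 - phi(x) * poly

max_err = max(abs(Phi_AS(x / 10) - Phi_exact(x / 10)) for x in range(60))

# fourth central moment of N(0, sigma^2): mu_4 = 3!! * sigma^4 = 3 sigma^4, Eq. (4.21)
sigma, h, L = 1.7, 1e-3, 15.0
xs = [-L + i * h for i in range(int(2 * L / h) + 1)]
mu4 = h * sum(x**4 * exp(-0.5 * (x / sigma) ** 2) / (sigma * sqrt(2 * pi)) for x in xs)
```

The approximation error `max_err` stays below about $10^{-7}$, consistent with the accuracy quoted by Abramowitz and Stegun, and `mu4` reproduces $3\sigma^4$ to quadrature accuracy.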
From (4.13) it follows that the probability of $X$ falling in the interval $(x_1, x_2]$ is

$$P(x_1 < X \le x_2) = F_X(x_2) - F_X(x_1) = \operatorname{erf}\left(\frac{x_2 - a}{\sigma_X}\right) - \operatorname{erf}\left(\frac{x_1 - a}{\sigma_X}\right) \quad (4.24)$$

For the symmetric interval $x_1 = a - k\sigma_X$, $x_2 = a + k\sigma_X$, $k > 0$, Eq. (4.24) takes the form

$$P(|X - a| \le k\sigma_X) = 2\operatorname{erf}(k) \quad (4.25)$$

4.11 TRUNCATED NORMAL DISTRIBUTION

A random variable $X$ is said to have a truncated normal distribution if its density is given by

$$f_X(x) = \begin{cases} A \exp\left[-\dfrac{(x - x_0)^2}{2\sigma^2}\right], & x_1 \le x \le x_2 \\ 0, & \text{otherwise} \end{cases} \quad (4.26)$$

where the normalization constant $A$ is

$$A = \frac{1}{\sigma\sqrt{2\pi}} \left[\operatorname{erf}\left(\frac{x_2 - x_0}{\sigma}\right) - \operatorname{erf}\left(\frac{x_1 - x_0}{\sigma}\right)\right]^{-1}$$

As $x_1 \to -\infty$ and $x_2 \to \infty$, we have $A \to 1/\sigma\sqrt{2\pi}$, and $X$ becomes a normally distributed random variable with $E(X) = x_0$ and $\operatorname{Var}(X) = \sigma^2$. A normal variable is said to have a symmetrically truncated distribution if in (4.26) we have $x_1 = x_0 - k\sigma$ and $x_2 = x_0 + k\sigma$; $A$ then becomes $A = [2\operatorname{erf}(k)]^{-1}(\sigma\sqrt{2\pi})^{-1}$. From Appendix B we have for $k = 2$, $\operatorname{erf}(2) = 0.47725$, so that $2\operatorname{erf}(k)$ differs only slightly from unity.

4.12 FUNCTION OF A RANDOM VARIABLE

We now consider the mapping of a random variable into another random variable by means of some deterministic relationship. Let $X$ be a random variable. This signifies that to each experimental outcome $\omega$ we assign a real number $X(\omega)$. The set $(X \le x)$ is an event for any real number $x$, and the probabilities satisfy

$$P(X = +\infty) = P(X = -\infty) = 0 \quad (4.30)$$

Suppose that a real function $\varphi(x)$ of a real variable $x$ is given. We construct the function

$$Y = \varphi(X) \quad (4.31)$$

so that to each outcome $\omega$ there corresponds the number $Y(\omega) = \varphi[X(\omega)]$, and we seek the distribution and density of $Y$ in terms of those of $X$. In particular, for the linear function $Y = \alpha X + \beta$, $\alpha \neq 0$,

$$f_Y(y) = \frac{1}{|\alpha|}\, f_X\!\left(\frac{y - \beta}{\alpha}\right) \quad (4.46)$$

Now let $\alpha = 1/\sigma_X$ and $\beta = -E(X)/\sigma_X$, where $E(X)$ is the mathematical expectation of $X$ and $\sigma_X$ its standard deviation. Then

$$Y = \frac{X - E(X)}{\sigma_X}$$

is the normalized random variable, and

$$f_Y(y) = \sigma_X\, f_X[\sigma_X y + E(X)] \quad (4.47)$$

Consider two special cases.

Example 4.2
Let $X$ be uniformly distributed in the interval $(x_1, x_2)$, that is,

$$f_X(x) = \begin{cases} \dfrac{1}{x_2 - x_1}, & x_1 < x < x_2 \\ 0, & \text{otherwise} \end{cases} \quad (4.48)$$

Then by (4.46), $Y = \alpha X + \beta$ is uniformly distributed in the interval $(\alpha x_1 + \beta,\ \alpha x_2 + \beta)$ for $\alpha > 0$, and in the interval $(\alpha x_2 + \beta,\ \alpha x_1 + \beta)$ for $\alpha < 0$.

Example 4.3
If $X$ is $N(a, \sigma_X^2)$, then

$$f_X(x) = \frac{1}{\sigma_X\sqrt{2\pi}} \exp\left[-\frac{(x - a)^2}{2\sigma_X^2}\right]$$

and

$$f_Y(y) = \frac{1}{|\alpha|\sigma_X\sqrt{2\pi}} \exp\left\{-\frac{[y - (\alpha a + \beta)]^2}{2\alpha^2\sigma_X^2}\right\} \quad (4.49)$$

demonstrating that a linear function of a normally distributed random variable is likewise normal, $N(\alpha a + \beta,\ \alpha^2\sigma_X^2)$.
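The $k$-sigma probabilities of Eq. (4.25), and the near-unity normalization factor $2\operatorname{erf}(k)$ of the symmetrically truncated distribution, are easy to evaluate with the standard library; again `math.erf` is the $\operatorname{erf}^*$ of Eq. (4.16), converted via Eq. (4.17). This sketch is illustrative only:

```python
from math import erf as erf_star, sqrt

def erf_text(x):
    """erf as defined in Eq. (4.14); related to math.erf by Eq. (4.17)."""
    return 0.5 * erf_star(x / sqrt(2))

# P(|X - a| <= k*sigma) = 2 erf(k), Eq. (4.25) -- the familiar 68-95-99.7 rule
p1, p2, p3 = (2 * erf_text(k) for k in (1, 2, 3))

# symmetric truncation at k = 2: the factor [2 erf(2)]^{-1} differs little from 1
A_ratio = 1.0 / (2 * erf_text(2))
```

Running this gives $p_1 \simeq 0.6827$, $p_2 \simeq 0.9545$, $p_3 \simeq 0.9973$, and `A_ratio` $\simeq 1.048$, confirming that truncation at two standard deviations barely perturbs the normal density.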
4.16 EXPONENTS AND LOGARITHMS OF A RANDOM VARIABLE

Now let

$$Y = e^X \quad (4.50)$$

The function $y = e^x$ is strictly monotone increasing, and we have

$$F_Y(y) = F_X(\ln y)\, U(y)$$

$$f_Y(y) = \frac{1}{y}\, f_X(\ln y)\, U(y) \quad (4.51)$$

In particular, when $X$ is $N(a, \sigma_X^2)$, we obtain

$$f_Y(y) = \begin{cases} 0, & y \le 0 \\ \dfrac{1}{y\sigma_X\sqrt{2\pi}} \exp\left[-\dfrac{(\ln y - a)^2}{2\sigma_X^2}\right], & y > 0 \end{cases} \quad (4.52)$$

The random variable $Y$ is said to have a logarithmic-normal or, for brevity, log-normal distribution. We can show that the probability of $Y$ falling within an interval $[y_1, y_2]$ is $(y_1, y_2 > 0)$

$$P(y_1 \le Y \le y_2) = \operatorname{erf}\left(\frac{\ln y_2 - a}{\sigma_X}\right) - \operatorname{erf}\left(\frac{\ln y_1 - a}{\sigma_X}\right) \quad (4.53)$$

Indeed,

$$P(y_1 \le Y \le y_2) = \frac{1}{\sigma_X\sqrt{2\pi}} \int_{y_1}^{y_2} \frac{1}{y} \exp\left[-\frac{(\ln y - a)^2}{2\sigma_X^2}\right] dy$$

Substituting in the integral

$$\xi = \frac{\ln y - a}{\sigma_X}$$

we obtain

$$P(y_1 \le Y \le y_2) = \frac{1}{\sqrt{2\pi}} \int_{(\ln y_1 - a)/\sigma_X}^{(\ln y_2 - a)/\sigma_X} \exp\left(-\frac{1}{2}\xi^2\right) d\xi$$

which immediately leads to the desired result (4.53).

We now seek the mean $E(Y)$ and the variance $\operatorname{Var}(Y)$, using Eqs. (4.38) and (4.39):

$$E(Y) = \frac{1}{\sigma_X\sqrt{2\pi}} \int_{-\infty}^{\infty} e^x \exp\left[-\frac{(x - a)^2}{2\sigma_X^2}\right] dx$$

which, in view of Eq. (4.15), after completing the square in the exponent, transforms into

$$E(Y) = \exp\left(a + \tfrac{1}{2}\sigma_X^2\right) \quad (4.54)$$

In order to find the variance, we first calculate in Eq. (4.39)

$$E(Y^2) = \frac{1}{\sigma_X\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{2x} \exp\left[-\frac{(x - a)^2}{2\sigma_X^2}\right] dx = \exp\left[2(a + \sigma_X^2)\right]$$

whence

$$\operatorname{Var}(Y) = \exp\left(2a + \sigma_X^2\right)\left[\exp\left(\sigma_X^2\right) - 1\right] \quad (4.55)$$

If instead of (4.50) we have $Y = 10^X$, Eq. (4.51) becomes

$$f_Y(y) = \frac{M}{y}\, f_X(\log_{10} y)\, U(y), \qquad M = \log_{10} e = 0.4343 \quad (4.56)$$

Consider now the Napierian logarithm of a random variable,

$$U = \ln V$$

where $V$ is a positive-valued random variable with probability density $f_V(v)$. Then, since the function $u = \ln v$ is strictly monotone increasing, we have

$$f_U(u) = e^u f_V(e^u)$$

In particular, if $V$ has a log-normal distribution as per Eq. (4.52), then $U$ has a normal distribution $N(a, \sigma_X^2)$, as anticipated.

4.17 DISTRIBUTION AND DENSITY FUNCTIONS OF A FUNCTION OF A RANDOM VARIABLE (GENERAL CASE)

Consider now the case when $y = \varphi(x)$ is not a monotone function (see Fig. 4.10), so that on the abscissa axis the set of values $x$ for which $\varphi(x) \le y$ may consist of one or more intervals, and $F_Y(y)$ is the sum of the probabilities of $X$ falling in these intervals. If for given $y$ this set is the single interval $(\psi_1(y), \psi_2(y))$, then

$$F_Y(y) = F_X[\psi_2(y)] - F_X[\psi_1(y)] \quad (4.59)$$

$$f_Y(y) = f_X[\psi_2(y)]\,\frac{d\psi_2}{dy} - f_X[\psi_1(y)]\,\frac{d\psi_1}{dy} \quad (4.60)$$

Example 4.5
Consider the function $Y = X^2$ (Fig. 4.12). For every $y > 0$ there is a single interval where $x^2$
$< y$, with

$$\psi_1(y) = -\sqrt{y} \qquad \psi_2(y) = \sqrt{y}$$

and in accordance with (4.59) and (4.60) we write for $y > 0$

$$F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y}) \quad (4.61)$$

$$f_Y(y) = \frac{1}{2\sqrt{y}}\left[f_X(\sqrt{y}) + f_X(-\sqrt{y})\right] \quad (4.62)$$

and $F_Y(y) = 0$, $f_Y(y) = 0$ for $y < 0$.

Fig. 4.11. $\varphi(x)$ constant in some interval $(x_1, x_2)$: the probability distribution function $F_Y(y)$ of the random variable $Y = \varphi(X)$ undergoes a jump discontinuity equal to the integral of $f_X(x)$ over the interval $(x_1, x_2)$.

Fig. 4.12. Determination of the probability density function of the square of a random variable.

If, for example, $X$ is a standard normal variable $N(0, 1)$, Eq. (4.20a), we find

$$f_Y(y) = \frac{1}{\sqrt{2\pi y}}\, e^{-y/2}, \qquad y > 0$$

which represents a one-degree-of-freedom chi-square probability density (4.9), with $\lambda = \frac{1}{2}$, $\Gamma(\frac{1}{2}) = \sqrt{\pi}$.

Example 4.6
Consider now the function $Y = |X|$ (Fig. 4.13). In this case we again have a single interval where $|x| \le y$ is satisfied, with

$$\psi_1(y) = -y \qquad \psi_2(y) = y$$

As in the preceding example,

$$F_Y(y) = 0, \quad f_Y(y) = 0, \qquad y < 0$$

Combining (4.59) and (4.60), we obtain

$$F_Y(y) = F_X(y) - F_X(-y) \quad (4.63)$$

$$f_Y(y) = f_X(y) + f_X(-y) \quad (4.64)$$

Fig. 4.13. Determination of the probability density function of the absolute value of a random variable.

If $X$ is $N(a, \sigma_X^2)$ we obtain for $f_Y(y)$ and $F_Y(y)$, respectively,

$$f_Y(y) = \frac{1}{\sigma_X\sqrt{2\pi}}\left\{\exp\left[-\frac{(y - a)^2}{2\sigma_X^2}\right] + \exp\left[-\frac{(y + a)^2}{2\sigma_X^2}\right]\right\} \quad (4.65)$$

$$F_Y(y) = \left[\operatorname{erf}\left(\frac{y - a}{\sigma_X}\right) + \operatorname{erf}\left(\frac{y + a}{\sigma_X}\right)\right] U(y) \quad (4.66)$$

for $y > 0$, and $f_Y(y) = 0$ for $y < 0$. For $a = 0$ we obtain

$$f_Y(y) = \begin{cases} 0, & y < 0 \\ \sqrt{\dfrac{2}{\pi}}\, \dfrac{1}{\sigma_X} \exp\left(-\dfrac{y^2}{2\sigma_X^2}\right), & y \ge 0 \end{cases} \quad (4.67)$$

$$F_Y(y) = 2\operatorname{erf}\left(\frac{y}{\sigma_X}\right) U(y) \quad (4.68)$$

These functions are shown in Fig. 4.14. Sometimes a random variable $Y$ with density (4.67) is said to have a one-sided normal distribution with parameter $\sigma_X$. Note that the standard deviation of $Y$ does not exceed $\sigma_X$.

Example 4.7
A random variable $X$ is uniformly distributed in the interval $(0, 1]$. The random variable $Y$ is a strictly monotone increasing function of $X$: $Y = \varphi(X)$. We are interested in $F_Y(y)$ and $f_Y(y)$.
We have

$$f_X(x) = \begin{cases} 1, & 0 < x \le 1 \\ 0, & \text{otherwise} \end{cases}$$

and, since $\varphi$ is strictly monotone increasing with inverse $\psi$,

$$F_Y(y) = F_X[\psi(y)] = \psi(y) \qquad f_Y(y) = \frac{d\psi(y)}{dy}$$

for $y$ in the range of $\varphi$; that is, the distribution function of $Y$ coincides with the inverse of the transformation.

As an application of these concepts, consider the problem of flood protection by dikes. The probability that the annual maximum sea level exceeds a given height $h$ is

$$P(H_m > h) = 1 - F_{H_m}(h) \quad (4.73)$$

where $H_m$ is the maximum storm surge above the mean sea level.

Fig. 4.17. Village of Nieuwerkerk on Schouwen-Duiveland island in Zeeland, one of the areas hardest hit during the disastrous flood on the night of 31 January-1 February 1953. Two months later the water level was still high, the village was accessible only by rowing boats, and the few inhabitants who had remained behind camped in their attics. Today the damage has been repaired, the dead have been buried, and Nieuwerkerk is once more a perfectly ordinary Dutch village; but the memory of the disaster lives on as strong as ever. (Reproduced with permission of B.V. Uitgeversmaatschappij Elsevier, from Scherer, K., and Werkman, E., "Holland in Close-Up," Elsevier, Amsterdam, 1979, p. 117.)

In 1939 Wemelsfelder found that the annual exceedance frequencies during high tide at the Hook of Holland in the period 1888-1937 followed the exponential distribution

$$P(H_m > h) = \exp\left(-\frac{h - \alpha}{\beta}\right) \quad (4.74)$$

where $\alpha$ and $\beta$ are some constants. We confine ourselves to the question of optimal dike height from the economical point of view, presenting the solution by Van Danzig. The economic decision problem can then be formulated as follows: taking into account the cost of constructing the dike, the material losses when a breach occurs, and the distribution of different sea levels, determine the optimal height of the dike.

Assume that future dikes at the site in question will all have the same height $H$ above a given standard level $H_0$, so that the amount $x$ by which the dikes must be elevated is

$$x = H - H_0 \quad (4.75)$$

The cost $C$ of elevating the dikes from $H_0$ to $H$ is a function of $x$. The simplest assumption about the possibility of losses is as follows.
Let $h$ denote the sea level along the dikes at any moment; then no loss is incurred so long as $h < H$; if $h > H$, one may disregard the possibility of partial losses and reckon with total loss only, that is, with the circumstance that all houses, farms, livestock, industries, etc., in the polder are lost. Let $V$ be the total value of the property in the polder, and assume that the consequent losses (migration costs of the population and livestock, slump in production, etc.) are included in it. We then have

$$S = \begin{cases} 0, & \text{if } h < H \\ V, & \text{if } h \ge H \end{cases} \quad (4.76)$$

Van Danzig treated the problem as one of insurance; he assumed that a sum $L$ would be reserved to cover all future losses. If $L$ is invested at a certain rate of interest $i$ (in percent), it must cover the expected values of all future losses, $P(H_m > H)V$ per annum, and we have

$$L = P(H_m > H)\, V \sum_{t=0}^{\infty} (1 + 0.01i)^{-t} \simeq P(H_m > H)\, V \int_0^{\infty} e^{-0.01it}\, dt = \frac{100}{i}\, P(H_m > H)\, V \quad (4.77)$$

The cost of elevating the dikes is

$$I = I_0 + kx \quad (4.78)$$

where $I_0$ is the initial cost and $k$ the subsequent cost of elevation per meter. Adding (4.77) and (4.78), and using (4.74), we obtain

$$I + L = I_0 + kx + \frac{100 V}{i}\, e^{-(H - \alpha)/\beta} \quad (4.79)$$

Now we determine $x$ so that $I + L$ is minimal, that is,

$$\frac{d(I + L)}{dx} = 0 \quad (4.80)$$

or

$$k - \frac{100 V}{i\beta}\, e^{-(H - \alpha)/\beta} = 0 \quad (4.81)$$

yielding the optimal dike elevation increment

$$x = \alpha + \beta \ln \frac{100 V}{i k \beta} - H_0 \quad (4.82)$$

With the constants $\alpha = 1.96$ m, $\beta = 0.33$ m, $V = 24 \cdot 10^9$ guilders, $k = 40 \cdot 10^6$ guilders/m, $i = 1.5\%$, $H_0 = 4.25$ m, Van Danzig's analysis yielded the optimal increment of 1.57 m.

PROBLEMS

4.1. A dike is designed with 5 m freeboard above the mean sea level. The probability of its being topped by waves in 1 yr is 0.005. What is the probability of waves exceeding 5 m within 200 yrs?

4.2. Using Eq. (4.23) for the characteristic function of a normally distributed random variable, show that

$$E(X) = a \qquad \operatorname{Var}(X) = \sigma_X^2$$

4.3. Suppose that the duration of successful performance (lifetime, in years) of a piece of equipment is normally distributed with a mean of 8 yrs.
What is the largest value its standard deviation may have if the operator requires at least 95% of the population to have lifetimes exceeding 6 yrs? What is the probability of a piece of equipment turning out faulty at delivery?

4.4. A system is activated at $t = 0$. Its time of failure is the random variable $T$ with distribution function $F_T(t)$ and density $f_T(t)$. Denote the hazard function by $h(t)$ (see Prob. 3.5).
(a) Does $T$ in Prob. 3.4 have an exponential distribution?
(b) Show that if $h(t) = \alpha t$, $T$ has a Rayleigh distribution.
(c) Show that if $h(t) = \alpha\beta t^{\beta - 1}$, $T$ has a Weibull distribution.
(d) Find $f_T(t)$ if $h(t) = \alpha e^{\beta t}$.

Remark. The resulting $f_T(t)$ is called an extreme-value density function. For interesting failure models leading to the extreme-value distribution, see Epstein (1958).

4.5. A system consisting of $n$ elements in parallel with equal reliabilities $R(t)$ fails only when all elements fail simultaneously.
(a) Show that the reliability $R_n(t)$ of such a system is

$$R_n(t) = 1 - [1 - R(t)]^n$$

(b) Find $R_n(t)$ when the conditional failure rate of each element is constant, $h(t) = \alpha$.
(c) Find the allowable operation time $t_a$ of the system, such that $R_n(t_a) = r$, where $r$ is a required reliability, for $n = 1, 2, 3$.
(d) Show that the mean lifetime of the system is

$$E(T) = \frac{1}{\alpha}\left(1 + \frac{1}{2} + \cdots + \frac{1}{n}\right)$$

and interpret the result for $n \to \infty$.

4.6. Repeat Problem 4.5, but with the elements arranged in series instead of in parallel.

4.7. Suppose that the reliabilities of $n$ individual elements are identical and are given by

$$R(t) = \begin{cases} 1 - t/\tau, & 0 \le t \le \tau \\ 0, & t > \tau \end{cases}$$

Examine the reliability of the parallel and of the series system as $n \to \infty$.

4.8. The random variables $X$ and $Y$ are linked by the following functional relationship:

$$Y = \begin{cases} -a^3, & X < -a \\ X^3, & -a \le X \le a \\ a^3, & X > a \end{cases}$$

$X$ has a uniform distribution in the interval $(-a, a)$. Find $F_Y(y)$ and $f_Y(y)$, and present them graphically.

4.9. Given the functions of Problem 4.8 but with $X$ having a uniform distribution in the interval $(-2a, 2a)$, show that $Y$ is a mixed random variable.

4.10. $X$ has a Cauchy distribution.
Show that $Y = 1/X$ also has a Cauchy distribution.

CITED REFERENCES

Abramowitz, M., and Stegun, I. A., Eds., Handbook of Mathematical Functions, Dover Publications, New York, 1970, p. 932, formula 26.2.17.
Epstein, B., "The Exponential Distribution and Its Role in Life Testing," Ind. Quality Control, 15 (6), 4-9 (1958).
National Bureau of Standards, Tables of Normal Probability Functions, Appl. Math. Series, 23, 1953.
Van Danzig, D., "Economic Decision Problems for Flood Prevention," Econometrica, 24, 276-287 (1956).
Wemelsfelder, P. J., "Wetmatigheden in het Optreden van Stormvloeden," De Ingenieur, No. 9 (1939). (In Dutch.)

RECOMMENDED FURTHER READING

Melsa, J. L., and Sage, A. P., An Introduction to Probability and Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1973; Chap. 3: Random Variables, pp. 46-109; Chap. 4: Functions of Random Variables, pp. 110-188.
Papoulis, A., Probability, Random Variables, and Stochastic Processes, Intern. Student Ed., McGraw-Hill Kogakusha, Tokyo, 1965; Chap. 3: Repeated Trials, pp. 47-82; Chap. 4: The Concept of a Random Variable, pp. 83-115; Chap. 5: Functions of One Random Variable, pp. 116-164.

chapter 5

Reliability of Structures Described by a Single Random Variable

The scope of the knowledge acquired by us so far suffices for reliability analysis (recalling that reliability is the probability of nonfailure performance) of the simplest structures, those described by a single random variable. We are concerned with generally differing structures called upon to realize different functional assignments; nonfunctional structures are not considered. In other words, reliability is associated with a purpose, namely, exploitation of the structure in accordance with defined goals. The acceptability criterion consists in the reliability exceeding some specified level. We now proceed to concrete examples of structural reliability.
5.1 A BAR UNDER RANDOM FORCE

Consider a bar of constant cross-sectional area $a$ under a tensile force $N$, which is a random variable with probability distribution function $F_N(n)$, $n > 0$ (Fig. 5.1). The conventional strength requirement is for the normal stress $\Sigma$, with values $\sigma$, to be less than or equal to the allowable stress $\sigma_{\text{allow}}$:

$$\Sigma = \frac{N}{a} \le \sigma_{\text{allow}} \quad (5.1)$$

We simplify the analysis by assuming that both $a$ and $\sigma_{\text{allow}}$ are deterministic quantities. The reliability $R$ will then be defined as the probability of event (5.1):

$$R = P(\Sigma \le \sigma_{\text{allow}}) = P\left(\frac{N}{a} \le \sigma_{\text{allow}}\right) \quad (5.2)$$

Fig. 5.1. Bar under tensile force.

or

$$R = F_N(\sigma_{\text{allow}}\, a) \quad (5.3)$$

In other words, the reliability of a bar equals the probability distribution function of the tensile force $N$ at the level $\sigma_{\text{allow}}\, a$. If, for example, $N$ has a uniform distribution in the interval $(n_1, n_2)$, then for $n_1 \le \sigma_{\text{allow}}\, a \le n_2$

$$R = \frac{\sigma_{\text{allow}}\, a - n_1}{n_2 - n_1}$$

If the area is chosen as

$$a_{\text{worst}} = \frac{n_2}{\sigma_{\text{allow}}} \quad (5.9)$$

we have $R = 1$, and the reliability requirement $R \ge r$ is satisfied for any $r$. This design, according to the maximal possible value $n_2$ of the tensile force $N$, is the "worst"-case consideration.

Concentrate now on the "short-of-the-worst" case $a < a_{\text{worst}}$. The required area follows from the equality $R = r$:

$$a_{\text{req}} = \frac{n_1 + r(n_2 - n_1)}{\sigma_{\text{allow}}} = \frac{E(N) + \sqrt{3}\,(2r - 1)\,\sigma_N}{\sigma_{\text{allow}}} \quad (5.11)$$

where $E(N) = (n_1 + n_2)/2$ and $\sigma_N = (n_2 - n_1)/2\sqrt{3}$ are the mean and standard deviation of $N$. As $n_1 \to n_2 = n$, we have $\sigma_N \to 0$, and $N$ tends to become a random variable with causal distribution: it takes on the value $n$ with probability 1. We return then to the deterministic design load:

$$a = \frac{E(N)}{\sigma_{\text{allow}}} = \frac{n}{\sigma_{\text{allow}}} \quad (5.12)$$

otherwise Eq. (5.11) has to be used for the sought area. Equation (5.11) indicates that "design" according to the mean load is unconservative if $r > 0.5$; it may only be used if $\sigma_N \ll E(N)$. Thus the deterministic design is a particular case of the probabilistic design. Note that Eq. (5.11) tells us that when a reliability as low as $r = 0.5$ is required, the mean-load-based design may be used, because then the second term in Eq. (5.11) vanishes and $a_{\text{req}} = E(N)/\sigma_{\text{allow}}$. This, however, implies, in accordance with the statistical interpretation of probability, that nearly half the ensemble of structures so designed will fail (which is quite a lot!). Note also that Eq.
(5.3) is valid for all positive-valued distributions of $N$, since then the associated deterministic strength requirement is given by (5.2). If, for example, $N$ has an exponential distribution (Sec. 4.6),

$$F_N(n) = \left\{1 - \exp\left[-\frac{n}{E(N)}\right]\right\} U(n)$$

then

$$R = 1 - \exp\left[-\frac{\sigma_{\text{allow}}\, a}{E(N)}\right] \quad (5.13)$$

and the required area is

$$a_{\text{req}} = \frac{E(N)}{\sigma_{\text{allow}}} \ln \frac{1}{1 - r} \quad (5.14)$$

Thus, if the required reliability is 0.99, the required area is $\ln 100 = 4.605$ times that calculated according to the mean. Note that now the calculation according to the "worst" case is ruled out: it would yield an infinite $a_{\text{req}}$. The determination of $a_{\text{req}}$ is illustrated in Fig. 5.3.

Fig. 5.3. Calculation of required cross-sectional area.

Consider now the case where the force $N$ can take on negative values as well. Assume also that the bar is not slender, so that there is no possibility of buckling failure. Then the strength requirement is written as

$$|\Sigma| = \frac{|N|}{a} \le \sigma_{\text{allow}} \quad (5.15)$$

The reliability is given by

$$R = P\left(\frac{|N|}{a} \le \sigma_{\text{allow}}\right) = P(|N| \le \sigma_{\text{allow}}\, a) = P(-\sigma_{\text{allow}}\, a \le N \le \sigma_{\text{allow}}\, a) = F_N(\sigma_{\text{allow}}\, a) - F_N(-\sigma_{\text{allow}}\, a) \quad (5.16)$$

and the strength requirement becomes

$$F_N(\sigma_{\text{allow}}\, a) - F_N(-\sigma_{\text{allow}}\, a) \ge r \quad (5.17)$$

where the equality yields the required area $a$. Note that in this case also the design according to the "worst" case ensures any level of reliability. Indeed, if $N$ is confined to the interval $(n_1, n_2)$ and $n_2 > -n_1$ (Fig. 5.4a), we may choose $a = n_2/\sigma_{\text{allow}}$, yielding

$$F_N(\sigma_{\text{allow}}\, a) = 1 \qquad F_N(-\sigma_{\text{allow}}\, a) = 0$$

and $R = 1$. If, however, $n_2 < -n_1$ (Fig. 5.4b), we may choose $a = -n_1/\sigma_{\text{allow}}$ with

$$F_N(-\sigma_{\text{allow}}\, a) = F_N(n_1) = 0 \qquad F_N(\sigma_{\text{allow}}\, a) = F_N(-n_1) = 1$$

and still $R = 1$. If $n_2 = -n_1$ (Fig. 5.4c), choosing $a = n_2/\sigma_{\text{allow}}$ again yields $R = 1$.

Fig. 5.4. Worst-case design: worst situation governed by deterministic load equal to (a) $n_2$, (b) $n_1$, (c) either $n_1$ or $n_2$.
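Equations (5.13) and (5.14) for the exponentially distributed tensile force are easily exercised numerically. The sketch below (plain Python) uses hypothetical values of $E(N)$ and $\sigma_{\text{allow}}$; only the ratio of $a_{\text{req}}$ to the mean-based area, $\ln 100 \simeq 4.605$ at $r = 0.99$, is of interest:

```python
from math import exp, log

def required_area(mean_N, sigma_allow, r):
    """Eq. (5.14): area giving reliability r under an exponentially
    distributed tensile force with mean E(N)."""
    return mean_N / sigma_allow * log(1.0 / (1.0 - r))

def reliability(area, mean_N, sigma_allow):
    """Eq. (5.13): R = 1 - exp(-sigma_allow * a / E(N))."""
    return 1.0 - exp(-sigma_allow * area / mean_N)

# hypothetical data: mean force 10 kN, allowable stress 200 MPa
mean_N, sigma_allow = 10.0e3, 200.0e6
a99 = required_area(mean_N, sigma_allow, 0.99)
ratio = a99 / (mean_N / sigma_allow)        # ln(100), the factor quoted in the text
R_check = reliability(a99, mean_N, sigma_allow)
```

Substituting the resulting area back into Eq. (5.13) recovers $R = 0.99$, a useful closure check on any design formula of this type.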
If, however, the required reliability is not too high, the bar will be "overdesigned." For $n_2 = -n_1$ and $N$ uniformly distributed in $(n_1, n_2)$, we have from (5.17):

$$F_N(\sigma_{\text{allow}}\, a) - F_N(-\sigma_{\text{allow}}\, a) = r$$

but, by symmetry,

$$F_N(-\sigma_{\text{allow}}\, a) = 1 - F_N(\sigma_{\text{allow}}\, a)$$

yielding

$$F_N(\sigma_{\text{allow}}\, a) = \frac{1}{2}(1 + r)$$

or

$$\frac{\sigma_{\text{allow}}\, a + n_2}{2n_2} = \frac{1}{2}(1 + r), \qquad \sigma_{\text{allow}}\, a_{\text{req}} = n_2 r$$

and

$$a_{\text{req}} = \frac{n_2 r}{\sigma_{\text{allow}}} = r\, a_{\text{worst}}$$

For $r = 0.9$, the required area is only $0.9\, a_{\text{worst}}$; however, for $r = 0.99$, it is $0.99\, a_{\text{worst}}$; that is, $a_{\text{worst}}$ may as well be used as the required area.

Consider now the case where $N$ has a symmetrical truncated normal distribution with zero mean in the interval $(n_1, n_2)$ (Sec. 4.11), with

$$n_1 = -k\hat\sigma \qquad n_2 = k\hat\sigma$$

where $\hat\sigma$ is a parameter of the distribution. As $k \to \infty$, the truncated distribution approaches the normal distribution $N(0, \hat\sigma^2)$, and $\hat\sigma$ approaches the standard deviation $\sigma_N$. Design according to "mean plus three standard deviations," that is, the choice

$$a_{\text{req}} = \frac{3\sigma_N}{\sigma_{\text{allow}}}$$

corresponds to a reliability $R = 2\operatorname{erf}(3) = 0.9973$. Design according to "two standard deviations" is equivalent to the choice

$$a_{\text{req}} = \frac{2\sigma_N}{\sigma_{\text{allow}}}$$

yielding $R = 2\operatorname{erf}(2) = 0.9545$.

Note that calculation of the required area for a bar under an exponentially distributed tensile force according to the "mean plus $\alpha$ times standard deviation" rule yields, according to (5.13), $R = 1 - e^{-(1+\alpha)}$, so that in order to achieve the required reliability, say $r = 0.9973$, we now have to use $\alpha = 4.9145$ instead of $\alpha = 3$ as for the normally distributed force.

For the case where the bar is slender, and buckling in compression is possible, the strength requirement (5.15) is replaced by

$$-\frac{\pi^2 EI}{4l^2} \le N \le \sigma_{\text{allow}}\, a$$

where the left-hand term represents the buckling load of a clamped-free bar. Reliability is determined as the probability of the above random event:

$$R = P\left(-\frac{\pi^2 EI}{4l^2} \le N \le \sigma_{\text{allow}}\, a\right) = F_N(\sigma_{\text{allow}}\, a) - F_N\left(-\frac{\pi^2 EI}{4l^2}\right)$$

which may be used either for determining the reliability or for designing the bar with the desired level of reliability (Fig. 5.5).

Fig. 5.5. Calculation of reliability of a slender bar.

5.2
A BAR WITH RANDOM STRENGTH

Consider now a bar under a deterministic load $n$, having a random strength $\Sigma_{\text{allow}}$ with given continuous probability distribution function $F_{\Sigma_{\text{allow}}}(\sigma_{\text{allow}})$. We confine ourselves here to the case where $\Sigma_{\text{allow}}$ takes on only positive values, and moreover $n > 0$. Then the reliability is

$$R = P\left(\frac{n}{a} \le \Sigma_{\text{allow}}\right) = 1 - F_{\Sigma_{\text{allow}}}\left(\frac{n}{a}\right) \quad (5.21)$$

Assuming an exponential distribution for $\Sigma_{\text{allow}}$,

$$F_{\Sigma_{\text{allow}}}(\sigma_{\text{allow}}) = 1 - \exp\left[-\frac{\sigma_{\text{allow}}}{E(\Sigma_{\text{allow}})}\right]$$

we obtain immediately

$$R = \exp\left[-\frac{n}{a\, E(\Sigma_{\text{allow}})}\right]$$

The reliability $R$ is given by the shaded area in Fig. 5.6.

Fig. 5.6. Shaded area equals the reliability of the structure with random strength subjected to a deterministic load.

The condition $R = r$ yields the required area:

$$a_{\text{req}} = \frac{n}{E(\Sigma_{\text{allow}})}\left(\ln \frac{1}{r}\right)^{-1}$$

So, for the desired reliability $r = 0.99$, we have

$$a_{\text{req}} = \frac{99.5\, n}{E(\Sigma_{\text{allow}})}$$

almost a hundred times the value $n/E(\Sigma_{\text{allow}})$ obtained from the mean-value approach.

The results obtained for a single bar can be generalized in a simple way for the truss system of Sec. 2.8. Consider a statically determinate truss, assuming that the stresses in the bars $\sigma_1, \sigma_2, \dots, \sigma_n$ are deterministic quantities and that the distribution functions $F_{\Sigma_{\text{allow}}(j)}(\sigma_{\text{allow}}(j))$ of the allowable stresses for each bar ($j = 1, 2, \dots, n$) are given. Then $F_{\Sigma_{\text{allow}}(j)}(\sigma_j)$ is the probability of failure of the $j$th bar, and its reliability is

$$R_j = 1 - F_{\Sigma_{\text{allow}}(j)}(\sigma_j)$$

If bar failures represent independent random events, the reliability of the entire truss is

$$R = \prod_{j=1}^{n}\left[1 - F_{\Sigma_{\text{allow}}(j)}(\sigma_j)\right]$$

In the particular case where the allowable stresses are identically distributed, this formula becomes

$$R = \prod_{j=1}^{n}\left[1 - F_{\Sigma_{\text{allow}}}(\sigma_j)\right]$$

and in the subcase where the stresses are equal, $\sigma_j = \sigma$ ($j = 1, 2, \dots, n$) (see Fig. 2.9e), we obtain

$$R = \left[1 - F_{\Sigma_{\text{allow}}}(\sigma)\right]^n$$

instead of Eq. (2.22).

5.3 A BAR WITH A RANDOM CROSS-SECTIONAL AREA

Consider now a circular bar with given $\sigma_{\text{allow}}$ under a given tensile force $n$, its cross-sectional area $A$ being a random variable with continuous distribution function $F_A(a)$, $a > 0$.
The strength requirement reads

$$\Sigma = \frac{n}{A} \le \sigma_{\text{allow}}$$

and the reliability is

$$R = P\left(A \ge \frac{n}{\sigma_{\text{allow}}}\right) = 1 - F_A\left(\frac{n}{\sigma_{\text{allow}}}\right) \quad (5.22)$$

The maximal allowable tensile force $n_{\text{allow}}$ is then determined from the equality

$$F_A\left(\frac{n_{\text{allow}}}{\sigma_{\text{allow}}}\right) = 1 - r$$

Randomness of the cross-sectional area is due to that of its radius $C$, which has a continuous distribution function $F_C(c)$; then $\Sigma = n/\pi C^2$, and the reliability can equally well be expressed in terms of $F_C$.

5.4 A BEAM UNDER A RANDOM DISTRIBUTED FORCE

Consider next a beam under a distributed force of random intensity $Q$, with values $q$. Both a strength requirement (the maximum bending stress not exceeding $\sigma_{\text{allow}}$) and a stiffness requirement (the maximum deflection not exceeding an allowable value $w_{\text{allow}}$) may be imposed, and the equality $R = r$ yields the required beam dimensions. Let the cross section be circular with radius $c$. Then

$$I = \frac{\pi c^4}{4} \qquad S = \frac{\pi c^3}{4}$$

where $I$ is the moment of inertia and $S$ the section modulus of the cross section. Determination of $c$ according to either the strength or the stiffness requirement alone is straightforward. When both are imposed, we calculate first the reliability as a function of $c$, namely the probability that $|Q|$ does not exceed the lesser of the two bounds dictated by the strength and the stiffness requirements,

$$R(c) = P\left[\,|Q| \le \min\left(q_{\text{strength}}(c),\ q_{\text{stiffness}}(c)\right)\right]$$

whereas the equality $R = r$ yields the required radius.

STATIC IMPERFECTION SENSITIVITY OF A NONLINEAR MODEL STRUCTURE*

Consider the model structure of Fig. 5.9: rigid bars hinged to each other and to the supports, restrained at the middle hinge by a nonlinear spring whose restoring force contains linear, quadratic, and cubic terms $k_1\varepsilon + k_2\varepsilon^2 + k_3\varepsilon^3$ in the displacement, and compressed by a force $F$. Figure 5.9(1) shows the system in its straight (undisplaced) state, and Fig. 5.9(2) in its displaced state, with nondimensional displacement $\xi$. The horizontal reactions at the hinges equal $F/2$. Furthermore, the moment about the middle hinge must vanish; this requirement leads to the equilibrium equation (5.38), and for small values of $\xi$ we obtain the following asymptotic result:

$$\lambda\xi = \lambda_c\left(\xi + a\xi^2 + b\xi^3 + \cdots\right) \quad (5.39)$$

where $\lambda$ is the nondimensional load, $\lambda_c$ is fixed by the linear spring constant $k_1$ (Eq. 5.40), and $a$ and $b$ are nondimensional coefficients formed from the spring constants $k_2$ and $k_3$ (Eq. 5.41).

*This section is an extension of the author's paper (1980), from which Figs. 5.9 and 5.12-5.18 are also reproduced.
At point = 0, A = X,, there is a branching (bifurcation) of the curve, one branch continuing along the straight line € = 0 and the others moving out along a parabola (5.43). This splitting into branches indicates the onset of instability; therefore the load A. at the branching point is called the classical bifurcation buckling load. Fig. 5.11. Nondimensional load versus additional displacement relationship for asymmetric structure. 122 RELIABILITY OF STRUCTURES DESCRIBED BY A SINGLE RANDOM VARIABLE 1—_ 1 L —1—____1 _1__» 0.4 0.3 0.2 0.1 00 01 0.2 03 O46 £ Fig. 5.12. Nondimensional load versus additional displacement curves for nonsymmetric struc- ture (a = —7.5, b = 25). The structure is designated as nonsymmetric in the general case a + 0, b * 0; as symmetric if a = 0, b + 0; and as asymmetric if a + 0, b = 0. In the latter case the parabola degenerates into a straight line: peat (5.44) We now proceed to the realistic imperfect structure. Assuming that an unloaded structure has an initial displacement ¥ = Lé, then equilibrium dictates, instead of Eq. (5.38), the following relationship: M(E+ €) =4F[1 — (6+ €)]'? = (ke + kag? + eoe)[I — (+ 2]? STATIC IMPERFECTION SENSITIVTY OF ANONLINEAR MODEL STRUCTURE 123 =0.20 -0.10 Fig. 5.13. Nondimensional load versus additional displacement curves for nonsymmettic struc- ture (a= —1.5,b = —25). where é is an additional displacement and & + é a total displacement. For small values of , we arrive at the following asymptotic result: ME+E) =A [E+ ad? + bE? + O(87E)] (5.45) Equation (5.45) indicates that £ and & have the same sign (i.e., additional displacement & of the system is such that the total displacement + & is increased by its absolute value). Otherwise, the assumption && < 0 would imply <0 for 0 <|é| <&, the presence of tension, which is contrary to our formulation of the problem. Note also that the graph A/A, vs. € for an imperfect structure issues from the origin of the coordinates. 
Additional zeroes of $\lambda/\lambda_c$ coincide with the zero points $\left(-a \pm \sqrt{a^2 - 4b}\right)/2b$ of the parabola (5.43) representing the behavior of the perfect structure. The dashed curves in Figs. 5.10-5.13 show the equilibrium load $\lambda$ plotted against the additional displacement $\xi$ in the imperfect structure. We now seek the static buckling load $\lambda_s$, which is defined as the maximum of $\lambda$ on the branch of the solution $\lambda - \xi$ originating at zero load, for specified $\bar\xi$.†

†The point with coordinates $(\lambda_{\max}, \xi_0)$, where $\xi_0$ is the additional displacement corresponding to $\lambda_{\max}$, is sometimes called the limit point, and $\lambda_{\max}$ itself the limit load.

We refer to a structure as imperfection-sensitive if an imperfection results in reduced values of the maximum load the structure is able to support; otherwise, we designate the structure imperfection-insensitive. To be able to conclude whether or not a structure is sensitive to initial imperfections, we have to find whether the first derivative of $\lambda$ with respect to $\xi$,

$$\frac{d\lambda}{d\xi} = \frac{\lambda_c}{(\bar\xi + \xi)^2}\left[2b\xi^3 + (a + 3b\bar\xi)\xi^2 + 2a\bar\xi\xi + \bar\xi\right]$$

has at least one real root. For our purpose, it suffices to examine the numerator

$$\phi(\xi) = 2b\xi^3 + (a + 3b\bar\xi)\xi^2 + 2a\bar\xi\xi + \bar\xi$$

The structure buckles if the equation

$$\phi(\xi) = 0 \quad (5.46)$$

has at least one real positive root for $\bar\xi > 0$, or at least one real negative root for $\bar\xi < 0$. In some cases, Descartes' rule of signs provides an answer on buckling of the structure. This rule states that if the coefficients $a_0, a_1, a_2, \dots, a_n$ of the polynomial

$$\phi(\xi) = a_0\xi^n + a_1\xi^{n-1} + \cdots + a_{n-1}\xi + a_n$$

have $v$ variations of sign, the number of positive roots of the polynomial equation $\phi(\xi) = 0$ does not exceed $v$ and is of the same parity. The number of negative roots of this equation equals that of positive roots of the equation $\phi(-\xi) = 0$.

Consider the case $b < 0$, $\bar\xi > 0$. We wish to know whether $\phi(\xi) = 0$ has at least one positive root.
In the subcase a < 0, we have

a₀ = 2b < 0,  a₁ = a + 3bξ̄ < 0,  a₂ = 2aξ̄ < 0,  a₃ = ξ̄ > 0

so there is a single change in sign, indicating the occurrence of buckling. In the subcase a > 0, we have

a₀ = 2b < 0,  a₂ = 2aξ̄ > 0,  a₃ = ξ̄ > 0

so, irrespective of the sign of a₁ = a + 3bξ̄, there is again a single change in sign. Thus the structure has a buckling load if b < 0, ξ̄ > 0.

Consider now the case b < 0, ξ̄ < 0. The question is now whether ψ(ξ) = 0 has at least one negative root. Then

ψ(−ξ) = −2bξ³ + (a + 3bξ̄)ξ² − 2aξ̄ξ + ξ̄

STATIC IMPERFECTION SENSITIVITY OF A NONLINEAR MODEL STRUCTURE

and for a > 0 we have

a₀ = −2b > 0,  a₁ = a + 3bξ̄ > 0,  a₂ = −2aξ̄ > 0,  a₃ = ξ̄ < 0

that is, a single change in sign. For a < 0, we have

a₀ = −2b > 0,  a₂ = −2aξ̄ < 0,  a₃ = ξ̄ < 0

and irrespective of the sign of a₁, we again have a single change in sign, the conclusion being that for b < 0 (irrespective of the signs of a or ξ̄) the structure carries a finite maximum load. In complete analogy, it can be shown that the structure is imperfection-insensitive for b > 0 and aξ̄ > 0 (this is left as an exercise for the reader). For b > 0 and aξ̄ < 0, neither Descartes' rule nor the Routh-Hurwitz criterion for the number of roots with a positive real part (used in conjunction with the fact that ψ(ξ) = 0 always has one real root) suffices for a conclusion. This case, however, can be treated by Evans' root-locus method (see, e.g., Ogata), frequently used in control theory.

Let us consider first the particular subcase b > 0, a < 0, and ξ̄ > 0. The formal substitution ξ → s, where s = Re(s) + i Im(s) is a complex variable, transforms Eq. (5.46) into

1 + ξ̄ψ̂(s) = 0,   ψ̂(s) = (3bs² + 2as + 1)/[s²(2bs + a)]   (5.47)

We now construct the root-locus plot with ξ̄ varying from zero to infinity (obviously, only ξ̄ < 1 has physical significance). For ξ̄ approaching zero, the roots of Eq. (5.47) are the poles of ψ̂(s), marked with crosses (×'s): ψ̂(s) has three poles, one double pole at s = 0 and another at s = −a/2b > 0. As ξ̄ → ∞, the root loci approach the zeros of ψ̂(s), marked with circles (○'s), located at s = (−a ± √(a² − 3b))/3b.
A root locus issues from each pole as ξ̄ increases above zero; a root locus arrives at each zero of ψ̂(s) or at infinity as ξ̄ approaches infinity. For the case a² = 3b, the two "circles" coincide. As is seen from Fig. 5.14, Eq. (5.47) then has real positive roots, and the structure buckles for any ξ̄ > 0. For a² < 3b both "circles" are complex (Fig. 5.15); for a certain value of ξ̄ (called the critical value, ξ̄_cr,s) a pair of loci break away from the real axis. For ξ̄ > ξ̄_cr,s, Eq. (5.47) has no real positive root, and consequently the structure is imperfection-insensitive. The breakaway point is found as the root of the equation

dC/ds = 0,   C(s) = −1/ψ̂(s) = −s²(2bs + a)/(3bs² + 2as + 1)

This equation has only one real root, s₀ = −a/3b. The appropriate value of ξ̄ equals C(s₀):

ξ̄_cr,s = −a³/[9b(3b − a²)]   (5.48)

The static buckling load associated with ξ = s₀ and ξ̄ = ξ̄_cr,s is

λ_s = λ_c(1 − a²/3b)

For example, for b/a² = 2/3 we have ξ̄_cr,s = −1/6a and λ_s/λ_c = 1/2, and for ξ̄ > ξ̄_cr,s static buckling does not occur.

Fig. 5.14. Root-locus plot (a < 0, b > 0, ξ̄ > 0, a² = 3b).

Fig. 5.15. Root-locus plot (a < 0, b > 0, ξ̄ > 0, a² < 3b).

Fig. 5.16. Root-locus plot (a < 0, b > 0, ξ̄ > 0, a² = 4b).

For the case a² > 3b (see Figs. 5.16-5.18), there are always two real positive roots of ψ(ξ) = 0, and the structure buckles. Consequently, the structure turns out to buckle for a² ≥ 3b; in the range a² < 3b the structure buckles if ξ̄ < ξ̄_cr,s.

We next consider the case a > 0, b > 0, ξ̄ < 0. It is readily shown that the inverse root loci for −∞ < ξ̄ < 0 are the mirror images of the original root loci for 0 < ξ̄ < ∞ with respect to the imaginary axis. The system buckles for a² ≥ 3b, and also for a² < 3b if ξ̄ > ξ̄_cr,s, as indicated by Eq. (5.48).

For the particular case a = 0, the structure buckles if b < 0 (for both ξ̄ > 0 and ξ̄ < 0) and is insensitive if b > 0.
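The critical imperfection (5.48), as reconstructed here, can be cross-checked numerically: on the branch issuing from the origin, the equilibrium path λ(ξ)/λ_c = (ξ + aξ² + bξ³)/(ξ̄ + ξ) has a limit point for ξ̄ below the critical value and rises monotonically above it. A Python sketch (function names and values illustrative; for a = −1, b = 2/3 the reconstructed formula gives ξ̄_cr,s = −1/6a = 1/6):

```python
def lam(xi, a, b, xi_bar):
    # equilibrium path of Eq. (5.45): lambda/lambda_c as a function of xi
    return (xi + a*xi**2 + b*xi**3) / (xi_bar + xi)

def has_limit_point(a, b, xi_bar, n=20000, xi_max=2.0):
    """True if lambda(xi) attains an interior maximum on (0, xi_max),
    i.e., the imperfect structure exhibits a finite buckling load."""
    h = xi_max / n
    prev = lam(h, a, b, xi_bar)
    for k in range(2, n):
        cur = lam(k * h, a, b, xi_bar)
        if cur < prev:          # the load starts to drop: limit point passed
            return True
        prev = cur
    return False

a, b = -1.0, 2.0/3.0            # b/a**2 = 2/3, so xi_cr = 1/6 for these values
print(has_limit_point(a, b, 0.9/6.0))   # below critical: buckles
print(has_limit_point(a, b, 1.1/6.0))   # above critical: imperfection-insensitive
```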
For b = 0, the structure buckles if aξ̄ < 0 and does not buckle in the opposite case. As for the structure that buckles, differentiating Eq. (5.45) with respect to ξ and setting (for b ≠ 0)

dλ/dξ = 0,  λ = λ_s   (5.49)

we obtain the relation between the buckling load λ_s and the initial imperfection amplitude ξ̄:

(λ_s/λ_c)ξ̄ = −(a³/27b²)(1 − √D)²(1 + 2√D),  D = 1 − (3b/a²)(1 − λ_s/λ_c)   (5.50)

Fig. 5.17. Root-locus plot (a < 0, b > 0, ξ̄ > 0, 3b < a² < 4b).

Fig. 5.18. Root-locus plot (a < 0, b > 0, ξ̄ > 0, a² > 4b).

For b < 0 and a = 0, we get from (5.50)

(1 − λ_s/λ_c)^(3/2) = (3√3/2)√(−b) |ξ̄| (λ_s/λ_c)   (5.51)

Note that the displacement ξ_s corresponding to λ_s is given by

ξ_s = −(a/3b)(1 − √D)   (5.52)

where ξ_s depends on ξ̄ via λ_s/λ_c. The static buckling load can be obtained from Eq. (5.50), given the initial imperfection ξ̄. The meaningful root λ_s/λ_c of Eq. (5.50) is the greatest of those that meet the requirement ξ̄ξ_s > 0.

The case b = 0 has to be considered separately. Equation (5.46) reduces to

ψ(ξ) = aξ² + 2aξ̄ξ + ξ̄

Descartes' rule then immediately yields the conclusion that the structure is imperfection-sensitive if aξ̄ < 0. Equation (5.49) then leaves us with

(1 − λ_s/λ_c)² + 4aξ̄(λ_s/λ_c) = 0   (5.53)

Equations (5.50), (5.51), or (5.53) permit us now to find the probabilistic characteristics of the random buckling load Λ_s (with possible values λ_s), provided the initial imperfection X̄ (with possible values ξ̄) is a random variable with given probability distribution F_X̄(ξ̄). We seek the reliability of the structure, which is defined in these new circumstances as the probability of the event of the nondimensional buckling load Λ_s/λ_c exceeding the given nondimensional load α:

R(α) = P(Λ_s/λ_c ≥ α)   (5.54)

Consider, for example, the symmetric structure (a = 0), and assume that the initial imperfection X̄ is normally distributed, N(m_X̄, σ_X̄²). Figure 5.19a, which shows schematically the solution of Eq.
(5.51), indicates that λ_s/λ_c exceeds α when X̄ falls within the interval (−ξ̄(α), ξ̄(α)), where

ξ̄(α) = 2(1 − α)^(3/2)/[3√3 √(−b) α]

Therefore the reliability is

R(α) = P(−ξ̄(α) < X̄ < ξ̄(α))   (5.55)

In conclusion, we have (see Eq. (4.24))

R(α) = erf[(ξ̄(α) − m_X̄)/σ_X̄] + erf[(ξ̄(α) + m_X̄)/σ_X̄]   (5.56)

Note that this expression is symmetric with respect to the sign of the mean imperfection m_X̄. Indeed, as is seen from Eq. (5.51), for a symmetric structure the buckling load depends on |ξ̄|. For m_X̄ = 0 we obtain

R(α) = 2 erf[ξ̄(α)/σ_X̄]

In Fig. 5.19b, for example, the shaded area equals the reliability at the load level α = 0.7 for m_X̄ = 0.

Fig. 5.19. (a) Buckling-load parameter as a function of initial imperfection amplitude. (b) Probability density of initial imperfection amplitude (shaded area equals the reliability of the structure at nondimensional load level α).

Figures 5.20 and 5.21 show the reliability function R(α) versus α for different standard deviations σ_X̄. Figure 5.20 is associated with m_X̄ = 0.05 and Fig. 5.21 with m_X̄ = 0. Consider specifically the case m_X̄ > 0. For σ_X̄ approaching zero, all structures tend to buckle at some constant level α*. This α* satisfies the equation

(1 − α*)^(3/2) = (3√3/2)√(−b) m_X̄ α*

which is obtained from Eq. (5.51) by substituting α* and m_X̄ for λ_s/λ_c and ξ̄,

Fig. 5.20. Influence of standard deviation of initial imperfections on reliability; mean imperfection nonzero (m_X̄ = 0.05, b = −1).

respectively. Indeed, for α < α*, the difference ξ̄(α) − m_X̄ is a positive quantity, so that when σ_X̄ → 0,

(ξ̄(α) − m_X̄)/σ_X̄ → +∞ and erf[(ξ̄(α) − m_X̄)/σ_X̄] → 1/2

However, since ξ̄(α) + m_X̄ > 0, irrespective of α, we also have

erf[(ξ̄(α) + m_X̄)/σ_X̄] → 1/2

Fig. 5.21. Influence of standard deviation of initial imperfection on reliability; mean imperfection identically zero (b = −1).
and for α < α*, R(α) → 1/2 + 1/2 = 1. As for α > α*, the difference ξ̄(α) − m_X̄ is a negative quantity. With σ_X̄ → 0, (ξ̄(α) − m_X̄)/σ_X̄ → −∞ and

erf[(ξ̄(α) − m_X̄)/σ_X̄] → −1/2

Consequently, R(α) → −1/2 + 1/2 = 0. The reliability function, shown in Fig. 5.20 by the dashed line, has the shape of a rectangular "pulse," α* being equal to 0.7822. Such a rectangular reliability function is characteristic of a deterministic structure. In fact, for a deterministic imperfection, all realizations of the structure are identical, with the same buckling load α* (note that the mean buckling load in this case also equals α*!). Therefore, Prob(Λ_s/λ_c ≥ α) = 1 if α ≤ α*, and Prob(Λ_s/λ_c ≥ α) = 0 if α > α*. For a structure with zero mean imperfection (see Fig. 5.21), with σ_X̄ approaching zero, we have α* = 1, implying that almost all structures buckle at the classical buckling load. This is as expected, since with m_X̄ = 0 and σ_X̄ → 0, almost all structures tend to become perfect.

The influence of the mean imperfection on the reliability, at constant standard deviation, is illustrated in Figs. 5.22 and 5.23: σ_X̄ is set at 0.1 in Fig. 5.22 and at 0.02 in Fig. 5.23. As is seen, for larger values of m_X̄/σ_X̄ the reliability curves become steeper and resemble more closely the deterministic case with its rectangular "pulse"-type reliability.

Fig. 5.22. Influence of mean imperfection on reliability (σ_X̄ = 0.1, b = −1).

Fig. 5.23. Influence of mean imperfection on reliability (σ_X̄ = 0.02, b = −1).

5.6 DYNAMIC IMPERFECTION-SENSITIVITY OF A NONLINEAR MODEL STRUCTURE

In the dynamic setting, the central hinge carries a mass M, and the system is under an axial force λf(t). Instead of Eq. (5.45) we have (see Fig. 5.24)

(1/ω₀²) d²ξ/dt² + [1 − λf(t)/λ_c]ξ + aξ² + bξ³ = (λ/λ_c)f(t)ξ̄   (5.57)

where ω₀ = √(k₁/ML) is the natural frequency of the mass for λ = 0.
As an example, we take the load represented by the step function of unit magnitude and unlimited duration, defined in Eq. (3.2): f(t) = U(t), which vanishes for t < 0 and equals unity for t > 0. The first integral of Eq. (5.57), subject to the initial conditions ξ = 0, dξ/dt = 0 at t = 0 (zero displacement and zero velocity, that is, the mass at rest up to that time), is readily found to be

(1/2ω₀²)(dξ/dt)² + (1/2)(1 − λ/λ_c)ξ² + (1/3)aξ³ + (1/4)bξ⁴ = (λ/λ_c)ξ̄ξ

and the corresponding integral curves satisfy

∫₀^ξ [2(λ/λ_c)ξ̄u − (1 − λ/λ_c)u² − (2/3)au³ − (1/2)bu⁴]^(−1/2) du = ω₀t

Fig. 5.24. Derivation of Eq. (5.57).

where the left-hand side may be evaluated in terms of elliptic integrals. For sufficiently small λ/λ_c, the motion is periodic, with its amplitude ξ_max satisfying

(1 − λ/λ_c)ξ_max + (2/3)aξ_max² + (1/2)bξ_max³ = 2(λ/λ_c)ξ̄   (5.58)

The dynamic buckling load λ_d is defined as the maximum value of λ such that the response ξ(t) still remains finite. At λ_d, a finite jump in ξ_max is produced by an infinitesimal increase in λ (see Fig. 5.25). For all λ ≤ λ_d the response ξ(t) is periodic; as λ approaches λ_d from below, the period tends to infinity, and it takes an infinitely long time for ξ(t) to reach ξ_max. The value of λ_d occurs at the first maximum of the relation λ versus ξ_max. Thus the dynamic buckling load is defined by the criterion

dλ/dξ_max = 0,  λ = λ_d   (5.59)

Note that Eq. (5.58) becomes identical with Eq. (5.45) if we make the following formal substitution:

ξ → ξ_max,  a → (2/3)a,  b → (1/2)b,  ξ̄ → 2ξ̄   (5.60)

Analogously, all conclusions in the static case are readily extended to the dynamic one. The structure under a step load buckles for any b < 0 (irrespective of the sign of a or ξ̄), and does not buckle for b > 0 and aξ̄ > 0;

Fig. 5.25. Load versus maximum response; λ_d is the dynamic buckling load. (After Budiansky and Hutchinson)

for b > 0 and aξ̄ < 0 it buckles for any ξ̄ given the inequality a²
≥ 27b/8, which is obtainable from its static counterpart a² ≥ 3b by the formal substitution (5.60), and also, when a² < 27b/8, for ξ̄ < ξ̄_cr,d, where

ξ̄_cr,d = −16a³/[27b(27b − 8a²)]

The ξ_max value associated with ξ̄_cr,d equals −4a/9b, and consequently the dynamic buckling load is

λ_d = λ_c(1 − 8a²/27b)

At ξ̄ = ξ̄_cr,d the concept of dynamic buckling is preserved by associating λ_d with the point of inflection in the λ-ξ_max curves, according to Budiansky (1965) (Fig. 5.26). Comparison of the latter results with their static counterparts shows that the interval 3b < a² < 27b/8 is characterized by duality: the structure is statically imperfection-sensitive, but dynamically imperfection-insensitive. In the range a² < 3b a similar duality occurs for imperfection amplitudes between ξ̄_cr,d and ξ̄_cr,s. In the particular case b/a² = 1/3, considered by Budiansky (1967, pp. 95-96), the structure is statically imperfection-sensitive for any ξ̄, but dynamically imperfection-insensitive if ξ̄ > ξ̄_cr,d.

Fig. 5.26. Generalization of dynamic buckling criterion (after Budiansky).

Finally, the relation between the buckling load λ_d and the initial imperfection ξ̄ is given (for b ≠ 0) by Eq. (5.50) subjected to the substitution (5.60):

(λ_d/λ_c)ξ̄ = −(16a³/729b²)(1 − √D_d)²(1 + 2√D_d),  D_d = 1 − (27b/8a²)(1 − λ_d/λ_c)   (5.61)

For vanishing a and b < 0, we obtain the relationship

(1 − λ_d/λ_c)^(3/2) = (3√6/2)√(−b) |ξ̄| (λ_d/λ_c)   (5.62)

For the case b = 0, a ≠ 0, substituting (5.60) in (5.53), we immediately obtain

(1 − λ_d/λ_c)² + (16/3)aξ̄(λ_d/λ_c) = 0

In the case a < 0, b < 0, with ξ̄ such that λ_s/λ_c < 1 and λ_d/λ_c < 1, the imperfection amplitude ξ̄ is readily eliminated by correlating Eqs. (5.50) and (5.61) for a given structure with a given imperfection. The result relates λ_d to λ_s (Eq. (5.63)); in this form, λ_d no longer depends directly on the imperfection, but only via λ_s. For vanishing a, we obtain the expression

(1 − λ_d/λ_c)^(3/2)/(λ_d/λ_c) = √2 (1 − λ_s/λ_c)^(3/2)/(λ_s/λ_c)   (5.64)

Equations (5.62) and (5.64) are due to Budiansky and Hutchinson.
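For the symmetric case (a = 0, b < 0), criterion (5.59) can be applied directly to Eq. (5.58) by scanning λ/λ_c = [ξ_max + (1/2)bξ_max³]/(2ξ̄ + ξ_max) for its first maximum; the result should satisfy the a = 0 relationship (5.62), whose coefficient 3√6/2 ≈ 3.674235 follows from applying substitution (5.60) to Eq. (5.51). A Python sketch (names and values illustrative):

```python
def load_of_amplitude(xm, b, xi_bar, a=0.0):
    # Eq. (5.58) solved for lambda/lambda_c at response amplitude xm
    return (xm + (2.0/3.0)*a*xm*xm + 0.5*b*xm**3) / (2.0*xi_bar + xm)

def dynamic_buckling_load(b, xi_bar, n=200000, xm_hi=1.0):
    """First maximum of lambda versus xi_max, per criterion (5.59)."""
    prev = -1.0
    for k in range(1, n):
        cur = load_of_amplitude(k * xm_hi / n, b, xi_bar)
        if cur < prev:
            return prev          # the load has just passed its first maximum
        prev = cur
    return prev

mu_d = dynamic_buckling_load(b=-1.0, xi_bar=0.05)
print(round(mu_d, 3))            # knock-down factor lambda_d/lambda_c, below 1
```

The returned value can be checked against (5.62): (1 − μ_d)^(3/2) should equal (3√6/2)·0.05·μ_d for these parameters.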
Recapitulating, our structure is statically imperfection-sensitive and dynamically imperfection-insensitive when

3b < a² < 27b/8,  ξ̄ > ξ̄_cr,d

and also when

a² < 3b,  ξ̄_cr,d < ξ̄ < ξ̄_cr,s

For this case, following Budiansky, the criterion of dynamic buckling is generalized, the dynamic buckling load being defined as the point of inflection on the λ-ξ_max curve:

d²λ/dξ_max² = [λ_c/(ξ_max + 2ξ̄)³][b(ξ_max + 2ξ̄)³ − 8bξ̄³ + (16/3)aξ̄² − 4ξ̄] = 0   (5.65)

(See also Fig. 5.26.) For b > 0, a < 0, and ξ̄ > 0, Descartes' rule of signs indicates a single positive root. Eliminating ξ_max between Eqs. (5.58) and (5.65), we relate the generalized buckling load and the initial imperfection:

λ_d/λ_c = (1/d)(d − 2ξ̄)[1 + (2/3)a(d − 2ξ̄) + (1/2)b(d − 2ξ̄)²]   (5.66)

where

d = [(24bξ̄³ − 16aξ̄² + 12ξ̄)/3b]^(1/3)

We proceed now to the reliability analysis for a symmetric structure, assuming X̄ to be N(m_X̄, σ_X̄²):

R(α) = P(Λ_d/λ_c ≥ α) = P(−ξ̄(α) < X̄ < ξ̄(α))   (5.67)

where, from Eq. (5.62),

ξ̄(α) = 2(1 − α)^(3/2)/[3√6 √(−b) α]

resulting in

R(α) = erf[(ξ̄(α) − m_X̄)/σ_X̄] + erf[(ξ̄(α) + m_X̄)/σ_X̄]   (5.68)

Note that the probability densities of the static or dynamic buckling loads are obtainable via the reliability function: the unreliability at the load level α,

Q(α) = 1 − R(α) = P(Λ/λ_c < α)

is the probability distribution function of the nondimensional buckling load, and its derivative with respect to α is the density. Furthermore, since (cf. the derivation of Eq. (5.120))

lim α R(α) = 0 as α → ∞   (5.76)

we have, finally,

E(Λ_d/λ_c) = ∫₀^∞ R(α) dα   (5.77)

so that the nondimensional mean buckling load equals the area under the reliability curve (see Fig. 5.28).

Fig. 5.28. (1) Nondimensional mean buckling load E(Λ_d/λ_c) equals the area under the reliability curve (m_X̄ = 0.05, σ_X̄ = 0.1, b = −1). (2) E(Λ_d/λ_c) versus the standard deviation σ_X̄ of initial imperfections.

Fig. 5.29. Standard deviation of nondimensional buckling load as a function of the standard deviation of initial imperfection.

The standard deviation of the nondimensional buckling load
is

σ_{Λd/λc} = {E[(Λ_d/λ_c)²] − [E(Λ_d/λ_c)]²}^(1/2),  E[(Λ_d/λ_c)²] = 2∫₀^∞ αR(α) dα   (5.78)

and is shown in Fig. 5.29: σ_{Λd/λc} increases with the standard deviation σ_X̄ of the initial imperfections. With the reliability function available, we can solve the design problem of determining the allowable load from the requirement

R(α_allow) = r,  α_allow = λ_allow/λ_c   (5.79)

where r is the required reliability. Referring to Eq. (5.56), we obtain the following transcendental equation for α_allow:

erf[(ξ̄(α_allow) − m_X̄)/σ_X̄] + erf[(ξ̄(α_allow) + m_X̄)/σ_X̄] = r   (5.80)

For m_X̄ = 0, and bearing in mind the expression for ξ̄(α_allow), Eq. (5.80) reduces to

2(1 − α_allow)^(3/2)/[3√3 √(−b) α_allow σ_X̄] = erf⁻¹(r/2)   (5.81)

where erf⁻¹(...) is the inverse of erf(...), found from the table in Appendix B. For example, for r = 0.99 we have erf⁻¹(0.495) = 2.575. The last equation may be rewritten as

σ_X̄ = 2(1 − α_allow)^(3/2)/[3√3 √(−b) α_allow erf⁻¹(r/2)]   (5.82)

and σ_X̄ treated as a function of α_allow. This function is given in Fig. 5.30 as curve 1. For m_X̄ ≠ 0, the transcendental equation (5.80) has to be solved numerically; results are given in Fig. 5.30 as curves 2 (m_X̄ = 0.05) and 3 (m_X̄ = 0.1). It is seen that the

Fig. 5.30. Allowable load corresponding to required reliability r = 0.99 as a function of standard deviation of initial imperfections (b = −1).

allowable load decreases as the mean imperfection increases, and that for larger values of σ_X̄ the mean imperfection becomes less significant, the allowable loads associated with different mean imperfections lying closer together. Note also that for σ_X̄ = 0 the allowable load coincides with the mean buckling load irrespective of the required reliability. As is seen from Eq. (5.82), σ_X̄ = 0 if α_allow = 1, for any r.
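Equations (5.56), (5.81), and (5.82) are easy to evaluate numerically once the book's erf(z), the probability integral of Eq. (4.24), is expressed through the standard normal distribution as Φ(z) − 1/2. The sketch below (Python; helper names are illustrative) reproduces α* ≈ 0.7822 for m_X̄ = 0.05, b = −1, and the design curve of Eq. (5.82):

```python
import math
from statistics import NormalDist

PHI = NormalDist().cdf
erf_book = lambda z: PHI(z) - 0.5                 # erf as defined in Eq. (4.24)
inv_erf_book = lambda q: NormalDist().inv_cdf(q + 0.5)

def xi_of_alpha(alpha, b=-1.0):
    # half-width of the safe imperfection interval, from Eq. (5.51)
    return 2.0 * (1.0 - alpha) ** 1.5 / (3.0 * math.sqrt(3.0) * math.sqrt(-b) * alpha)

def reliability(alpha, m, sigma, b=-1.0):
    # Eq. (5.56)
    xi = xi_of_alpha(alpha, b)
    return erf_book((xi - m) / sigma) + erf_book((xi + m) / sigma)

def sigma_allow(alpha_allow, r=0.99, b=-1.0):
    # Eq. (5.82): admissible imperfection scatter, zero mean imperfection
    return xi_of_alpha(alpha_allow, b) / inv_erf_book(r / 2.0)

def alpha_star(m, b=-1.0):
    # deterministic limit load: (1 - a)**1.5 == 1.5*sqrt(3)*sqrt(-b)*m*a, by bisection
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if (1.0 - mid) ** 1.5 > 1.5 * math.sqrt(3.0) * math.sqrt(-b) * m * mid:
            lo = mid
        else:
            hi = mid
    return lo

print(round(alpha_star(0.05), 4))                 # 0.7822, as in Fig. 5.20
print(round(inv_erf_book(0.495), 3))              # 2.576, cf. erf^-1(0.495) = 2.575
print(round(sigma_allow(0.5), 4))
```

The dynamic analogue (5.83) is obtained from `sigma_allow` by replacing the factor 3√3 with 3√6, i.e., by dividing the result by √2.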
Figure 5.31 contrasts the allowable load associated with the required reliability r = 0.99 with the mean buckling load. As is seen, the allowable loads are much smaller than the mean loads. This implies that design according to the latter overestimates the load-carrying capacity of the structure, as against α_allow associated with the high reliability.

Fig. 5.31. Mean buckling load exceeding allowable load associated with high reliability.

Significantly, Figs. 5.20 and 5.30 apply for the dynamic case as well, albeit for a different b, namely b = (1/2)(−1) = −1/2, according to the analogy in Eq. (5.60). Comparison of the two cases for the same b (= −1) shows (Fig. 5.32) that the dynamic allowable loads are lower than their static counterparts, the analogue of Eq. (5.82) being

σ_X̄ = 2(1 − α_allow)^(3/2)/[3√6 √(−b) α_allow erf⁻¹(r/2)]   (5.83)

Comparison of the two analogues shows that for the loads to be equal, the standard deviation of the initial imperfections in the dynamic case should be 1/√2 times that of the static case. In other words, the dynamic allowable load at a specific σ_X̄ is obtainable directly from the static curve by reading the latter at √2 σ_X̄.

Fig. 5.32. Comparison of allowable loads corresponding to required reliability r = 0.99 for static and dynamic cases (b = −1, m_X̄ = 0).

5.7 AXIAL IMPACT OF A BAR WITH RANDOM INITIAL IMPERFECTIONS*

We consider now another problem of dynamic buckling, that of an initially imperfect bar under axial impact (Fig. 5.33). In the case under consideration we use Hoff's definition, which states: "A structure is in a stable state if admissible finite disturbances of its initial state of static or dynamic equilibrium are followed by displacements whose magnitude remains within allowable bounds during the required lifetime of the structure" (in contrast to the preceding section, where finiteness of the displacements was required for the structure to be considered stable).
By virtue of Hoff's definition, we may postulate that a bar with initial imperfections fails (buckles) under axial forces when its dynamic response (deflection, strain, or stress) first reaches an upper-bound level Q⁺ or a lower-bound level −Q⁻, Q⁺ and Q⁻ being prescribed positive numbers that represent borderlines between stability and buckling (i.e., safety and failure) (Fig. 5.34).

*This section is an extension of the author's paper (1978), from which Figs. 5.35-5.38 are also reproduced.

Fig. 5.33. Imperfect bar under axial impact, represented by unit step function in time, P(t) = P U(t).

Consider an ensemble of response histories y(t) in the interval 0 ≤ t ≤ t*, all of them originating under the same initial conditions at t = 0. Let R(t; t*) be the probability of y(t) remaining in the safe domain throughout the interval (0, t*). Formally, the reliability is then

R(t; t*) = Prob{[y(t) < Q⁺] ∩ [y(t) > −Q⁻], 0 ≤ t ≤ t*}

and the unreliability is

Q(t; t*) = Prob{[y(t) ≥ Q⁺] ∪ [y(t) ≤ −Q⁻], 0 ≤ t ≤ t*}

that is, the probability of at least one excursion {y(t) ≥ Q⁺} or {y(t) ≤ −Q⁻}. Obviously,

R(t; t*) = 1 − Q(t; t*)

which is readily understood if we recall that in these circumstances reliability is the probability of survival, of "being," whereas unreliability is the probability of failure, of "not being." The random event "to be or not to be" has a unity probability.

Fig. 5.34. Illustration of Hoff's definition of stability. (a) t_f, time to fail. (b) Structure does not fail.

For t* tending to infinity, the reliability and the unreliability are functions of t only, and we denote

R(t) = lim R(t; t*),  Q(t) = lim Q(t; t*)  as t* → ∞

We seek probabilistic information on the random time at which failure occurs (i.e., the structure buckles) under suitable initial conditions, in terms of y(t) and perhaps of its derivatives.
This problem is very difficult in the general case and is known as the "first-passage" (or "first-excursion") problem. We give here the exact solution to this problem for impact buckling of a bar with random initial imperfections of given shape, the amplitude being a continuous random variable. We formulate the problem as follows: Given the probability distribution function of the amplitude of the random initial imperfection, find the probability that the time required for the response process to move outside the prescribed safe domain (i.e., the first-excursion time) is less than a given time, t* being infinity.

Consider first the corresponding deterministic problem. We disregard axial wave propagation and assume uniform compression throughout the bar, whose motion obeys the differential equation

EI ∂⁴y/∂x⁴ + P ∂²y/∂x² + ρA ∂²y/∂t² = −P d²ȳ₀/dx²   (5.86)

where x is the axial coordinate; t, time; ȳ₀(x), the initial imperfection (a small perturbation of the perfect, straight shape of the bar); y(x, t), the additional transverse deflection measured from ȳ₀(x) [ȳ₀(x) + y(x, t) being the total deflection of the beam axis from the straight line between the two ends x = 0, x = l]; E, Young's modulus; I, the moment of inertia; ρ, the mass density; A, the cross-sectional area; and P, the applied axial load; with EI and ρA taken as constants. The differential equation (5.86) is supplemented by the boundary conditions

y(x, t) = 0,  ∂²y(x, t)/∂x² = 0,  at x = 0
y(x, t) = 0,  ∂²y(x, t)/∂x² = 0,  at x = l

and the initial conditions

y(x, 0) = 0,  ∂y(x, 0)/∂t = 0

The shape of the imperfection is taken as

ȳ₀(x) = Ḡ sin(πx/l)   (5.87)

We now introduce the nondimensional quantities

ξ = x/l,  α = P/P_c,  λ = ω₁t,  u(ξ, λ) = y(x, t)/i,  ū(ξ) = ȳ₀(x)/i,  G = Ḡ/i

P_c = π²EI/l²,  ω₁ = (π/l)² √(EI/ρA),  i = √(I/A)
P_c being the classical (or Euler) buckling load of the perfect bar; ω₁, its fundamental natural frequency in the absence of axial compression; i, its radius of gyration; α, the nondimensional applied force; u(ξ, λ), ū(ξ), the nondimensional additional and initial displacements, respectively; G, the nondimensional amplitude of the initial imperfection; and λ, the nondimensional time. Equations (5.86) and (5.87) become, respectively,

∂⁴u/∂ξ⁴ + π²α ∂²u/∂ξ² + π⁴ ∂²u/∂λ² = −π²α d²ū/dξ²   (5.88)

ū(ξ) = G sin πξ   (5.89)

Equation (5.88) is based on the conventional strength-of-materials assumptions of uniform geometrical and physical properties, linear elastic behavior, and small displacements, disregarding rotational and axial inertia and shear effects. It is also assumed that the standard deviation of the amplitude of the initial imperfections is smaller than the radius of gyration; thus the problem may be treated in a probabilistically linear setting. The boundary conditions are satisfied by setting

u(ξ, λ) = e(λ) sin πξ   (5.90)

Substituting Eqs. (5.89) and (5.90) in (5.88), we obtain the differential equation for e(λ):

d²e/dλ² + (1 − α)e = αG   (5.91)

with initial conditions

e(0) = 0,  de(0)/dλ = 0

The solution is given by

e(λ) = [αG/(α − 1)](cosh rλ − 1),  β < 1
e(λ) = (1/2)Gλ²,  β = 1   (5.92)
e(λ) = [αG/(1 − α)](1 − cos rλ),  β > 1

where

r = √|1 − α|,  β = 1/α

and the total displacement v(ξ, λ) = V(λ) sin πξ is characterized by

V(λ) = [G/(1 − β)](cosh rλ − β),  β < 1
V(λ) = (1/2)G(λ² + 2),  β = 1   (5.93)
V(λ) = [G/(1 − β)](cos rλ − β),  β > 1

where V(λ) = e(λ) + G. We proceed now to buckling under random imperfections. Let G be a continuous random variable with probability distribution function F_G(g). For simplicity, we consider the case of symmetrical bounds, Q⁺ = Q⁻ = c (c > 0), failure being thus identified with the total displacement reaching, in absolute value, the critical point c.
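The three branches of Eq. (5.93) can be wrapped in a single routine; a Python sketch (the helper name is illustrative, and the symbols follow the reconstruction above, with β = 1/α and r = √|1 − α|):

```python
import math

def total_displacement(lam, G, alpha):
    """V(lambda) of Eq. (5.93): nondimensional total midspan amplitude."""
    if alpha == 1.0:                       # beta = 1
        return 0.5 * G * (lam * lam + 2.0)
    r = math.sqrt(abs(1.0 - alpha))
    beta = 1.0 / alpha
    if alpha > 1.0:                        # beta < 1: unbounded (cosh) growth
        return G / (1.0 - beta) * (math.cosh(r * lam) - beta)
    return G / (1.0 - beta) * (math.cos(r * lam) - beta)   # beta > 1: bounded

G = 0.3
for alpha in (2.0, 1.0, 0.5):
    print(alpha, total_displacement(0.0, G, alpha))   # V(0) = G in every case
```

Note that for β > 1 (P < P_c) the response is bounded by |G|(β + 1)/(β − 1), the quantity that governs Eq. (5.104) below, while for β ≤ 1 the amplitude grows without bound.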
We seek the probability Prob(T ≤ t) of the first-passage time T being equal to or smaller than the given time t; or, in nondimensional form, Prob(Λ ≤ λ) of the nondimensional first-passage time Λ = ω₁T being equal to or smaller than the given nondimensional time λ = ω₁t. Denoting by {L} the contingency of buckling being possible at all [i.e., in the time interval (0, ∞)], Prob(L) is the probability of failure of the system in the infinite time interval. Note, moreover, that if this probability is zero (i.e., buckling cannot occur), then likewise Prob(Λ ≤ λ) = 0. This also follows from the formula of overall probability,

Prob(Λ ≤ λ) = Prob(Λ ≤ λ | L)Prob(L)

since failure at any finite time presupposes the event L. Note also that, since V(0) = G,

Prob(Λ = 0) = Prob(|G| ≥ c) = 1 − F_G(c) + F_G(−c)

so that if F_G(c) − F_G(−c) = 1, then F_Λ(0) = 0. (Note the analogy with Problem 4.3.)

We now proceed to calculate F_Λ(λ) for three different cases, in accordance with Eq. (5.92).

Case 1 (β < 1). In this case, owing to the exponential growth of u(ξ, λ) in time, {L} is a certain event. The conditional and nonconditional probability distribution functions of the first-passage time coincide, F_Λ(λ) = F_Λ(λ | L), and Λ satisfies the equation

[|G|/(1 − β)](cosh rΛ − β) = c   (5.98)

the probability distribution function sought being

F_Λ(λ) = Prob[{G > c(1 − β)/(cosh rλ − β)} ∪ {G < −c(1 − β)/(cosh rλ − β)}]
= 1 − F_G[c(1 − β)/(cosh rλ − β)] + F_G[−c(1 − β)/(cosh rλ − β)],  λ ≥ 0   (5.99)
F_Λ(λ) = 0,  λ < 0

and the probability density being

f_Λ(λ) = [c(1 − β)r sinh rλ/(cosh rλ − β)²]{f_G[c(1 − β)/(cosh rλ − β)] + f_G[−c(1 − β)/(cosh rλ − β)]} + F_Λ(0)δ(λ),  λ ≥ 0   (5.100)
f_Λ(λ) = 0,  λ < 0

Case 2 (β = 1). In this case, {L} is again a certain event. The first-passage time satisfies the equation

(1/2)|G|(Λ² + 2) = c   (5.101)

The probability distribution of Λ is given by

F_Λ(λ) = Prob[{G > 2c/(λ² + 2)} ∪ {G < −2c/(λ² + 2)}]
= 1 − F_G[2c/(λ² + 2)] + F_G[−2c/(λ² + 2)],  λ ≥ 0   (5.102)
F_Λ(λ) = 0,  λ < 0

and the probability density by

f_Λ(λ) = [4cλ/(λ² + 2)²]{f_G[2c/(λ² + 2)] + f_G[−2c/(λ² + 2)]} + F_Λ(0)δ(λ),  λ ≥ 0

For symmetrically distributed initial imperfections, we have

F_Λ(λ) = 2 − 2F_G[2c/(λ² + 2)],  λ ≥ 0   (5.103)
F_Λ(λ) = 0,  λ < 0

Case 3 (β > 1). In this case {L} is no longer a certain event; buckling is possible only if

max over ξ and λ of v(ξ, λ) = |G|(β + 1)/(β − 1)   (5.104)

reaches the critical point c.
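For a given realization G = g, Eq. (5.98) can be inverted in closed form in the supercritical case β < 1 (P > P_c); a Python sketch (function name illustrative, based on the reconstruction of Eqs. (5.92)-(5.93) above):

```python
import math

def first_passage_time(g, alpha, c):
    """Invert Eq. (5.98) for beta = 1/alpha < 1: nondimensional time at which
    |V(lambda)| first reaches c. Returns 0 for |g| >= c, inf for g == 0."""
    assert alpha > 1.0
    beta = 1.0 / alpha
    r = math.sqrt(alpha - 1.0)
    if g == 0.0:
        return math.inf            # perfect bar: no transverse response at all
    if abs(g) >= c:
        return 0.0                 # immediate excursion at lambda = 0
    return math.acosh(beta + c * (1.0 - beta) / abs(g)) / r

lam_f = first_passage_time(0.3, alpha=2.0, c=0.4)
print(round(lam_f, 4))
```

Substituting the returned time back into V(λ) of Eq. (5.93) recovers the critical level c, which is a convenient consistency check.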
That is,

Prob(L) = Prob{|G|(β + 1)/(β − 1) ≥ c} = 1 − F_G[c(β − 1)/(β + 1)] + F_G[−c(β − 1)/(β + 1)]   (5.105)

Note that for β → ∞, Prob(L) → Prob(Λ = 0); that is, the probability of the bar buckling at all (i.e., at any time) approaches that of failure at zero time. The first-passage time satisfies the equation

[|G|/(β − 1)](β − cos rΛ) = c   (5.106)

and the probability of failure is

Prob(Λ ≤ λ) = Prob[{G > c(β − 1)/(β − cos rλ)} ∪ {G < −c(β − 1)/(β − cos rλ)}]
= 1 − F_G[c(β − 1)/(β − cos rλ)] + F_G[−c(β − 1)/(β − cos rλ)],  0 ≤ λ ≤ π/r   (5.107)
Prob(Λ ≤ λ) = 0,  λ < 0

For symmetrically distributed random imperfections, Eq. (5.107) becomes

Prob(Λ ≤ λ) = 2 − 2F_G[c(β − 1)/(β − cos rλ)],  λ ≥ 0   (5.108)
Prob(Λ ≤ λ) = 0,  λ < 0

The conditional probability distribution function is

F_Λ(λ | L) = {1 − F_G[c(β − 1)/(β − cos rλ)] + F_G[−c(β − 1)/(β − cos rλ)]} / {1 − F_G[c(β − 1)/(β + 1)] + F_G[−c(β − 1)/(β + 1)]}   (5.109)

since F_Λ(λ) = Prob(Λ ≤ λ, L) = Prob(Λ ≤ λ | L)Prob(L). The conditional probability density is

f_Λ(λ | L) = [1/Prob(L)] dProb(Λ ≤ λ)/dλ   (5.110)

Note that for λ ≥ π/r, F_Λ(λ | L) = 1; this is due to the obvious fact that if buckling is possible at all, it must occur in the time interval [0, π/r]. Note also that in all three cases the functions may be written in the general form (for λ ≥ 0)

Prob(Λ ≤ λ) = 1 − F_G[φ(λ)] + F_G[−φ(λ)]   (5.111)

where

φ(λ) = c(1 − β)/(cosh rλ − β),  β < 1
φ(λ) = 2c/(λ² + 2),  β = 1   (5.112)
φ(λ) = c(β − 1)/(β − cos rλ),  β > 1

Finally, note that for F_Λ(0) = 0, the last terms in the equations for the density functions have to be omitted.

Consider now some numerical examples. Let G have a uniform distribution in the interval (g_min, g_max):

F_G(g) = 0,  g ≤ g_min
F_G(g) = (g − g_min)/(g_max − g_min),  g_min < g < g_max   (5.113)
F_G(g) = 1,  g ≥ g_max

For time instants λ satisfying the inequality

φ(λ) ≥ g_max > 0   (5.114)

Fig. 5.35. Probability distribution of first-passage time (P = 2P_c).

we have F_G[φ(λ)] = 1, and Eq. (5.111) yields

Prob(Λ ≤ λ) = F_G[−φ(λ)]   (5.115)

If, moreover, −φ(λ) ≤ g_min, then Prob(Λ ≤ λ) = 0. For symmetrically distributed random imperfections (g_min = −g_max), this happens for times λ ≤ λ*, where

λ* = (1/r) cosh⁻¹[β + c(1 − β)/g_max],  β < 1
λ* = [2(c/g_max) − 2]^(1/2),  β = 1   (5.116)
λ* = (1/r) cos⁻¹[β − c(β − 1)/g_max],  β > 1

For β > 1, the probability of buckling may turn out to be zero.
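The general form (5.111)-(5.113) translates directly into code; the sketch below (Python, helper names illustrative) evaluates Prob(Λ ≤ λ) for symmetric uniform imperfections and reproduces the immediate-buckling probability Prob(Λ = 0) = 0.2 quoted in the text for g_max = 0.5, c = 0.4:

```python
import math

def F_uniform(g, gmin, gmax):
    # Eq. (5.113)
    if g <= gmin:
        return 0.0
    if g >= gmax:
        return 1.0
    return (g - gmin) / (gmax - gmin)

def phi(lam, alpha, c):
    # Eq. (5.112); beta = 1/alpha, r = sqrt(|1 - alpha|)
    beta = 1.0 / alpha
    if alpha == 1.0:
        return 2.0 * c / (lam * lam + 2.0)
    r = math.sqrt(abs(1.0 - alpha))
    if beta < 1.0:
        return c * (1.0 - beta) / (math.cosh(r * lam) - beta)
    return c * (beta - 1.0) / (beta - math.cos(r * lam))

def first_passage_cdf(lam, alpha, c, gmax):
    # Eq. (5.111) with symmetric uniform imperfections, gmin = -gmax
    p = phi(lam, alpha, c)
    return 1.0 - F_uniform(p, -gmax, gmax) + F_uniform(-p, -gmax, gmax)

print(round(first_passage_cdf(0.0, 2.0, 0.4, 0.5), 3))   # 0.2: 20% buckle at once
print(first_passage_cdf(0.0, 2.0, 0.4, 0.3))             # 0.0: g_max below c
```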
From expression (5.104) it may be seen that this is the case, for example, if

c > g_max(β + 1)/(β − 1),  g_min = −g_max   (5.117)

Then Prob(L) = 0 and Prob(Λ ≤ λ) = 0 for any λ, and the bar remains in the safe region throughout.

The above cases are illustrated in Figs. 5.35-5.37, where Prob(Λ ≤ λ) is plotted against λ. These figures represent, respectively, the cases where the actual load is greater than, equal to, or less than the corresponding classical buckling load of the perfect structure; in particular, in Fig. 5.35 the actual load was chosen as double the classical buckling load, and in Fig. 5.37 as half of it.

Fig. 5.36. Probability distribution function of first-passage time (P = P_c).

For all calculations the random imperfections were chosen with a symmetric distribution (g_min = −g_max). The curves marked 1 are all associated with g_max = 0.5, whereas those marked 2 are associated with g_max = 0.3. The nondimensional critical level c = 0.4 was taken identical for all curves in these figures, failure thus being identified with the absolute value of the total displacement reaching four-tenths of the bar's radius of gyration. In the cases where g_max > c (curves 1), the probability of the bars buckling at λ = 0, Prob(Λ = 0), differs from zero; in particular, Prob(Λ = 0) = 0.2. Thus, under the statistical interpretation of probability, 20% of bars from a large population buckle immediately upon application of the load at λ = 0. In the cases where g_max < c (curves 2), Prob(Λ = 0) = 0.

Fig. 5.38. Influence of load ratio P/P_c on Prob(Λ ≤ λ).

Since

Prob(Λ ≤ λ) = 1 − R(λ)
where R(λ) is the reliability of the structure at time λ, we have, for the mean buckling time,

E(Λ) = ∫₀^∞ λ [dProb(Λ ≤ λ)/dλ] dλ

Indeed, for the random variable Λ ≥ 0 with finite mathematical expectation, it follows from convergence of the integral ∫₀^∞ λ f_Λ(λ) dλ that

∫ from M to ∞ of λ f_Λ(λ) dλ → 0 as M → ∞

However,

M ∫ from M to ∞ of f_Λ(λ) dλ ≤ ∫ from M to ∞ of λ f_Λ(λ) dλ

Fig. 5.39. Shaded area equals mean buckling time of structure (P = 2P_c).

Therefore, M[1 − F_Λ(M)] = M R(M) → 0 as M → ∞. Consequently,

lim λR(λ) = 0 as λ → ∞   (5.120)

implying that

E(Λ) = ∫₀^∞ R(λ) dλ = ∫₀^∞ [1 − Prob(Λ ≤ λ)] dλ   (5.121)

E(Λ) is represented by the shaded area in Fig. 5.39. For P < P_c, however, Prob(Λ < ∞) = Prob(L) < 1, as we have already seen. In other words, in this case

E(Λ) = ∞   (5.122)

This result becomes obvious if we recall that for P < P_c a certain percentage of the bars do not fail; that is, they have an infinite buckling time. With the expressions for the unreliability Prob(Λ ≤ λ) at hand, we can pose the problem of determining the allowable operation time λ_a for which the reliability reaches a given level:

R(λ_a) = r   (5.123)

This allowable operation time is clearly finite, whereas the mean buckling time may be infinite, again demonstrating that reliability-based design is superior to mean-behavior-based design (compare with Prob. 4.5).
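For P > P_c the mean buckling time (5.121) is finite and is obtained by quadrature of the reliability curve. A Python sketch for the uniform-imperfection example of Fig. 5.39 (g_max = 0.5, c = 0.4, P = 2P_c; helper names illustrative, distribution per Eqs. (5.111) and (5.113)):

```python
import math

def prob_fail(lam, alpha=2.0, c=0.4, gmax=0.5):
    # Eq. (5.111) with symmetric uniform imperfections; here beta = 1/alpha < 1
    beta = 1.0 / alpha
    r = math.sqrt(abs(1.0 - alpha))
    p = c * (1.0 - beta) / (math.cosh(r * lam) - beta)   # phi(lam) of Eq. (5.112)
    Fg = lambda g: min(1.0, max(0.0, (g + gmax) / (2.0 * gmax)))
    return 1.0 - Fg(p) + Fg(-p)

# E(Lambda) = integral of R(lam) = 1 - Prob(Lambda <= lam), Eq. (5.121);
# the neglected tail is negligible since failure is certain and rapid for P = 2 P_c.
h, E, lam = 1e-3, 0.0, 0.0
while prob_fail(lam) < 1.0 - 1e-12 and lam < 50.0:
    E += (1.0 - prob_fail(lam)) * h
    lam += h
print(round(E, 3))   # finite nondimensional mean buckling time
```

Note that at λ = 0 the routine returns the atom Prob(Λ = 0) = 0.2 discussed above, which contributes nothing to the mean since it sits at the origin.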
PROBLEMS

5.1. The loads acting on different machine components often have a chi-square distribution with m degrees of freedom, as was shown by Serensen and Bugloff,

f_N(n) = [n^(ν−1) e^(−n/2) / (2^ν Γ(ν))] U(n),  ν = m/2

TABLE 5.1 Values of χ²_(α,ν)

ν    α=.995    α=.99     α=.975    α=.95     α=.05    α=.025   α=.01    α=.005
1    .0000393  .000157   .000982   .00393    3.841    5.024    6.635    7.879
2    .0100     .0201     .0506     .103      5.991    7.378    9.210    10.597
3    .0717     .115      .216      .352      7.815    9.348    11.345   12.838
4    .207      .297      .484      .711      9.488    11.143   13.277   14.860
5    .412      .554      .831      1.145     11.070   12.832   15.086   16.750
6    .676      .872      1.237     1.635     12.592   14.449   16.812   18.548
7    .989      1.239     1.690     2.167     14.067   16.013   18.475   20.278
8    1.344     1.646     2.180     2.733     15.507   17.535   20.090   21.955
9    1.735     2.088     2.700     3.325     16.919   19.023   21.666   23.589
10   2.156     2.558     3.247     3.940     18.307   20.483   23.209   25.188
11   2.603     3.053     3.816     4.575     19.675   21.920   24.725   26.757
12   3.074     3.571     4.404     5.226     21.026   23.337   26.217   28.300
13   3.565     4.107     5.009     5.892     22.362   24.736   27.688   29.819
14   4.075     4.660     5.629     6.571     23.685   26.119   29.141   31.319
15   4.601     5.229     6.262     7.261     24.996   27.488   30.578   32.801
16   5.142     5.812     6.908     7.962     26.296   28.845   32.000   34.267
17   5.697     6.408     7.564     8.672     27.587   30.191   33.409   35.718
18   6.265     7.015     8.231     9.390     28.869   31.526   34.805   37.156
19   6.844     7.633     8.907     10.117    30.144   32.852   36.191   38.582
20   7.434     8.260     9.591     10.851    31.410   34.170   37.566   39.997
21   8.034     8.897     10.283    11.591    32.671   35.479   38.932   41.401
22   8.643     9.542     10.982    12.338    33.924   36.781   40.289   42.796
23   9.260     10.196    11.689    13.091    35.172   38.076   41.638   44.181
24   9.886     10.856    12.401    13.848    36.415   39.364   42.980   45.558
25   10.520    11.524    13.120    14.611    37.652   40.646   44.314   46.928
26   11.160    12.198    13.844    15.379    38.885   41.923   45.642   48.290
27   11.808    12.879    14.573    16.151    40.113   43.194   46.963   49.645
28   12.461    13.565    15.308    16.928    41.337   44.461   48.278   50.993
29   13.121    14.256    16.047    17.708    42.557   45.722   49.588   52.336
30   13.787    14.953    16.791    18.493    43.773   46.979   50.892   53.672
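The percentage points in Table 5.1 can be spot-checked by numerically integrating the chi-square density given above. The sketch below is stdlib-only and assumes nothing beyond the density formula itself.

```python
import math

def chi2_density(t, m):
    """Chi-square density with m degrees of freedom (t > 0)."""
    return t ** (m / 2 - 1) * math.exp(-t / 2) / (2 ** (m / 2) * math.gamma(m / 2))

def chi2_upper_tail(x, m, span=120.0, n=20000):
    """alpha = integral of the density from x to infinity (Simpson's rule)."""
    h = span / n
    s = chi2_density(x, m) + chi2_density(x + span, m)
    for i in range(1, n):
        s += chi2_density(x + i * h, m) * (4 if i % 2 else 2)
    return s * h / 3

# Spot-check two entries of Table 5.1 at alpha = 0.05:
print(chi2_upper_tail(5.991, 2))    # ~ 0.05 (nu = 2 row)
print(chi2_upper_tail(18.307, 10))  # ~ 0.05 (nu = 10 row)
```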
Problem 5.1

Find the reliability of the bar under random tensile force. Table 5.1 contains the values of χ²_(α,ν), where

∫_(χ²_(α,ν))^∞ f_N(n) dn = α

for α = 0.995, 0.99, 0.975, 0.95, 0.05, 0.025, 0.01, 0.005, and ν = 1, 2, ..., 30 (see figure).

5.2. Find the reliabilities of the truss structures shown in Figs. 2.9a, c, d, and e, if P is treated as a random variable with given density function f_P(p).

5.3. The truss shown in the figure carries the random, normally distributed load P with E(P) = 10 kips, σ_P = 5 kips, a = 10 ft, σ_allow = 24,000 psi, area = 1 in². Check whether the reliability exceeds the desired r = 0.999.

5.4. A cantilever of rectangular cross section is loaded as shown in the figure. G is a random variable with given F_G(g); σ_y, the yield stress in the tensile test of the cantilever material, is given. Use the maximum shear-stress criterion to find the reliability of the cantilever. On a concrete numerical example, discuss the change of the reliability estimate under the von Mises criterion.

Problem 5.4

5.5. A beam is loaded as shown in the figure. G is a random variable with given probability distribution function F_G(g), and a is a given number. Verify that the extremal bending moment occurs at the section x = (5a − 1)/4a, find its value, and derive the reliability.

Problem 5.5

5.6. Determine the reliability of the cantilever (see figure) under a given force q applied at the random distance X from the clamped edge; F_X(x) is given. (For a generalization of this problem with both X and q random variables, or with random concentrated loads and moments applied at random positions on beams with different boundary conditions, see the paper by Shukla and Stark.)

Problem 5.6

Problem 5.7
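A hedged sketch in the spirit of Problem 5.3: if the member force is taken directly as the normally distributed load P (a simplification, since the actual truss geometry alters the member forces), the reliability is R = Φ((P_allow − E(P))/σ_P), computable with math.erf alone. The numerical values below are assumptions for the sketch, not the book's solution.

```python
import math

def normal_cdf(x, mean=0.0, std=1.0):
    """Phi for N(mean, std^2), via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def bar_reliability(mean_load, std_load, allowable_force):
    """R = P(load <= allowable) for a tension member with normal load and
    deterministic strength (the single-random-variable model of this chapter)."""
    return normal_cdf(allowable_force, mean_load, std_load)

# Hypothetical numbers in the spirit of Problem 5.3:
# E(P) = 10 kips, sigma_P = 5 kips, allowable 24 kips (24,000 psi x 1 in^2)
R = bar_reliability(10.0, 5.0, 24.0)
print(R, R >= 0.999)
```

With these assumed numbers the reliability is about 0.9974, which fails the desired level r = 0.999.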
5.7. A thick-walled cylinder (see figure) is under external pressure P with a discrete uniform distribution

F_P(p) = (1/10) Σ_(i=1)^(10) U(p − p₀i)

that is, P takes on the values p₀, 2p₀, ..., 10p₀ with constant probability 1/10. For the transverse stresses, the following expressions are valid (see, e.g., Timoshenko and Goodier):

σ_r = −p [1 − (r_i/r)²] / [1 − (r_i/r_o)²]
σ_θ = −p [1 + (r_i/r)²] / [1 − (r_i/r_o)²]

where r_o and r_i are the outer and inner radii, respectively. Using the von Mises criterion, find r_o/r_i such that the desired reliability is not less than 0.99.

5.8. A rectangular plate, simply supported all around, is under a load Q uniform over its surface, with chi-square distribution (Sec. 4.7)

f_Q(q) = [q^(m/2−1) e^(−q/2) / (2^(m/2) Γ(m/2))] U(q)

The displacement of the plate under a deterministic uniform load q is (see, e.g., Timoshenko and Woinowsky-Krieger):

w = (16q/π⁶D) Σ_(m=1,3,5,...) Σ_(n=1,3,5,...) sin(mπx/a) sin(nπy/b) / [mn(m²/a² + n²/b²)²]

where a and b are the sides of the plate; D = Eh³/12(1 − ν²) is the flexural rigidity; E, the modulus of elasticity; ν, Poisson's ratio; and h, the thickness of the plate. Find the probability density of the maximum deflection.

5.9. Derive an equation analogous to (5.63) for an asymmetric structure.

5.10. Find the probability density function of the dynamic buckling loads, and find the mean dynamic buckling load for the symmetric structure.

5.11. Find the reliability function of the asymmetric structure if the initial imperfection is normally distributed, N(0, σ²), in the static setting.

5.12. Repeat 5.11 for the dynamic buckling problem.

5.13. Assume that the initial imperfection has an exponential distribution, with parameter a⁻¹ = E(X) given. Find the reliability of the asymmetric structure.

5.14. Plot the nondimensional static buckling load λ_s/λ_c versus initial imperfection ξ̄ curve for the nonsymmetric structure, according to Eq. (5.50). Find the reliability at the load level λ.

Problem 5.15
5.15. A rigid weightless bar with a frictionless pin joint at A, constrained by nonlinear springs with k > 0, β > 0, is under an eccentric load P (see figure). The equilibrium equation is

P(x + e) = 2klx(1 − βx²/l²)

yielding P_c = 2kl. Find the expression for the maximum force P_max supported by the bar as a function of the eccentricity e. Assume the eccentricity to be a continuous random variable with probability density f_E(e). Find the reliability of the structure at load level λ.

5.16. Generalize the results of Sec. 5.7 for the case where the load function is a rectangular impulse

P(t) = P[U(t) − U(t − τ)]

with P and τ given positive quantities. (See figure.)

Problem 5.16

5.17. Generalize the results of Sec. 5.7 for the case where failure is considered in the finite time interval 0 ≤ t ≤ t* [see Eq. (5.85)].

5.18. Verify that the buckling time Λ in Sec. 5.7 for P < P_c does not represent a random variable. Assign an infinite buckling time to a structure that does not buckle. How does Eq. (5.84) change in these circumstances? Show analytically that for P < P_c, E(Λ) approaches infinity.

5.19. Modify the results of Sec. 5.7 for the case where the structure possesses viscous damping.

5.20. Consider the load-bearing capacity of an imperfect bar. As can be seen from the figure, a concentric load P produces a bending moment M_z = −Pw and increases the displacement by an amount w − w₀. The differential equation of the column is, therefore,

EI_z d²(w − w₀)/dx² = −Pw    (5.124)

The bar is simply supported at its ends and has an initial imperfection

w₀ = ḡ sin(πx/l)    (5.125)

Equation (5.124) becomes, upon substitution of Eq. (5.125),

d²w/dx² + (P/EI_z)w = −ḡ(π/l)² sin(πx/l)

The solution of this equation is

w = C₁ cos[(P/EI_z)^(1/2) x] + C₂ sin[(P/EI_z)^(1/2) x] + [ḡ/(1 − P/P_c)] sin(πx/l)    (5.126)

where P_c = π²EI_z/l² is the classical or Euler buckling load.
The boundary conditions are w = 0 at x = 0, l, and these yield

C₁ = 0,  C₂ sin[(P/EI_z)^(1/2) l] = 0

For P < P_c, both C₁ and C₂ must be zero, and the total deflection is represented by the last term in Eq. (5.126):

w = [ḡ/(1 − P/P_c)] sin(πx/l)    (5.127)

We see that the total deflection becomes increasingly large as P → P_c. The normal stresses in the bar are given by

σ = −P/A + M_z y/I_z,  M_z = −Pw

Thus the maximum compressive stress takes place at x = l/2 and is given by

σ_max = (P/A)[1 + (ḡA/S) · 1/(1 − P/P_c)]    (5.128)

where S is the section modulus (S = I_z/y_max, y_max being the distance from the neutral axis to the point of maximum stress). Denote P/A = σ_a and P/P_c = σ_a/σ_e. Equation (5.128) then becomes

σ_max = σ_a[1 + (ḡA/S) · 1/(1 − σ_a/σ_e)]

where σ_e = π²E/(l/r)², r being the radius of gyration of the cross-sectional area of the bar. The load P_l for which σ_max equals the yield stress σ_y is the limit load for which the column remains elastic. This load results in the average stress σ_l = P_l/A, and the preceding equation becomes

σ_y = σ_l[1 + (ḡA/S) · 1/(1 − σ_l/σ_e)]

which can be rearranged into the quadratic equation

(σ_l/σ_y)² − (σ_l/σ_y)[1 + (σ_e/σ_y)(1 + ḡA/S)] + σ_e/σ_y = 0    (5.129)

Part (c) of the figure shows σ_l/σ_y as a function of the slenderness ratio l/r.

Problem 5.20

Treating the initial imperfection amplitude G as a random variable with the gamma distribution (Sec. 4.8),

f_G(g) = [g^α e^(−g/β) / (β^(α+1) Γ(α + 1))] U(g)

the average limit stress is also a random variable.

(a) Extract σ_l/σ_y explicitly from Eq. (5.129) as a function of ḡ.
(b) Find F_(σ_l)(σ_l).
(c) Consider also the special cases of an exponential distribution and of a chi-square distribution with m degrees of freedom. Perform the numerical calculations.

5.21. In Problem 5.20, assume that G has an exponential distribution. Find the probability of the maximum total displacement w(l/2) taking on values in the interval [0, 2E(G)].
Investigate the behavior of this probability as P approaches the classical buckling load P_c.

5.22. Consider now another important case, of an initially perfect simply supported bar under eccentric load with eccentricity e, as shown in part (a) of the accompanying figure. The differential equation for the bar deflection reads

EI_z d²w/dx² + Pw = 0    (5.130)

The solution is

w = C₁ sin[(P/EI_z)^(1/2) x] + C₂ cos[(P/EI_z)^(1/2) x]

The integration constants are determined by the boundary conditions w = e at x = ±l/2, so that

C₁ = 0,  C₂ = e / cos[(l/2)(P/EI_z)^(1/2)]

The maximum displacement is reached at x = 0:

w_max = e sec[(l/2)(P/EI_z)^(1/2)]    (5.131)

It is seen that the maximum deflection becomes increasingly large as P approaches P_c. The elastic limit is reached at the most stressed point when

σ_y = P_l/A + M_max y_max/I_z,  M_max = P_l w_max    (5.132)

Problem 5.22

or when

σ_y = (P_l/A){1 + (ec/r²) sec[(l/2)(P_l/EI_z)^(1/2)]}    (5.133)

This equation can be rewritten as

σ_y = σ_l{1 + (ec/r²) sec[(l/2r)(σ_l/E)^(1/2)]}    (5.134)

where σ_l = P_l/A is the "average" limit stress. Equation (5.134) is referred to as the secant formula and is usually plotted in the form of σ_l/σ_y versus l/r for a particular material (with σ_y and E specified) for different values of ec/r², as shown in part (b) of the figure. Treating the eccentricity as the random variable E, with F_E(e) given as exponentially distributed, the average limit stress also is a random variable. Find F_(σ_l)(σ_l).

5.23. In his (now historic) doctoral thesis in 1945, and in his 1963 paper, Koiter analyzed a sufficiently long cylindrical shell with an axisymmetric initial imperfection, under axial load. He chose an initial imperfection function w₀(x), co-configurational with the axisymmetric buckling mode of a perfect cylindrical shell, as

w₀(x) = ḡh sin(i₁πx/L),  i₁ = (L/π)(2c/Rh)^(1/2),  c = [3(1 − ν²)]^(1/2)    (5.135)

where ḡ is the nondimensional initial imperfection magnitude; i₁, the number of half-waves at which the associated perfect shell buckles; L, the shell length; R, the shell radius; and h, the shell thickness.
Using his own general nonlinear theory, he derived inter alia a relationship between the critical load and the initial imperfection magnitude:

(1 − λ)² − 3c|ḡ|λ = 0    (5.136)

where λ = P_cr/P_c is the nondimensional buckling load; P_c = 2πRhσ_c, with σ_c = Eh/Rc, is the classical buckling load of a perfect shell; P_cr, the buckling load of an imperfect shell; E, the modulus of elasticity; and ν, Poisson's ratio. The buckling load P_cr was defined as that at which the axisymmetric fundamental equilibrium state bifurcates into a nonsymmetric one. The absolute value of ḡ stands in Eq. (5.136), since for a sufficiently long shell the sign of the imperfection is immaterial: positive and negative initial imperfections with equal absolute values cause the same reduction of the buckling load.

Equation (5.136) yields the explicit buckling load-initial imperfection relationship:

λ = 1 + (3/2)|ξ| − [(3/2)|ξ|((3/2)|ξ| + 2)]^(1/2)

where ξ = cḡ.

(a) Assume X, with possible values ξ, to be a normally distributed random variable N(ξ̄, σ²).
(b) Find the probability density function of |X|. (Consult Example 4.6.)
(c) Find the reliability of the shell at the load level λ.
(d) Assume, after Roorda, ξ̄ = 0.333 × 10⁻³ R/h and σ² = 10⁻³ R/h, and find the stress level at which the system has a given reliability. Compare your result with Roorda's.

5.24. (a) Koiter (1945) also analyzed the imperfection sensitivity of a shell with nonaxisymmetric, periodic imperfections,

w₀(x, y) = ḡh[cos(i₁x/R) + 4 cos(i₁x/2R) cos(i₁y/2R)]    (5.137)

(where y is the circumferential coordinate, the remaining notation as in the preceding problem), to arrive, instead of Eq. (5.136), at the equation

(1 − λ)² + 6cḡλ = 0    (5.138)

for the nondimensional buckling load λ = P_lim/P_c, where P_lim is the limit load (as in Sec. 5.5). For the imperfection function (5.137), the limit load exists only at negative values of the imperfection parameter ḡ.
For positive ḡ, the origin of the coordinate system may be shifted, and since the shell is sufficiently long, the analysis would be unaffected except that the sign of ḡ would change, to yield

(1 − λ)² − 6cḡλ = 0    (5.139)

Combining Eqs. (5.138) and (5.139), we arrive at the final equation

(1 − λ)² − 6c|ḡ|λ = 0

Perform calculations as in Prob. 5.23. Compare the reliabilities of shells with axisymmetric and nonaxisymmetric imperfections.

(b) Sometimes the imperfection is represented by a local dimple extending over a small region of the shell. A more or less localized imperfection may be represented in the form

w₀(x, y) = ḡh[cos(i₁x/R) + 4 cos(i₁x/2R) cos(i₁y/2R)] exp[−μ²(x² + y²)/2R²]    (5.140)

which is the function in Eq. (5.137) multiplied by an exponentially decaying function. For example, at a distance x = (4π/i₁)R or y = (4π/i₁)R, a complete wavelength of the periodic part in Eq. (5.140), the exponential factor reduces to exp(−8π²μ²/i₁²). At a first approximation, the term μ²/i₁² may be neglected with respect to unity. Koiter's analysis (1978) then yields

(1 − λ)² = −4cḡλ    (5.141)

Assume again that X = cG, with possible values ξ = cḡ, is a normally distributed random variable N(0, σ²), and find the reliability of the shell. Are the localized imperfections as harmful as the periodic ones?

CITED REFERENCES

Budiansky, B., and Hutchinson, J. W., "Dynamic Buckling of Imperfection-Sensitive Structures," in H. Görtler, Ed., Proc. Eleventh Intern. Congr. Appl. Mech., 1964, pp. 636-651.
Budiansky, B., "Dynamic Buckling of Elastic Structures: Criteria and Estimates," in G. Herrmann, Ed., Dynamic Stability of Structures, Pergamon Press, New York, 1967, pp. 83-106.
Elishakoff, I., "Axial Impact Buckling of Column With Random Initial Imperfections," ASME J. Appl. Mech., 45, 361-365 (1978).
Elishakoff, I., "Remarks on the Static and Dynamic Imperfection-Sensitivity of Nonsymmetric Structures," ASME J. Appl. Mech., 46, 111-115 (1980).
Hoff, N. J., "Dynamic Stability of Structures" (Keynote Address), in G. Herrmann, Ed., Dynamic Stability of Structures, Pergamon Press, New York, 1965, pp. 7-44.
Koiter, W. T., "On the Stability of Elastic Equilibrium" (in Dutch), Ph.D. thesis, Delft Univ. of Technology, H. J. Paris, Amsterdam, 1945; English translations: (a) NASA TT F-10,833, 1967; (b) AFFDL-TR-70-25, 1970 (translated by E. Riks).
Koiter, W. T., "The Effect of Axisymmetric Imperfections on the Buckling of Cylindrical Shells under Axial Compression," Proc. Kon. Ned. Akad. Wet., Amsterdam, Ser. B, 66, 265-279 (1963) (also Lockheed Missiles and Space Co., Rep. 6-90-63-86, Palo Alto, CA, Aug. 1963).
Koiter, W. T., "The Influence of More or Less Localized Imperfections on the Buckling of Circular Cylindrical Shells Under Axial Compression," in Complex Analysis and Its Applications (Dedicated to the 70th Birthday of Academician I. N. Vekua), U.S.S.R. Acad. Sci., "Nauka" Publ. House, Moscow, 1978, pp. 242-244.
Lomakin, V. A., "Strength and Stiffness Calculations of the Beam Bent Under a Random Load," Mechanics of Solids, Faraday Press, New York, 1966, No. 4, pp. 162-164.
Ogata, K., Modern Control Engineering, Prentice-Hall, Englewood Cliffs, NJ, 1970.
Roorda, J., "Buckling of Shells: An Old Idea With a New Twist," J. Eng. Mech. Div., Proc. ASCE, 98 (EM3), 531-538 (1972).
Serensen, S. B., and Bugloff, E. G., "On the Probabilistic Representations of the Varying Loading of Machine Details," Vestn. Mashinostr., 1960, No. 10 (in Russian).
Shukla, D. K., and Stark, R. M., "Statics of Random Beams," J. Eng. Mech. Div., Proc. ASCE, 98 (EM6), 1487-1497 (1972).
Timoshenko, S. P., and Goodier, J. N., Theory of Elasticity, 3rd ed., McGraw-Hill, New York, 1970.
Timoshenko, S. P., and Woinowsky-Krieger, S., Theory of Plates and Shells, McGraw-Hill, New York, 1959.

RECOMMENDED FURTHER READING

Augusti, G., and Baratta, A., "Reliability of Slender Columns: Comparison of Different Approximations," in B. Budiansky, Ed., Buckling of Structures, Springer-Verlag, Berlin, 1976, pp. 183-198.
Bolotin, V. V., "Statistical Methods in the Nonlinear Theory of Elastic Shells," NASA TTF-85, 1962.
Fraser, W. B., "Buckling of a Structure With Random Imperfections," Ph.D. Thesis, Div. Eng. Appl. Phys., Harvard Univ., Cambridge, MA, 1965.
Hansen, J. S., and Roorda, J., "Reliability of Imperfection Sensitive Structures," in S. T. Ariaratnam and H. H. E. Leipholz, Eds., Stochastic Problems in Mechanics, Proc. Symp. Stochastic Prob. Mech., Univ. of Waterloo Press, Waterloo, Ont., 1973, pp. 229-242.
Konishi, I., and Takaoka, N., "Some Comments on the Reliability Analysis of Civil Engineering Structures with Special Reference to Compression Members," in T. Moan and M. Shinozuka, Eds., Structural Safety and Reliability, Elsevier, Amsterdam, 1981, pp. 341-357.
Miller, R. K., and Hedgepeth, J. M., "The Buckling of Lattice Columns with Stochastic Imperfections," Int. J. Solids Structures, 15, 73-84 (1979).
Perry, S. H., and Chilver, A. H., "The Statistical Variation of the Buckling Strength of Columns," Proc. Inst. Civ. Engrs., Part 2, 61, 109-125 (1976).
Thompson, J. M. T., "Towards a General Statistical Theory of Imperfection-Sensitivity in Elastic Post-Buckling," J. Mech. Phys. Solids, 15, 413-417 (1967).

chapter 6

Two or More Random Variables

With the aid of the theory of a single random variable, we were able to calculate the reliability of a structure characterized by such a variable, for example, that of a bar under a random tensile force, with strength treated as a deterministic quantity. We considered also the "reverse" example, where the force was assumed to be given and the strength considered as a random variable. In practice, however, both force and strength are random. Moreover, a beam may have to bear several concentrated loads rather than a single one, and so on. In these circumstances the theory must be extended to the case of multiple random variables.
We begin with a pair of such variables and then proceed to the general multiple case.

6.1 JOINT DISTRIBUTION FUNCTION OF TWO RANDOM VARIABLES

Let X and Y be random variables as defined in Sec. 3.1. The joint distribution function of X and Y, denoted by F_XY(x, y), is defined as the probability of the intersection of the two random events {X ≤ x} and {Y ≤ y}:

F_XY(x, y) = P[(X ≤ x) ∩ (Y ≤ y)]

6.2 JOINT DENSITY FUNCTION OF TWO RANDOM VARIABLES

Consider now a pair of continuous random variables X and Y. We define as their joint density function the limiting probability of a random point with coordinates (X, Y) falling within the elementary rectangle with vertices (x, y), (x + Δx, y), (x, y + Δy), (x + Δx, y + Δy), divided by its area Δx Δy as the latter approaches zero, and denote it by f_XY(x, y):

f_XY(x, y) = lim_(Δx→0, Δy→0) P(x < X ≤ x + Δx, y < Y ≤ y + Δy) / (Δx Δy)

Example 6.1
A random point (X, Y) is distributed uniformly within the square |x| ≤ 1, |y| ≤ 1, so that

F_XY(x, y) =
  0,                  x < −1 or y < −1
  (x + 1)(y + 1)/4,   |x| ≤ 1, |y| ≤ 1
  (y + 1)/2,          x > 1, |y| ≤ 1
  (x + 1)/2,          |x| ≤ 1, y > 1
  1,                  x > 1, y > 1

The marginal density functions are obtained by applying (6.17):

f_X(x) = 1/2 for |x| ≤ 1, and 0 otherwise
f_Y(y) = 1/2 for |y| ≤ 1, and 0 otherwise

implying that each random coordinate has a uniform distribution. As can be concluded from Prob. 6.1, however, this is not always the case.

Example 6.2
Given the joint probability density of two random variables f_XY(x, y), the following expressions can be written down for the probabilities of various events:

(a) P(X > Y) = ∫_(−∞)^∞ dx ∫_(−∞)^x f_XY(x, y) dy
(b) P(|X| > Y) = ∫_(−∞)^∞ dx ∫_(−∞)^(|x|) f_XY(x, y) dy
(c) P(X > |Y|) = ∫_0^∞ dx ∫_(−x)^x f_XY(x, y) dy

These situations are illustrated in Fig. 6.6a, b, and c, respectively.

6.3 CONDITIONAL PROBABILITY DISTRIBUTION AND DENSITY FUNCTIONS

In Sec. 2.6 we defined the conditional probability, the probability of occurrence of a random event A under the hypothesis that B has taken place, as

P(A|B) = P(AB)/P(B)    (6.19)

provided P(B) ≠ 0. Now let X and Y be a pair of random variables. Denoting

A = {X ≤ x},  B = {y₁ < Y ≤ y₂}

For an n-dimensional random vector, the conditional probability density of X₁,..., X_k, given that X_(k+1) = x_(k+1),..., X_n = x_n, is, in complete analogy with Eq.
(6.26),

f(x₁,..., x_k | x_(k+1),..., x_n) = f_(X)(x₁, x₂,..., x_n) / f(x_(k+1), x_(k+2),..., x_n)    (6.35)

6.5 FUNCTIONS OF RANDOM VARIABLES

Given an n-dimensional random vector {X} with components X₁, X₂,..., X_n and joint probability density f_(X)(x₁, x₂,..., x_n), we seek the joint probability density f_(Y)(y₁, y₂,..., y_m) of an m-dimensional random vector {Y} with components Y₁, Y₂,..., Y_m, representing given functions of X₁, X₂,..., X_n:

Y₁ = φ₁(X₁, X₂,..., X_n)
Y₂ = φ₂(X₁, X₂,..., X_n)
...
Y_m = φ_m(X₁, X₂,..., X_n)    (6.36)

The joint distribution function F_(Y)(y₁, y₂,..., y_m) reads

F_(Y)(y₁, y₂,..., y_m) = P(Y₁ ≤ y₁, Y₂ ≤ y₂,..., Y_m ≤ y_m)
= P[φ₁(X₁,..., X_n) ≤ y₁, φ₂(X₁,..., X_n) ≤ y₂,..., φ_m(X₁,..., X_n) ≤ y_m]    (6.37)

so that

F_(Y)(y₁, y₂,..., y_m) = ∫∫...∫_D f_(X)(x₁, x₂,..., x_n) dx₁ dx₂ ... dx_n    (6.38)

where the integration domain D is defined by the following inequalities:

φ₁(x₁, x₂,..., x_n) ≤ y₁
φ₂(x₁, x₂,..., x_n) ≤ y₂
...
φ_m(x₁, x₂,..., x_n) ≤ y_m

For a single function Y = φ(X₁,..., X_n), the probability density f_Y(y) is given by (6.47), and

F_Y(y) = ∫_(−∞)^y f_Y(y′) dy′    (6.48)

Example 6.4
Consider the case m = 2, n = 1, with φ₁ and φ₂ linear functions, so that

Y₁ = α₁X + β₁,  Y₂ = α₂X + β₂

where α₁, α₂, β₁, β₂ are given real numbers, α₁ ≠ 0, α₂ ≠ 0. Then

f_(Y₁Y₂)(y₁, y₂) = ∫_(−∞)^∞ f_X(x) δ(y₁ − α₁x − β₁) δ(y₂ − α₂x − β₂) dx

We introduce a new variable ξ = α₁x and obtain

f_(Y₁Y₂)(y₁, y₂) = (1/|α₁|) ∫_(−∞)^∞ f_X(ξ/α₁) δ(y₁ − ξ − β₁) δ[y₂ − (α₂/α₁)ξ − β₂] dξ
= (1/|α₁|) f_X[(y₁ − β₁)/α₁] δ[y₂ − (α₂/α₁)(y₁ − β₁) − β₂]

If β₁ = β₂ = 0 and α₁ = α₂ = 1, we have

f_(Y₁Y₂)(y₁, y₂) = f_X(y₁) δ(y₂ − y₁)

which could be anticipated, since in this case Y₁ = Y₂ = X.

In the case of the functions

Y₁ = a₁|G|,  Y₂ = a₂|G|

where G is a continuous random variable with probability density f_G(g), the
2( cs 2») for 2) = aha 22) 8( 91 - S49) (6.49) We can apply these results to Example 5.1, where G was a random force applied at the middle section of a beam, simply supported at its ends, and Bee ee ole 81 BS tay 82 384 ET, with the reliability defined as R = P(a,|G| < 1, a,|G| < 1) = P(Y, <1, % <1) which, since a,, a, > 0, may be rewritten as R= ff Fra) andr =['f hal 2}a(» - 1} ay, dy, Now, for a, > a we have [2(2- Zn) a=1 and therefore =f't;, (4) =7,(+ to flastal a) = Fas] For a, < a, however, we have = fiw ftp .(2 Sea ne [nf Ltal2)(0— Sn) _ il fp 1 a, Steele gl dy aa 1 ~ a sr hfol aa.) 9” aA fale) %~Fe(,] and finally, for a, = a, = a, #- Fal) a These results coincide with those obtained in Example 5.1. 192 TWO OR MORE RANDOM VARIABLES > Fig. 6.7. Integration domain D for the sum of a pair of random variables. x YX Example 6.5 We seek the distribution function of a random variable Y representing the sum of a pair of random variables X, and X, with f y)(x,, x2) given. In this case the domain D of the x,x, plane is determined by the equation x, + x2 < y and is shown in Fig. 6.7. For the distribution function F,( y) we obtain eta =f deaf Yay (xu x) ae, ng or FO) = f° af” “Sone %) dia Differentiating these equations with respect to y, we have oat fry) = =f Sel — 202) dea =f fool 9 ~ 0) a (6.50) In the particular case where the components X, and X, of (X) are indepen- FUNCTIONS OF RANDOM VARIABLES 193. dent, that is, fon 15 2) = fi %1) fa %2) and Fy) = f° Ba — x2) faa) ia = f° Fo ~ Vf) be (6.51) which in turn yields f(y) in the form Fel) = f° faa — afi) da = f° fa’ — fig) (6.52) This result follows also from Eq. (6.45), which now reads fol) = f° J fan bi $2) 809 ~ bs ~ 2) dB as The integral in Eq. (6.52) is called the convolution of fy,(y) and fy,(y) and is denoted by Lev) = fo) * fx) Thus, the probability density of the sum of a pair of independent random variables is represented by the convolution of the density functions of the component random variables. 
Example 6.6
Let now Y = X₁X₂. The integration domain D (Fig. 6.8) is determined by the part of the x₁x₂ plane where x₁x₂ ≤ y.

For Y = (X₁² + X₂²)^(1/2) and y > 0, we have

F_Y(y) = ∫∫_(x₁²+x₂²≤y²) f_(X)(x₁, x₂) dx₁ dx₂

Resorting to polar coordinates, r = (x₁² + x₂²)^(1/2) and θ = tan⁻¹(x₂/x₁), we obtain

F_Y(y) = ∫₀^(2π) ∫₀^y f_(X)(r cos θ, r sin θ) r dr dθ    (6.58)

Fig.: For Y = min(X₁, X₂): D₁: min(x₁, x₂) = x₁ ≤ y; D₂: min(x₁, x₂) = x₂ ≤ y.

EXPECTED VALUES, MOMENTS, COVARIANCE

For a function g(X₁) of a single component of the random vector {X},

E[g(X₁)] = ∫_(−∞)^∞ ... ∫_(−∞)^∞ g(x₁) f_(X)(x₁, x₂,..., x_n) dx₁ dx₂ ... dx_n
= ∫_(−∞)^∞ g(x₁) dx₁ ∫_(−∞)^∞ ... ∫_(−∞)^∞ f_(X)(x₁, x₂,..., x_n) dx₂ ... dx_n

The inner integral, however, equals f_(X₁)(x₁):

∫_(−∞)^∞ ... ∫_(−∞)^∞ f_(X)(x₁, x₂,..., x_n) dx₂ ... dx_n = f_(X₁)(x₁)

so that Eq. (6.64) may be rewritten as

E[g(X₁)] = ∫_(−∞)^∞ g(x₁) f_(X₁)(x₁) dx₁

which is the desired result.

It is also readily shown that

Var(X₁) = E(X₁²) − [E(X₁)]²

Letting, in Eq. (6.64), g(x₁, x₂,..., x_n) = x_j^k x_i^r, we obtain the (k + r)th moment of the random variables X_j and X_i:

m_kr = E(X_j^k X_i^r) = ∫_(−∞)^∞ ∫_(−∞)^∞ x_j^k x_i^r f_(X_jX_i)(x_j, x_i) dx_j dx_i    (6.65)

Denoting X_j = X, X_i = Y, we obtain

m_kr = E(X^k Y^r) = ∫_(−∞)^∞ ∫_(−∞)^∞ x^k y^r f_XY(x, y) dx dy

The joint central moments μ_kr are defined by

μ_kr = E{[X − E(X)]^k [Y − E(Y)]^r} = ∫_(−∞)^∞ ∫_(−∞)^∞ [x − E(X)]^k [y − E(Y)]^r f_XY(x, y) dx dy    (6.66)

while

μ_k0 = E{[X − E(X)]^k},  μ_0r = E{[Y − E(Y)]^r}

are the appropriate central moments of the components X and Y, respectively. In particular,

μ₂₀ = Var(X),  μ₀₂ = Var(Y)

The second mixed central moment μ₁₁ is denoted by Cov(X, Y):

Cov(X, Y) = E{[X − E(X)][Y − E(Y)]} = ∫_(−∞)^∞ ∫_(−∞)^∞ [x − E(X)][y − E(Y)] f_XY(x, y) dx dy    (6.67)

and is referred to as the covariance of X and Y. The ratio

r_XY = Cov(X, Y) / [Var(X)Var(Y)]^(1/2)    (6.68)

is called the correlation coefficient.

It is readily shown that

Cov(X, Y) = E(XY) − E(X)E(Y)    (6.69)

Indeed, expanding Eq. (6.67), we have

Cov(X, Y) = E(XY) − E(X)E(Y) − E(Y)E(X) + E(X)E(Y)

which leaves us with Eq. (6.69). We say that two random variables X and Y are uncorrelated if their covariance (and therefore their correlation coefficient also) is zero. We immediately observe from Eq.
(6.69) that for uncorrelated random variables

E(XY) = E(X)E(Y)    (6.70)

That is, the mathematical expectation of the product of two uncorrelated random variables equals the product of their mathematical expectations.

We prove now that independent random variables, that is, those possessing the property f_XY(x, y) = f_X(x)f_Y(y), are also uncorrelated. Indeed, calculation of the covariance of independent random variables yields

Cov(X, Y) = ∫_(−∞)^∞ ∫_(−∞)^∞ [x − E(X)][y − E(Y)] f_XY(x, y) dx dy
= ∫_(−∞)^∞ ∫_(−∞)^∞ [x − E(X)][y − E(Y)] f_X(x) f_Y(y) dx dy
= {∫_(−∞)^∞ [x − E(X)] f_X(x) dx}{∫_(−∞)^∞ [y − E(Y)] f_Y(y) dy}    (6.71)

Each of these integrals is zero, since the central moment of first order of a random variable vanishes. Thus, independent random variables are uncorrelated, while correlated random variables are dependent. The opposite, by contrast, is not necessarily valid; indeed, dependent random variables may be either correlated or uncorrelated. For example, if E(X) = E(Y) = 0 and f_XY(x, y) = f_XY(−x, −y), then μ₁₁ = 0, although X and Y may be dependent; such is the case in Prob. 6.1, where the covariance is zero, since the distribution f_XY(x, y) is symmetric. The conclusion is that independence is a stronger property than uncorrelatedness.

We now introduce the Cauchy-Schwarz inequality, namely,

|Cov(X, Y)| ≤ σ_X σ_Y    (6.72)

where σ_X and σ_Y are the standard deviations of the random variables X and Y, respectively. The above may be rewritten as

|r_XY| ≤ 1    (6.73)

To prove it, we introduce the new random variables

[X − E(X)]/σ_X  and  [Y − E(Y)]/σ_Y

Since the variance of their sum (or difference) is nonnegative, we have

Var{[X − E(X)]/σ_X ± [Y − E(Y)]/σ_Y} = 2(1 ± r_XY) ≥ 0

which is equivalent to the sought property (6.73). The equality sign in Eq. (6.73) is obtained when

[X − E(X)]/σ_X = ∓[Y − E(Y)]/σ_Y

signifying that the following equality holds:

Y = E(Y) ± (σ_Y/σ_X)[X − E(X)]    (6.74)

In other words, there is a linear functional dependence between absolutely, fully correlated random variables (defined as those with either r_XY = −1 or r_XY = 1).
Therefore the magnitude of the correlation coefficient is a measure of the degree of linear dependence between two random variables. In fact, there may exist a nonlinear functional relationship between two uncorrelated random variables. This happens, for example, when Y = X², if in addition f_X(x) is an even function. When, for example, X is normally distributed, N(0, σ²), then

E[XY] − E[X]E[Y] = E[X³] − E[X]E[X²] = 0    (6.75)

since all odd moments vanish.

Example 6.11
Given n random variables X₁, X₂,..., X_n with known f_(X)(x₁, x₂,..., x_n), we construct their sum

Y = X₁ + X₂ + ... + X_n

and seek E(Y) and Var(Y). Applying Eq. (6.64), we find

E(Y) = E(X₁ + X₂ + ... + X_n) = E(X₁) + E(X₂) + ... + E(X_n)    (6.76)

that is, the mathematical expectation of the sum of a set of random variables equals the sum of the mathematical expectations of the component variables (irrespective of whether they are correlated or uncorrelated). The variance of Y is

Var(Y) = E{[Y − E(Y)]²} = E{( Σ_(j=1)^n [X_j − E(X_j)] )²}
= Σ_(j=1)^n Σ_(k=1)^n E{[X_j − E(X_j)][X_k − E(X_k)]}
= Σ_(j=1)^n Var(X_j) + Σ_(j=1)^n Σ_(k=1, k≠j)^n Cov(X_j, X_k)
= Σ_(j=1)^n σ²_(X_j) + Σ_(j=1)^n Σ_(k=1, k≠j)^n r_(X_jX_k) σ_(X_j) σ_(X_k)    (6.77)

where σ_(X_j) and σ_(X_k) are the standard deviations of X_j and X_k, respectively. Equation (6.77) indicates that the variance of the sum of the random variables equals the sum of all variances and covariances of the component variables. If the components X₁, X₂,..., X_n of the random vector {X} are uncorrelated, all correlation coefficients r_(X_jX_k) vanish and

Var( Σ_(j=1)^n X_j ) = Σ_(j=1)^n Var(X_j)    (6.78)

that is, the variance of the sum of uncorrelated random variables equals the sum of the variances of the components. Obviously, Eq. (6.78) holds for independent random variables.

In the case of a pair of random variables X₁ = X, X₂ = Y, Eqs. (6.76) and (6.77) become

E(X + Y) = E(X) + E(Y),  σ²_(X+Y) = σ²_X + σ²_Y + 2 r_XY σ_X σ_Y    (6.79)

where σ_(X+Y) is the standard deviation of X + Y.
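Equations (6.76), (6.77), and (6.79) can be verified on a small discrete joint distribution; the pmf below is an arbitrary illustrative choice, not taken from the book.

```python
# Joint pmf of (X, Y) on four points; the probabilities are an arbitrary illustration.
pmf = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

E = lambda g: sum(p * g(x, y) for (x, y), p in pmf.items())
EX, EY = E(lambda x, y: x), E(lambda x, y: y)
VarX   = E(lambda x, y: (x - EX) ** 2)
VarY   = E(lambda x, y: (y - EY) ** 2)
CovXY  = E(lambda x, y: (x - EX) * (y - EY))

direct  = E(lambda x, y: (x + y - EX - EY) ** 2)  # Var(X + Y) computed directly
formula = VarX + VarY + 2 * CovXY                 # right-hand side of Eq. (6.79)
print(direct, formula)
```

For this pmf, Cov(X, Y) = 0.05 and both evaluations of Var(X + Y) give 0.56, as Eq. (6.79) requires.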
For absolutely positively correlated X and Y, r_XY = +1, and

σ_(X+Y) = σ_X + σ_Y    (6.80)

and if σ_X = σ_Y, we have σ_(X+Y) = 2σ_X. For absolutely negatively correlated X and Y, r_XY = −1, and

σ_(X+Y) = |σ_X − σ_Y|    (6.81)

and if σ_X = σ_Y we have σ_(X+Y) = 0. If X and Y are uncorrelated, we have from Eq. (6.79)

σ_(X+Y) = (σ²_X + σ²_Y)^(1/2)    (6.82)

For the difference of two random variables, we similarly have

E(X − Y) = E(X) − E(Y),  σ²_(X−Y) = σ²_X + σ²_Y − 2 r_XY σ_X σ_Y    (6.83)

For absolutely positively correlated X and Y, we have

σ_(X−Y) = |σ_X − σ_Y|    (6.84)

whereas for absolutely negatively correlated X and Y,

σ_(X−Y) = σ_X + σ_Y    (6.85)

For uncorrelated X and Y,

σ²_(X−Y) = σ²_(X+Y) = σ²_X + σ²_Y    (6.86)

The central moments of second order of an n-dimensional random vector {X} with components X₁, X₂,..., X_n represent the variances Var(X₁), Var(X₂),..., Var(X_n) and all covariances Cov(X₁, X₂),..., Cov(X₁, X_n),..., Cov(X_(n−1), X_n), their total number being n². They may be treated as the elements of a matrix [V]:

        | Var(X₁)       Cov(X₁, X₂)   ...  Cov(X₁, X_n) |
[V] =   | Cov(X₂, X₁)   Var(X₂)       ...  Cov(X₂, X_n) |    (6.87)
        | ...                                            |
        | Cov(X_n, X₁)  Cov(X_n, X₂)  ...  Var(X_n)     |

This is referred to as the variance-covariance matrix, and it is symmetric, since Cov(X_j, X_k) = Cov(X_k, X_j) according to the definition (6.67). For a pair of random variables X₁ = X and X₂ = Y, the variance-covariance matrix reads

[V] = | Var(X)     Cov(X, Y) |
      | Cov(Y, X)  Var(Y)    |

The Cauchy-Schwarz inequality (6.72) implies that the determinant of [V] is nonnegative:

det[V] ≥ 0    (6.88)

Since Var(X) is also nonnegative, we conclude that [V] of order 2 is likewise nonnegative, or positive-semidefinite. We will show that this property is possessed by the variance-covariance matrix of any n-dimensional random vector. To this end, consider the random variable

Y = α₁X₁ + α₂X₂ + ... + α_nX_n

in which α₁, α₂,..., α_n are real. The mathematical expectation and variance of Y are, respectively,

E(Y) = Σ_(j=1)^n α_j E(X_j),  Var(Y) = Σ_(j=1)^n Σ_(k=1)^n α_j α_k Cov(X_j, X_k)    (6.89)

Now the right-hand term of Eq.
(6.89.2) is nonnegative, irrespective of the values of 0, @2,.--, &,- Thus the matrix [Cov( X;, X,)] is nonnegative-definite. This implies, according to the Sylvester’s theorem (see e.g., Chetaev) that all EXPECTED VALUES, MOMENTS, COVARIANCE: 205 principal minor determinants associated with matrix [Cov(X,, X,)] are non- negative: Var( X,) Cov( X;, X>) Var( X,) > 0 20 an( Xi) Cov(X,,X)) Var X;) Var( X;) Cov( X,, X,) Cov X,, 3) Cov( X;, X,) Var( X,) Cov( X,, X;)|>0 (6.90) Cov( X;, X3) Cov( X,, X3) Var(X;) ete. Example 6.12 A box contains a red, b white, and c blue balls. A ball is picked at random from the box. The random variables X, Y, and Z denote the following events: x-(h 0, ya{h 0, z-{\ 0, if a red ball is picked otherwise if a white ball is picked otherwise if a blue ball is picked otherwise We seek the variance-covariance matrix formed by the random variables X, Y, Z. Obviously, Mk spre MUO sere MY Nesapee PWS Sees Pizm Nase Piza a= 8th The mathematical expectations are found from Eq. (3.21): E(X)= Sx PLY ]=0-P[X=0] +1-P[X=1]= eee a+bt+e 206 © TWO OR MORE RANDOM VARIABLES with x, = 0 and x, = 1. Analogously, b B(Y)= PIY= l= 546 c B(Z)=PIZ= = a Since E(X?) = E(X), we have for the variance of X 2 Var(X) = E(X?) 
- E2(X) = ae ae _ a(b+c) (a+b+c) and the variances of Y and Z are, respectively, b(a+c) Var(Z) = c(a +b) vai) = bre (a+b+c) In order to find Cov( X, Y), we first calcuate the following probabilities: P[{(X = 1) n(¥ =0)] = P[(X= 1)n (Y=0)n (Z=1)] +P[(X= 1) (¥=0)n(Z=0)] by the formula of overall probability, (2.25), However, P[(X=1)N(Y=0)N(Z=1)] =0 as the probability of an impossible event, and P[(X=1)N(¥=0)n(Z=0)] = a Therefore, P[(X=1)n(¥=0)] "a EXPECTED VALUES, MOMENTS, COVARIANCE 207 In a similar manner, E[(X=1)n(¥=1)] =0 e[(X=0) 0 (¥=)l=s5 5 E[(X=0)n(¥=0)] =sSERE so that the covariance becomes Cov( X,Y) = Di z [x - E(X)][y- BO) P[(¥ = x) 9 (¥=y,)] eae aaa a b b + (0-4) - ase) +(\-a3533) Jo - ae) (ss) +(1- rare )(.- ares) ae ab (at+bt+cy Analogously, Cov( X, Z) = #_ Cov(¥,Z) = - z= 3 (a+b+c) (a+b +c) The correlation coefficients, defined in Eq. (6.68), are Pepa stip ceeds nni DCE RenT ane eon et te (a+ c)(b+c) ie (a + b)(b +c) n= -,/——e_ iid (a+ b)(a+c) with all of them equal to — } for a = b = c. Finally, the variance-covariance 208 = TWO OR MORE RANDOM VARIABLES matrix is 1 a(b+c) —ab ac [v] =] -ab ba +e) —be (a+b +e) —ac —be c(a+b) Let us check the nonnegativeness property of the variance-covariance matrix. Indeed, a(b + c) > 0, and a(b+c) —ab = + a b(a +c) abe(a+b+c)>0 a(b+c) —ab —ac —ab b(a+c) —be |=0 ac —be c(a +b) since the elements of the first row are the sums of the corresponding elements of the other two rows, taken with a minus sign. Note that the variance of the sum X + Y + Z is, in accordance with Eq. (6.77), Var( X + ¥ + Z) = Var(X) + Var(¥Y) + Var(Z) +2[Cov( X, Y) + Cov( X, Z) + Cov(Y, Z)] =0 which is explained by the fact that X + Y + Z is a certain event: either a red, a white, or a blue ball will be picked from the box during the experiment. 
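The ball-drawing example can be verified exactly with rational arithmetic. The sketch below builds [V] for arbitrary counts a, b, c from the indicator-variable moments derived above (Var = p(1 − p) on the diagonal, Cov = −p_i p_j off it) and confirms that Var(X + Y + Z) = 0.

```python
from fractions import Fraction

def ball_cov_matrix(a, b, c):
    """Variance-covariance matrix of the indicators X, Y, Z for drawing
    a red (a), white (b), or blue (c) ball."""
    n = a + b + c
    p = [Fraction(a, n), Fraction(b, n), Fraction(c, n)]
    return [[p[i] * (1 - p[i]) if i == j else -p[i] * p[j]
             for j in range(3)] for i in range(3)]

V = ball_cov_matrix(2, 3, 5)
# X + Y + Z = 1 is a certain event, so the sum of all entries of [V],
# which equals Var(X + Y + Z) by Eq. (6.77), must vanish exactly.
total = sum(sum(row) for row in V)
```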
If we neglect the covariances between the random variables X, Y, and Z, the approximate value of the variance would be

Var~(X + Y + Z) = Var(X) + Var(Y) + Var(Z) = 2(ab + bc + ac)/(a + b + c)²

The relative percentage error with respect to the exact value of the variance is defined as

[Var~(X + Y + Z) − Var(X + Y + Z)] / Var(X + Y + Z) × 100%

with infinity as its upper limit. The conclusion is that the error induced by the assumption of uncorrelatedness may be very large.

6.7 APPROXIMATE EVALUATION OF MOMENTS OF FUNCTIONS

As can be seen from Eq. (6.89), determination of moments of linear functions of random variables is a straightforward task, whereas that of moments of nonlinear ones is often rather cumbersome, and approximation may be preferable. Consider, for example, the mathematical expectation of the function g of random variables X₁, X₂, ..., X_n given in Eq. (6.64). In order to evaluate the integral there, we resort to the Laplace approximation. f_{X}(x₁, x₂, ..., x_n) takes on significant values in an interval containing the point with coordinates E(X₁), E(X₂), ..., E(X_n); if g(x₁, x₂, ..., x_n) is a slowly varying function in this interval (see Fig. 6.12 for the case n = 1), it may be withdrawn outside the integral sign at x₁ = E(X₁), x₂ = E(X₂), ..., x_n = E(X_n) to yield

E[g(X₁, X₂, ..., X_n)] ≈ g[E(X₁), E(X₂), ..., E(X_n)] ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} f_{X}(x₁, ..., x_n) dx₁ ⋯ dx_n
                       = g[E(X₁), E(X₂), ..., E(X_n)]

This estimate may be improved by expanding g(x₁, x₂, ..., x_n) in series about the point E(X₁), E(X₂), ..., E(X_n):

g(x₁, ..., x_n) = g[E(X₁), ..., E(X_n)]
  + Σ_j [x_j − E(X_j)] ∂g/∂x_j |_{x=E(X)}
  + ½ Σ_j Σ_k [x_j − E(X_j)][x_k − E(X_k)] ∂²g/(∂x_j ∂x_k) |_{x=E(X)} + ⋯

Substituting this in Eq. (6.64) and noting that E[X_j − E(X_j)] = 0, we obtain

E[g(X₁, ..., X_n)] ≈ g[E(X₁), ..., E(X_n)] + ½ Σ_{j=1}^n Σ_{k=1}^n ∂²g/(∂x_j ∂x_k) |_{x=E(X)} Cov(X_j, X_k)   (6.91)

Fig. 6.12. On the Laplace approximation of the moments of a function of a random variable: f_X(x) has a peak at E(X), and g(x) is a "smooth" function.

For the variance of g(X₁, X₂, ..., X_n) we obtain in a similar manner

Var[g(X₁, ..., X_n)] ≈ Σ_{j=1}^n Σ_{k=1}^n (∂g/∂x_j)(∂g/∂x_k) |_{x=E(X)} Cov(X_j, X_k)   (6.92)

Formulas (6.91) and (6.92) are readily extended to n functions g_i of the n random variables X₁, X₂, ..., X_n.

Example 6.11

Application of the above formulas for the mathematical expectation and variance of the quotient of two random variables X₁ = X and X₂ = Y yields

E(X/Y) ≈ E(X)/E(Y) − Cov(X, Y)/E²(Y) + E(X) Var(Y)/E³(Y)

Var(X/Y) ≈ [E(X)/E(Y)]² [ Var(X)/E²(X) + Var(Y)/E²(Y) − 2 Cov(X, Y)/(E(X)E(Y)) ]   (6.93)

Note that if the standard deviations are of the same order of magnitude as the mathematical expectations, then f_{X}(x₁, x₂, ..., x_n) takes on significant values outside the interval containing the point E(X₁), E(X₂), ..., E(X_n), and approximations (6.91) and (6.92) will be too rough. If, for example, X and Y are two independent exponentially distributed random variables

f_X(x) = e^{−x} U(x),  f_Y(y) = e^{−y} U(y)

then, according to Eq. (6.57), with Z = X/Y,

f_Z(z) = ∫_{−∞}^{∞} |y| f_X(yz) f_Y(y) dy

Since X and Y take on positive values, f_Z(z) = 0 for z < 0, while for z ≥ 0

f_Z(z) = ∫_0^∞ y e^{−yz} e^{−y} dy = ∫_0^∞ y e^{−(z+1)y} dy = 1/(z + 1)²

The mean E(Z) is infinite,

E(Z) = ∫_0^∞ z f_Z(z) dz = ∫_0^∞ z dz/(z + 1)² → ∞

whereas the first formula of (6.93) yields 2.

6.8 JOINT CHARACTERISTIC FUNCTION

The joint characteristic function of the components X₁, X₂, ..., X_n of an n-dimensional random vector {X} is defined as the following mathematical expectation:

M_{X}(θ₁, θ₂, ..., θ_n) = E{exp[i(θ₁X₁ + θ₂X₂ + ⋯ + θ_nX_n)]}
  = ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} f_{X}(x₁, ..., x_n) exp[i(θ₁x₁ + θ₂x₂ + ⋯ + θ_nx_n)] dx₁ dx₂ ⋯ dx_n   (6.94)

and represents the generalization of the characteristic function M_X(θ) of a single random variable, defined in Eq. (3.40). In Eq. (6.94), θ₁, θ₂, ..., θ_n are the arguments of the joint characteristic function.
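The second-order formulas (6.93) are easy to exercise numerically. The sketch below uses hypothetical independent normal X and Y with small coefficients of variation, where the Laplace-type approximation should be accurate, and compares the formulas with a Monte Carlo estimate.

```python
import numpy as np

def quotient_moments(mx, my, vx, vy, cov=0.0):
    """Second-order approximations (6.93) for E(X/Y) and Var(X/Y)."""
    e = mx / my - cov / my**2 + mx * vy / my**3
    v = (mx / my) ** 2 * (vx / mx**2 + vy / my**2 - 2.0 * cov / (mx * my))
    return e, v

# Hypothetical parameters: coefficients of variation 0.05 for both variables.
mx, my, sx, sy = 10.0, 5.0, 0.5, 0.25
e_approx, v_approx = quotient_moments(mx, my, sx**2, sy**2)

rng = np.random.default_rng(1)
z = rng.normal(mx, sx, 500_000) / rng.normal(my, sy, 500_000)
```

For the exponential counterexample in the text the same formulas would return 2 for the mean, while the true mean is infinite, which is exactly the warning the text gives about large coefficients of variation.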
Since the joint probability density function f_{X} is nonnegative and integrable, it can be represented as a Fourier integral. The inverse Fourier transform of the characteristic function equals the joint probability density function:

f_{X}(x₁, ..., x_n) = 1/(2π)ⁿ ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} M_{X}(θ₁, ..., θ_n) exp[−i(θ₁x₁ + θ₂x₂ + ⋯ + θ_nx_n)] dθ₁ dθ₂ ⋯ dθ_n   (6.95)

The marginal characteristic functions are obtained from the joint characteristic function. For example,

M_{X₁}(θ₁) = E[e^{iθ₁X₁}] = M_{X}(θ₁, 0, ..., 0)
M_{X₁X₂}(θ₁, θ₂) = E[e^{i(θ₁X₁ + θ₂X₂)}] = M_{X}(θ₁, θ₂, 0, ..., 0)

etc. Using the joint characteristic function, we readily obtain the moments of any order of the random variables X₁, X₂, ..., X_n. For example,

E(X₁^{k₁} X₂^{k₂} ⋯ X_n^{k_n}) = i^{−k} ∂^k M_{X}(θ₁, ..., θ_n)/(∂θ₁^{k₁} ∂θ₂^{k₂} ⋯ ∂θ_n^{k_n}) |_{θ₁=θ₂=⋯=θ_n=0}   (6.96)

where k = k₁ + k₂ + ⋯ + k_n. Note that M_{X}(0, 0, ..., 0) = 1; indeed,

M_{X}(0, 0, ..., 0) = ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} f_{X}(x₁, ..., x_n) dx₁ ⋯ dx_n = 1

Another important property of the joint characteristic function is that for independent random variables X₁, X₂, ..., X_n it equals the product of the marginal characteristic functions:

M_{X}(θ₁, θ₂, ..., θ_n) = Π_{j=1}^n M_{X_j}(θ_j)   (6.97)

Since in that case

f_{X}(x₁, ..., x_n) = Π_{j=1}^n f_{X_j}(x_j)

we have

M_{X}(θ₁, ..., θ_n) = ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} Π_{j=1}^n f_{X_j}(x_j) exp[i(θ₁x₁ + ⋯ + θ_nx_n)] dx₁ ⋯ dx_n
                    = Π_{j=1}^n ∫_{−∞}^{∞} f_{X_j}(x_j) exp(iθ_j x_j) dx_j

and since

∫_{−∞}^{∞} f_{X_j}(x_j) exp(iθ_j x_j) dx_j = M_{X_j}(θ_j)

we obtain Eq. (6.97). Distinction is again possible between independent and uncorrelated random variables. Consider, for example, two random variables X₁ and X₂; then, from (6.96) we have

E(X₁^{k₁} X₂^{k₂}) = i^{−(k₁+k₂)} ∂^{k₁+k₂} M_{X₁X₂}(θ₁, θ₂)/(∂θ₁^{k₁} ∂θ₂^{k₂}) |_{θ₁=θ₂=0}

But since for independent random variables Eq. (6.97) is valid,

E(X₁^{k₁} X₂^{k₂}) = i^{−k₁} ∂^{k₁} M_{X₁}(θ₁)/∂θ₁^{k₁} |_{θ₁=0} · i^{−k₂} ∂^{k₂} M_{X₂}(θ₂)/∂θ₂^{k₂} |_{θ₂=0} = E(X₁^{k₁}) E(X₂^{k₂})   (6.98)

whereas uncorrelatedness implies only that

E(X₁X₂) = E(X₁)E(X₂)

If we substitute θ₁ = θ₂ = ⋯ = θ_n = θ in Eq. (6.94), we arrive at the characteristic function of the sum Y = X₁ + X₂ + ⋯ + X_n of random variables:

M_Y(θ) = E[exp(iθY)] = E{exp[iθ(X₁ + X₂ + ⋯ + X_n)]} = M_{X}(θ, θ, ..., θ)   (6.99)

If X₁, X₂, ..., X_n are independent, then combining Eqs. (6.97) and (6.99) we obtain

M_Y(θ) = Π_{j=1}^n M_{X_j}(θ)   (6.100)

where M_{X_j}(θ) is the marginal characteristic function of X_j. Thus, the characteristic function of the sum of independent random variables equals the product of the characteristic functions of the constituents. In the particular case where all variables have an identical characteristic function

M_{X_j}(θ) = M(θ),  j = 1, 2, ..., n

we have

M_Y(θ) = [M(θ)]ⁿ   (6.101)

Example 6.12

Consider n independent random variables, each with a normal distribution N(a_j, σ_j²), j = 1, 2, ..., n. The characteristic function of their sum is, in accordance with Eqs. (6.100) and (4.23),

M_Y(θ) = Π_{j=1}^n exp( i a_j θ − σ_j² θ²/2 ) = exp( iθ Σ_{j=1}^n a_j − (θ²/2) Σ_{j=1}^n σ_j² )

Denoting

a = Σ_{j=1}^n a_j,  σ² = Σ_{j=1}^n σ_j²   (6.102)

we then have

M_Y(θ) = exp( iaθ − σ²θ²/2 )   (6.103)

Comparing Eqs. (4.23) and (6.103), we notice that the sum of independent normal variables is also normally distributed, N(a, σ²), where a and σ² are defined by Eq. (6.102). The mathematical expectation and variance of this sum equal the respective sums of those of the constituents. These results are particular cases of Eqs. (6.76) and (6.78), respectively. What is new is that the sum of n normally distributed variables turns out to be normally distributed as well.

Example 6.13

Let X₁, X₂, ..., X_n be independent random variables, identically normally distributed, N(a, σ²). We wish to find f_Y(y) of

Y = Σ_{j=1}^n (X_j − a)²

We first determine the characteristic function M_Y(θ). Since X₁, X₂, ..., X_n are independent, (X₁ − a)², (X₂ − a)², ..., (X_n − a)² are also independent; moreover, since all X_j have identical distributions, so have all the (X_j − a)²'s. Denote

Z_j = (X_j − a)²

Since X_j is N(a, σ²), X_j − a is N(0, σ²).
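The conclusion of the preceding example, that the sum of independent N(a_j, σ_j²) variables is N(Σa_j, Σσ_j²), is easy to confirm by simulation; the parameters below are arbitrary, and near-zero sample skewness is used as a crude check of normality.

```python
import numpy as np

a = np.array([1.0, -0.5, 2.0])   # means a_j (hypothetical)
s = np.array([0.5, 1.0, 2.0])    # standard deviations sigma_j (hypothetical)

# Eq. (6.102): parameters of the sum predicted by the
# characteristic-function argument.
a_sum = a.sum()
var_sum = (s ** 2).sum()

rng = np.random.default_rng(2)
y = rng.normal(a, s, size=(400_000, 3)).sum(axis=1)
skew = ((y - y.mean()) ** 3).mean() / y.std() ** 3   # ~0 for a normal law
```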
Now, according to Eq. (4.62), we have z —a ex(- sis)UCe,) o(27z,) o The characteristic function of Z; (with the j’s dropped) is fe(2)) = wo gids 7 207 M,(8) = f ) ae = (1 — i207)" 0 oy2az exp - Now, since all Z;’s have identical characteristic functions, by Eq. (6.101) we have My(8) = [Mz(9)]" = (1 ~ i2078) and the probability density function of Y 1% =5-f[ My(0)e" a6 NO) = af Mr) reads f= sale) %(- a) That is, the random variable ¥/o? has a x? (chi-square) distribution (see Eq. 49). 6.9 PAIR OF JOINTLY NORMAL RANDOM VARIABLES A pair of random variables X and Y are said to be jointly normal, or to have a bivariate normal distribution if their joint probability density reads 1 2m0,0,(1 — r xexp{- x1 7 7) (s 2)’ - wie (*3*)]} (6.104) fay(% 9) = na 216 = TWO OR MORE RANDOM VARIABLES, This density depends on five parameters a, b, o,, 0, and r, the significance of which will be shown further on. The (cumulative) joint distribution function reads 1 2n0,0,(1 — r?)'? «LS 290(- ata ((EGt) -2rtsetst a +(254)]) didn (6.105) We find the marginal probability densities f,(x) and fy(y) as Fay, y) = (2) =f for 9), Sol) = J far 9) ax For fy(x) we have 1 hk) = sa * 2m0,0,(1 — r?)'? 2 x *>*) opera yre f _o(- all a, oo z yrob : ( 02 )]} ” Denoting eae 02 cat then fx)" — +, |* “ol (An? — 2Bn + C)] dn 2no,(1 — r2)'” where pieueil tie =a) —(x=ay ae ae ata geray oe(1 — r?) PAIR OF JOINTLY NORMAL RANDOM VARIABLES 217 Using the formula (A2) derived in Appendix A, we obtain 1 1 AC ~ B? file) = o,[2a(1 — r2)]'7 Te (- 2A ) or, finally, _(x=ay? fels) = feo OS That is, X has a normal distribution with a = E(X) and o, = /Var(X) = oy. Similarly, ¥ has a normal distribution with b = E(Y) and 0, = /Var(Y) = oy. Let us show now the significance of r. To do this, we calculate the covariance Cov X, Y) = [fie — a)(y — b)fyy(x, y) dx dy Applying a change of variables, x-a es oad = r&+ (1-7?) 
q (6.106) and the incremental area dx dy transforms into dx dy = |J\d& dy where J is the Jacobian of the transformation (6.106), a a(x,y) _| 8% an} _|o% 0 : _ ayn an) |} ax ay |“ [ry (1—72)2o,) 22) a am so that Cov( X,Y) = ae Sale +(1- )*a]exr|- (e+ 7)| dédy or 1 2 2 7 2 Cov( X,Y) = roe] he f° peor || f el/2)1 a| T *—c +(1-r?)! Po ef eeevne a Z| ‘TT — oo 218 = TWO OR MORE RANDOM VARIABLES. and Cov X, Y) = a,0,7 implying that r is the correlation coefficient of X and Y. If X and Y are uncorrelated, their joint probability density becomes x-a) 1 ( farl®9)= Fre on a 2 -o-) = fer) 2 that is, they are then also independent. The conditional probability densities fy(y|x) and fy(x|y) are obtained from Eqs. (6.26) and (6.27): SECEDE LEC ed * a Al 3” i ee ecb ene aoe a frie) aa al % a I} (6.107) Sx(aly) Note that the joint probability density of two normally distributed random variables reaches its maximum at (a, 5): 1 a,b) = ———___. frr(a,b) 2no,0,[1 — r?]!? The density (6.104) is coustant along the ellipse (x=a) _ 5 (e-a(y=6) , = 4" 2 108) 010, of Q(x, y) = with center (a, b). The probability of the point with random coordinates X, Y, having jointly normal distribution, falling within the ellipse of constant density is P(c) = Fram aI Lo la Tar 0% 9)| 20,0, [ -r (6.109) where A(c) is the region bounded by the ellipse (6.107) in the xy plane. The PAIR OF JOINTLY NORMAL RANDOM VARIABLES 94g, integral in Eq. (6.109) is calculated by transformation to polar coordinates, The result is P(c)=1- ex aol (6.110) We next determine the joint characteristic function of X and Y: Myy(809) = ff furl yexpCi( Ox + 6,9) de dy or Myy(9,, 6.) 
= exp{i(,a + 8,b)} xf LP fer vlexrliL O(a — a) + 6,(y ~ b))) de dy Making use again of the substitution (6.106), we have Myy(0y,4,) = expli(Qja + 8,b)) x ag Sf orl i[eik + 0,0,(r€ +(1 - r?)'7n)] ~A(@ + 9) dé dy _ exp{i(9,a + &b)} 7 2a x [J ow(—ale — 2i( 6,0, + 8,0,r)£) ae] x [/*oo(-a[e — 210,0,(1 — r2)'*n]} an] (6.111) Using formula (A2) in Appendix A, we evaluate these integrals as follows: f° exo(-al? — 2i(8,0, + 8,0,r)€]} df = vam exp[—4(8,0, + 60,r)"| f° exe afr _ 2i6,0,V1 — r?n]} dy = yim exp| - $0307(1 — r?)] Substituting the latter equations in Eq. (6.111), we obtain finally Myy(8,, 9) = exp[i(6,a + 6,5) — }(6?07 + 26,8,0,0,r + 6703)] (6.112) 220 = TWO OR MORE RANDOM VARIABLES Using the joint characteristic function Myy(6,,0,), we readily find the characteristic function of the sum or difference of two random variables X + Y. We first note that the characteristic function of the difference is obtained from Eq. (6.94) by setting 0, = 6, 0, = —0,6, = --- = 6, =0, that is, My_y(8) = E(exp[i8(X - Y)]} = Myy(4, -8) (6.113) Thus, comparison of Eqs. (6.99) and 6.113) yields Mysy(9) = Myy(8, £0) and My,y(0) = exp(i0(a + b) ~ 462(0? + 2r0,0, + 02)) implying that E(X+Y)=atb Of, y = 07 + 2ro,0, + of (6.114) These results are again particular cases of Eqs. (6.76) and (6.77), respectively. What is new is that the sum or difference of a pair of jointly normally distributed dependent variables turns out to be normally distributed as well. 6.10 SEVERAL JOINTLY NORMAL RANDOM VARIABLES Note first that the probability density function of a pair of jointly normal random variables, given in Eq. (6.104), can be rewritten as (—4(x — m)"[V]-{x - m)) 1 Fiayx,(%15 %2) = Fe (aetly exp! where X= X, Y= X,,x = x,,y =x, and x m c= (2) (m= (i) [vV]= E[(x — m}{x - m)"] i | o? me 2 70,0, 03 SEVERAL JOINTLY NORMAL RANDOM VARIABLES 224 where [V] is the variance-covariance matrix, and T indicates transpose. Accordingly, (det[V])'7 = 0,0,(1 — r?)'/7 [vyt= = 1 ; is | oo? (1 — r?) | —ra,o, a? 
eae 1 oa? 102 a -r r 1 “6, oP hence, (x= m)"[V] "Ge — m) L_[Ger= my? _ 5, (a) — mylxg~ ma), (xa — ma) l-r a? 00, o} which then yields Eq. (6.104). In the general case of an n-dimensional random vector {X), its components X,, X2,..., X,, are said to be jointly normal if their joint probability density is given by Sory (21s ¥29-+5 Xn) a exp(— {x — m)"[V]“ {x ~ m}) pee eee SEE HE ((2m)"det[V]) (6.115) where ()7= [a x a {m)"=[m, m, +++ m,] 222 TWO OR MORE RANDOM VARIABLES and On OD vee Pin [V] = B[(x— myx myt]=| 2" Be On On2 tH Cnn is the variance-covariance matrix. The density is characterized by n + (n? — n)/2 +n =(n? + 3n)/2 parameters: n mathematical expectations m,, m, +++ym,, and n variances v1), 022,..-, U,,, and (n? — n)/2 covariances Dx (j ,..., X, are jointly normal, is Fy BB) = e09(—A EB ouohu) (6120 With Eq. (6.120) available, we readily find the central moment of different orders: [CX = my BUG = mg) (XK, — mm) OM x) (8;, O2,.++5 8, ie 4 hal bf) = =f) (6.121) q 1) 98 * 0G, 0,8 .— = 0 where k =k, +k, +---+k,. 6.11 FUNCTIONS OF RANDOM VARIABLES The general case of m functions Y,,¥2,..., Y,, of m random variables X,, X,,..., X, was already considered in Sec. 6.9. In particular, the joint probability density fy)(¥\, Y25---+ Ym) of derived random variables is expressed by Eq. (6.45) in terms of the density fog % 1 Xas-++5) Xm) Of initial random variables. Consider now the important particular case where n = m and the functions Pp, P2,---, Ym in Eq. (6.36) define one-to-one mapping. This restric- tion ensures existence of transverse transformation, and we may therefore express {X} as a function of {Y} by the formula {X) = 9 '({¥)) = h((Y)) or, in component notation 1 = Yar In) = Ag (Ms Yay In) (6.122) y= My (Ws aver In) Let us use Eqs. (6.122) for transformation of the variables in Eq. 
(6.45): =A MM) TAL [= OE be Se] It follows then, from a well-known result on coordinate transformation, that dé, df, --- dé, =|J|dndy,--- dn, FUNCTIONS OF RANDOM VARIABLES 225 where J is the Jacobian of the transformation pq Bb S009 4) (hy, gy...) My) © OM Me) OCI Aas Me) ah hy ah ium Oy ay ahy ay =| 9m On, ann an, On, Om and we obtain, instead of Eq. (6.45), nef? fo x f° fool, Maseees Ma )oeees Ma(Ms Maes Mad] Soi Yas O(hy, ho hy a(n. x T18(y, -1) qe dn, dny--+ d’ Ta) 472 Un Integration of the above is preformed using the basic property (3.14) of Dirac’s delta function. The result reads: Serr Daves In) = fag lis Jase Indovees Maa Yasee Ind] vy) ets 6.123) IW We ee where (hy, fay.++s hn) [eee > = det} ——_.—_—_— 6.124) (Ys Yass In) ay, (6.124) Example 6.15 Consider the transformation {Y)} = [A]{x)} (6.125) 226 = TWO OR MORE RANDOM VARIABLES where the square matrix [A] is nonsingular; in these circumstances [A]~' exists and (X) = [4]“{Y) (6.126) It is readily shown that the Jacobian of the transformation, J, is the determi- nant of matrix [4]~', and we obtain fos Save In) = fog Bu + Bide $0 + Bu Suse Ba +By2 J + +++ + Bay Yn] |der[ B]| [8] =[4]"' (6.127) The mathematical expectation vector E({Y}), with elements E(Y)), {E(Y)}" = [E(%,) EK) +++ E(Y,)] (6.128) or the variance-covariance matrix of vector ¥: (W1 = [Cov(¥,, Ye) Jen (6.129) may be derived directly from (6.125); in fact, E((Y}) = E([A]{X)) = [A] E((X)) (6.130) since [A] is a matrix of constants. Also, [Ww] = E[((¥) - EY) (CY) - £()))7] = B[(LAKX} — [ATE(C)) (LAX) - LA E(X)))] = E([A((X) ~ E((X))) (A(X) - £((X))))") = L4[E(09 ~ £(09)) (2) - (09) JA? =[A4][V 4" (6.131) where [V’] is the variance-covariance matrix of (X), [V7] = [Cov X), Xe) ] cn (6.132) FUNCTIONS OF RANDOM VARIABLES 227 Now let {X) be a normal random vector with density (6.115) and let {Y) be determined by the linear transformation (6.122). Equation (6.123) yields Lory Save In) (LAV {y)—(m)) "TV! 
-—_!_,, (- "" ((n)"det[v])'7 : x([AI"'(y) = (m))) det 4]? det[ A]~' ((2m)"det[ ¥]) 7 xexp( — HC) - Lm)" LAP) x (LAO) -LAlm))) But (4) [F]'LAy = [wy and det[A]7! 1 (det{v])'? (det w])'” and recalling (6.130) we have 1 fon dao) = Tat xexp( — 3((y)— (u)) TW] (Cv) ~ (a) (6.133) that is, the linear transformation of the jointly normal random variables yields jointly normal random variables with (w= [Ahm [W] = [4][v][4]" (6.134) The mathematical expectation vector and the variance-covariance matrix of the derived random vector {Y} were already given by Eqs. (6.130) and (6.131), The property of normality is obtained by using Eq. (6.123). Example 6.16 Let the matrix [A] in Eq. (6.125) be HH cos@ sin@ [A] = a cos 6 (6.135) 228 = TWO OR MORE RANDOM VARIABLES Here transformation as per Eq. (6.125) is replaced with rotation of the axes (Fig. 6.13) through the angle 0. [A]~' is -1_|cos@ —sin@ {4] ale | and Eq. (6.127) becomes fons 2) = fon (1008 8 — y,sin 8, ysin 8 + y,cos 8) If X, and X, were jointly normal with zero means, 2 2 bualowsn)= emer ool ~ agg 25+ 3) then Pre Je) = —— oo - (Pi - 20.0 + »2)) 2n0,0,V1 — 1? 21 — r?) where _ 00879 sin26 | sin’ mans o? 910, o} Q- sin26 i 700828 sin20 20; 902 20} in? ii 2 Rasim a 4 pine + 208 @ oa? 910, of Note that for an angle satisfying 2r0,0, 2 of — oF tan26 = (6.136) we have Q = 0 and 1 1 + Yn) = ——— = oxp( - —— (Py? + RB Fru 2) = 5 ata =( may »)) implying that linear transformation with the [A] of Eq. (6.135) yields indepen- dent normal variables. FUNCTIONS OF RANDOM VARIABLES 229 ° y x Fig. 6.13. Linear transformation according to Eq. (6.135), with 0 as per (6.136), makes Y, and Yp independent. This result can be extended to multidimensional random normal variables. 
We saw in Example 6.15 that the linear transformation {Y} = [A]{ X) of the random normal vector X similarly yields the random normal vector (Y)} with mean my, = [A]E((X}) and variance-covariance matrix [W] = [A]VILAI, where E({ X}) and [V] are the mean and the variance-covariance matrix of the initial vector { X). However, due to the symmetry of [V], it is always possible to find a square matrix [ B] such that [Biv e=f A] [ay’=[ay"! (6.137) Here [ A J is a diagonal matrix with diagonal elements representing the eigenvalues of [V]: PA dn =ABe where 5, is the Kronecker delta 1 jak 54 = {0 im (6.138) Since [V] is nonnegative, the eigenvalues A,, A2,...,A,, are also nonnegative, and if it is nonsingular (that is, if det[V] + 0), then A,, A,..., A, are positive. A matrix [B] satisfying the conditions given in (6.137) is said to be orthogonal. Thus if the transformation [A] is chosen to be [B], then the 230 = TWO OR MORE RANDOM VARIABLES probability density function of {¥) is, by virtue of Eq. (6.115), 1 fry dares In) ~ (Gey "aetpw)7 xexp( — 4((y) — (my) LW] "((y) — (my) with (w)=[B][v La)" =C al Then [W]-'=[ A J]~', where [ A ]7! is also a diagonal matrix and the diagonal elements of [ A ]~' are the reciprocals of appropriate elements of TA: TA lie= 78x Furthermore, det[W] = det{ A ], and fy becomes 1 fon Iu Pavers We A gy xexp( — 4((y) ~ (my))"E A 1 ({y) — (my) which in scalar notation reads 1 fyi Yar In) = a ((2m)"AyAg oo Ay xo E2G,- 20) or 1 — E(Y,))° a fon 2 Yorn) = TL is “o( na = Tia) (6.139) The conclusion is that jointly normal dependent random variables can be linearly transformed into jointly normal independent variables. Note that the COMPLEX RANDOM VARIABLES 231 eigenvalues A, of the matrix [ A ] represents the variances of the derived random variables Y,. Now consider the remaining case where [V] is singular. 
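For the nonsingular case just described, the orthogonal matrix [B] of Eq. (6.137) is exactly what a symmetric eigensolver returns. A sketch with an arbitrary positive-definite [V]:

```python
import numpy as np

V = np.array([[3.0, 1.0, 0.5],
              [1.0, 2.0, 0.2],
              [0.5, 0.2, 1.0]])   # hypothetical nonsingular [V]

# eigh returns the eigenvalues lam and orthonormal eigenvectors
# (the columns of Q); taking [B] = Q^T gives [B][V][B]^T = diag(lam),
# i.e., the decorrelating transformation of Eq. (6.137).
lam, Q = np.linalg.eigh(V)
B = Q.T
W = B @ V @ B.T
```

The diagonal entries of W are the variances of the derived variables Y_j, as stated in the text.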
Its rank r is then less than n, and n — r eigenvalues denoted A,, Az,..-, A,, of the matrix[ A ] are zeros, implying that Y,, Y,,..., ¥,_, have zero variances and take on values equal to their mathematical expectations E(Y,), E(Y2),...,£(Y¥,_,) with probability unity. Thus Y,, ¥;,..., ¥,-, have a causal distribution (see Sec. 4.1) ar Pity yA D1 are Inne) = Tals, - E(¥,)) and instead of Eq. (6.139) we have nor n Soyo Yar Innes Yanreivees In) = T18(y, = EQ) TT i genet 2m xe ca oo (6.140) 6.12 COMPLEX RANDOM VARIABLES Let X and Y be random variables as defined in Sec. 3.1; that is, let real numbers X(w) and Y(w) correspond to each outcome w € &. We say that Z is a complex random variable if to each outcome w is assigned the complex number Z(w) = X(w) + i¥(w) so that the possible values of Z(w) are z=xt+iy The mathematical expectation of Z is readily found: B(Z) =f" f° (+ furl y) dey 7 iZ [Sel y) dx dy + if? f° ster J) dx dy = E(X)+iE(Y) (6.141) The variance is defined as Var(Z) = E[|Z - E(Z)P] = E([Z - E(Z)][Z - E(Z)]*) (6.142) where the asterisk denotes the complex conjugate. 232. TWO OR MORE RANDOM VARIABLES Bearing in mind that |Z — E(Z)P = [X— E(X)P + [Y- E(Y)P we have Var(Z) = E([X -— E(X)]} + [Y - E(Y))} (6.143) By virtue of Eq. (6.76), E([X — E(X))? 
+[¥ ~ E(¥)]} = E([X — E(X))?} + {LY - E(Y)T) so that Var(Z) = Var(X) + Var(Y) (6.144) The covariance of a pair of complex random variables Z=X,+iY, Z=X)+iY, (6.145) is defined as Cov(Z,, Z,) = E{[Z, — E(Z,)][Z, — E(Z2)]*} and equals Cov(Z,, Z,) = E{X, + iY, — [E(X,) + i£(¥,)]) x({X, — i¥, - [E(%) — i£(¥%,))} = E{[X, ~ £(X,)] + i[% - £(%))) X{[X, — E(%)]-i[% - E(%)]) = E([X, — £(%))[% - E(%))} +2E([¥, - £(Y)][% - £(%)]) +iE{[X, - E(X,)][Y, - £(%)} ~iB([X, ~ £(%)][% - £(4)]) = Cov( X,, X;) + Cov(¥,, Y2) +i[Cov( X,, Y¥,) - Cov( X,, ¥,)] (6.146) As in the case of a real pair, Z, and Z, are said to be uncorrelated if their COMPLEX RANDOM VARIABLES 233 covariance vanishes, that is, if Cov X,, X,) + Cow(Y,,¥%)=0 and Cov(X,,¥,) + Cow( X,, %)=0 (6.147) We calculate now the second-order moment: E(Z,24) = E{E(Z,) + [Z, - E(Z,)]}{ £(Z$) + [Zs - £(23))) = E(Z,)E(Z3) + Cov(Z,, Z2) (6.148) and for an uncorrelated pair, E(Z,Zh) = E(Z,)E(Z3) (6.149) A pair of complex random variables is orthogonal if E(Z,Z$) =0 (6.150) We say that Z, and Z, are independent if Fayza(%1 is ¥20 Ya) = Sey I) fv, (2 2) (6.151) n complex variables Z,=X,+ iY, j=1,2,...,m, are independent if the groups (X;, Y,),(X2, Y),---.( Xn ¥,) are independent. Their joint probability satisfies Fayzy 2 {X19 Wr3 X29 Va5005%ny In) = Thala, y%) (6.152) PROBLEMS 6.1, The random variables X and ¥ are said to have a uniform distribution in x? + y? < Rif —., x+y? 0. Find the expression for the conditional distribution function Fyy(x, yla 0 fz(z) = [roe )[nerne?] dx = yee, 2<0 and f,(0) = Au/(A + p). The mathematical expectation and the variance of Z are, respectively, 1 B(Z) =} - Var(Z) = 5+ : - Ele The reliability is then, in view of Eq. (7.7), R = A/(A + 1). 
Define the central safety factor

s = E(Y)/E(X)   (7.8)

which in our case is

s = λ/μ

In terms of this factor, the reliability can be stated as

R = s/(1 + s)   (7.9)

which implies one-to-one correspondence between the reliability R and the central safety factor s. Thus, if R = 0.99, the central safety factor equals 99. If λ = μ, then

f_Z(z) = (λ/2) e^{−λ|z|}

that is, Z has a Laplace density (see Example 3.9). The reliability is R = 0.5 and the central safety factor s = 1. Note that this one-to-one correspondence generally does not hold, as will be seen later on.

Example 7.2

Let the actual stress X have a chi-square distribution with m degrees of freedom:

f_X(x) = [e^{−x/2} x^{m/2−1}/(2^{m/2} Γ(m/2))] U(x)

and let the allowable stress Y have a similar distribution with n degrees of freedom:

f_Y(y) = [e^{−y/2} y^{n/2−1}/(2^{n/2} Γ(n/2))] U(y)

where X and Y are independent variables. Before determining the reliability of the structure, we find the distribution of

U = (X/m)/(Y/n)   (7.10)

that is, of the ratio of the independent chi-square random variables divided by their respective numbers of degrees of freedom. The joint density f_XY(x, y) is

f_XY(x, y) = [x^{m/2−1} y^{n/2−1}/(2^{(m+n)/2} Γ(m/2) Γ(n/2))] e^{−(x+y)/2} U(x) U(y)

In order to find f_U(u), we introduce the auxiliary variable W = Y. We then seek the joint density f_UW(u, w) and obtain the marginal density f_U(u) by integration. With x = (m/n)uw, the Jacobian is J = (m/n)w, so

f_UW(u, w) = f_XY((m/n)uw, w) (m/n)w
           = [(m/n)^{m/2} u^{m/2−1} w^{(m+n)/2−1}/(2^{(m+n)/2} Γ(m/2) Γ(n/2))] exp{−[1 + (m/n)u] w/2} U(u) U(w)

and

f_U(u) = ∫_0^∞ f_UW(u, w) dw
       = { Γ[(m+n)/2]/(Γ(m/2) Γ(n/2)) } (m/n)^{m/2} u^{m/2−1} [1 + (m/n)u]^{−(m+n)/2} U(u)   (7.11)

where U(u) is the unit step function. The random variable U is said to have an F distribution with degrees of freedom m and n. Note that an F-distributed random variable is often used in statistics; U is often referred to as the variance ratio. It is worth noting that if U has an F distribution with m and n degrees of freedom, then 1/U has a similar distribution with n and m degrees of freedom.
Reverting to the initial problem, we have, in accordance with Eq. (7.3), R=F() vas and by virtue of Eq. (7.10), R= #(4) (7.12) where F(x) is the (cumulative) distribution function of U. 242 RELIABILITY OF STRUCTURES DESCRIBED BY SEVERAL RANDOM VARIABLES, Example 7.3 Let X = Zand Y = 3,y,,, both have uniform distributions 1 TT XS XS, f(x) = {x2 — 7 UES? 0, otherwise ! YSIS, fra ary oh 0, otherwise 0, x x, and y, > y,. It is immediately seen from Eq. (7.5) that 1 Ye R= F, d) 7.14) mad Fe) (7.14) Thus, when the maximum possible value of the allowable stress Y is less than the minimum of the actual stress X (see Fig. 7.3), the reliability is zero as anticipated, indicating, under the frequency interpretation of probability, that almost every one of the large ensemble of statistically identical structures is due to fail. In the opposite case, when the minimum possible value y, of the allowable stress Y exceeds the maximum x, of the actual stress X (Fig. 7.4), the reliability is unity, since then Fy(y) in (7.14) identically equals unity in the interval y < y < 6 and integration likewise yields unity, indicating also that almost every structure in the above ensemble is due to survive. Note that the central factor of safety so dity x, +X) is always less than unity in the first case and always greater than unity in the FUNDAMENTAL CASE 243 Fy (xd ° x x tyly) yay oo” Ye Y Fig. 73. Both X and Y have a uniform distribution (4 x, (Fig. 7.5), we have 1 2 Y— xX, 1 Ye 2y. — X — Xz R=—— [*=>—w+ at “7.15 f f (7.15) y Ya Vida X27 Xa i ay 2-1) The factor of safety s exceeds unity, but the reliability may be rather low. For example, if in some relative units x, = 1, x. = 2, y, = 0.5, y, = 5.5, we have R = 0.8, whereas the central safety factor is 2. Note that, as is seen from (7.15), when x,7y,+0 and x,>y,-0 (7.16) we have in the limit R = 0.5, corresponding to a safety factor of unity, the same situation as in Example 7.1. All other cases are covered by Problem 7.1. 
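The closed form R = s/(1 + s) of Example 7.1 for exponentially distributed actual and allowable stresses is easily confirmed by simulation; the rate parameters below are arbitrary.

```python
import numpy as np

lam, mu = 2.0, 0.5          # hypothetical rates of X (actual) and Y (allowable)
s = lam / mu                # central safety factor E(Y)/E(X) = (1/mu)/(1/lam)
R_exact = s / (1.0 + s)     # Eq. (7.9); equals lam / (lam + mu)

rng = np.random.default_rng(5)
x = rng.exponential(1.0 / lam, 1_000_000)
y = rng.exponential(1.0 / mu, 1_000_000)
R_mc = np.mean(x < y)       # fraction of sample structures that survive
```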
The safety factor is also occasionally defined as

t = E(Y/X)   (7.17)

Let us find f_U(u), where U = Y/X. Consider as an example the case where both X and Y are uniformly distributed in the interval [0, 1]. Then

f_U(u) = ∫_{−∞}^{∞} |x| f_Y(ux) f_X(x) dx

Fig. 7.4. Both X and Y have a uniform distribution (x₂ < y₁); the reliability (equal to the shaded area) equals unity.

Since f_Y(ux) = 1 for 0 ≤ ux ≤ 1 and vanishes otherwise,

f_U(u) = ∫_0^1 x dx = 1/2,  0 < u ≤ 1
f_U(u) = ∫_0^{1/u} x dx = 1/(2u²),  u > 1

The reliability is (Eq. 7.3)

R = P(U > 1) = ∫_1^∞ du/(2u²) = 0.5

whereas

t = E(U) = ∫_0^1 (u/2) du + ∫_1^∞ u/(2u²) du → ∞

so that the new safety factor is infinity, whereas the reliability is only 0.5!

Example 7.4

Suppose that the actual and allowable stresses are independent random variables with log-normal probability densities

f_X(x) = [1/(σ₁ x √(2π))] exp[−(ln x − a)²/(2σ₁²)] U(x)
f_Y(y) = [1/(σ₂ y √(2π))] exp[−(ln y − b)²/(2σ₂²)] U(y)

where a, σ₁, b, σ₂ are the density parameters, so that

E(X) = exp(a + σ₁²/2),  Var(X) = exp(2a + σ₁²)[exp(σ₁²) − 1]
E(Y) = exp(b + σ₂²/2),  Var(Y) = exp(2b + σ₂²)[exp(σ₂²) − 1]

The reliability is then

R = P(V = X/Y < 1)

which may be rewritten as

R = P(ln V < 0) = F_{ln V}(0)   (7.18)

Note that ln V = ln X − ln Y, and since ln X and ln Y both have normal distributions, specifically ln X is N(a, σ₁²) and ln Y is N(b, σ₂²), ln V is also normal, as a difference of normal variables, N(a − b, σ₁² + σ₂²), implying that V is log-normal with the mean
[U(x -») - Un - »)] where A= [e"(25) ~ en = ")] B= [ex(2 = m2) i en 22572)" As in Example 7.3, with uniformly distributed actual and allowable stresses, we 248 RELIABILITY OF STRUCTURES DESCRIBED BY SEVERAL RANDOM VARIABLES Fy tx) y Oy Ya Fig. 7.6. Actual stress and allowable stress with truncated normal distribution. conclude immediately that R=0, mino = x, > max Oyyoy = Y2 R=1, maxo =x, < min ogy =) Consider the case analogous to that shown in Fig. 7.6, specifically, WSS We find then, from Eq. 7.5, R= fF) & or, bearing in mind expression (4.28) for the truncated (cumulative) distribu- FUNDAMENTAL CASE 249 tion function of a normal variable, AB_ pn =) (22™) (y=) R= f| — erf| - dq o,y20 L a lo bes 202 a Bp (y= my + exp| — ~—+*- | @ 7,22, 0,V27 i | 20} 7 (7.22) For x, = yy, the last term in Eq. (7.22) drops out. Assume now that the actual and allowable stresses are both distributed symmetrically, that is, that xy=m—ko, x, =m, + ko, Jy = m,— ky, yy = m, + kyo, where k, and k, are positive numbers. If both k, and k, are greater than 2, then according to Eq. (4.30) both A and B differ only slightly from unity and x, and x, may be replaced by — oo and + 00, respectively. The second integral term can be neglected, and Eq. (7.22) replaced by R= oat fii + en? oS ) oa ony dy (7.23) which coincides formally with the expression for the reliability, provided both X and Y can be assumed in advance to have a normal distribution. In that case, rather than evaluate the above equation, we note that Z = X — Y in Eq. (7.2) is likewise normal, N(m, — m,, 0? + 07), and the reliability, in view of Eq. (4.13), becomes ad) (724) R= F,(0) =4 + erf i (0? + 6 and with (0? + 03)” m,— Mm Sz as the coefficient of variation of Z, we finally have R=4+ a(t) (7.25) 250 RELIABILITY OF STRUCTURES DESCRIBED BY SEVERAL RANDOM VARIABLES, R 0.999 t ossso} 1 Ny = 0.05 ¥y=0.02 99999 0.99999 5 16 7 18 19 2.0 21s Fig. 7.7. Reliability versus central safety factor. 
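The normal-interference formulas are easy to evaluate numerically. A minimal sketch of ours (not the book's code; note that the book's $\operatorname{erf}$ is the Gaussian integral $\int_0^x \frac{1}{\sqrt{2\pi}}e^{-t^2/2}\,dt = \Phi(x) - \tfrac{1}{2}$, so Eq. (7.24) is simply the standard normal CDF of the safety margin, and Eq. (7.27) below restates it through the central safety factor):

```python
import math

def _phi(x):
    # Standard normal CDF; the book's erf(x) equals _phi(x) - 1/2.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reliability_normal(m_x, s_x, m_y, s_y):
    # Eq. (7.24): R = 1/2 + erf((m_y - m_x) / sqrt(s_x^2 + s_y^2))
    return _phi((m_y - m_x) / math.hypot(s_x, s_y))

def reliability_safety_factor(s_bar, gamma_x, gamma_y):
    # Eq. (7.27): the same result via s_bar = m_y/m_x and the
    # coefficients of variation gamma_x, gamma_y of Eq. (7.26).
    return _phi((s_bar - 1.0) / math.sqrt(gamma_x**2 + (gamma_y * s_bar)**2))

# The two parameterizations describe one and the same structure:
r1 = reliability_normal(100.0, 20.0, 180.0, 9.0)
r2 = reliability_safety_factor(1.8, 0.2, 0.05)
```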
In spite of the formal similarity of (7.20) and (7.24), there is a basic difference: in Eq. (7.24) $m_x$, $\sigma_x$ and $m_y$, $\sigma_y$ are the mathematical expectations and standard deviations of the actual and allowable stresses, respectively, whereas in Eq. (7.20) the parameters are associated with their logarithms. Equation (7.24) may be rewritten in terms of the central safety factor and the coefficients of variation

$$\bar{s} = \frac{E(Y)}{E(X)} = \frac{m_y}{m_x} \qquad \gamma_X = \frac{\sigma_x}{E(X)} = \frac{\sigma_x}{m_x} \qquad \gamma_Y = \frac{\sigma_y}{E(Y)} = \frac{\sigma_y}{m_y} \tag{7.26}$$

as follows:

$$R = \frac{1}{2} + \operatorname{erf}\left(\frac{\bar{s} - 1}{\sqrt{\gamma_X^2 + \gamma_Y^2\bar{s}^2}}\right) \tag{7.27}$$

Figure 7.7 shows the reliability versus the central factor of safety for $\gamma_X = 0.2$ and various $\gamma_Y$. Freudenthal, Ferry Borges and Castanheta, and others investigated the influence of changes in the coefficients of variation, the factor of safety, and the shape of the distribution functions $f_X(x)$ and $f_Y(y)$ on the reliability. Their results are usually plotted as $1 - R$ versus the safety factor for different coefficients of variation.

7.2 BENDING OF BEAMS UNDER SEVERAL RANDOM CONCENTRATED FORCES

As shown in Fig. 7.8, the beam is subjected to concentrated forces $X_1, X_2, \ldots, X_n$ with joint probability density $f_{(X)}(x_1, x_2, \ldots, x_n)$. We initially assume that the allowable stress and the beam dimensions are deterministic quantities. Since the bending moment takes on extremal values at the sections where the forces are applied, the reliability is

$$R = P\left\{\bigcap_{j=1}^{n}\left[|M_j| \le \sigma_{\text{allow}}S\right]\right\} \tag{7.28}$$

where $M_j$ denotes the bending moment at section $x_j$. These moments are

Fig. 7.8. Beam simply supported at both edges, under n concentrated forces applied at specified cross sections.

expressed in terms of the applied forces as

$$M_1 = a_{11}X_1 + a_{12}X_2 + \cdots + a_{1n}X_n$$
$$M_2 = a_{21}X_1 + a_{22}X_2 + \cdots + a_{2n}X_n$$
$$\cdots$$
$$M_n = a_{n1}X_1 + a_{n2}X_2 + \cdots + a_{nn}X_n \tag{7.29}$$

The joint probability density $f_{(M)}(m_1, m_2, \ldots, m_n)$ of the bending moments is readily found via Eq.
(6.127):

$$f_{(M)}(m_1, m_2, \ldots, m_n) = f_{(X)}(b_{11}m_1 + b_{12}m_2 + \cdots + b_{1n}m_n, \ldots, b_{n1}m_1 + b_{n2}m_2 + \cdots + b_{nn}m_n)\,|\det[B]| \tag{7.30}$$

where $[B]$ is the inverse of matrix $[A]$:

$$[A] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$

which was assumed to be nonsingular. When it is singular, its rank $p$ is less than $n$ and some (namely, $n - p$) of the moments are equal. Renumbering the unequal moments as $M_1, M_2, \ldots, M_p$, the reliability (7.28) becomes

$$R = P\left\{\bigcap_{j=1}^{p}\left[|M_j| \le \sigma_{\text{allow}}S\right]\right\} \tag{7.31}$$

and in the general case $p \le n$ can be written as

$$R = \int\!\cdots\!\int_A f_{(X)}(b_{11}m_1 + b_{12}m_2 + \cdots + b_{1p}m_p, \ldots, b_{p1}m_1 + b_{p2}m_2 + \cdots + b_{pp}m_p)\,|\det[B]_{p\times p}|\, dm_1\, dm_2 \cdots dm_p \tag{7.32}$$

where the integration region $A$ is defined as

$$|m_1| \le \sigma_{\text{allow}}S \quad |m_2| \le \sigma_{\text{allow}}S \quad \cdots \quad |m_p| \le \sigma_{\text{allow}}S \tag{7.33}$$

Example 7.6
Consider the beam, simply supported at both edges, under a pair of concentrated forces $X_1$ and $X_2$ applied at the sections $\xi_1 l$ and $\xi_2 l$, $\xi_1 < \xi_2$. Then,

$$[A] = l\begin{bmatrix} \xi_1(1 - \xi_1) & \xi_1(1 - \xi_2) \\ \xi_1(1 - \xi_2) & \xi_2(1 - \xi_2) \end{bmatrix}$$

with determinant

$$\det[A] = \xi_1(1 - \xi_2)(\xi_2 - \xi_1)l^2$$

so that $[A]$ is nonsingular if $\xi_1 \ne 0$, $\xi_2 \ne 1$, $\xi_2 \ne \xi_1$; that is, if none of the forces is applied at a support or both applied at the same section. Then

$$[B] = [A]^{-1} = \frac{1}{(\xi_2 - \xi_1)l}\begin{bmatrix} \dfrac{\xi_2}{\xi_1} & -1 \\ -1 & \dfrac{1 - \xi_1}{1 - \xi_2} \end{bmatrix}$$

we have

$$f_{M_1M_2}(m_1, m_2) = \frac{1}{\xi_1(1 - \xi_2)(\xi_2 - \xi_1)l^2}\, f_{X_1X_2}\!\left(\frac{1}{(\xi_2 - \xi_1)l}\left[\frac{\xi_2}{\xi_1}m_1 - m_2\right],\; \frac{1}{(\xi_2 - \xi_1)l}\left[-m_1 + \frac{1 - \xi_1}{1 - \xi_2}m_2\right]\right)$$

and the reliability becomes

$$R = \frac{1}{\xi_1(1 - \xi_2)(\xi_2 - \xi_1)l^2}\iint_{|m_1| \le \sigma_{\text{allow}}S,\; |m_2| \le \sigma_{\text{allow}}S} f_{X_1X_2}\!\left(\frac{1}{(\xi_2 - \xi_1)l}\left[\frac{\xi_2}{\xi_1}m_1 - m_2\right],\; \frac{1}{(\xi_2 - \xi_1)l}\left[-m_1 + \frac{1 - \xi_1}{1 - \xi_2}m_2\right]\right) dm_1\, dm_2 \tag{7.34}$$

In the singular case, where $\xi_1 = 0$ and $\xi_2 \ne 1$, we have only one section where the extremum bending moment $M$ acts, $x = a_2$:

$$M = l\xi_2(1 - \xi_2)X_2$$

and in view of Eq. (4.47)

$$f_M(m) = \frac{1}{l\xi_2(1 - \xi_2)}\, f_{X_2}\!\left(\frac{m}{l\xi_2(1 - \xi_2)}\right)$$

The reliability is

$$R = \int_A f_M(m)\, dm$$

where the region $A$ is determined by $|m| \le \sigma_{\text{allow}}S$; that is,

$$R = F_{X_2}\!\left(\frac{\sigma_{\text{allow}}S}{l\xi_2(1 - \xi_2)}\right) - F_{X_2}\!\left(-\frac{\sigma_{\text{allow}}S}{l\xi_2(1 - \xi_2)}\right) \tag{7.35}$$

The case $\xi_1 \ne 0$, $\xi_2 = 1$ is treated analogously, the result being

$$R = F_{X_1}\!\left(\frac{\sigma_{\text{allow}}S}{l\xi_1(1 - \xi_1)}\right) - F_{X_1}\!\left(-\frac{\sigma_{\text{allow}}S}{l\xi_1(1 - \xi_1)}\right) \tag{7.36}$$

The last singular case is $\xi_1 = \xi_2 = \xi$; the concentrated force $Y = X_1 + X_2$ is applied at section $\xi$, the distribution function of $Y$ is (see Example 6.4)

$$F_Y(y) = \int_{-\infty}^{\infty} dx_1 \int_{-\infty}^{y - x_1} f_{X_1X_2}(x_1, x_2)\, dx_2$$

and the reliability is

$$R = F_Y\!\left(\frac{\sigma_{\text{allow}}S}{l\xi(1 - \xi)}\right) - F_Y\!\left(-\frac{\sigma_{\text{allow}}S}{l\xi(1 - \xi)}\right) \tag{7.37}$$

For $X_1$ and $X_2$ being jointly normal with mathematical expectations $m_1$ and $m_2$, respectively, variances $\sigma_1^2$ and $\sigma_2^2$, respectively, and correlation coefficient $r$, $Y$ is also normal, $N(m_1 + m_2,\, \sigma_1^2 + 2r\sigma_1\sigma_2 + \sigma_2^2)$, with Eq. (6.79) taken into account. Eq. (7.37) becomes

$$R = \operatorname{erf}\!\left[\frac{\sigma_{\text{allow}}S\,l^{-1}\xi^{-1}(1 - \xi)^{-1} - (m_1 + m_2)}{(\sigma_1^2 + 2r\sigma_1\sigma_2 + \sigma_2^2)^{1/2}}\right] - \operatorname{erf}\!\left[-\frac{\sigma_{\text{allow}}S\,l^{-1}\xi^{-1}(1 - \xi)^{-1} + (m_1 + m_2)}{(\sigma_1^2 + 2r\sigma_1\sigma_2 + \sigma_2^2)^{1/2}}\right] \tag{7.38}$$

Example 7.7
A circular shaft of radius $c$ is simultaneously subjected to a bending moment $M$ and a torque $T$, considered as random variables with given joint probability density function $f_{MT}(m, t)$; the yield stress $\sigma_y$ in pure tension is a deterministic quantity. We seek the reliability of the system. We resort to the maximum shear stress theory of failure. The maximum shear stress is

$$\tau_{\max} = \left[\left(\frac{\sigma_{\text{bending}}}{2}\right)^2 + \tau_{\text{torsion}}^2\right]^{1/2}$$

or

$$\tau_{\max} = \left[\left(\frac{Mc}{2I}\right)^2 + \left(\frac{Tc}{J}\right)^2\right]^{1/2}$$

where, for the circular cross section, $I = \pi c^4/4$, $J = 2I$, and

$$\tau_{\max} = \frac{c}{2I}(M^2 + T^2)^{1/2} \tag{7.39}$$

The strength requirement $\tau_{\max} \le \frac{1}{2}\sigma_y$ becomes

$$(M^2 + T^2)^{1/2} \le \frac{\sigma_y I}{c}$$

and the reliability reads

$$R = \operatorname{Prob}\left[(M^2 + T^2)^{1/2} \le \frac{\sigma_y I}{c}\right]$$

or, with $Z = (M^2 + T^2)^{1/2}$,

$$R = F_Z\!\left(\frac{\sigma_y I}{c}\right) = F_Z\!\left(\tfrac{1}{4}\pi\sigma_y c^3\right) \tag{7.40}$$

We proceed to find the probability density function $f_Z(z)$. To do this, we first introduce the auxiliary variable

$$\Theta = \tan^{-1}\frac{T}{M} \tag{7.41}$$

the possible values of $\Theta$ lie in the interval $[0, 2\pi]$.
The inverse transformation is of the form

$$M = Z\cos\Theta \qquad T = Z\sin\Theta \tag{7.42}$$

Since

$$\frac{\partial(m, t)}{\partial(z, \theta)} = \begin{vmatrix} \cos\theta & -z\sin\theta \\ \sin\theta & z\cos\theta \end{vmatrix} = z$$

we obtain

$$f_{Z\Theta}(z, \theta) = z f_{MT}(z\cos\theta, z\sin\theta)\,U(z), \quad 0 \le \theta \le 2\pi$$

the marginal densities being

$$f_Z(z) = zU(z)\int_0^{2\pi} f_{MT}(z\cos\theta, z\sin\theta)\, d\theta$$
$$f_\Theta(\theta) = \int_0^{\infty} z f_{MT}(z\cos\theta, z\sin\theta)\, dz, \quad 0 \le \theta \le 2\pi \tag{7.43}$$

Suppose $M$ is $N(a, \sigma^2)$ and $T$ is $N(b, \sigma^2)$ and they are independent. Then

$$f_{MT}(z\cos\theta, z\sin\theta) = \frac{1}{2\pi\sigma^2}\exp\left[-\frac{(z\cos\theta - a)^2 + (z\sin\theta - b)^2}{2\sigma^2}\right] \tag{7.44}$$

From Eq. (7.43) we obtain

$$f_Z(z) = \frac{zU(z)}{2\pi\sigma^2}\int_0^{2\pi}\exp\left[-\frac{(z\cos\theta - a)^2 + (z\sin\theta - b)^2}{2\sigma^2}\right] d\theta$$

or

$$f_Z(z) = \frac{zU(z)}{2\pi\sigma^2}\exp\left(-\frac{z^2 + a^2 + b^2}{2\sigma^2}\right)\int_0^{2\pi}\exp\left[\frac{z\sqrt{a^2 + b^2}}{\sigma^2}\cos(\theta - \bar{\theta})\right] d\theta$$

where $\bar{\theta} = \tan^{-1}(b/a)$. Introducing the variables $\varphi = \theta - \bar{\theta}$, $\alpha = \sqrt{a^2 + b^2}$, we have

$$\int_0^{2\pi}\exp\left(\frac{z\alpha}{\sigma^2}\cos\varphi\right) d\varphi = 2\pi I_0\!\left(\frac{z\alpha}{\sigma^2}\right) \tag{7.45}$$

where $I_0(z)$ is a modified Bessel function of zero order, namely,

$$I_0(z) = \frac{1}{2\pi}\int_0^{2\pi} e^{z\cos\varphi}\, d\varphi = \sum_{n=0}^{\infty}\frac{z^{2n}}{2^{2n}(n!)^2} \tag{7.46}$$

Hence,

$$f_Z(z) = \frac{z}{\sigma^2}\exp\left(-\frac{z^2 + \alpha^2}{2\sigma^2}\right) I_0\!\left(\frac{z\alpha}{\sigma^2}\right) U(z) \tag{7.47}$$

The (cumulative) distribution function is then

$$F_Z(z) = \frac{1}{\sigma^2}\int_0^z \zeta\exp\left(-\frac{\zeta^2 + \alpha^2}{2\sigma^2}\right) I_0\!\left(\frac{\zeta\alpha}{\sigma^2}\right) d\zeta \tag{7.48}$$

Applying the expression

$$\int x^n I_{n-1}(ax)\, dx = \frac{x^n}{a}I_n(ax)$$

for positive $a$, we find

$$F_Z(z) = \exp\left(-\frac{z^2 + \alpha^2}{2\sigma^2}\right)\sum_{n=1}^{\infty}\left(\frac{z}{\alpha}\right)^n I_n\!\left(\frac{z\alpha}{\sigma^2}\right) U(z) \tag{7.49}$$

and combining Eqs. (7.40) and (7.49) we have the reliability

$$R = \exp\left(-\frac{\bar{z}^2 + \alpha^2}{2\sigma^2}\right)\sum_{n=1}^{\infty}\left(\frac{\bar{z}}{\alpha}\right)^n I_n\!\left(\frac{\bar{z}\alpha}{\sigma^2}\right), \qquad \bar{z} = \frac{\sigma_y I}{c} = \frac{\pi\sigma_y c^3}{4} \tag{7.50}$$

In the particular case where $a = b = 0$, we have

$$f_Z(z) = \frac{z}{\sigma^2}\exp(-z^2/2\sigma^2)\,U(z)$$

since $I_0(0) = 1$; that is, $Z$ has a Rayleigh distribution, and the reliability reads

$$R = 1 - \exp\left(-\frac{\bar{z}^2}{2\sigma^2}\right) \tag{7.51}$$

The general case necessitates numerical integration in Eq. (7.48) or numerical evaluation of Eq. (7.49). The random variable $Z$ with probability density as per (7.47) is said to have a generalized Rayleigh distribution function, for which tables are readily available (Bark et al.). For $\alpha \ll \sigma$, the series (7.46) may be reduced to two terms, and we have
$$f_Z(z) = \frac{z}{\sigma^2}\exp\left(-\frac{z^2 + \alpha^2}{2\sigma^2}\right)\left(1 + \frac{z^2\alpha^2}{4\sigma^4}\right) U(z)$$

In the opposite case, $\alpha \gg \sigma$, we may use the asymptotic representation for the modified Bessel function:

$$I_0(z) = \frac{e^z}{\sqrt{2\pi z}}\left(1 + \frac{1}{8z} + \frac{9}{128z^2} + \cdots\right)$$

and

$$f_Z(z) = A\,\frac{1}{\sigma\sqrt{2\pi}}\exp\left[-\frac{(z - \alpha)^2}{2\sigma^2}\right] U(z) \tag{7.52}$$

where $A = (1 + \sigma^2/8\alpha z)(z/\alpha)^{1/2}$ is very close to unity in the vicinity of $z \approx \alpha$, and $Z$ has a normal distribution $N(\alpha, \sigma^2)$.

Example 7.8
In this example we generalize Prob. 5.6 for the case where a single random force $Q$ is applied at a random distance $X$, $\Sigma_{\text{allow}}$ being a random variable as well. For simplicity we assume that all these quantities are independent and log-normal:

$$f_Q(q) = \frac{1}{\sigma_1 q\sqrt{2\pi}}\exp\left[-\frac{(\ln q - a_1)^2}{2\sigma_1^2}\right] U(q)$$
$$f_X(x) = \frac{1}{\sigma_2 x\sqrt{2\pi}}\exp\left[-\frac{(\ln x - a_2)^2}{2\sigma_2^2}\right] U(x)$$
$$f_{\Sigma_{\text{allow}}}(\sigma_{\text{allow}}) = \frac{1}{\sigma_3\sigma_{\text{allow}}\sqrt{2\pi}}\exp\left[-\frac{(\ln\sigma_{\text{allow}} - a_3)^2}{2\sigma_3^2}\right] U(\sigma_{\text{allow}})$$

The requirement is $Y = M/\Sigma_{\text{allow}} = QX/\Sigma_{\text{allow}} \le S$, where the section modulus $S$ is a deterministic quantity. In logarithmic form, the requirement is

$$\ln Y = \ln Q + \ln X - \ln\Sigma_{\text{allow}} \le \ln S$$

The resulting density is often referred to as the triangular density. It can be shown by induction that for $n$ concentrated moments, all of them distributed uniformly in the interval (0, 1), the probability density is

$$f_M(m) = \frac{1}{(n - 1)!}\sum_i (-1)^i\binom{n}{i}(m - i)^{n-1} \tag{7.63}$$

where the summation extends over $i \le m$.

PROBLEMS

7.1. Both $X = \Sigma$ and $Y = \Sigma_{\text{allow}}$ have uniform distributions as per formula (7.13). Find the reliability in the remaining cases of mutual disposition of the intervals $[x_1, x_2]$ and $[y_1, y_2]$: (a) $x_1 < y_1 < y_2 < x_2$; (b) $x_1 < y_1 < x_2 < y_2$; (c) $y_1 < x_1 < y_2 < x_2$.

7.2. Both $X = \Sigma$ and $Y = \Sigma_{\text{allow}}$ have truncated normal distributions, as in Example 7.5. Find the probability density $f_Z(z)$ of the difference $Z = X - Y$ for $z \le 0$.

7.3. Given that the actual stress $X$ and allowable stress $Y$ are jointly normally distributed random variables with mathematical expectations $m_1$ and $m_2$, standard deviations $\sigma_1$ and $\sigma_2$, and correlation coefficient $r$, verify the expression for the probability density $f_V(v)$ of the ratio $V = X/Y$, and that in the particular case $m_1 = m_2 = 0$ it reduces to

$$f_V(v) = \frac{\sigma_1\sigma_2\sqrt{1 - r^2}}{\pi(\sigma_2^2 v^2 - 2r\sigma_1\sigma_2 v + \sigma_1^2)}$$

With what density does the latter coincide for uncorrelated $X$ and $Y$?
Find the reliability of the structure.

7.4. Assume in Example 7.7

$$f_{MT}(m, t) = \frac{1}{2\pi\sigma_M\sigma_T}\exp\left[-\frac{1}{2}\left(\frac{m^2}{\sigma_M^2} + \frac{t^2}{\sigma_T^2}\right)\right]$$

that is, $M$ and $T$ are independent normal variables with zero mean and with variances $\sigma_M^2$ and $\sigma_T^2$, respectively.
(a) Verify that

$$f_Z(z) = \frac{z}{\sigma_M\sigma_T}\exp\left[-\frac{z^2(\sigma_M^2 + \sigma_T^2)}{4\sigma_M^2\sigma_T^2}\right] I_0\!\left(\frac{z^2(\sigma_M^2 - \sigma_T^2)}{4\sigma_M^2\sigma_T^2}\right) U(z)$$

and find the reliability of the shaft.
(b) Repeat for the case where $\sigma_M = \sigma_T$.

7.5. Extend Example 7.7 to the case where the von Mises stress theory of failure is used instead of the maximum shear stress theory.

7.6. The clamped-clamped beam is subjected to a load as shown in the accompanying figure. Both $Q$ and $\Sigma_{\text{allow}}$ have log-normal distributions. Find the reliability of the beam.

Problem 7.6

7.7. The cantilever is subjected to a pair of random independent moments with gamma distributions

$$f_{M_1}(m_1) = \frac{m_1^{\alpha}e^{-m_1/\lambda}}{\lambda^{\alpha+1}\Gamma(\alpha + 1)} U(m_1) \qquad f_{M_2}(m_2) = \frac{m_2^{\beta}e^{-m_2/\lambda}}{\lambda^{\beta+1}\Gamma(\beta + 1)} U(m_2)$$

Show that the maximum moment also has a gamma distribution, and calculate the reliability.

7.8. For the system of Problem 7.7, assume that the moments have identical Rayleigh distributions

$$f_{M_1}(x) = f_{M_2}(x) = \frac{x}{\alpha^2}\exp(-x^2/2\alpha^2)\, U(x)$$

Verify that the reliability of the cantilever is

$$R = 1 - \exp\left(-\frac{\bar{z}^2}{2\alpha^2}\right) - \sqrt{\pi}\,\frac{\bar{z}}{\alpha}\exp\left(-\frac{\bar{z}^2}{4\alpha^2}\right)\operatorname{erf}\!\left(\frac{\bar{z}}{\sqrt{2}\,\alpha}\right)$$

where $\bar{z} = \sigma_{\text{allow}}S$.

7.9. A cantilever is subjected to three concentrated moments having uniform distribution in the interval (0, 1). Show that

$$f_M(m) = \begin{cases} \frac{1}{2}m^2, & 0 \le m \le 1 \\ \frac{1}{2}(-2m^2 + 6m - 3), & 1 \le m \le 2 \\ \frac{1}{2}(3 - m)^2, & 2 \le m \le 3 \\ 0, & \text{otherwise} \end{cases}$$

so that the graph consists of segments of three different parabolas in the interval (0, 3). Find the reliability of the structure, and compare with the estimate by the central limit theorem.

7.10. Prove Eq. (7.61). The probability density (7.60) is referred to as a generalized Erlang density of the first order.
7.11. Show by mathematical induction that the sum of $n$ independent random variables, each of which has an exponential distribution with different parameters $\lambda_1, \lambda_2, \ldots, \lambda_n$, has a generalized Erlang distribution of $(n - 1)$th order, given by

$$f_M(m) = \left(\prod_{i=1}^{n}\lambda_i\right)\sum_{j=1}^{n}\frac{\exp(-\lambda_j m)}{\prod_{k \ne j}(\lambda_k - \lambda_j)}\, U(m)$$

where the product in the denominator is taken over $k = 1, 2, \ldots, n$, $k \ne j$. The distribution function of $M$ is

$$F_M(m) = \left(\prod_{i=1}^{n}\lambda_i\right)\sum_{j=1}^{n}\frac{1 - \exp(-\lambda_j m)}{\lambda_j\prod_{k \ne j}(\lambda_k - \lambda_j)}\, U(m)$$

7.12. Assume $M$ and $T$ in Example 7.7 to be uniformly distributed in the circle $m^2 + t^2 \le \rho^2$ (see also Prob. 6.1), that is,

$$f_{MT}(m, t) = \begin{cases} \dfrac{1}{\pi\rho^2}, & m^2 + t^2 \le \rho^2 \\ 0, & \text{otherwise} \end{cases}$$

$$F_X(x; t) = F_Y\!\left(\frac{x}{\sin\omega t}\right) \qquad f_X(x; t) = \frac{1}{\sin\omega t}\, f_Y\!\left(\frac{x}{\sin\omega t}\right)$$

274 ELEMENTS OF THE THEORY OF RANDOM FUNCTIONS

The first-order distribution and density functions represent the simplest characteristics of the random function $X(t)$. More complete characterization is obtainable by considering a pair of random variables $X(t_1)$ and $X(t_2)$. Their joint distribution function generally depends on $t_1$ and $t_2$. The second-order distribution function, denoted by $F_X(x_1, x_2; t_1, t_2)$, is defined by

$$F_X(x_1, x_2; t_1, t_2) = P\{X(t_1) \le x_1,\, X(t_2) \le x_2\} \tag{8.5}$$

The second-order probability density of the random function $X(t)$ is given by

$$f_X(x_1, x_2; t_1, t_2) = \frac{\partial^2 F_X(x_1, x_2; t_1, t_2)}{\partial x_1\,\partial x_2} \tag{8.6}$$

The first-order distribution functions are obtainable from the second-order one as follows:

$$F_X(x_1; t_1) = F_X(x_1, \infty; t_1, t_2) \qquad F_X(x_2; t_2) = F_X(\infty, x_2; t_1, t_2) \tag{8.7}$$

and the corresponding relations for the densities are

$$f_X(x_1; t_1) = \int_{-\infty}^{\infty} f_X(x_1, x_2; t_1, t_2)\, dx_2 \qquad f_X(x_2; t_2) = \int_{-\infty}^{\infty} f_X(x_1, x_2; t_1, t_2)\, dx_1 \tag{8.8}$$

Example 8.3
We seek the second-order distribution function for the random function $X(t)$ in Example 8.1. We have

$$F_X(x_1, x_2; t_1, t_2) = P\{X(t_1) \le x_1,\, X(t_2) \le x_2\} = P\{Y \le x_1,\, Y \le x_2\}$$

If $x_1 \le x_2$, then

$$F_X(x_1, x_2; t_1, t_2) = P\{Y \le x_1\} = F_Y(x_1)$$

If, however, $x_2 < x_1$, then

$$F_X(x_1, x_2; t_1, t_2) = P\{Y \le x_2\} = F_Y(x_2)$$

For the random function $X(t)$ described in Example 8.2, we have

$$F_X(x_1, x_2; t_1, t_2) = \begin{cases} F_Y\!\left(\dfrac{x_1}{\sin\omega t_1}\right), & \dfrac{x_1}{\sin\omega t_1} \le \dfrac{x_2}{\sin\omega t_2} \\[2mm] F_Y\!\left(\dfrac{x_2}{\sin\omega t_2}\right), & \dfrac{x_1}{\sin\omega t_1} > \dfrac{x_2}{\sin\omega t_2} \end{cases}$$

8.3 MOMENT FUNCTIONS

The mathematical expectation $\eta(t)$, or the mean, of a random function $X(t)$ is defined as the mathematical expectation of the random variable $X(t)$ for a fixed $t$:

$$\eta(t) = E[X(t)] = \int_{-\infty}^{\infty} x f_X(x; t)\, dx \tag{8.9}$$

and generally is a function of $t$. The joint moment of the random variables $X(t_1)$ and $X(t_2)$ is called the autocorrelation function $R_X(t_1, t_2)$:

$$R_X(t_1, t_2) = E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2 f_X(x_1, x_2; t_1, t_2)\, dx_1\, dx_2 \tag{8.10}$$

and is generally a function of $t_1$ and $t_2$. The covariance of the random variables $X(t_1)$ and $X(t_2)$ is called the autocovariance function $C_X(t_1, t_2)$:

$$C_X(t_1, t_2) = E\{[X(t_1) - \eta(t_1)][X(t_2) - \eta(t_2)]\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}[x_1 - \eta(t_1)][x_2 - \eta(t_2)]\, f_X(x_1, x_2; t_1, t_2)\, dx_1\, dx_2 \tag{8.11}$$

The variance of the random variable $X(t)$, $\sigma_X^2$, is called the variance of the random function $X(t)$:

$$\sigma_X^2(t) = C_X(t, t) = R_X(t, t) - \eta^2(t) \tag{8.12}$$

Example 8.4
For the random function $X(t)$ in Example 8.1, we have

$$\eta(t) = E(Y) \qquad R_X(t_1, t_2) = E(Y^2) \qquad C_X(t_1, t_2) = \operatorname{Var}(Y) \qquad \sigma_X^2(t) = \operatorname{Var}(Y)$$

For $X(t)$ in Example 8.2 we have

$$\eta(t) = E(Y)\sin\omega t \qquad R_X(t_1, t_2) = E(Y^2)\sin\omega t_1\sin\omega t_2$$
$$C_X(t_1, t_2) = \operatorname{Var}(Y)\sin\omega t_1\sin\omega t_2 \qquad \sigma_X^2(t) = \operatorname{Var}(Y)\sin^2\omega t$$

8.4 PROPERTIES OF THE AUTOCOVARIANCE FUNCTION

The autocovariance function has the following properties:

1. As follows from definition (8.11), it is a symmetric function of its arguments:

$$C_X(t_1, t_2) = C_X(t_2, t_1) \tag{8.13}$$

2. From property (6.72) of the covariance of a pair of random variables, it follows that

$$C_X(t_1, t_2) \le \{\operatorname{Var}[X(t_1)]\operatorname{Var}[X(t_2)]\}^{1/2} = [C_X(t_1, t_1)C_X(t_2, t_2)]^{1/2} \tag{8.14}$$

This inequality enables us to introduce the normalized autocovariance function

$$\rho_X(t_1, t_2) = \frac{C_X(t_1, t_2)}{[C_X(t_1, t_1)C_X(t_2, t_2)]^{1/2}} = \frac{C_X(t_1, t_2)}{\sigma_X(t_1)\sigma_X(t_2)} \tag{8.15}$$

which, in view of Eq.
(8.14), in turn satisfies the inequality

$$|\rho_X(t_1, t_2)| \le 1 \tag{8.16}$$

Note that for $t_1 = t_2$

$$\rho_X(t_1, t_1) = \rho_X(t_2, t_2) = 1$$

3. The autocovariance function is nonnegative definite. For any deterministic function $\varphi(t)$, the following inequality holds:

$$\int\int C_X(t_1, t_2)\varphi(t_1)\varphi(t_2)\, dt_1\, dt_2 \ge 0 \tag{8.17}$$

This property, analogous to the nonnegative definiteness of the variance-covariance matrix as per Eq. (6.90), will be proved later on.

8.5 PROBABILITY DENSITY FUNCTION

The probability distribution function of $n$th order is defined as the joint probability distribution function of the $n$ random variables $X(t_1), X(t_2), \ldots, X(t_n)$:

$$F_X(x_1, x_2, \ldots, x_n; t_1, t_2, \ldots, t_n) = P\{X(t_1) \le x_1,\, X(t_2) \le x_2,\, \ldots,\, X(t_n) \le x_n\} \tag{8.18}$$

which is a function of $2n$ variables. If the function $F_X(x_1, x_2, \ldots, x_n; t_1, t_2, \ldots, t_n)$ has a derivative

$$\frac{\partial^n F_X(x_1, x_2, \ldots, x_n; t_1, t_2, \ldots, t_n)}{\partial x_1\,\partial x_2\cdots\partial x_n} = f_X(x_1, x_2, \ldots, x_n; t_1, t_2, \ldots, t_n) \tag{8.19}$$

this derivative is called the probability density function of $n$th order of $X(t)$. The set of functions $f_X(x_1; t_1)$, $f_X(x_1, x_2; t_1, t_2), \ldots, f_X(x_1, x_2, \ldots, x_n; t_1, t_2, \ldots, t_n)$ represents the probabilistic properties of $X(t)$ in one, two, or $n$ coordinates (including that of time, if such is the case). The probability density of $n$th order yields all those of lower order, except for some "pathological" cases. The random function $X(t)$ is determined probabilistically when the distribution function of any order is known. Occasionally, all of the probabilistic information is contained in the first-order probability density. The simplest example is the function whose values $X_1, X_2, \ldots, X_n$ at the respective noncoincident coordinates $t_1, t_2, \ldots, t_n$ ($t_i \ne t_j$, $i, j = 1, 2, \ldots, n$) are independent random variables. This means that the $n$th-order probability density is a product of the first-order probability densities:

$$f_X(x_1, x_2, \ldots, x_n; t_1, t_2, \ldots, t_n) = \prod_{i=1}^{n} f_X(x_i; t_i) \tag{8.20}$$

Consider now the processes where the second-order probability density suffices for complete characterization of a random function $X(t)$.
The latter is called a Markov random function if the probabilistic behavior in the "future" depends solely on the "most recent past," that is, on the present state. This may be written in terms of conditional probability densities as follows:

$$f_X(x_n; t_n | x_{n-1}, \ldots, x_1; t_{n-1}, \ldots, t_1) = f_X(x_n; t_n | x_{n-1}; t_{n-1}), \quad t_1 < t_2 < \cdots < t_n$$

For a complex random function $Z(t) = X(t) + iY(t)$ with mean function $\eta_Z(t)$, the autocorrelation function is $R_Z(t_1, t_2) = E[Z(t_1)Z^*(t_2)]$, so that

$$R_Z(t, t) = E[|Z(t)|^2] \ge 0 \tag{8.38}$$

The autocovariance is defined as

$$C_Z(t_1, t_2) = R_Z(t_1, t_2) - \eta_Z(t_1)\eta_Z^*(t_2) \tag{8.39}$$

The autocovariance function of a complex random function has the following properties:

1. $$C_Z(t_1, t_2) = C_Z^*(t_2, t_1) \tag{8.40}$$

In the particular case of a real function, we have Eq. (8.13).

2. For a complex random function $Z(t)$, we have

$$\operatorname{Re} C_Z(t_1, t_2) \le [C_Z(t_1, t_1)C_Z(t_2, t_2)]^{1/2} \tag{8.41}$$

which is a generalization of property (8.14) of a real random function. Equation (8.41) immediately follows from the inequality

$$E\left\{\left|[C_Z(t_1, t_1)]^{-1/2}[Z(t_1) - \eta_Z(t_1)] - [C_Z(t_2, t_2)]^{-1/2}[Z(t_2) - \eta_Z(t_2)]\right|^2\right\} \ge 0$$

since the expression under the operator of mathematical expectation is always nonnegative.

3. The analogue of property (8.17) is

$$\int\int C_Z(t_1, t_2)\varphi^*(t_1)\varphi(t_2)\, dt_1\, dt_2 \ge 0 \tag{8.42}$$

where $\varphi(t)$ is any deterministic complex function. This property will be proved later on.
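The Hermitian property (8.40) can be illustrated numerically. A hedged sketch of ours (the process $Z(t) = U e^{i\omega t}$ and all names below are our assumptions, not from the text):

```python
import cmath
import random

rng = random.Random(42)
omega = 2.0
# Zero-mean real amplitude U with unit variance; Z(t) = U * exp(i*omega*t)
# is a complex random function with C_Z(t1, t2) = E[U^2] * exp(i*omega*(t1 - t2)).
u2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(50_000)) / 50_000

def C(t1, t2):
    # Sample autocovariance E[Z(t1) Z*(t2)] (the mean of Z is zero here).
    return u2 * cmath.exp(1j * omega * (t1 - t2))

c12 = C(0.3, 1.1)
c21 = C(1.1, 0.3)   # Eq. (8.40): C(t2, t1) = C*(t1, t2)
```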
The cross-correlation and cross-covariance functions of a pair of complex random functions $Z_1(t)$ and $Z_2(t)$ are defined, respectively, by

$$R_{Z_1Z_2}(t_1, t_2) = E[Z_1(t_1)Z_2^*(t_2)]$$
$$C_{Z_1Z_2}(t_1, t_2) = R_{Z_1Z_2}(t_1, t_2) - \eta_{Z_1}(t_1)\eta_{Z_2}^*(t_2) \tag{8.43}$$

Note that

$$C_{Z_1Z_2}(t_1, t_2) = C_{Z_2Z_1}^*(t_2, t_1) \tag{8.44}$$

The transpose of a complex vector random function $\{Z(t)\}$ is

$$\{Z(t)\}^T = [Z_1(t)\;\; Z_2(t)\;\; \cdots\;\; Z_n(t)]$$

and the cross-covariance matrix is

$$[C_Z(t_1, t_2)] = E\left[\left(\{Z(t_1)\} - \{\eta_Z(t_1)\}\right)\left(\{Z(t_2)\} - \{\eta_Z(t_2)\}\right)^{*T}\right] = [C_{Z_jZ_k}(t_1, t_2)]_{n\times n}$$

where $C_{Z_jZ_k}(t_1, t_2)$ are the cross-covariance functions,

$$C_{Z_jZ_k}(t_1, t_2) = R_{Z_jZ_k}(t_1, t_2) - \eta_{Z_j}(t_1)\eta_{Z_k}^*(t_2)$$

and $R_{Z_jZ_k}(t_1, t_2)$ forms the cross-correlation matrix $[R_Z(t_1, t_2)]_{n\times n}$.

Example 8.6
Given the complex random function

$$Z(t) = \varphi(t) + U\cos\omega t + i[\psi(t) + V\sin\omega t]$$

where $\varphi(t)$ and $\psi(t)$ are deterministic functions, $\omega$ is a real number, and $U$ and $V$ are random variables with

$$\operatorname{Var}(U) = \sigma_U^2 \qquad \operatorname{Var}(V) = \sigma_V^2 \qquad E(U) = E(V) = 0$$

The mean function is given by

$$E[Z(t)] = \varphi(t) + E(U)\cos\omega t + i[\psi(t) + E(V)\sin\omega t] = \varphi(t) + i\psi(t)$$

and the autocovariance by

$$C_Z(t_1, t_2) = E[(U\cos\omega t_1 + iV\sin\omega t_1)(U\cos\omega t_2 - iV\sin\omega t_2)]$$
$$= \sigma_U^2\cos\omega t_1\cos\omega t_2 + \sigma_V^2\sin\omega t_1\sin\omega t_2 + i\operatorname{Cov}(U, V)\sin\omega(t_1 - t_2) \tag{8.45}$$

Example 8.7
Given two complex random functions

$$Z_1(t) = X_1(t) + iY_1(t) \qquad Z_2(t) = X_2(t) + iY_2(t)$$

with mean functions $\eta_1(t)$, $\eta_2(t)$, autocovariance functions $C_{Z_1}(t_1, t_2)$, $C_{Z_2}(t_1, t_2)$, and cross-covariance functions $C_{Z_1Z_2}(t_1, t_2)$, $C_{Z_2Z_1}(t_1, t_2)$. We seek the mean function $\eta(t)$ and autocovariance function $C_Z(t_1, t_2)$ of the sum of $Z_1$ and $Z_2$:

$$Z(t) = Z_1(t) + Z_2(t)$$

The mean function $\eta(t)$ is found as

$$\eta(t) = \eta_1(t) + \eta_2(t)$$

For the autocovariance function we have

$$C_Z(t_1, t_2) = E\{[Z_1(t_1) - \eta_1(t_1) + Z_2(t_1) - \eta_2(t_1)][Z_1(t_2) - \eta_1(t_2) + Z_2(t_2) - \eta_2(t_2)]^*\}$$
$$= C_{Z_1}(t_1, t_2) + C_{Z_1Z_2}(t_1, t_2) + C_{Z_2Z_1}(t_1, t_2) + C_{Z_2}(t_1, t_2) \tag{8.46}$$
In components this equation will be written as

$$C_Z(t_1, t_2) = E\{([X_1(t_1) - EX_1(t_1) + X_2(t_1) - EX_2(t_1)] + i[Y_1(t_1) - EY_1(t_1) + Y_2(t_1) - EY_2(t_1)])$$
$$\times ([X_1(t_2) - EX_1(t_2) + X_2(t_2) - EX_2(t_2)] - i[Y_1(t_2) - EY_1(t_2) + Y_2(t_2) - EY_2(t_2)])\}$$
$$= C_{X_1}(t_1, t_2) + C_{X_1X_2}(t_1, t_2) + C_{X_2X_1}(t_1, t_2) + C_{X_2}(t_1, t_2) + C_{Y_1}(t_1, t_2) + C_{Y_1Y_2}(t_1, t_2) + C_{Y_2Y_1}(t_1, t_2) + C_{Y_2}(t_1, t_2)$$
$$+ i[C_{Y_1X_1}(t_1, t_2) + C_{Y_1X_2}(t_1, t_2) + C_{Y_2X_1}(t_1, t_2) + C_{Y_2X_2}(t_1, t_2) - C_{X_1Y_1}(t_1, t_2) - C_{X_1Y_2}(t_1, t_2) - C_{X_2Y_1}(t_1, t_2) - C_{X_2Y_2}(t_1, t_2)] \tag{8.47}$$

If the functions are real, only the first four ($X$-component) terms are retained in Eq. (8.47). A pair of random functions $Z_1(t)$ and $Z_2(t)$ are said to be uncorrelated if their cross-covariance is zero:

$$C_{Z_1Z_2}(t_1, t_2) = 0 \tag{8.48}$$

For an uncorrelated pair, Eq. (8.46) reads

$$C_Z(t_1, t_2) = C_{Z_1}(t_1, t_2) + C_{Z_2}(t_1, t_2) \tag{8.49}$$

and in the case of real random functions, Eq. (8.47) yields

$$C_X(t_1, t_2) = C_{X_1}(t_1, t_2) + C_{X_2}(t_1, t_2) \tag{8.50}$$

That is, the autocovariance function of the sum of a pair of uncorrelated random functions equals the sum of the respective autocovariances.

8.9 STATIONARY RANDOM FUNCTIONS

The random function $X(t)$ is said to be strictly stationary if the probability density functions $f_{(X)}(x_1, x_2, \ldots, x_n; t_1, t_2, \ldots, t_n)$ of the random variables $X(t_1), X(t_2), \ldots, X(t_n)$ and $f_{(X)}(x_1, x_2, \ldots, x_n; t_1 + \varepsilon, t_2 + \varepsilon, \ldots, t_n + \varepsilon)$ of
For the first-order probability density of the stationary function, we have f(x; t) = fas t+ e+) (8.51) and since this must be true for any e, it is true fore = —t and L265 t) = fie(s5 0) so that f(x; ¢) is independent of t: Lx 250) = fl) (8.52) As a result, the mean function (ft), a(t) = [els t) dx = fate) dx = (8.53) is time-invariant. The variance is also constant: Var[ X(t)] = 02 (t) = £ (x — 1)’ fy(x) dx = const = 02 (8.54) The second-order probability density of a stationary random function satisfies the equality Lac 15 05 tis ta) = fey, 25 + 8 fa + 8) Choosing e = ~t,, we have Lac (2615 25 ty ta) = fr 415 2250, 2 =) = fe 2 — fh) and denoting 7 = t) — t), Sas X95 thy 2) = S41 257) (8.55) so that fy(x,, x2, 7) is the second-order probability density of the random variables X(t,) and X(t, + 7). The autocorrelation and autocovariance functions of a stationary random function turn out to be dependent only on ft, — 4). In fact, in view of Eq. 286 © ELEMENTS OF THE THEORY OF RANDOM FUNCTIONS (8.39), Relea fof xed helas mits) dnd, a [PP them, 1) dx, dx, = Ry(7) Cy (ths ta) = Rx(tis to) — ax(ti) nk (ta) = Rat) ~ aynk = Cy(7) (8.56) The variance of X(t) equals the value of the autocovariance function at t, = t, = t. Therefore, Var[ X(1)] = Cx(t, t) = Cx(7)le-0 = Cx(0) (8.57) The autocovariance function of a stationary random function has the following properties: L Cyt) = CE(-7) (8.58) which follows immediately from Eq. (8.40). For a real function, this property becomes Cy(17) = Cy(-7) (8.59) That is, the autocorrelation function of a real random function is an even function. 2. Re Cy(r)| < Cy(0) (8.60) which follows from Eq. (8.41). For a real random function, this signifies ICx(1) < Cy(0) (8.61) that is, the variance is the upper bound of the autocovariance function. 3 [’f-cel~ 4)9"(u)o(t) deata > 0 (8.62) which follows from Eq. (8.42). 
A random function is said to be stationary in the wide sense (or weakly stationary) if its mean function is constant and the autocovariance depends only on $t_2 - t_1$:

$$\eta(t) = \text{const} \qquad C_X(t_1, t_2) = C_X(\tau) \tag{8.63}$$

The random function $Z(t)$ in Example 8.6 is stationary in the wide sense if

$$\varphi(t) = \text{const} \qquad \psi(t) = \text{const} \qquad E(U) = E(V) = 0 \qquad \sigma_U^2 = \sigma_V^2 \tag{8.64}$$

whence

$$E[Z(t)] = \text{const}$$
$$C_Z(t_1, t_2) = \sigma_U^2\cos\omega(t_2 - t_1) - i\operatorname{Cov}(U, V)\sin\omega(t_2 - t_1) = \sigma_U^2\cos\omega\tau - i\operatorname{Cov}(U, V)\sin\omega\tau \tag{8.65}$$

While a strictly stationary random function is always stationary in the wide sense as well, the reverse is not necessarily the case. A stationary normal random function is stationary in both senses, and its higher order probability densities are uniquely determined by the mean and autocovariance functions.

A pair of random functions $X(t)$ and $Y(t)$ are jointly stationary in the wide sense if each of them is stationary in the wide sense. In addition, their cross-correlation function depends only on $t_2 - t_1$:

$$R_{XY}(t_1, t_2) = R_{XY}(t_2 - t_1) = R_{XY}(\tau) \tag{8.66}$$

Example 8.10
The initial imperfections $Y_0(x)$ of an infinite beam are a stationary normal random function of the axial coordinate $x$ with zero mean and the autocovariance function

$$C_{Y_0}(\xi) = \alpha^2\exp(-\beta^2\xi^2)$$

where $\xi = x_2 - x_1$, $x_1$ and $x_2$ being the axial coordinates, and $\alpha$ and $\beta$ positive constants. We seek the probability of $|Y_0(x)| < a$. We have $\sigma_{Y_0}^2 = \alpha^2$, and

$$f_{Y_0}(y) = \frac{1}{\alpha\sqrt{2\pi}}\exp\left(-\frac{y^2}{2\alpha^2}\right)$$

The sought probability is

$$P = \int_{-a}^{a} f_{Y_0}(y)\, dy = \operatorname{erf}\left(\frac{a}{\alpha}\right) - \operatorname{erf}\left(-\frac{a}{\alpha}\right) = 2\operatorname{erf}\left(\frac{a}{\alpha}\right)$$
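Example 8.10 reduces to a one-dimensional normal probability. A quick check of ours (`prob_within` is our name, not the book's): with the book's $\operatorname{erf}$, $P = 2\operatorname{erf}(a/\alpha) = \Phi(a/\alpha) - \Phi(-a/\alpha)$:

```python
import math

def prob_within(a, alpha):
    # P(|Y0| < a) for a zero-mean normal Y0 with standard deviation alpha.
    # Equals 2*erf_book(a/alpha), which in terms of the mathematical error
    # function is math.erf(a / (alpha * sqrt(2))).
    return math.erf(a / (alpha * math.sqrt(2.0)))

p_one_sigma = prob_within(1.0, 1.0)     # the familiar one-sigma probability
p_three_sigma = prob_within(3.0, 1.0)   # the "three-sigma rule" probability
```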
8.10 SPECTRAL DENSITY OF A STATIONARY RANDOM FUNCTION

The spectral density $S_X(\omega)$ of a stationary random function $X(t)$ is defined as the Fourier transform* of its autocorrelation function $R_X(\tau)$:

$$S_X(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_X(\tau)e^{-i\omega\tau}\, d\tau \tag{8.67}$$

Resorting to Fourier inversion, we express $R_X(\tau)$ as

$$R_X(\tau) = \int_{-\infty}^{\infty} S_X(\omega)e^{i\omega\tau}\, d\omega \tag{8.68}$$

Equations (8.67) and (8.68), which state that the autocorrelation function and the spectral density are interrelated through Fourier transformations, are called the Wiener-Khintchine formulas. It is readily shown that the spectral density is a real function. In fact,

$$S_X(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_X(\tau)e^{-i\omega\tau}\, d\tau = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\operatorname{Re}R_X(\tau)\cos\omega\tau + \operatorname{Im}R_X(\tau)\sin\omega\tau\right] d\tau + \frac{i}{2\pi}\int_{-\infty}^{\infty}\left[\operatorname{Im}R_X(\tau)\cos\omega\tau - \operatorname{Re}R_X(\tau)\sin\omega\tau\right] d\tau$$

However, $R_X(\tau) = R_X^*(-\tau)$, and hence the integrand in the second integral is an odd function, so that the integral vanishes. Therefore,

$$S_X(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\operatorname{Re}R_X(\tau)\cos\omega\tau + \operatorname{Im}R_X(\tau)\sin\omega\tau\right] d\tau \tag{8.69}$$

This expression indicates that $S_X(\omega)$ is a real function of $\omega$. For a real random function $X(t)$, we have $\operatorname{Im}R_X(\tau) = 0$, $\operatorname{Re}R_X(\tau) = R_X(\tau)$, and

$$S_X(\omega) = \frac{1}{\pi}\int_0^{\infty} R_X(\tau)\cos\omega\tau\, d\tau \tag{8.70}$$

and since $R_X(\tau)$ is then an even function of $\tau$, $S_X(\omega)$ is even too:

$$S_X(-\omega) = S_X(\omega) \tag{8.71}$$

For a real $X(t)$, then, the Wiener-Khintchine formula (8.68) yields

$$R_X(\tau) = 2\int_0^{\infty} S_X(\omega)\cos\omega\tau\, d\omega \tag{8.72}$$

*It should be noted that the transform may be variously defined in different literature sources.

Another basic property of the spectral density of a stationary random function is that its integral equals the mean square of $X(t)$. Indeed, since by Eq. (8.56) $R_X(0) = E(|X|^2)$, we find, by substituting $\tau = 0$ in Eq. (8.68),

$$E(|X|^2) = \int_{-\infty}^{\infty} S_X(\omega)\, d\omega \tag{8.73}$$

so that $S_X(\omega)$ represents the mean square spectral density. In other words, the mean square of the random process $X(t)$ equals the integral of the spectral density over all frequencies.
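The Wiener-Khintchine pair can be verified numerically. A sketch of ours, using the exponential autocorrelation of Eq. (8.90) below, $R_X(\tau) = d^2 e^{-\alpha|\tau|}$ with $S_X(\omega) = (d^2/\pi)\,\alpha/(\alpha^2 + \omega^2)$; integrating the spectral density as in Eq. (8.68) recovers $R_X(\tau)$:

```python
import math

d, alpha = 2.0, 1.5

def S(w):
    # Spectral density paired with R(tau) = d^2 * exp(-alpha*|tau|).
    return (d ** 2 / math.pi) * alpha / (alpha ** 2 + w ** 2)

def R_from_S(tau, W=2000.0, n=200_000):
    # Fourier inversion (8.68) by the trapezoidal rule over [0, W];
    # the integrand is even in omega, so 2 * integral of S(w)*cos(w*tau).
    h = W / n
    total = 0.5 * (S(0.0) + S(W) * math.cos(W * tau))
    total += sum(S(k * h) * math.cos(k * h * tau) for k in range(1, n))
    return 2.0 * h * total

R0 = R_from_S(0.0)   # should approach R(0) = d^2 = 4 (Eq. 8.73)
R1 = R_from_S(1.0)   # should approach d^2 * exp(-alpha)
```

The small residual error comes from truncating the frequency axis at $W$.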
Analogously, the cross-spectral density of a pair of jointly stationary random functions is defined as the Fourier transform of their cross-correlation function (8.66):

$$S_{XY}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_{XY}(\tau)e^{-i\omega\tau}\, d\tau \tag{8.74}$$

Bearing in mind Eq. (8.44), we readily show that

$$S_{XY}(\omega) = S_{YX}^*(\omega) \tag{8.75}$$

That is, $S_{XY}(\omega)$ has a Hermitian property, a generalization of the symmetry property $S_{XY}(\omega) = S_{YX}(\omega)$ for real-valued random functions. The relevant representation of the cross-correlation function $R_{XY}(\tau)$ is again obtained through Fourier inversion:

$$R_{XY}(\tau) = \int_{-\infty}^{\infty} S_{XY}(\omega)e^{i\omega\tau}\, d\omega \tag{8.76}$$

Letting $\tau = 0$ in the latter, we have

$$R_{XY}(0) = E[X(t)Y^*(t)] = \int_{-\infty}^{\infty} S_{XY}(\omega)\, d\omega$$

The spectral densities $S_X(\omega)$ and $S_Y(\omega)$ and their cross-spectral counterparts $S_{XY}(\omega)$ and $S_{YX}(\omega)$ form the cross-spectral density matrix

$$[S(\omega)] = \begin{bmatrix} S_X(\omega) & S_{XY}(\omega) \\ S_{YX}(\omega) & S_Y(\omega) \end{bmatrix} \tag{8.77}$$

For a vector random function $\{X(t)\}$ we have

$$[S_X(\omega)] = \frac{1}{2\pi}\int_{-\infty}^{\infty}[R_X(\tau)]e^{-i\omega\tau}\, d\tau \tag{8.78}$$

where $[R_X(\tau)] = [R_{X_jX_k}(\tau)]_{n\times n}$ is the cross-correlation matrix, and $[S_X(\omega)] = [S_{X_jX_k}(\omega)]_{n\times n}$ the cross-spectral density matrix.

Fig. 8.2. (a) Spectral density and (b) autocorrelation function of ideal white noise. The area under the spectral density curve is infinite.

Example 8.11
A stationary random function with constant spectral density

$$S_X(\omega) = S_0 = \text{const} \tag{8.79}$$

is called ideal white noise* and is a useful concept because of its analytical simplicity. Its autocorrelation is given by

$$R_X(\tau) = \int_{-\infty}^{\infty} S_0 e^{i\omega\tau}\, d\omega = 2\pi S_0\delta(\tau)$$

where $\delta(\tau)$ is the Dirac delta function. Since $R_X(\tau) = 0$ for $\tau \ne 0$, the random variables $X(t)$ and $X(t + \tau)$ for any nonzero $\tau$ are uncorrelated. Ideal white noise is physically unrealizable, since its mean square is infinite,

$$E(|X|^2) = R_X(0) \to \infty \tag{8.80}$$

as is in fact the area under the spectral density curve (Fig. 8.2).

*The term was coined in analogy to "white light," which is characterized by an approximately uniform spectrum in the range of visible frequencies.
Example 8.12
A random function $X(t)$ with uniform spectral density in the interval $|\omega| \le \omega_c$,

$$S_X(\omega) = \begin{cases} S_0, & |\omega| \le \omega_c \\ 0, & \text{otherwise} \end{cases} \tag{8.81}$$

is called band-limited white noise. Equation (8.72) yields

$$R_X(\tau) = 2S_0\int_0^{\omega_c}\cos\omega\tau\, d\omega = 2S_0\frac{\sin\omega_c\tau}{\tau} \tag{8.82}$$

In contrast to its ideal counterpart, band-limited noise has a finite mean square:

$$E(|X|^2) = R_X(0) = 2S_0\omega_c \tag{8.83}$$

For a random process $X(t)$, $\omega_c$ is called the cutoff frequency. The spectral density and the autocorrelation functions of band-limited white noise are shown in Fig. 8.3.

Example 8.13
Consider a random function with the autocorrelation function

$$R_X(\tau) = d^2e^{-\alpha|\tau|}\cos\Omega\tau \tag{8.84}$$

where $\alpha$ is a positive constant. The spectral density is

$$S_X(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d^2e^{-\alpha|\tau|}\cos\Omega\tau\, e^{-i\omega\tau}\, d\tau$$
$$= \frac{d^2}{4\pi}\left[\int_0^{\infty}\exp\{(-\alpha + i\Omega - i\omega)\tau\}\, d\tau + \int_0^{\infty}\exp\{(-\alpha - i\Omega - i\omega)\tau\}\, d\tau + \int_{-\infty}^{0}\exp\{(\alpha + i\Omega - i\omega)\tau\}\, d\tau + \int_{-\infty}^{0}\exp\{(\alpha - i\Omega - i\omega)\tau\}\, d\tau\right]$$
$$= \frac{d^2\alpha}{2\pi}\left[\frac{1}{\alpha^2 + (\omega - \Omega)^2} + \frac{1}{\alpha^2 + (\omega + \Omega)^2}\right] = \frac{d^2\alpha}{\pi}\cdot\frac{\omega^2 + \alpha^2 + \Omega^2}{\omega^4 + 2\omega^2(\alpha^2 - \Omega^2) + (\alpha^2 + \Omega^2)^2} \tag{8.85}$$

Fig. 8.3. (a) Autocorrelation function and (b) spectral density of band-limited white noise. The area under the spectral density curve (the mean square) equals $2S_0\omega_c$.

For $\alpha = 0$ we have

$$R_X(\tau) = d^2\cos\Omega\tau \tag{8.86}$$

and

$$S_X(\omega) = \frac{d^2}{2\pi}\int_{-\infty}^{\infty}\cos\Omega\tau\, e^{-i\omega\tau}\, d\tau = \frac{d^2}{4\pi}\left[\int_{-\infty}^{\infty} e^{-i(\omega - \Omega)\tau}\, d\tau + \int_{-\infty}^{\infty} e^{-i(\omega + \Omega)\tau}\, d\tau\right] = \frac{d^2}{2}\left[\delta(\omega - \Omega) + \delta(\omega + \Omega)\right] \tag{8.87}$$

Fig. 8.4. (a) Autocorrelation function and (b) spectral density of the harmonic random function.

Note that the random function

$$X(t) = A\cos\Omega t + B\sin\Omega t \tag{8.88}$$

where $A$ and $B$ are random variables such that

$$E(A) = E(B) = 0 \qquad \operatorname{Var}(A) = \operatorname{Var}(B) = d^2 \qquad E(AB) = 0 \tag{8.89}$$

has the autocorrelation function (8.86). For each possible pair of values of $A$ and $B$, $X(t)$ varies harmonically; hence $X(t)$ as per Eq. (8.88) is called a harmonic random function. The corresponding $S_X(\omega)$ and $R_X(\tau)$ are shown in Fig. 8.4.
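A short sketch of ours evaluating the band-limited-noise autocorrelation (8.82) and its mean square (8.83); the parameter values are arbitrary choices:

```python
import math

S0, wc = 0.5, 4.0   # spectral level and cutoff frequency (arbitrary)

def R(tau):
    # Eq. (8.82): R(tau) = 2*S0*sin(wc*tau)/tau, with the limiting value
    # R(0) = 2*S0*wc, the mean square of Eq. (8.83).
    return 2.0 * S0 * wc if tau == 0.0 else 2.0 * S0 * math.sin(wc * tau) / tau

mean_square = R(0.0)          # 2*S0*wc = 4.0
first_zero = R(math.pi / wc)  # the autocorrelation first vanishes at tau = pi/wc
```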
There are two distinct ranges of parameters, according as S_X(ω) (Eq. 8.85) has a single maximum or two symmetric maxima.

Range 1. For α² < 3Ω², S_X(ω) has two symmetric maxima at ω = ±ω̃, where

ω̃ = (α² + Ω²)^{1/4} [2Ω − (α² + Ω²)^{1/2}]^{1/2} > 0

The value of these maxima is given by

S_X(±ω̃) = d²α / {4πΩ[(α² + Ω²)^{1/2} − Ω]}

Moreover, for α ≪ 1 and α ≪ Ω, the spectral density is sharply peaked at the maxima. For such values of α, the random function X(t) with the autocorrelation function (8.84) is close to being harmonic. We say that X(t) is a narrow-band random function, indicating that its spectral density S_X(ω) has significant values only within a narrow interval of frequencies close to Ω (see Fig. 8.5).

Range 2. For α² > 3Ω², S_X(ω) has a maximum at the origin ω = 0 and decreases monotonically with increasing |ω|. At the origin,

S_X(0) = d²α / [π(α² + Ω²)]

A random function X(t) with autocorrelation function (8.84) then becomes wide-band; that is, its spectral density covers a wide interval of frequencies (Fig. 8.6).

For the particular case Ω = 0 we have

R_X(τ) = d² e^{−α|τ|},  S_X(ω) = (d²/π) α/(α² + ω²)   (8.90)

Denoting

d² = παS_0   (8.91)

we have

S_X(ω) = α²S_0 / (α² + ω²)

For α → ∞, S_X(ω) → S_0 and, according to Example 8.11, R_X(τ) → 2πS_0 δ(τ). Realizations of wide-band and narrow-band random functions are shown in Fig. 8.7.

In different technical problems associated with stationary random functions, use is often made of a so-called correlation scale, defined as

τ_c = ∫_0^∞ |ρ(τ)| dτ

Fig. 8.5. (a) Autocorrelation function and (b) spectral density of the random function in Example 8.13, α² < 3Ω² (range 1).

Fig. 8.6. (a) Autocorrelation function and (b) spectral density of the random function in Example 8.13, α² > 3Ω² (range 2).
where ρ(τ) is the normalized autocorrelation function of the random function. For example, for ρ(τ) = e^{−α|τ|} the correlation scale is τ_c = 1/α. For α tending to zero, the random function reduces to a random variable, with an infinite correlation scale. For α tending to infinity, the random function becomes an ideal white noise, with a zero correlation scale.

Fig. 8.7. Realizations of (a) the wide-band stationary random function, (b) the narrow-band random function.

8.11 DIFFERENTIATION OF A RANDOM FUNCTION

We first introduce the concept of continuity of a random function. As we already know, the latter may be viewed as a set of functions. If each of these is continuous at t, we can say that the random function itself is also continuous. As a less restrictive definition of continuity, we say that a random function X(t) is continuous at t in the mean square if

lim_{τ→0} E(|X(t + τ) − X(t)|²) = 0   (8.92)

A random function that is continuous at each t in some interval is called continuous in that interval. Denoting by R_X(t₁, t₂) the autocorrelation function of X(t), we have

E(|X(t + τ) − X(t)|²) = E{[X(t + τ) − X(t)][X(t + τ) − X(t)]*}
 = R_X(t + τ, t + τ) + R_X(t, t) − 2R_X(t, t + τ)   (8.93)

so that a necessary and sufficient condition of continuity of X(t) is continuity of R_X(t₁, t₂) in t₁ and t₂ at t₁ = t₂ = t. In this case the left-hand side of Eq. (8.93) tends to zero as τ approaches zero. Consequently, a random function X(t) stationary in the wide sense is continuous in the mean square if its autocorrelation function R_X(τ) is continuous at τ = 0. This follows from the identity, valid for a stationary function,

E(|X(t + τ) − X(t)|²)
= 2[R_X(0) − R_X(τ)]   (8.94)

We say that a random function X(t) is differentiable in the mean square if we can find a random function X′(t) such that

lim_{τ→0} E( | [X(t + τ) − X(t)]/τ − X′(t) |² ) = 0   (8.95)

Seeking the mathematical expectation, or mean, of X′(t), we have

E[X(t + τ) − X(t)] = a_X(t + τ) − a_X(t)

so that

a_{X′}(t) = E[X′(t)] = lim_{τ→0} E{[X(t + τ) − X(t)]/τ} = lim_{τ→0} [a_X(t + τ) − a_X(t)]/τ = a′_X(t)

Consequently,

E[X′(t)] = a′_X(t)   (8.96)

that is, the mean of the derivative of a random function equals the derivative of the mean of the latter. If, in particular, X(t) is stationary in the wide sense, a_X(t) is constant, and

E[X′(t)] = 0   (8.97)

Next we seek the autocorrelation function of X′(t):

R_{X′}(t₁, t₂) = lim_{τ→0} E{ ([X(t₁ + τ) − X(t₁)]/τ) ([X(t₂ + τ) − X(t₂)]/τ)* }
 = lim_{τ→0} (1/τ²) E{[X(t₁ + τ) − X(t₁)][X(t₂ + τ) − X(t₂)]*}   (8.98)

where

E{[X(t₁ + τ) − X(t₁)][X(t₂ + τ) − X(t₂)]*}
 = R_X(t₁ + τ, t₂ + τ) − R_X(t₁, t₂ + τ) − R_X(t₁ + τ, t₂) + R_X(t₁, t₂)

Expanding the first three terms in a Taylor series,

R_X(t₁ + τ, t₂ + τ) = R_X(t₁, t₂) + τ ∂R_X/∂t₁ + τ ∂R_X/∂t₂ + (τ²/2) ∂²R_X/∂t₁² + τ² ∂²R_X/∂t₁∂t₂ + (τ²/2) ∂²R_X/∂t₂² + ⋯
R_X(t₁, t₂ + τ) = R_X(t₁, t₂) + τ ∂R_X/∂t₂ + (τ²/2) ∂²R_X/∂t₂² + ⋯
R_X(t₁ + τ, t₂) = R_X(t₁, t₂) + τ ∂R_X/∂t₁ + (τ²/2) ∂²R_X/∂t₁² + ⋯

Substituting in Eq. (8.98), we find

R_{X′}(t₁, t₂) = ∂²R_X(t₁, t₂)/∂t₁∂t₂   (8.99)

The corresponding result for a stationary random function is

R_{X′}(τ) = −d²R_X(τ)/dτ²   (8.100)

so that if the mixed derivative of the autocorrelation function at t₁ = t₂ = t exists, the random function X(t) is differentiable in the mean square.

The mean square value of X′(t) equals

R_{X′}(0) = −R″_X(0)   (8.101)

or, using the Wiener-Khintchine relationship (8.68),

R_{X′}(0) = ∫_{−∞}^{∞} ω² S_X(ω) dω   (8.102)

Equations (8.97) and (8.100) indicate that the derivative of a random function stationary in the wide sense is itself such a function; hence the Wiener-Khintchine relationships hold for X′(t):

R_{X′}(τ) = ∫_{−∞}^{∞} S_{X′}(ω) e^{iωτ} dω,  S_{X′}(ω) = (1/2π) ∫_{−∞}^{∞} R_{X′}(τ) e^{−iωτ} dτ   (8.103)

Comparing the first of these equations with Eq.
(8.102), we see that

S_{X′}(ω) = ω² S_X(ω)   (8.104)

that is, differentiation of a random function multiplies its spectral density by ω².

Since continuity of a stationary random function implies continuity of its autocorrelation function at τ = 0, the mean square of the random function, which equals the value of the autocorrelation function there, has to be finite. This leads, due to Eq. (8.102), to a condition in terms of S_X(ω):

∫_{−∞}^{∞} ω² S_X(ω) dω < ∞   (8.105)

so that S_X(ω) has to decay faster than ω⁻³.

Example 8.14
A random function stationary in the wide sense, represented by ideal white noise, is not continuous in the mean square, since its autocorrelation function, given in Example 8.11, is discontinuous at τ = 0. A random function with its R_X(τ) as per Eq. (8.84) is continuous in the mean square but nondifferentiable, since, because of the term e^{−α|τ|}, the first derivative of R_X(τ) is discontinuous at τ = 0, and a second derivative does not exist. Band-limited white noise (Eq. 8.82) is, however, continuous and differentiable in the mean square, and so is the random function with the autocorrelation function in Example 8.10.

Consider now a random function stationary in the wide sense, with the autocorrelation function

R_X(τ) = d² e^{−α|τ|}(cos βτ + γ sin β|τ|)   (8.106)

where γ ≤ α/β. [Note that noncompliance with this restriction violates a basic property of the autocorrelation function as per Eq. (8.61).] The spectral density of X(t) is given by

S_X(ω) = (d²/π) [(α + γβ)(α² + β²) + (α − γβ)ω²] / [ω⁴ + 2(α² − β²)ω² + (α² + β²)²]   (8.107)

and is seen to become negative for γ > α/β; its calculation is left to the reader as an exercise. For γ = α/β, R′_X(τ) is given by

R′_X(τ) = −d² [(α² + β²)/β] e^{−α|τ|} sin βτ

The latter is continuous and differentiable at τ = 0, so that R″_X(0) exists. We conclude that X(t) with the autocorrelation function (8.106) is continuous and differentiable in the mean square. We next seek the cross-correlation function R_{XX′} of X(t) and X′(t).
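The closed form (8.107) can be probed numerically. The sketch below, assuming NumPy and arbitrary parameter values, compares the transform (8.67) of R_X with (8.107), and confirms that the closed form dips below zero once γ exceeds α/β:

```python
import numpy as np

# Sketch check of Eq. (8.107) for R_X(tau) = d^2 e^{-a|tau|}(cos(b*tau)
# + g*sin(b*|tau|)): the trapezoidal transform of R_X matches the closed
# form, and the closed form goes negative for g > a/b.
# The values of d, a, b are arbitrary choices.
d, a, b = 1.0, 1.0, 2.0

def s_closed(w, g):
    num = (a + g * b) * (a * a + b * b) + (a - g * b) * w * w
    den = w ** 4 + 2.0 * (a * a - b * b) * w * w + (a * a + b * b) ** 2
    return d * d * num / (np.pi * den)

tau = np.linspace(-40.0, 40.0, 160001)
dt = tau[1] - tau[0]

def s_numeric(w, g):
    r = d * d * np.exp(-a * np.abs(tau)) * (np.cos(b * tau)
        + g * np.sin(b * np.abs(tau)))
    y = r * np.cos(w * tau)          # R_X is even, so the sine part drops
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * dt / (2.0 * np.pi)

g = a / b                            # boundary value gamma = alpha/beta
for w in (0.0, 1.0, 2.0, 4.0):
    assert abs(s_numeric(w, g) - s_closed(w, g)) < 1e-3
assert s_closed(10.0, 1.0) < 0.0     # g = 1 > a/b: not a valid spectrum
```

The last assertion illustrates why γ > α/β is inadmissible: a genuine spectral density must be nonnegative everywhere.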
We evaluate first the corresponding function of X(t₁) and the difference X(t₂ + τ) − X(t₂):

E{X(t₁)[X(t₂ + τ) − X(t₂)]*} = R_X(t₁, t₂ + τ) − R_X(t₁, t₂)   (8.108)

Expanding R_X(t₁, t₂ + τ) in a Taylor series,

R_X(t₁, t₂ + τ) = R_X(t₁, t₂) + τ ∂R_X(t₁, t₂)/∂t₂ + ⋯

and substituting this in Eq. (8.108), we arrive at

R_{XX′}(t₁, t₂) = lim_{τ→0} (1/τ) E{X(t₁)[X(t₂ + τ) − X(t₂)]*} = ∂R_X(t₁, t₂)/∂t₂   (8.109)

whereas for a weakly stationary random function we obtain

R_{XX′}(t₁, t₂) = R_{XX′}(τ) = R′_X(τ)   (8.110)

It is also readily shown that

R_{X′X}(τ) = −R′_X(τ)   (8.111)

It follows from Eq. (8.110) that if X(t) is stationary in the wide sense, then X(t) and X′(t) are jointly stationary. Hence their cross-correlation function and cross-spectral density are interrelated according to Wiener-Khintchine:

R_{XX′}(τ) = ∫_{−∞}^{∞} S_{XX′}(ω) e^{iωτ} dω,  S_{XX′}(ω) = (1/2π) ∫_{−∞}^{∞} R_{XX′}(τ) e^{−iωτ} dτ   (8.112)

Substituting Eq. (8.68) into (8.110), we have

R_{XX′}(τ) = ∫_{−∞}^{∞} iω S_X(ω) e^{iωτ} dω   (8.113)

and, with the first of Eqs. (8.112), we finally obtain

S_{XX′}(ω) = iω S_X(ω)   (8.114)

so that for a real random function X(t) the cross-spectral density S_{XX′}(ω) is a purely imaginary function, and the cross-correlation function is odd:

R_{XX′}(τ) = −R_{XX′}(−τ)   (8.115)

so that

R_{XX′}(0) = 0   (8.116)

which also follows from Eq. (8.113). This indicates that the cross-correlation of a random function, stationary in the wide sense, and its derivative vanishes at coincident arguments; that is, X(t) and X′(t) are uncorrelated when calculated for the same time instant.

When the random function X′(t) is differentiable in the mean square, X″(t) is called the second derivative in the mean square of X(t) at t. The higher-order derivatives are defined in an analogous manner. By repeated application of the above reasoning, we see that the nth derivative

X^{(n)}(t) = dⁿX(t)/dtⁿ

exists if the mixed 2nth-order derivative

R_{X^{(n)}}(t₁, t₂) = ∂^{2n}R_X(t₁, t₂)/∂t₁ⁿ ∂t₂ⁿ   (8.117)

exists and is continuous.
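Relations (8.110), (8.113), (8.115), and (8.116) can be illustrated numerically for the differentiable example above. The sketch below, assuming NumPy and arbitrary parameter values, recovers R_XX′(τ) = R′_X(τ) from iωS_X(ω) by Fourier inversion:

```python
import numpy as np

# Sketch check of Eqs. (8.110) and (8.113): for R_X(tau) =
# d^2 e^{-a|tau|}(cos(b tau) + (a/b) sin(b|tau|)), the cross-correlation
# of X and X' is R_XX'(tau) = R'_X(tau), and it is also recoverable from
# i*w*S_X(w) by Fourier inversion.  Parameter values are arbitrary.
d, a, b = 1.0, 1.0, 2.0

def r_xxp(tau):
    # R'_X(tau) = -d^2 ((a^2+b^2)/b) e^{-a|tau|} sin(b tau)
    return -d * d * (a * a + b * b) / b * np.exp(-a * abs(tau)) * np.sin(b * tau)

w = np.linspace(0.0, 50.0, 25001)
dw = w[1] - w[0]
# closed-form S_X(w) for gamma = a/b (the w^2 term in (8.107) drops out)
s_x = d * d * 2.0 * a * (a * a + b * b) / (np.pi *
        (w ** 4 + 2.0 * (a * a - b * b) * w ** 2 + (a * a + b * b) ** 2))

def r_from_spectrum(tau):
    # real part of integral i w S_X(w) e^{i w tau} dw over the whole axis
    y = -2.0 * w * s_x * np.sin(w * tau)
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * dw

assert r_xxp(0.0) == 0.0                           # Eq. (8.116)
for tau in (0.5, 1.0, 2.0):
    assert abs(r_xxp(tau) + r_xxp(-tau)) < 1e-12   # odd, Eq. (8.115)
    assert abs(r_from_spectrum(tau) - r_xxp(tau)) < 1e-2
```

Since ωS_X(ω) is odd, the cosine part of the inversion integrates to zero, which is why only the sine term appears in the code.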
R_{X^{(n)}}(t₁, t₂) is then the autocorrelation function of X^{(n)}(t). For a random function stationary in the wide sense, Eq. (8.117) is replaced by

R_{X^{(n)}}(τ) = (−1)ⁿ R_X^{(2n)}(τ)   (8.118)

so that X^{(n)}(t) is also stationary in the wide sense, with the spectral density

S_{X^{(n)}}(ω) = ω^{2n} S_X(ω)   (8.119)

Equation (8.118) indicates that X^{(n)}(t) exists if the 2nth-order derivative of R_X(τ) is continuous at τ = 0; according to (8.119) and the Wiener-Khintchine relationship,

R_{X^{(n)}}(τ) = ∫_{−∞}^{∞} S_{X^{(n)}}(ω) e^{iωτ} dω = ∫_{−∞}^{∞} ω^{2n} S_X(ω) e^{iωτ} dω   (8.120)

this requires that S_X(ω) decay faster than ω^{−(2n+1)} as ω → ∞.

Example 8.15
Given a normal random function X(t), its first-order probability density function is (see Eq. 8.25):

f_X(x, t) = [2πC_X(t, t)]^{−1/2} exp{ −[x − a_X(t)]² / 2C_X(t, t) }

and we seek the probabilistic properties of X′(t). The first-order probability density function of X′(t) is

f_{X′}(x′, t) = [2πC_{X′}(t, t)]^{−1/2} exp{ −[x′ − a_{X′}(t)]² / 2C_{X′}(t, t) }   (8.121)

where, in view of Eqs. (8.96) and (8.99),

a_{X′}(t) = a′_X(t),  C_{X′}(t, t) = ∂²R_X(t₁, t₂)/∂t₁∂t₂ |_{t₁=t₂=t} − [a′_X(t)]²   (8.122)

For a stationary random function, a_{X′}(t) = 0, C_{X′}(t, t) = −R″_X(0), and

f_{X′}(x′) = [−2πR″_X(0)]^{−1/2} exp{ x′² / 2R″_X(0) }   (8.123)

8.12 INTEGRATION OF A RANDOM FUNCTION

We define the mean-square integral of a random function X(t) as the random variable Y (a and b being given numbers):

Y = ∫_a^b X(t) dt

such that

lim_{max Δt_j → 0} E( | Y − Σ_{j=1}^{n} X(t_j) Δt_j |² ) = 0

F_Y(y) is difficult to determine in the general case, since Y represents the mean-square limit of a sum of random variables X(t_j)Δt_j, although the distribution function of such a sum is determinable in closed form for some particular cases. Accordingly, we content ourselves with finding E(Y) and Var(Y). The mathematical expectation equals

E(Y) = E( ∫_a^b X(t) dt ) = ∫_a^b E[X(t)] dt = ∫_a^b a_X(t) dt   (8.124)

For Var(Y), as per Eq.
(6.142), we write

Var(Y) = E{|Y − E(Y)|²} = E{[Y − E(Y)][Y − E(Y)]*}
 = E{ ∫_a^b [X(t₁) − a_X(t₁)] dt₁ ∫_a^b [X(t₂) − a_X(t₂)]* dt₂ }
 = ∫_a^b ∫_a^b E{[X(t₁) − a_X(t₁)][X(t₂) − a_X(t₂)]*} dt₁ dt₂

which in conjunction with the definition (8.39) of the autocovariance function C_X(t₁, t₂) is rewritten as

Var(Y) = ∫_a^b ∫_a^b C_X(t₁, t₂) dt₁ dt₂   (8.125)

Existence of the double integral in (8.125) is a necessary and sufficient condition for that of the integral of X(t) in the mean square.

We are now in a position to prove the nonnegative definiteness [Eq. (8.42)] of the autocovariance function C_Z(t₁, t₂). To do so, we consider the integral

Y = ∫_a^b [Z(t) − a_Z(t)] φ(t) dt

where φ(t) is a deterministic function, Z(t) is a complex random function, and a and b are given numbers. Then,

Var(Y) = ∫_a^b ∫_a^b C_Z(t₁, t₂) φ(t₁) φ*(t₂) dt₁ dt₂

and since the variance is a nonnegative number,

∫_a^b ∫_a^b C_Z(t₁, t₂) φ(t₁) φ*(t₂) dt₁ dt₂ ≥ 0

This proves Eq. (8.42), from which property (8.17) for a real random function follows immediately. According to Bochner's theorem, every nonnegative definite function has a nonnegative Fourier transform, which in turn indicates that the spectral density S_X(ω) is a nonnegative function:

S_X(ω) ≥ 0   (8.126)

Example 8.16
We maintain that a stationary random function with the autocorrelation function

R(τ) = R_0 for |τ| < τ_0,  R(τ) = 0 otherwise   (8.127)

does not exist. Indeed, the spectral density of such a random function, if it existed, would be

S(ω) = (1/2π) ∫_{−τ_0}^{τ_0} R_0 e^{−iωτ} dτ = (R_0/π) ∫_0^{τ_0} cos ωτ dτ = (R_0/πω) sin ωτ_0

which can take on negative values as well. Accordingly, (8.127) cannot serve as an autocorrelation function.

Example 8.17
Given the random function

Y(u) = ∫_a^b φ(u, t) X(t) dt   (8.128)

where φ(u, t) is a deterministic function of u and t, and X(t) is a normal random function. Then,

f_Y(y, u) = [2πC_Y(u, u)]^{−1/2} exp{ −[y − a_Y(u)]² / 2C_Y(u, u) }   (8.129)
where

a_Y(u) = ∫_a^b φ(u, t) a_X(t) dt
C_Y(u, u) = ∫_a^b ∫_a^b R_X(t₁, t₂) φ(u, t₁) φ*(u, t₂) dt₁ dt₂ − |a_Y(u)|²   (8.130)

Example 8.18
Consider a cantilever of length l, loaded by a distributed force Q(x), a random function of the axial coordinate x, with the origin at the free end. The bending moment M_z(x) is then also a random function of x:

M_z(x) = ∫_0^x Q(ξ)(x − ξ) dξ   (8.131)

The autocorrelation function of the bending moment is

R_{M_z}(x₁, x₂) = ∫_0^{x₁} ∫_0^{x₂} R_Q(ξ₁, ξ₂)(x₁ − ξ₁)(x₂ − ξ₂) dξ₁ dξ₂   (8.132)

where R_Q(ξ₁, ξ₂) is the autocorrelation function of the distributed force. For a beam simply supported at its ends,

M_z(x) = ∫_0^x Q(ξ)(x − ξ) dξ − (x/l) ∫_0^l Q(ξ)(l − ξ) dξ   (8.133)

we have

R_{M_z}(x₁, x₂) = ∫_0^{x₁} ∫_0^{x₂} R_Q(ξ₁, ξ₂)(x₁ − ξ₁)(x₂ − ξ₂) dξ₁ dξ₂
 − (x₂/l) ∫_0^{x₁} ∫_0^l R_Q(ξ₁, ξ₂)(x₁ − ξ₁)(l − ξ₂) dξ₁ dξ₂
 − (x₁/l) ∫_0^l ∫_0^{x₂} R_Q(ξ₁, ξ₂)(l − ξ₁)(x₂ − ξ₂) dξ₁ dξ₂
 + (x₁x₂/l²) ∫_0^l ∫_0^l R_Q(ξ₁, ξ₂)(l − ξ₁)(l − ξ₂) dξ₁ dξ₂   (8.134)

8.13 ERGODICITY OF RANDOM FUNCTIONS

Consider a random function

Y(t) = g[X(t)]   (8.135)

where X(t) is assumed to be a strictly stationary random function and g is a given deterministic function. For determination of the mathematical expectation

E[Y(t)] = ∫_{−∞}^{∞} g(x) f_X(x; t) dx   (8.136)

we must use ensemble averages; that is, we need an adequate population of realizations of the random function. The experimenter, however, usually has a single laboratory and a single experimental device available rather than a large number of them. For a given time interval (0, T), he effects a single realization and prefers averaging in time, namely,

⟨Y(t)⟩_T = (1/T) ∫_0^T Y(t) dt   (8.137)

The question is, therefore, how are the ensemble and time averages interrelated? In other words, when is it possible to determine the probabilistic characteristics of a stationary random function from a single observation? The answer is that both averages coincide for ergodic random functions with a sufficiently large observation interval (0, T).
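The coincidence of time and ensemble averages can be sketched numerically for a harmonic random function of the form √2 σ cos(ω₀t + φ) with φ uniformly distributed, which turns out to be ergodic in both the mean and the mean square. The code below, assuming NumPy, averages a single long record for one fixed (arbitrarily chosen) phase:

```python
import numpy as np

# Illustrative sketch: for the ergodic harmonic function
# X(t) = sqrt(2)*sigma*cos(w0*t + phi), phi uniform on (0, 2*pi),
# a single long record reproduces E[X] = 0 and E[X^2] = sigma^2.
# sigma, w0, and the sampled phase phi are arbitrary choices.
sigma, w0, phi = 1.3, 2.0, 0.8       # one fixed realization of phi
T = 2000.0
t = np.linspace(0.0, T, 400001)
x = np.sqrt(2.0) * sigma * np.cos(w0 * t + phi)

mean_T = np.mean(x)                  # time average (8.137) with g(x) = x
msq_T = np.mean(x * x)               # time average with g(x) = x^2

assert abs(mean_T) < 1e-2            # matches the ensemble mean E[X] = 0
assert abs(msq_T - sigma ** 2) < 1e-2  # matches E[X^2] = sigma^2
```

The residual errors decay like 1/T, in line with the averaging interval entering condition (8.143) below only through the factor 1/T.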
We will say that a strictly stationary random function X(t) is ergodic with respect to the mean of the function Y(t) = g[X(t)] if

P{ lim_{T→∞} ⟨Y(t)⟩_T = E[Y(t)] } = 1   (8.138)

Note that lim_{T→∞} ⟨Y(t)⟩_T is a random variable, whereas E[Y(t)] is a constant:

a_Y = E[Y(t)] = E{g[X(t)]}   (8.139)

since X(t) is a stationary random function. This means that, generally,

lim_{T→∞} ⟨Y(t)⟩_T ≠ E[Y(t)]

in the ordinary sense. Equation (8.138) implies that (compare Problem 3.11)

Var{ lim_{T→∞} ⟨Y(t)⟩_T } = 0   (8.140)

We have

E{ lim_{T→∞} ⟨Y(t)⟩_T } = lim_{T→∞} (1/T) ∫_0^T E[Y(t)] dt = lim_{T→∞} (1/T) a_Y T = a_Y   (8.141)

Fig. 8.8. Transformation of variables in Eqs. (8.142).

and

Var{ lim_{T→∞} ⟨Y(t)⟩_T } = E{ lim_{T→∞} (1/T²) ∫_0^T ∫_0^T [Y(t₁) − a_Y][Y(t₂) − a_Y] dt₁ dt₂ }
 = lim_{T→∞} (1/T²) ∫_0^T ∫_0^T C_Y(t₁, t₂) dt₁ dt₂ = lim_{T→∞} (1/T²) ∫_0^T ∫_0^T C_Y(t₂ − t₁) dt₁ dt₂   (8.142)

where

C_Y(τ) = E{[Y(t) − a_Y][Y(t + τ) − a_Y]}

The double integral can be reduced to a single one through transformation of variables (see Fig. 8.8). Because of the evenness of C_Y(t₂ − t₁), we have

(1/T²) ∫_0^T ∫_0^T C_Y(t₂ − t₁) dt₁ dt₂ = (2/T²) ∫_0^T (T − τ) C_Y(τ) dτ = (2/T) ∫_0^T (1 − τ/T) C_Y(τ) dτ

so that Eq. (8.142) becomes

lim_{T→∞} (2/T) ∫_0^T (1 − τ/T) C_Y(τ) dτ = 0   (8.143)

representing a necessary and sufficient condition for X(t) to be ergodic with respect to the mean of Y(t) = g[X(t)]. In particular, X(t) is ergodic in the mean if we set g[X(t)] = X(t):

P{ lim_{T→∞} ⟨X(t)⟩_T = E[X(t)] } = 1   (8.144)

a necessary and sufficient condition being

lim_{T→∞} (2/T) ∫_0^T (1 − τ/T) C_X(τ) dτ = 0   (8.145)

The random function is said to be ergodic in the mean square if it is ergodic with respect to the mean of the function

Y(t) = g[X(t)] = X²(t)   (8.146)

when Eq. (8.143) becomes

lim_{T→∞} (2/T) ∫_0^T (1 − τ/T) C_Y(τ) dτ = lim_{T→∞} (2/T) ∫_0^T (1 − τ/T) { E{[X(t)]²[X(t + τ)]²} − [E(X²)]² } dτ = 0   (8.147)

Since the conditions for ergodicity in the mean and in the mean square are not identical, there can be random functions that are ergodic in the mean but nonergodic in the mean square, and vice versa. This may be demonstrated as follows.
Example 8.19
Consider the random function

X(t) = U cos ωt + V sin ωt   (8.148)

where U and V are random variables such that

E(U) = E(V) = 0,  σ²_U = σ²_V = σ²,  Cov(U, V) = 0

It is readily checked that X(t) is stationary in the wide sense. We wish to check whether X(t) is ergodic in the mean and/or in the mean square. We find

E[X(t)] = 0
⟨X(t)⟩_T = (1/T) ∫_0^T X(t) dt = (1/ωT)[U sin ωT − V(cos ωT − 1)]

and

lim_{T→∞} ⟨X(t)⟩_T = 0 = E[X(t)]

so that X(t) is ergodic in the mean. Note also that condition (8.145) is satisfied, since

lim_{T→∞} (2/T) ∫_0^T (1 − τ/T) C_X(τ) dτ = lim_{T→∞} (2σ²/T) ∫_0^T (1 − τ/T) cos ωτ dτ = lim_{T→∞} (2σ²/ω²T²)(1 − cos ωT) = 0

Now, as for the mean square, we have

⟨X²(t)⟩_T = (1/T) ∫_0^T X²(t) dt = (U²/2)(1 + sin 2ωT/2ωT) + (V²/2)(1 − sin 2ωT/2ωT) − (UV/2ωT)(cos 2ωT − 1)   (8.149)

and

lim_{T→∞} (1/T) ∫_0^T X²(t) dt = (U² + V²)/2

That is, the limit is dependent on the realizations of U and V, the conclusion being that in general

E[X²(t)] ≠ lim_{T→∞} (1/T) ∫_0^T X²(t) dt

that is, in general X(t) is nonergodic in the mean square. In the particular case U² + V² = 2σ², X(t) is ergodic both in the mean and in the mean square. This happens, for example, when U = √2 σ cos φ and V = −√2 σ sin φ, where φ is a random variable uniformly distributed on (0, 2π). Then X(t) = √2 σ cos(ωt + φ).

Example 8.20
Consider the random function

X(t) = U cos ωt + V sin ωt + W   (8.150)

with U, V, and W being random variables such that

E(U) = E(V) = 0,  σ²_U = σ²_V = σ²,  E(UV) = E(UW) = E(VW) = 0   (8.151)

We observe that

E[X(t)] = E(W)
E{[X(t₁) − E[X(t₁)]][X(t₂) − E[X(t₂)]]} = σ² cos ω(t₂ − t₁) + Var(W)

so, assuming (8.151), the random function (8.150) is stationary in the wide sense. Calculation shows that

lim_{T→∞} ⟨X²(t)⟩_T = lim_{T→∞} (1/T) ∫_0^T X²(t) dt = (U² + V²)/2 + W²

Now let

W = [a² − (U² + V²)/2]^{1/2}   (8.152)

where U and V are continuously distributed on [−c, c] and [−d, d], respectively, and are uncorrelated. Then

lim_{T→∞} ⟨X²(t)⟩_T = a²   (8.153)

In other words, X(t) is ergodic in the mean square, since

E[X²(t)] = σ² + E(W²) = σ² + (a² − σ²) = a²
(8.154)

Note that X(t) is ergodic in the mean iff

E[X(t)] = E(W) = lim_{T→∞} ⟨X(t)⟩_T = W   (8.155)

that is, if and only if W is a deterministic constant. This example is the reverse of the previous one, where the random function was ergodic in the mean but not in the mean square.

Example 8.21 (Scheurkogel)
Consider the differential equation

ξ²Ẍ + X = (4 − aU² − aV²)^{1/2},  t ≥ 0   (8.156)

with initial conditions

X(0) = ξU + (4 − aU² − aV²)^{1/2},  Ẋ(0) = V   (8.157)

where ξ is defined by the time-average condition

ξ² = lim_{T→∞} (1/T) ∫_0^T X²(t) dt   (8.158)

The random variables U and V are independent and identically distributed on the interval [−1, 1] according to the probability density f(u) = |u|. The problem consists in finding the mean square E[X²(t)] of the solution X(t). We will tackle this problem in two ways: (1) assuming ergodicity of the solution, and (2) arriving at it exactly, in order to estimate the error introduced by the ergodicity assumption.

By the first approach, if we assume that X(t) is ergodic in the mean square, then the mean-square value is obtainable from the initial conditions alone, without having to solve the differential equation. Thus, with (8.158),

ξ² = E[X²(t)] = E[X²(0)] = E{[ξU + (4 − aU² − aV²)^{1/2}]²} = ξ²/2 + 4 − a

Hence,

ξ² = E[X²(t)] = 8 − 2a   (8.159)

The problem is, however, capable of exact solution. We have

X(t) = Uξ cos(t/ξ) + Vξ sin(t/ξ) + (4 − aU² − aV²)^{1/2}   (8.160)

Substituting this expression in (8.158) and solving for ξ², we obtain

ξ² = [4 − a(U² + V²)] / [1 − (U² + V²)/2]   (8.161)

Note that the value of ξ² is independent of the particular realizations of U and V for a = 2, whereas for a ≠ 2 it is dependent on them. Note also that ξ is an even function of both U and V, so that the first and second terms vanish when taking the expectation of (8.160). Hence, E[X(t)] is independent of t. From (8.160) we have

X(t)X(t + τ) = (U² + V²)(ξ²/2) cos(τ/ξ) + [(U² − V²)/2] ξ² cos[(2t + τ)/ξ] + UVξ² sin[(2t + τ)/ξ]
 + Uξ(4 − aU² − aV²)^{1/2} {cos(t/ξ) + cos[(t + τ)/ξ]}
 + Vξ(4 − aU² − aV²)^{1/2} {sin(t/ξ) + sin[(t + τ)/ξ]} + (4 − aU² − aV²)

with ξ as per (8.161).
Because of symmetry, only the first and last terms in the above equation contribute to the autocorrelation function of X(t). Substituting

Z = U² + V²

the autocorrelation function becomes

E[X(t)X(t + τ)] = E{ (4 − aZ) [1 + (Z/(2 − Z)) cos(τ/ξ)] }   (8.162)

and is independent of t, so that X(t) is stationary in the wide sense. The mean square follows from (8.162) with τ = 0:

E[X²(t)] = 2a + 4(a − 2) E[1/(Z − 2)]
 = 2a + 4(a − 2) ∫_{−1}^{1} ∫_{−1}^{1} |u||v|/(u² + v² − 2) du dv
 = 2a − 8(a − 2) ln 2   (8.163)

For a = 2 we have

E[X²(t)] = 4 = 8 − 2a

Fig. 8.9. Mean-square value E[X²(t)] as a function of the parameter a: exact, Eq. (8.163), and approximate, Eq. (8.159).

Consequently, the solution is ergodic in the mean square iff a = 2; in this case both solutions coincide. Exact and approximate mean-square values are shown in Fig. 8.9.

The conclusion from Example 8.21 is that caution should be exercised in applying the ergodicity assumption. The ergodicity or otherwise of the solution is decided by the equation and the boundary conditions themselves.

PROBLEMS

8.1. Given a pair of random functions

X(t) = A sin ω₀t,  Y(t) = B sin ω₀t

where A and B are random variables with mathematical expectations E(A) = 1, E(B) = 2, and the variance-covariance matrix

| 1  3 |
| 3  9 |

Determine the autocorrelation functions R_X(t₁, t₂), R_Y(t₁, t₂) and the cross-correlation functions R_XY(t₁, t₂), R_YX(t₁, t₂).

8.2. Solve Problem 8.1 with the variance-covariance matrix

| 1      3 − ε |
| 3 − ε  9     |

for ε ≪ 1. What happens when ε → 0?

8.3. Check whether a random function X(t) possessing the autocorrelation function

R_X(τ) = d² e^{−α|τ|}(1 + α|τ|)

is differentiable or otherwise. Compare with the result obtained for R_X(τ) given in Eq. (8.106), and offer an interpretation.
8.4. Show that for a weakly stationary random function X(t),

S_{X^{(n)}}(ω) = ω^{2n} S_X(ω)

8.5. Verify that the autocorrelation function R_{X′}(τ) of the derivative X′(t), with R_X(τ) as in Problem 8.3, is

R_{X′}(τ) = d²α² e^{−α|τ|}(1 − α|τ|)

8.6. Check whether X(t) in Problem 8.5 has a second derivative.

8.7. The initial imperfection Y_0(x) of an infinite beam is a weakly stationary, band-limited random function of the axial coordinate x. Find the spectral density of d²Y_0(x)/dx².

8.8. Show that the spectral density of the sum of a pair of independent random functions equals the sum of their spectral densities.

8.9. Use the nonnegativeness property of the spectral density to determine the admissible values of the parameters α and β in the autocorrelation function

R_X(τ) = d² e^{−α|τ|}[cosh βτ + (α/β) sinh β|τ|]

Check whether X(t) is differentiable.

8.10. A beam, simply supported at its ends, is subjected to a distributed force Q, a random variable with given probability density function. Using the relations

dV_y(x)/dx = −Q,  dM_z(x)/dx = −V_y(x)

show that the shear force V_y(x) and bending moment M_z(x) are random functions of x. Find

E[V_y(x)],  E[M_z(x)],  R_{V_y}(x₁, x₂),  R_{M_z}(x₁, x₂)

and the first-order probability densities f_{V_y}(v_y; x), f_{M_z}(m_z; x).

8.11. A cantilever is subjected to a distributed force Q_y(x) with zero mean and autocorrelation function

R_{Q_y}(x₁, x₂) = σ² e^{−α|x₁ − x₂|}

Using Eq. (8.8), verify that for x₁ ≤ x₂

R_{M_z}(x₁, x₂) = (σ²/α⁴) [ (2/3)α³x₁³ + α³x₁²(x₂ − x₁) − α²x₁x₂ + α(x₂ − x₁)
 − αx₂e^{−αx₁} − αx₁e^{−αx₂} − e^{−αx₁} − e^{−αx₂} + e^{−α(x₂ − x₁)} + 1 ]   (8.164)

Show that for x ≫ 1/α the variance is

Var[M_z(x)] = (σ²/α⁴)[2 − α²x² + (2/3)α³x³]   (8.165)

Equations (8.164) and (8.165) are due to Rzhanitsyn.

8.12. A normal random function X(t) with zero mean has an autocorrelation function as per Eq. (8.106). Find the probability of X(t) < x_0, x_0 being a deterministic positive constant.

CITED REFERENCES

Bochner, S., Lectures on Fourier Integrals, Princeton Univ. Press, Princeton, NJ, 1959.
Rzhanitsyn, A. R., "Probabilistic Calculation of Beams on a Random Load," in B. G. Korenev and I. M. Rabinovich, Eds.,
Issledovanija po Teorii Sooruzhenii (Investigations in the Theory of Structures; in Russian), Vol. 23, "Stroiizdat" Publishing House, Moscow, 1977, pp. 158-171.
Scheurkogel, A., Private communication, Delft, 1980.

RECOMMENDED FURTHER READING

Melsa, J. L., and Sage, A. P., An Introduction to Probability and Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1973. Chap. 5: Stochastic Processes, pp. 189-244.
Lin, Y. K., Probabilistic Theory of Structural Dynamics, McGraw-Hill, New York, 1967. Chap. 3: Random Processes, pp. 34-66; Chap. 4: Gaussian, Poisson, and Markov Random Processes, pp. 67-109.
Papoulis, A., Probability, Random Variables, and Stochastic Processes, International Student Ed., McGraw-Hill Kogakusha, Tokyo, 1965. Chap. 9: General Concepts, pp. 279-335; Chap. 10: Correlation and Power Spectrum of Stationary Processes, pp. 336-384.
Parzen, E., Stochastic Processes, Holden-Day, San Francisco, 1962.
Scheurkogel, A., Elishakoff, I., and Kalker, J. J., "On the Error That Can Be Induced by an Ergodicity Assumption," ASME J. Appl. Mech., 103, 654-656 (1981).

chapter 9

Random Vibration of Discrete Systems

In this chapter we turn to random vibration problems with intensive reference to the results of Chapter 8. After a review of relevant deterministic results, we proceed to cases of single- and multidegree-of-freedom systems. Finally, we demonstrate the normal mode method and the dramatic effect of the interaction between different modes, usually overlooked in the literature.

9.1 RESPONSE OF A LINEAR SYSTEM SUBJECTED TO DETERMINISTIC EXCITATION

Consider a physical system whose behavior is governed by the following differential equation with constant coefficients:

L_n(d/dt) x = f(t)   (9.1)

where L_n(d/dt) is the differential operator

L_n(d/dt) = a_0 dⁿ/dtⁿ + a_1 dⁿ⁻¹/dtⁿ⁻¹ + ⋯ + a_{n−1} d/dt + a_n,  a_0 ≠ 0   (9.2)

and the a_i are real constants. f(t) constitutes the input, or excitation, of the system, and x(t) the output, or response, of the system.
The linear differential equation is subject to the principle of linear superposition; namely, if x_j(t) is the output of the system to input f_j(t), then the output to input Σ_{j=1}^{n} α_j f_j(t) will be Σ_{j=1}^{n} α_j x_j(t), where n is some positive integer and the α_j are any real numbers.

Fig. 9.1. (a) Unit impulse applied at t = 0. (b) Impulse response function h(t).

Let us consider an initial-value problem, with Eq. (9.1) supplemented by initial conditions which, without loss of generality, can be assumed to be homogeneous. The impulse response function, denoted by h(t), is the response of a system with zero initial conditions to a unit impulse applied at t = 0 (Fig. 9.1). That is, instead of f(t) we have δ(t) in Eq. (9.1):

a_0 dⁿh/dtⁿ + a_1 dⁿ⁻¹h/dtⁿ⁻¹ + ⋯ + a_n h = δ(t)   (9.3)

The general input f(t) can then be viewed as a series of impulses of magnitude f(τ)Δτ, as shown in Fig. 9.2, where the shaded area f(τ)Δτ is an impulse applied at t = τ. The response to a unit impulse δ(t − τ) applied at t = τ is the same impulse response function lagging by the time interval τ, namely h(t − τ). Hence the increment in output Δx at time t > τ due to the impulse f(τ)Δτ is

Δx(t, τ) = f(τ)Δτ h(t − τ)

Fig. 9.2. General input f(t) considered as a series of impulses.
Hence, advancing the upper limit from ¢ to infinity in Equation (9.4) would not affect the value of the integral; in these circumstances Eqs. (9.4) and (9.5) can be rewritten, for both t > 0 and ¢ < 0, as x()= f° s()aCe —1)dr (9.6) x(t) =f" fle a)ACa) dr (9.7) 320 RANDOM VIBRATION OF DISCRETE SYSTEMS Consider now the response to a harmonic input. In particular, let the input be given by the real part of f(a) =e" (9.8) The solution of Eq. (9.1) consists of complementary and particular compo- nents. We will say that a system, governed by Eq. (9.1), is asymptotically stable if, irrespective of the initial conditions, the complementary solution (which may be identified with the response of the system, with zero input, to the initial conditions) eventually decays to zero as ¢ becomes larger. This means that the auxiliary equation agr" tar"! +--+ +a, 7t+a,=0 (9.9) has roots with negative real parts. Assume that the system under consideration is asymptotically stable; that is, that after a sufficiently long time (compared with the transient process), the steady state response (characterized by the presence of only a particular solution) will be reached. We seek it in a form similar to (9.8) x(t) = X(w)e (9.10) Substitution of Eqs. (9.8) and (9.10) in Eq. (9.1) yields = 7 Tay (9.11) where L,,(iw) is obtained by replacing the parameter of differentiation d/dt by iw: L, (iw) = ag(iw)" + a,(iw)""' +++ +a, (iw) +a, (9.12) and is called the impedance of the system. Denoting Tliay 7H) (9.13) we obtain x(t) = H(w)e (9.14) Here H(w) is referred to as the complex frequency response or receptance. RESPONSE OF A LINEAR SYSTEM SUBJECTED TO DETERMINISTIC EXCITATION 321 If the general input f(t) is representable by the Fourier integral f()= [o Flo)et de ea tw F(w) = aa J fie at (9.15) we may represent the output in a similar manner: x(t) = f° Xo) de 1 fe ie X(o = ag fe " de (9.16) Substitution of the first of equations (9.15) and (9.16) in Eq. 
(9.1) leaves us with

X(ω) = H(ω) F(ω)   (9.17)

and x(t) becomes, in view of Eq. (9.16),

x(t) = ∫_{−∞}^{∞} H(ω) F(ω) e^{iωt} dω   (9.18)

Now, for a unit impulse applied at t = 0, f(t) = δ(t) and, due to the second of Eqs. (9.15),

F(ω) = (1/2π) ∫_{−∞}^{∞} δ(t) e^{−iωt} dt = 1/2π   (9.19)

so that, the response to δ(t) being the impulse-response function h(t), and bearing in mind Eq. (9.18), we have

h(t) = (1/2π) ∫_{−∞}^{∞} H(ω) e^{iωt} dω   (9.20)

and finally

H(ω) = ∫_{−∞}^{∞} h(t) e^{−iωt} dt   (9.21)

Here, in the Fourier transform equations (9.15), f(t) and F(ω) have been replaced by the impulse-response function multiplied by the factor 2π and the complex frequency response, respectively. Had we defined the Fourier transform not as in Eq. (9.15) but as

f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{iωt} dω,  F(ω) = ∫_{−∞}^{∞} f(t) e^{−iωt} dt

and x(t) in a similar way, we would have found that H(ω) is the Fourier transform of h(t) [not of 2πh(t), as in Eq. (9.20)]. In that case, however, we also would have had to modify the Wiener-Khintchine relationships derived in Chapter 8. Thus it is preferable to retain Eqs. (9.15) and (9.16) as the Fourier transform pair.

Example 9.1
Consider the mass-spring-dashpot system governed by the differential equation

mẍ + cẋ + kx = f(t)

and calculate the impulse-response function h(t), which satisfies

mḧ + cḣ + kh = δ(t)   (9.22)

with the initial conditions

h(0) = ḣ(0) = 0   (9.23)

The external force δ(t) may be assigned a duration Δt tending to zero. Accordingly, we integrate Eq. (9.22) over the time interval Δt, to yield

∫_0^{Δt} (mẍ + cẋ + kx) dt = m[ẋ(Δt) − ẋ(0)] + c[x(Δt) − x(0)] + ∫_0^{Δt} kx dt = ∫_0^{Δt} δ(t) dt

and take the limit

lim_{Δt→0} c[x(Δt) − x(0)] = 0

There are no jumps in the displacement x(t), because of the too-short time interval allowed. On the other hand,

lim_{Δt→0} m[ẋ(Δt) − ẋ(0)] = mẋ(0+)
We thus may view the impulse applied at $t = 0$ as an initial velocity equal to $1/m$, and instead of the nonhomogeneous equation (9.22) with homogeneous initial conditions (9.23), solve the homogeneous equation

$$m\ddot{h} + c\dot{h} + kh = 0 \qquad (9.24)$$

with nonhomogeneous initial conditions

$$h(0) = 0, \qquad \dot{h}(0) = \frac{1}{m} \qquad (9.25)$$

We next rewrite Eq. (9.24) in the form

$$\ddot{h} + 2\zeta\omega_0\dot{h} + \omega_0^2 h = 0$$

where $\omega_0 = \sqrt{k/m}$ is the natural frequency and $\zeta = c/2m\omega_0$ the viscous damping factor. Integration of the latter yields

$$h(t) = \begin{cases} A_1\exp\bigl[(-\zeta + \sqrt{\zeta^2 - 1})\omega_0 t\bigr] + A_2\exp\bigl[(-\zeta - \sqrt{\zeta^2 - 1})\omega_0 t\bigr], & \zeta > 1 \\ (A_1 + tA_2)\exp(-\omega_0 t), & \zeta = 1 \\ \bigl[A_1\exp(i\omega_d t) + A_2\exp(-i\omega_d t)\bigr]\exp(-\zeta\omega_0 t), & \zeta < 1 \end{cases}$$

representing an overdamped, a critically damped, and an underdamped structure, respectively, where $\omega_d = \omega_0\sqrt{1 - \zeta^2}$ is called the frequency of the damped free vibration.

Subject to the initial conditions (9.25), $h(t)$ becomes

$$h(t) = \begin{cases} \dfrac{1}{m\omega_0\sqrt{\zeta^2 - 1}}\exp(-\zeta\omega_0 t)\sinh\bigl(\sqrt{\zeta^2 - 1}\,\omega_0 t\bigr), & \zeta > 1 \\[4pt] \dfrac{t}{m}\exp(-\omega_0 t), & \zeta = 1 \\[4pt] \dfrac{1}{m\omega_d}\exp(-\zeta\omega_0 t)\sin(\omega_d t), & \zeta < 1 \end{cases} \qquad (9.26)$$

In perfect analogy with this example, it can be shown that for a system described by the differential equation (9.1) the impulse-response function $h(t)$ can be found as the solution of the homogeneous equation

$$a_0\frac{d^n h}{dt^n} + a_1\frac{d^{n-1}h}{dt^{n-1}} + \cdots + a_n h = 0 \qquad (9.27)$$

supplemented by the nonhomogeneous initial conditions

$$h = \frac{dh}{dt} = \cdots = \frac{d^{n-2}h}{dt^{n-2}} = 0, \qquad \frac{d^{n-1}h}{dt^{n-1}} = \frac{1}{a_0}, \qquad \text{at } t = 0 \qquad (9.28)$$

9.2 RESPONSE OF A LINEAR SYSTEM SUBJECTED TO RANDOM EXCITATION

Let us visualize now that the excitation is represented by a random function $F(t)$ with given mean $m_F(t)$ and autocorrelation function $R_F(t_1, t_2)$. The linear system with impulse-response function $h(t)$ then transforms $F(t)$ into another random function $X(t)$:

$$X(t) = \int_{-\infty}^{\infty} F(\tau)h(t - \tau)\,d\tau = \int_{-\infty}^{\infty} F(t - \tau)h(\tau)\,d\tau \qquad (9.29)$$

where the integral is understood in the mean-square sense, as defined in Sec. 8.12. The mean function $m_X(t)$ and the autocorrelation function $R_X(t_1, t_2)$ are found as

$$m_X(t) = \int_{-\infty}^{\infty} m_F(\tau)h(t - \tau)\,d\tau = \int_{-\infty}^{\infty} m_F(t - \tau)h(\tau)\,d\tau \qquad (9.30)$$

$$R_X(t_1, t_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_F(\tau_1, \tau_2)h(t_1 - \tau_1)h(t_2 - \tau_2)\,d\tau_1\,d\tau_2 \qquad (9.31)$$
The latter equation can also be written as

$$R_X(t_1, t_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_F(t_1 - \tau_1, t_2 - \tau_2)h(\tau_1)h(\tau_2)\,d\tau_1\,d\tau_2 \qquad (9.32)$$

When $F(t)$ is a stationary random function, we have instead of (9.30)

$$m_X(t) = \int_{-\infty}^{\infty} m_F h(\tau)\,d\tau = m_F\int_{-\infty}^{\infty} h(\tau)\,d\tau$$

where $m_F$ is a constant. Finally, substituting $\omega = 0$ in Eq. (9.21) and comparing the result with the above equation, we arrive at

$$m_X(t) = m_F H(0) = \text{const} = m_X \qquad (9.33)$$

The autocorrelation function $R_F(t_1, t_2)$ is a function only of the difference $t_2 - t_1$, so that $R_F(t_1 - \tau_1, t_2 - \tau_2)$ in Eq. (9.32) is a function only of $t_2 - \tau_2 - (t_1 - \tau_1) = t_2 - t_1 - \tau_2 + \tau_1$, or, with $t_2 - t_1 = \tau$,

$$R_X(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_F(\tau + \tau_1 - \tau_2)h(\tau_1)h(\tau_2)\,d\tau_1\,d\tau_2 \qquad (9.34)$$

Equations (9.33) and (9.34) imply that the output of a linear system with constant coefficients, subjected to random excitation stationary in the wide sense, is itself of the same kind. To find its spectral density, we apply the Wiener-Khintchine relationship (8.67). Denoting $\tau - \tau_2 + \tau_1 = \lambda$ and making use of Eq. (9.21),

$$S_X(\omega) = \int_{-\infty}^{\infty} h(\tau_1)e^{i\omega\tau_1}\,d\tau_1\int_{-\infty}^{\infty} h(\tau_2)e^{-i\omega\tau_2}\,d\tau_2\;\frac{1}{2\pi}\int_{-\infty}^{\infty} R_F(\lambda)e^{-i\omega\lambda}\,d\lambda = H^*(\omega)H(\omega)S_F(\omega) = |H(\omega)|^2 S_F(\omega) \qquad (9.35)$$

The autocorrelation function $R_X(\tau)$ can be found through (8.68):

$$R_X(\tau) = \int_{-\infty}^{\infty} S_X(\omega)e^{i\omega\tau}\,d\omega = \int_{-\infty}^{\infty}|H(\omega)|^2 S_F(\omega)e^{i\omega\tau}\,d\omega \qquad (9.36)$$

while the mean square $E(X^2)$ is

$$E(X^2) = R_X(0) = \int_{-\infty}^{\infty}|H(\omega)|^2 S_F(\omega)\,d\omega = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_F(\tau_1 - \tau_2)h(\tau_1)h(\tau_2)\,d\tau_1\,d\tau_2 \qquad (9.37)$$

Example 9.2

The system governed by the differential equation

$$\ddot{X} - 2a\dot{X} - 8a^2 X = F(t)$$

where $a$ is a real constant and $F(t)$ a stationary random function, does not admit a stationary solution, since it is asymptotically unstable. One of the roots of the auxiliary equation $r^2 - 2ar - 8a^2 = 0$ is positive, irrespective of the sign of $a$.
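The claim about the roots is immediate from the factorization $r^2 - 2ar - 8a^2 = (r - 4a)(r + 2a)$: the roots are $4a$ and $-2a$, one of which is positive for either sign of $a$. A one-line numeric confirmation (a sketch; the helper names are mine):

```python
import numpy as np

def aux_roots(a):
    """Roots of the auxiliary equation r^2 - 2ar - 8a^2 = (r - 4a)(r + 2a) = 0."""
    return np.sort(np.roots([1.0, -2.0*a, -8.0*a**2]).real)

def is_unstable(a):
    """True if any root has a positive real part, so no stationary response exists."""
    return bool(np.any(aux_roots(a) > 0))
```

For $a = 1$ the roots are $-2$ and $4$; for $a = -1$ they are $-4$ and $2$ — in both cases one root is positive.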
It is readily shown that

$$X(t) = C_1 e^{4at} + C_2 e^{-2at} + \frac{1}{6a}\int_{-\infty}^{\infty}\bigl[e^{4a(t-\tau)} - e^{-2a(t-\tau)}\bigr]U(t - \tau)F(\tau)\,d\tau$$

and the probabilistic characteristics of $X(t)$ depend on the initial conditions imposed.

Example 9.3

Consider a spring-dashpot system with negligible mass, so that it is governed by the differential equation

$$c\dot{X} + kX = F(t) \qquad (9.38)$$

The complex frequency response is

$$H(\omega) = \frac{1}{i\omega c + k} \qquad (9.39)$$

and Eq. (9.35) becomes

$$S_X(\omega) = \frac{S_F(\omega)}{c^2\omega^2 + k^2} \qquad (9.40)$$

Assuming that $F(t)$ is a band-limited white noise,

$$S_F(\omega) = \begin{cases} S_0, & |\omega| \le \omega_c \\ 0, & \text{otherwise} \end{cases} \qquad (9.41)$$

Equation (9.36) then yields

$$R_X(\tau) = S_0\int_{-\omega_c}^{\omega_c}\frac{e^{i\omega\tau}\,d\omega}{c^2\omega^2 + k^2} \qquad (9.42)$$

The mean-square value equals

$$E(X^2) = S_0\int_{-\omega_c}^{\omega_c}\frac{d\omega}{c^2\omega^2 + k^2} = \frac{2S_0}{ck}\tan^{-1}\frac{c\omega_c}{k} \qquad (9.43)$$

For $\omega_c \to \infty$, we have at input an ideal white noise, with

$$R_X(\tau) = S_0\lim_{\omega_c\to\infty}\int_{-\omega_c}^{\omega_c}\frac{e^{i\omega\tau}\,d\omega}{c^2\omega^2 + k^2} = \frac{\pi S_0}{ck}e^{-k|\tau|/c} \qquad (9.44)$$

(see Fig. 9.3) with the mean square

$$E(X^2) = \frac{\pi S_0}{ck} \qquad (9.45)$$

[Fig. 9.3. Nondimensional autocorrelation function of displacements of a spring-dashpot system with negligible mass.]

Note that in the latter case the mean square of the output turns out to be finite, in spite of the fact that the mean square of the input is infinite. Of interest is also the mean-square velocity:

$$E(\dot{X}^2) = S_0\int_{-\omega_c}^{\omega_c}\frac{\omega^2\,d\omega}{c^2\omega^2 + k^2} = \frac{2S_0}{c^2}\left(\omega_c - \frac{k}{c}\tan^{-1}\frac{c\omega_c}{k}\right) \qquad (9.46)$$

for band-limited white noise. With $\omega_c \to \infty$ we obtain

$$E(\dot{X}^2) = S_0\int_{-\infty}^{\infty}\frac{\omega^2\,d\omega}{c^2\omega^2 + k^2} = \infty \qquad (9.47)$$

for ideal white noise. This is obvious, since $\dot{X} = [F(t) - kX]c^{-1}$, where $F(t)$ has an infinite mean square.

Example 9.4

Consider a single-degree-of-freedom system with band-limited white noise $F(t)$ as input:

$$\ddot{X} + 2\zeta\omega_0\dot{X} + \omega_0^2 X = \frac{1}{m}F(t) \qquad (9.48)$$

under light damping $0 < \zeta < 1$. The spectral density of the displacements is readily found as

$$S_X(\omega) = \frac{S_0}{m^2}\,\frac{1}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2}, \qquad |\omega| \le \omega_c \qquad (9.49)$$

and zero otherwise. The autocorrelation function is

$$R_X(\tau) = \frac{S_0}{m^2}\int_{-\omega_c}^{\omega_c}\frac{e^{i\omega\tau}\,d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2} \qquad (9.50)$$
The mean square is given by

$$E(X^2) = R_X(0) = \frac{S_0}{m^2}\int_{-\omega_c}^{\omega_c}\frac{d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2} \qquad (9.51)$$

It is readily verified that the denominator in Eq. (9.51) can be represented as

$$(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2 = \bigl[(\omega - \omega_d)^2 + \omega_0^2\zeta^2\bigr]\bigl[(\omega + \omega_d)^2 + \omega_0^2\zeta^2\bigr]$$

where $\omega_d = \omega_0(1 - \zeta^2)^{1/2}$ is the frequency of damped free vibration. Using the method of undetermined coefficients, the integrand in Eq. (9.51) is represented as

$$\frac{1}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2} = \frac{M_1\omega + N_1}{(\omega - \omega_d)^2 + \omega_0^2\zeta^2} + \frac{M_2\omega + N_2}{(\omega + \omega_d)^2 + \omega_0^2\zeta^2}$$

where $M_1 = -M_2 = -1/(4\omega_0^2\omega_d)$ and $N_1 = N_2 = 1/(2\omega_0^2)$, and Eq. (9.51) is a sum of two integrals:

$$E(X^2) = \frac{S_0}{2\omega_0^2 m^2}\left\{\int_{-\omega_c}^{\omega_c}\frac{1 - \omega/2\omega_d}{(\omega - \omega_d)^2 + \omega_0^2\zeta^2}\,d\omega + \int_{-\omega_c}^{\omega_c}\frac{1 + \omega/2\omega_d}{(\omega + \omega_d)^2 + \omega_0^2\zeta^2}\,d\omega\right\} \qquad (9.52)$$

It is seen that the second integral is readily obtainable if we formally replace $-\omega_d$ by $+\omega_d$ in the result of the first. However,

$$\int\frac{(\mu x + \nu)\,dx}{x^2 + px + q} = \frac{\mu}{2}\ln(x^2 + px + q) + \frac{\nu - \mu p/2}{a}\tan^{-1}\frac{t}{a} + C \qquad (9.53)$$

for $q > p^2/4$, where $a = \sqrt{q - p^2/4}$, $t = x + p/2$, and $C$ is the integration constant. For the first integral in (9.52) we have

$$\mu = -\frac{1}{2\omega_d}, \quad \nu = 1, \quad p = -2\omega_d, \quad q = \omega_d^2 + \omega_0^2\zeta^2 = \omega_0^2, \quad a = \sqrt{\omega_0^2 - \omega_d^2} = \zeta\omega_0, \quad t = \omega - \omega_d$$

with

$$\int_{-\omega_c}^{\omega_c}\frac{1 - \omega/2\omega_d}{(\omega - \omega_d)^2 + \omega_0^2\zeta^2}\,d\omega = -\frac{1}{4\omega_d}\ln\frac{(\omega_c - \omega_d)^2 + \omega_0^2\zeta^2}{(\omega_c + \omega_d)^2 + \omega_0^2\zeta^2} + \frac{1}{2\zeta\omega_0}\left[\tan^{-1}\frac{\omega_c - \omega_d}{\zeta\omega_0} + \tan^{-1}\frac{\omega_c + \omega_d}{\zeta\omega_0}\right]$$

The final result for the mean square $E(X^2)$,

$$E(X^2) = \frac{S_0}{2\zeta\omega_0^3 m^2}\left\{\tan^{-1}\frac{\omega_c + \omega_d}{\zeta\omega_0} + \tan^{-1}\frac{\omega_c - \omega_d}{\zeta\omega_0}\right\} + \frac{S_0}{4\omega_0^2\omega_d m^2}\ln\frac{(\omega_c + \omega_d)^2 + \omega_0^2\zeta^2}{(\omega_c - \omega_d)^2 + \omega_0^2\zeta^2} \qquad (9.54)$$

may be written down as

$$E(X^2) = \frac{S_0\pi}{2\zeta\omega_0^3 m^2}\,I_0\!\left(\frac{\omega_c}{\omega_0}, \zeta\right) \qquad (9.55)$$

[Fig. 9.4. Integral factor $I_0(\omega_c/\omega_0, \zeta)$ for mean-square displacement of a single-degree-of-freedom system under band-limited white noise; exact Eq. (9.56) versus approximate Eq. (9.57).]

where the integral factor $I_0(\omega_c/\omega_0, \zeta)$ is given by

$$I_0\!\left(\frac{\omega_c}{\omega_0}, \zeta\right) = \frac{1}{\pi}\left\{\tan^{-1}\frac{\omega_c/\omega_0 + \sqrt{1 - \zeta^2}}{\zeta} + \tan^{-1}\frac{\omega_c/\omega_0 - \sqrt{1 - \zeta^2}}{\zeta} + \frac{\zeta}{2\sqrt{1 - \zeta^2}}\ln\frac{(\omega_c/\omega_0 + \sqrt{1 - \zeta^2})^2 + \zeta^2}{(\omega_c/\omega_0 - \sqrt{1 - \zeta^2})^2 + \zeta^2}\right\} \qquad (9.56)$$

On summation of the terms in brackets, Eq. (9.56) yields the formula due to Crandall and Mark. The integral factor $I_0(\omega_c/\omega_0, \zeta)$ is shown in Fig. 9.4 as a function of $\omega_c/\omega_0$, for different values of $\zeta$. A simple asymptotic expression can be obtained for $\omega_c$ beyond the natural frequency $\omega_0$.
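The integral factor can also be obtained by direct quadrature of Eq. (9.51), which makes a convenient cross-check on the closed form: in units with $\omega_0 = m = S_0 = 1$, the ratio of the band-limited mean square to the ideal-white-noise value $\pi S_0/2\zeta\omega_0^3 m^2$ is $I_0$. A sketch (the specific values of $\zeta$ and $\omega_c/\omega_0$ are illustrative):

```python
import numpy as np

def I0_numeric(wc_over_w0, zeta, n=200001):
    """Integral factor I0: band-limited mean square, Eq. (9.51), divided by
    the ideal-white-noise value pi/(2*zeta); w0 = m = S0 = 1, midpoint rule."""
    dw = 2.0*wc_over_w0/n
    w = -wc_over_w0 + (np.arange(n) + 0.5)*dw     # frequencies in units of w0
    integrand = 1.0/((1.0 - w**2)**2 + 4.0*zeta**2*w**2)
    return np.sum(integrand)*dw/(np.pi/(2.0*zeta))

above = I0_numeric(2.0, 0.05)   # cutoff beyond resonance: I0 close to unity
below = I0_numeric(0.5, 0.05)   # cutoff below resonance: I0 small
```

With the cutoff beyond resonance and light damping, $I_0$ is already within a fraction of a percent of unity; with the cutoff below resonance, the resonant peak is excluded and $I_0$ is small.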
By evenness of the integrand, Eq. (9.51) may be written as

$$E(X^2) = \frac{2S_0}{\omega_0^4 m^2}\int_0^{\omega_c}\frac{d\omega}{(1 - \omega^2/\omega_0^2)^2 + (2\zeta\omega/\omega_0)^2}$$

Further,

$$E(X^2) = \frac{2S_0}{\omega_0^4 m^2}\int_0^{\infty}\frac{d\omega}{(1 - \omega^2/\omega_0^2)^2 + (2\zeta\omega/\omega_0)^2} - \frac{2S_0}{\omega_0^4 m^2}\int_{\omega_c}^{\infty}\frac{d\omega}{(1 - \omega^2/\omega_0^2)^2 + (2\zeta\omega/\omega_0)^2}$$

In the last integral we change the variable $y = \omega_0/\omega$, $d\omega = -(\omega_0/y^2)\,dy$:

$$\int_{\omega_c}^{\infty}\frac{d\omega}{(1 - \omega^2/\omega_0^2)^2 + (2\zeta\omega/\omega_0)^2} = \omega_0\int_0^{\omega_0/\omega_c}\frac{y^2\,dy}{(y^2 - 1)^2 + (2\zeta y)^2}$$

Using an asymptotic series expansion for the integrand $f(y) = y^2[(y^2 - 1)^2 + (2\zeta y)^2]^{-1}$,

$$f(y) = f(0) + \sum_i \frac{f^{(i)}(0)}{i!}y^i$$

where $f^{(i)}(y)$ denotes the $i$th derivative of $f(y)$. Taking only four terms of the series, with

$$f(0) = 0, \quad f'(0) = 0, \quad f''(0) = 2, \quad f'''(0) = 0$$

and

$$f^{(\mathrm{iv})}(0) = 48(1 - 2\zeta^2)$$

we obtain

$$\int_0^{\omega_0/\omega_c} f(y)\,dy \approx \frac{1}{3}\left(\frac{\omega_0}{\omega_c}\right)^3 + \frac{2}{5}(1 - 2\zeta^2)\left(\frac{\omega_0}{\omega_c}\right)^5$$

and finally

$$E(X^2) = \frac{S_0\pi}{2\zeta\omega_0^3 m^2}\left\{1 - \frac{4\zeta}{\pi}\left[\frac{1}{3}\left(\frac{\omega_0}{\omega_c}\right)^3 + \frac{2}{5}(1 - 2\zeta^2)\left(\frac{\omega_0}{\omega_c}\right)^5\right]\right\} \qquad (9.57)$$

This approximate formula is due to Warburton. Comparison with the exact Eqs. (9.55) and (9.56) reveals immediately that the expression in braces in (9.57) represents an approximation of the integral factor $I_0(\omega_c/\omega_0, \zeta)$. This approximation is also shown in Fig. 9.4, and its results are practically coincident with their exact counterparts.

Consider now the case of an undamped system ($\zeta = 0$) under band-limited white noise. Instead of Eq. (9.51), we have

$$E(X^2) = \frac{S_0}{m^2}\int_{-\omega_c}^{\omega_c}\frac{d\omega}{(\omega_0^2 - \omega^2)^2} = \frac{S_0}{m^2\omega_0^2}\left[\frac{\omega_c}{\omega_0^2 - \omega_c^2} + \frac{1}{2\omega_0}\ln\frac{\omega_0 + \omega_c}{\omega_0 - \omega_c}\right] \qquad (9.58)$$

The mean square is finite if $\omega_c < \omega_0$, and for $\omega_c \to \omega_0$, $E(X^2) \to \infty$.

For the mean square of the velocity we have

$$E(\dot{X}^2) = \frac{S_0}{m^2}\int_{-\omega_c}^{\omega_c}\frac{\omega^2\,d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2} \qquad (9.59)$$

This can be rewritten as

$$E(\dot{X}^2) = \frac{S_0}{4m^2\omega_d}\int_{-\omega_c}^{\omega_c}\left[\frac{\omega}{(\omega - \omega_d)^2 + \omega_0^2\zeta^2} - \frac{\omega}{(\omega + \omega_d)^2 + \omega_0^2\zeta^2}\right]d\omega$$

which yields, with the aid of Eq. (9.53), with $\mu_1 = -\mu_2 = 1/4\omega_d$ and $\nu_1 = \nu_2 = 0$,

$$E(\dot{X}^2) = \frac{S_0}{2\zeta\omega_0 m^2}\left\{\tan^{-1}\frac{\omega_c/\omega_0 - \sqrt{1 - \zeta^2}}{\zeta} + \tan^{-1}\frac{\omega_c/\omega_0 + \sqrt{1 - \zeta^2}}{\zeta} - \frac{\zeta}{2\sqrt{1 - \zeta^2}}\ln\frac{(\omega_c/\omega_0 + \sqrt{1 - \zeta^2})^2 + \zeta^2}{(\omega_c/\omega_0 - \sqrt{1 - \zeta^2})^2 + \zeta^2}\right\} \qquad (9.60)$$
This result can be put in a form analogous to Eqs. (9.55) and (9.56):

$$E(\dot{X}^2) = \frac{\pi S_0}{2\zeta\omega_0 m^2}\,I_1\!\left(\frac{\omega_c}{\omega_0}, \zeta\right) \qquad (9.61)$$

$$I_1\!\left(\frac{\omega_c}{\omega_0}, \zeta\right) = \frac{1}{\pi}\left\{\tan^{-1}\frac{\omega_c/\omega_0 - \sqrt{1 - \zeta^2}}{\zeta} + \tan^{-1}\frac{\omega_c/\omega_0 + \sqrt{1 - \zeta^2}}{\zeta} - \frac{\zeta}{2\sqrt{1 - \zeta^2}}\ln\frac{(\omega_c/\omega_0 + \sqrt{1 - \zeta^2})^2 + \zeta^2}{(\omega_c/\omega_0 - \sqrt{1 - \zeta^2})^2 + \zeta^2}\right\} \qquad (9.62)$$

$I_1$ is shown in Fig. 9.5 for $\zeta = 0.01$ and $\zeta = 0.1$. The mean-square acceleration is

$$E(\ddot{X}^2) = \frac{S_0}{m^2}\int_{-\omega_c}^{\omega_c}\frac{\omega^4\,d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2}$$

and we obtain

$$E(\ddot{X}^2) = \frac{S_0}{m^2}\left\{2\omega_c + \frac{\pi\omega_0}{2\zeta}\left[2(1 - 2\zeta^2)\,I_1\!\left(\frac{\omega_c}{\omega_0}, \zeta\right) - I_0\!\left(\frac{\omega_c}{\omega_0}, \zeta\right)\right]\right\} \qquad (9.63)$$

Example 9.5

Consider a single-degree-of-freedom system as in the preceding example, but with ideal white noise $F(t)$ as input. Results for this case are obtainable from those for band-limited white noise when $\omega_c \to \infty$.

[Fig. 9.5. Integral factor $I_1(\omega_c/\omega_0, \zeta)$ for mean-square velocity of a viscously damped single-degree-of-freedom system under band-limited white noise.]

The autocorrelation function becomes, instead of Eq. (9.50),

$$R_X(\tau) = \frac{S_0}{m^2}\int_{-\infty}^{\infty}\frac{e^{i\omega\tau}\,d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2} \qquad (9.64)$$

Introducing the notation

$$\alpha = \zeta\omega_0, \qquad \beta = \omega_0(1 - \zeta^2)^{1/2} = \omega_d, \qquad d^2 = \frac{\pi S_0}{2\zeta\omega_0^3 m^2} \qquad (9.65)$$

the corresponding spectral density coincides with Eq. (8.107). Therefore,

$$R_X(\tau) = d^2 e^{-\alpha|\tau|}\left(\cos\omega_d\tau + \frac{\alpha}{\omega_d}\sin\omega_d|\tau|\right) \qquad (9.66)$$

and since $E(X^2) = R_X(0)$,

$$E(X^2) = d^2 = \frac{\pi S_0}{2\zeta\omega_0^3 m^2} \qquad (9.67)$$

It is interesting to note that on substitution of $\zeta = c/2m\omega_0$, $\omega_0^2 = k/m$ in Eq. (9.67), we get $E(X^2) = \pi S_0/ck$, which coincides with the result (9.45) for the mean square of a massless system. The conclusion is that the mass of a single-degree-of-freedom system under ideal white noise excitation does not influence the mean-square displacement. Comparison of the mean-square displacements found for the systems with band-limited (Eq. 9.55) and ideal (Eq.
9.67) white noise shows immediately that the former equals the latter multiplied by the integral factor $I_0$ in Eq. (9.56). As is seen from Fig. 9.4, for $\omega_c > \omega_0$ and a lightly damped system ($\zeta \ll 1$) the integral factor differs negligibly from unity. This means that although ideal white noise is a "mathematical fiction," it may yield a highly satisfactory result for the mean-square displacement.

As we saw in Example 8.14, the random function $X(t)$ is differentiable, so that $\dot{X}$ has a finite mean square obtainable from Eqs. (9.61) and (9.62) when $\omega_c \to \infty$:

$$E(\dot{X}^2) = \frac{\pi S_0}{2\zeta\omega_0 m^2} = \frac{\pi S_0}{cm} \qquad (9.68)$$

implying that the stiffness of a single-degree-of-freedom system under ideal white-noise excitation does not influence the mean-square velocity. Eq. (9.68) is also obtainable directly, by the residue theorem, from the expression

$$E(\dot{X}^2) = \frac{2S_0}{m^2}\int_0^{\infty}\frac{\omega^2\,d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2}$$

It is worth noting that for a system under ideal white noise excitation, comparison of Eqs. (9.67) and (9.68) yields

$$E(\dot{X}^2) = \omega_0^2 E(X^2)$$

As is seen from Eq. (9.63), the mean-square acceleration tends to infinity with $\omega_c$. This is explained by the fact that $X(t)$, the displacement of a system under ideal white noise, is not doubly differentiable; that is, $\ddot{X}(t)$ is not physically realizable, since the spectral density does not satisfy the condition

$$\int_{-\infty}^{\infty}\omega^4 S_F(\omega)\,d\omega < \infty$$

[Fig. 9.6. Illustration of Laplace's asymptotic evaluation of integral (9.69).]

This is also seen from Eq. (9.48) itself, since here $\ddot{X} = F(t)/m - 2\zeta\omega_0\dot{X} - \omega_0^2 X$, where $X$ and $\dot{X}$ have finite mean squares, but $F(t)$ has an infinite one.

Equation (9.65) for a single-degree-of-freedom system with ideal white noise can be extended to a lightly damped system with nonwhite (colored) noise input. The term

$$\frac{1}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2}$$

exhibits very sharp peaks in the vicinity of $\pm\omega_0$.
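This peak dominance can be demonstrated numerically: for a lightly damped oscillator driven by a smooth, broad input spectrum, replacing $S_F(\omega)$ by the constant value $S_F(\omega_0)$ — that is, treating the input as ideal white noise of intensity $S_F(\omega_0)$ — reproduces the mean square almost exactly. A sketch, with an assumed broad Lorentzian input spectrum and units $\omega_0 = m = 1$:

```python
import numpy as np

def mean_square(S_F, zeta, w_max=200.0, n=2_000_001):
    """Numeric E(X^2) = integral of S_F(w) / ((w0^2 - w^2)^2 + 4 zeta^2 w0^2 w^2),
    with w0 = m = 1 (midpoint-rule quadrature over [-w_max, w_max])."""
    dw = 2.0*w_max/n
    w = -w_max + (np.arange(n) + 0.5)*dw
    return np.sum(S_F(w)/((1.0 - w**2)**2 + 4.0*zeta**2*w**2))*dw

zeta = 0.01
S_F = lambda w: 1.0/(1.0 + (w/10.0)**2)   # assumed broad, smooth input spectrum
exact = mean_square(S_F, zeta)
approx = S_F(1.0)*np.pi/(2.0*zeta)        # ideal white noise of intensity S_F(w0)
```

For this spectrum, which is nearly flat across the narrow resonance, the two values agree to within a few percent.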
In these circumstances the dominant contribution to the mean square $E(X^2)$ derives from the frequencies close to $\pm\omega_0$ (see Fig. 9.6). Accordingly, we resort to Laplace's asymptotic method for evaluation of the integral

$$E(X^2) = \frac{1}{m^2}\int_{-\infty}^{\infty}\frac{S_F(\omega)\,d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2} \qquad (9.69)$$

The dominant contribution to the integral derives from the values of the integrand close to those of $\omega$ for which the integrand is maximized. The values in question can be the minimum points $\omega = \pm\omega_0$ of the frequency function in the denominator, if $S_F(\omega)$ is smooth in the neighborhood of $\pm\omega_0$:

$$E(X^2) \approx \frac{S_F(\omega_0)}{m^2}\int_{\omega_0 - \varepsilon}^{\omega_0 + \varepsilon}\frac{d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2} + \frac{S_F(-\omega_0)}{m^2}\int_{-\omega_0 - \varepsilon}^{-\omega_0 + \varepsilon}\frac{d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2}$$

and since $S_F(\omega)$ is an even function, $S_F(-\omega_0) = S_F(\omega_0)$, and

$$E(X^2) \approx \frac{S_F(\omega_0)}{m^2}\int_{-\infty}^{\infty}\frac{d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2} = \frac{S_F(\omega_0)\pi}{2\zeta\omega_0^3 m^2} \qquad (9.70)$$

Comparing Eqs. (9.70) and (9.67), we see that application of Laplace's asymptotic method is actually equivalent to the assumption that the input is ideal white noise with intensity $S_F(\omega_0)$. As we have shown [see discussion of Eq. (9.67)], the assumption of ideal instead of band-limited white noise as input is very satisfactory for a lightly damped system if $\omega_c > \omega_0$. This implies that for such a system Laplace's asymptotic method yields a result (9.70) very close to the exact one of Eq. (9.55). Note that the approximation given by Eq. (9.70) is perfectly exact for $S_F(\omega) = S_0(a^2\omega^2 + b^2)$, where $a$ and $b$ are arbitrary constants (see Appendix C, Example C2).

For some cases, (9.70) can be refined. If $S_F(\omega)$ takes on significant values at $\omega = 0$ and is a decreasing function, we can also take into account the contribution of the peak of the integrand at zero frequency, where the denominator is close to $\omega_0^4$:

$$E(X^2) \approx \frac{S_F(\omega_0)\pi}{2\zeta\omega_0^3 m^2} + \frac{1}{\omega_0^4 m^2}\int_{-\infty}^{\infty} S_F(\omega)\,d\omega = \frac{S_F(\omega_0)\pi}{2\zeta\omega_0^3 m^2} + \frac{E(F^2)}{\omega_0^4 m^2} = \frac{\pi S_F(\omega_0)}{ck} + \frac{E(F^2)}{k^2} \qquad (9.71)$$
The significance of the second term in this expression is obvious: it represents the mean-square displacement of the system under static conditions, where Eq. (9.48) reduces to $kX = F(t)$. To illustrate application of Eq. (9.71), we compare it with the exact expression (9.55) for $E(X^2)$ of the system with band-limited white noise at $\omega_c \ll \omega_0$. Then $S_F(\omega_0) = 0$, and only the second term remains in Eq. (9.71). Using the addition formula for a pair of inverse circular functions,

$$\tan^{-1}z_1 + \tan^{-1}z_2 = \tan^{-1}\frac{z_1 + z_2}{1 - z_1 z_2}$$

we can write Eq. (9.56), for $\omega_c < \omega_0$, as

$$I_0\!\left(\frac{\omega_c}{\omega_0}, \zeta\right) = \frac{1}{\pi}\left\{\tan^{-1}\frac{2\zeta(\omega_c/\omega_0)}{1 - \omega_c^2/\omega_0^2} + \frac{\zeta}{2(1 - \zeta^2)^{1/2}}\ln\frac{[\omega_c/\omega_0 + (1 - \zeta^2)^{1/2}]^2 + \zeta^2}{[\omega_c/\omega_0 - (1 - \zeta^2)^{1/2}]^2 + \zeta^2}\right\}$$

and then, for $\omega_c \ll \omega_0$ and $\zeta \ll 1$,

$$\frac{[\omega_c/\omega_0 + (1 - \zeta^2)^{1/2}]^2 + \zeta^2}{[\omega_c/\omega_0 - (1 - \zeta^2)^{1/2}]^2 + \zeta^2} = \frac{1 + \omega_c^2/\omega_0^2 + 2(\omega_c/\omega_0)(1 - \zeta^2)^{1/2}}{1 + \omega_c^2/\omega_0^2 - 2(\omega_c/\omega_0)(1 - \zeta^2)^{1/2}} \approx 1 + \frac{4(\omega_c/\omega_0)(1 - \zeta^2)^{1/2}}{1 + \omega_c^2/\omega_0^2}$$

so that the logarithmic term contributes approximately $2\zeta\omega_c/\omega_0$. Moreover,

$$\tan^{-1}\frac{2\zeta(\omega_c/\omega_0)}{1 - \omega_c^2/\omega_0^2} \approx 2\zeta\frac{\omega_c}{\omega_0}$$

so that

$$I_0 \approx \frac{4\zeta}{\pi}\frac{\omega_c}{\omega_0}$$

and with Eq. (9.55)

$$E(X^2) \approx \frac{\pi S_0}{2\zeta\omega_0^3 m^2}\cdot\frac{4\zeta}{\pi}\frac{\omega_c}{\omega_0} = \frac{2\omega_c S_0}{k^2}$$

However, since

$$E(F^2) = \int_{-\omega_c}^{\omega_c} S_0\,d\omega = 2\omega_c S_0$$

we finally have $E(X^2) = E(F^2)/k^2$, which is our approximation (9.71).

Example 9.6

The systems considered in Examples 9.3-9.5 involved viscous damping. Here we consider a single-degree-of-freedom system under so-called structural damping, due to dissipation of energy generated through internal friction.* The relevant equation of motion is obtained formally by replacing the real stiffness coefficient $k$ by $k(1 + i\mu)$, which is called the complex stiffness:

$$m\ddot{X} + k(1 + i\mu)X = F(t) \qquad (9.72)$$

We now rewrite this equation as

$$\ddot{X} + \frac{K}{m}e^{i\varphi}X = \frac{1}{m}F(t) \qquad (9.73)$$

where

$$K = k(1 + \mu^2)^{1/2}, \qquad \varphi = \tan^{-1}\mu \qquad (9.74)$$

We treat $F(t)$ as band-limited white noise as per Eq. (9.41). The complex frequency response is

$$H(\omega) = \frac{1}{m}\,\frac{1}{\Omega_0^2(\cos\varphi + i\sin\varphi) - \omega^2} \qquad (9.75)$$

*See, for example, Meirovitch, pp. 55-57, or Warburton, pp. 17-19.
Although some recent publications maintain that the structural damping notion may yield unsatisfactory results for transient vibration, it is widely referred to in engineering practice.

where $\Omega_0 = \sqrt{K/m}$ is a frequency parameter. The spectral density of the displacements is, therefore,

$$S_X(\omega) = \begin{cases} \dfrac{S_0}{m^2}\,\dfrac{1}{(\Omega_0^2\cos\varphi - \omega^2)^2 + \Omega_0^4\sin^2\varphi}, & |\omega| \le \omega_c \\[4pt] 0, & \text{otherwise} \end{cases} \qquad (9.76)$$

The corresponding autocorrelation function is readily obtainable as

$$R_X(\tau) = \frac{S_0}{m^2}\int_{-\omega_c}^{\omega_c}\frac{e^{i\omega\tau}\,d\omega}{(\Omega_0^2\cos\varphi - \omega^2)^2 + \Omega_0^4\sin^2\varphi} \qquad (9.77)$$

with the mean square

$$E(X^2) = \frac{S_0}{m^2}\int_{-\omega_c}^{\omega_c}\frac{d\omega}{(\Omega_0^2\cos\varphi - \omega^2)^2 + \Omega_0^4\sin^2\varphi} \qquad (9.78)$$

For an undamped system, $\mu = 0$, $\varphi = 0$, and the result coincides with Eq. (9.58). Consider now the case $\varphi \neq 0$. The expression for $E(X^2)$ can be put in the following form:

$$E(X^2) = \frac{S_0}{2m^2\Omega_0^3[2(1 + \cos\varphi)]^{1/2}}\left\{\int_{-\omega_c}^{\omega_c}\frac{-\omega + \Omega_0[2(1 + \cos\varphi)]^{1/2}}{\omega^2 + \Omega_0^2 - \omega\Omega_0[2(1 + \cos\varphi)]^{1/2}}\,d\omega + \int_{-\omega_c}^{\omega_c}\frac{\omega + \Omega_0[2(1 + \cos\varphi)]^{1/2}}{\omega^2 + \Omega_0^2 + \omega\Omega_0[2(1 + \cos\varphi)]^{1/2}}\,d\omega\right\}$$

which, making use of Eq. (9.53), reduces to

$$E(X^2) = \frac{S_0}{m^2\Omega_0^3[2(1 - \cos\varphi)]^{1/2}}\left\{\tan^{-1}\frac{2\omega_c/\Omega_0 + [2(1 + \cos\varphi)]^{1/2}}{[2(1 - \cos\varphi)]^{1/2}} + \tan^{-1}\frac{2\omega_c/\Omega_0 - [2(1 + \cos\varphi)]^{1/2}}{[2(1 - \cos\varphi)]^{1/2}}\right\}$$
$$\qquad + \frac{S_0}{2m^2\Omega_0^3[2(1 + \cos\varphi)]^{1/2}}\ln\frac{1 + (\omega_c/\Omega_0)^2 + (\omega_c/\Omega_0)[2(1 + \cos\varphi)]^{1/2}}{1 + (\omega_c/\Omega_0)^2 - (\omega_c/\Omega_0)[2(1 + \cos\varphi)]^{1/2}} \qquad (9.79)$$

When $\omega_c \to \infty$, the expression for the autocorrelation function becomes

$$R_X(\tau) = \frac{S_0}{m^2}\int_{-\infty}^{\infty}\frac{e^{i\omega\tau}\,d\omega}{(\Omega_0^2\cos\varphi - \omega^2)^2 + \Omega_0^4\sin^2\varphi}$$

Introducing the notation

$$\alpha = \tfrac{1}{2}[2(1 - \cos\varphi)]^{1/2}, \qquad \Omega_d = \Omega_0(1 - \alpha^2)^{1/2}, \qquad d^2 = \frac{\pi S_0}{2m^2\Omega_0^3\alpha} \qquad (9.80)$$

the corresponding spectral density coincides with Eq. (8.107), and consequently,

$$R_X(\tau) = d^2 e^{-\alpha\Omega_0|\tau|}\left(\cos\Omega_d\tau + \frac{\alpha\Omega_0}{\Omega_d}\sin\Omega_d|\tau|\right)$$

with the mean-square displacement

$$E(X^2) = d^2 = \frac{\pi S_0}{2m^2\Omega_0^3\alpha} \qquad (9.81)$$

Note that the expressions in this example can be written in terms of the frequency $\omega_0 = \sqrt{k/m}$ of natural vibrations of an undamped system and of the damping parameter $\mu$, as defined in Eq. (9.74). We have

$$\Omega_0 = \omega_0(1 + \mu^2)^{1/4}, \qquad \cos\varphi = \frac{1}{(1 + \mu^2)^{1/2}} \qquad (9.82)$$

and Eq.
(9.81) becomes

$$E(X^2) = \frac{\pi S_0}{\omega_0^3 m^2}\,\frac{[1 + (1 + \mu^2)^{1/2}]^{1/2}}{\mu[2(1 + \mu^2)]^{1/2}} \qquad (9.83)$$

For $\mu \ll 1$, Eq. (9.83) becomes

$$E(X^2) = \frac{\pi S_0}{\mu\omega_0^3 m^2} \qquad (9.84)$$

For a system with band-limited white noise, we have, instead of Eq. (9.79),

$$E(X^2) = \frac{\pi S_0}{m^2\Omega_0^3[2(1 - \cos\varphi)]^{1/2}}\,J_0\!\left(\frac{\omega_c}{\Omega_0}, \varphi\right)$$

where

$$J_0\!\left(\frac{\omega_c}{\Omega_0}, \varphi\right) = \frac{1}{\pi}\left\{\frac{1}{2}\left(\frac{1 - \cos\varphi}{1 + \cos\varphi}\right)^{1/2}\ln\frac{1 + (\omega_c/\Omega_0)^2 + (\omega_c/\Omega_0)[2(1 + \cos\varphi)]^{1/2}}{1 + (\omega_c/\Omega_0)^2 - (\omega_c/\Omega_0)[2(1 + \cos\varphi)]^{1/2}}\right.$$
$$\left.\qquad + \tan^{-1}\frac{\sqrt{2}\,\omega_c/\Omega_0 + (1 + \cos\varphi)^{1/2}}{(1 - \cos\varphi)^{1/2}} + \tan^{-1}\frac{\sqrt{2}\,\omega_c/\Omega_0 - (1 + \cos\varphi)^{1/2}}{(1 - \cos\varphi)^{1/2}}\right\} \qquad (9.85)$$

or, in terms of $\omega_0$ and $\mu$,

$$E(X^2) = \frac{\pi S_0}{\omega_0^3 m^2}\,\frac{[1 + (1 + \mu^2)^{1/2}]^{1/2}}{\mu[2(1 + \mu^2)]^{1/2}}\,J_0\!\left(\frac{\omega_c}{\omega_0}, \mu\right) \qquad (9.86)$$

$$J_0\!\left(\frac{\omega_c}{\omega_0}, \mu\right) = \frac{1}{\pi}\left\{\frac{1}{2}\left[\frac{(1 + \mu^2)^{1/2} - 1}{(1 + \mu^2)^{1/2} + 1}\right]^{1/2}\ln\frac{(\omega_c/\omega_0)^2 + \sqrt{2}(\omega_c/\omega_0)[(1 + \mu^2)^{1/2} + 1]^{1/2} + (1 + \mu^2)^{1/2}}{(\omega_c/\omega_0)^2 - \sqrt{2}(\omega_c/\omega_0)[(1 + \mu^2)^{1/2} + 1]^{1/2} + (1 + \mu^2)^{1/2}}\right.$$
$$\left.\qquad + \tan^{-1}\frac{\sqrt{2}\,\omega_c/\omega_0 + [(1 + \mu^2)^{1/2} + 1]^{1/2}}{[(1 + \mu^2)^{1/2} - 1]^{1/2}} + \tan^{-1}\frac{\sqrt{2}\,\omega_c/\omega_0 - [(1 + \mu^2)^{1/2} + 1]^{1/2}}{[(1 + \mu^2)^{1/2} - 1]^{1/2}}\right\} \qquad (9.87)$$

[Fig. 9.7. Integral factor $J_0(\omega_c/\omega_0, \mu)$ for mean-square displacement of a structurally damped single-degree-of-freedom system under band-limited white noise.]

The integral factor $J_0(\omega_c/\omega_0, \mu)$ is shown in Fig. 9.7 as a function of $\omega_c/\omega_0$ for different values of $\mu$. It is worth emphasizing that the coefficient preceding the integral factor in Eq. (9.86) for a system with band-limited white noise is none other than the mean-square displacement for that with ideal white noise, as per Eq. (9.83). As is seen from Fig. 9.7, the integral factor $J_0$ approaches unity for $\omega_c/\omega_0 > 1$. This again implies, as in the case of the viscously damped system, that for rapid calculation the ideal white noise assumption may be very handy.

Example 9.7

Let $F(t)$ in Eq. (9.48) have the autocorrelation function

$$R_F(\tau) = d^2 e^{-\alpha^2\tau^2}$$

so that the mean square of $F$ is

$$E(F^2) = R_F(0) = d^2$$

The spectral density $S_F(\omega)$ is

$$S_F(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} d^2 e^{-\alpha^2\tau^2}e^{-i\omega\tau}\,d\tau = \frac{d^2}{2\alpha\sqrt{\pi}}\exp\!\left(-\frac{\omega^2}{4\alpha^2}\right)$$

making use of integral (A.2) in Appendix A. In accordance with Eq. (9.71),

$$E(X^2) = \frac{d^2}{m^2}\left[\frac{\sqrt{\pi}}{4\zeta\alpha\omega_0^3}\exp\!\left(-\frac{\omega_0^2}{4\alpha^2}\right) + \frac{1}{\omega_0^4}\right]$$
It is readily seen that for certain combinations of parameters, the second term in this equation may contribute significantly to the total result.

For a normal random function it suffices to know the mean and autocorrelation functions, since the first-order probability density depends only on the mean and on the variance; these are the quantities of interest to us in this case. If the system obeys Eq. (9.2), we have for the mean $m_X(t)$, in terms of the mean of the input $m_F(t)$,

$$L_n\!\left(\frac{d}{dt}\right)m_X(t) = m_F(t)$$

For a stable system and stationary $F(t)$, the solution reads

$$m_X(t) = \frac{m_F}{a_n} = \text{const} = m_X \qquad (9.88)$$

The mean square $E(X^2)$ can be found as the integral over the spectral density $S_X(\omega)$ (see 9.13 and 9.37):

$$E(X^2) = \int_{-\infty}^{\infty}\frac{S_F(\omega)\,d\omega}{L_n(i\omega)L_n(-i\omega)} \qquad (9.89)$$

and the variance is found as

$$\mathrm{Var}(X) = E(X^2) - m_X^2 \qquad (9.90)$$

When $S_F(\omega)$ is a polynomial function, the integral (9.89) can be evaluated in closed form (see Appendix C). The nonstationary response of simple mechanical systems is discussed below.

Example 9.8

Consider the transient response of a single-degree-of-freedom system, with equation of motion as per Eq. (9.48). We assume the initial conditions

$$X(0) = A, \qquad \dot{X}(0) = B \qquad (9.91)$$

where $A$ and $B$ are random variables and $F(t)$ is a stationary random function with spectral density $S_F(\omega)$:

$$X(t) = Ae^{-\zeta\omega_0 t}\left(\cos\omega_d t + \frac{\zeta\omega_0}{\omega_d}\sin\omega_d t\right) + \frac{B}{\omega_d}e^{-\zeta\omega_0 t}\sin\omega_d t + \int_0^t h(t - \tau)F(\tau)\,d\tau \qquad (9.92)$$

where $h(t - \tau)$ is given in Eq. (9.26) for $\zeta < 1$:

$$h(t - \tau) = \frac{1}{m\omega_d}e^{-\zeta\omega_0(t - \tau)}\sin\omega_d(t - \tau)$$

The mean $m_X(t)$ of $X(t)$ is then

$$m_X(t) = E[X(t)] = E(A)e^{-\zeta\omega_0 t}\left(\cos\omega_d t + \frac{\zeta\omega_0}{\omega_d}\sin\omega_d t\right) + \frac{E(B)}{\omega_d}e^{-\zeta\omega_0 t}\sin\omega_d t + \int_0^t h(t - \tau)E[F(\tau)]\,d\tau$$

Assume further, for simplicity, that with probability unity $A$ and $B$ are deterministic constants and $E[F(\tau)] = 0$. Therefore,

$$m_X(t) = Ae^{-\zeta\omega_0 t}\left(\cos\omega_d t + \frac{\zeta\omega_0}{\omega_d}\sin\omega_d t\right) + \frac{B}{\omega_d}e^{-\zeta\omega_0 t}\sin\omega_d t \qquad (9.93)$$

so that the mean function of the output depends only on the initial conditions.
For the variance $\mathrm{Var}[X(t)]$ we obtain

$$\mathrm{Var}[X(t)] = \int_0^t\int_0^t h(t - \tau_1)h(t - \tau_2)R_F(\tau_1, \tau_2)\,d\tau_1\,d\tau_2$$

Bearing in mind Eq. (8.72), we have

$$\mathrm{Var}[X(t)] = \int_0^t\int_0^t\int_{-\infty}^{\infty} S_F(\omega)\cos\omega(\tau_1 - \tau_2)\,h(t - \tau_1)h(t - \tau_2)\,d\omega\,d\tau_1\,d\tau_2$$

Thanks to convergence of the integrals, we can change the order of integration and obtain

$$\mathrm{Var}[X(t)] = \frac{1}{m^2\omega_d^2}\int_{-\infty}^{\infty} S_F(\omega)\left\{\int_0^t\int_0^t \exp[-\zeta\omega_0(2t - \tau_1 - \tau_2)]\sin\omega_d(t - \tau_1)\sin\omega_d(t - \tau_2)\cos\omega(\tau_1 - \tau_2)\,d\tau_1\,d\tau_2\right\}d\omega \qquad (9.94)$$

Evaluation of the double integral yields

$$\mathrm{Var}[X(t)] = \int_{-\infty}^{\infty}\frac{S_F(\omega)}{m^2}\,\frac{1}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2}\left\{1 - 2e^{-\zeta\omega_0 t}\left[\left(\cos\omega_d t + \frac{\zeta\omega_0}{\omega_d}\sin\omega_d t\right)\cos\omega t + \frac{\omega}{\omega_d}\sin\omega_d t\sin\omega t\right]\right.$$
$$\left.\qquad + e^{-2\zeta\omega_0 t}\left[\cos^2\omega_d t + \frac{\zeta\omega_0}{\omega_d}\sin 2\omega_d t + \frac{\zeta^2\omega_0^2 + \omega^2}{\omega_d^2}\sin^2\omega_d t\right]\right\}d\omega \qquad (9.95)$$

This result is due to Caughey and Stumpf. It is seen that as $t \to \infty$

$$\mathrm{Var}[X(t)] \to \frac{1}{m^2}\int_{-\infty}^{\infty}\frac{S_F(\omega)\,d\omega}{(\omega_0^2 - \omega^2)^2 + 4\zeta^2\omega_0^2\omega^2} \qquad (9.96)$$

and coincides with Eq. (9.51), that is, with the solution of the associated stationary problem. For $F(t)$ represented by ideal white noise, $S_F(\omega) = S_0$, we have Uhlenbeck's result from Eq. (9.95):

$$\mathrm{Var}[X(t)] = \frac{S_0\pi}{2\zeta\omega_0^3 m^2}\left[1 - \frac{e^{-2\zeta\omega_0 t}}{\omega_d^2}\bigl(\omega_d^2 + \zeta\omega_0\omega_d\sin 2\omega_d t + 2\zeta^2\omega_0^2\sin^2\omega_d t\bigr)\right] \qquad (9.97)$$

For excitation with slowly varying spectral density in the vicinity of the natural frequency $\omega_0$, and for a lightly damped structure, Laplace's method may again be used, to yield

$$\mathrm{Var}[X(t)] = \frac{S_F(\omega_0)\pi}{2\zeta\omega_0^3 m^2}\left[1 - \frac{e^{-2\zeta\omega_0 t}}{\omega_d^2}\bigl(\omega_d^2 + \zeta\omega_0\omega_d\sin 2\omega_d t + 2\zeta^2\omega_0^2\sin^2\omega_d t\bigr)\right] \qquad (9.98)$$

Plots of Eq. (9.97) are shown in Fig. 9.8 for $\zeta = 0$, 0.025, 0.05, and 0.10.

[Fig. 9.8. Transient response of a single-degree-of-freedom system under random excitation with ideal white noise; asymptotes shown for $\zeta = 0.025$, 0.05, and 0.10. (Reproduced from Caughey and Stumpf.)]

It is seen that the response variance approaches the stationary value as time increases, so that only a small error is involved in treating the output process as though it were stationary, provided the input is applied for a sufficiently long time.
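The limiting behavior of Eq. (9.97) is easy to verify numerically: the variance starts from zero at $t = 0$, grows toward the stationary value (9.67), and never exceeds it. A sketch (parameter values are arbitrary illustrative choices):

```python
import numpy as np

# Arbitrary illustrative parameters
S0, m, w0, zeta = 1.0, 1.0, 2.0, 0.05
wd = w0*np.sqrt(1.0 - zeta**2)                    # damped natural frequency
stationary = np.pi*S0/(2.0*zeta*w0**3*m**2)       # stationary variance, Eq. (9.67)

def var_transient(t):
    """Eq. (9.97): transient response variance under ideal white noise,
    zero initial conditions."""
    bracket = (wd**2 + zeta*w0*wd*np.sin(2.0*wd*t)
               + 2.0*zeta**2*w0**2*np.sin(wd*t)**2)
    return stationary*(1.0 - np.exp(-2.0*zeta*w0*t)*bracket/wd**2)
```

At $t = 0$ the bracket equals $\omega_d^2$ and the variance vanishes; for $2\zeta\omega_0 t \gg 1$ the exponential kills the bracket and the stationary level is recovered.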
The larger damping values result in lower stationary values and allow the response to become stationary in a shorter time. Another interesting feature of Fig. 9.8 is that the time-varying variance does not overshoot the stationary variance level. This is, however, not a universal property of the transient response. Figure 9.9 shows the nondimensional variance of a single-degree-of-freedom system with the input autocorrelation function

$$R_F(\tau) = R_0 e^{-\alpha|\tau|}\cos\beta\tau \qquad (9.99)$$

In this case, as is seen from the figure, the mean-square value of the response does overshoot the stationary value.

[Fig. 9.9. Transient response of a single-degree-of-freedom system under random excitation; $F(t)$ has an autocorrelation function as per Eq. (9.99). (Reproduced from Barnoski and Maurer.)]

Example 9.9

Consider the response to nonstationary excitation of a linear, time-invariant system which obeys the equation

$$c\dot{X} + kX = c(\beta + \gamma t)U(t)F(t) \qquad (9.100)$$

$c$, $k$, $\beta$, and $\gamma$ being deterministic constants and $F(t)$ a random function stationary in the wide sense — in particular, ideal white noise with $E(F) = 0$, $S_F(\omega) = S_0$. It is seen that with $\beta = 1/c$, $\gamma = 0$, Eq. (9.100) reduces to Eq. (9.38) in Example 9.3. The initial condition is assumed to be a zero one, and we seek the mean square of the particular solution. With $k/c = a$, we have

$$\dot{X} + aX = (\beta + \gamma t)U(t)F(t) \qquad (9.101)$$

The response is anticipated to be nonstationary, as is the excitation. The autocorrelation function of the excitation $Y(t) = (\beta + \gamma t)U(t)F(t)$ is

$$R_Y(t_1, t_2) = 2\pi S_0(\beta + \gamma t_1)(\beta + \gamma t_2)\delta(t_2 - t_1)U(t_1)U(t_2) \qquad (9.102)$$

$X(t)$ is readily found from Eq. (9.101):

$$X(t) = \int_0^t Y(\tau)e^{-a(t - \tau)}\,d\tau \qquad (9.103)$$

The mean-square response is

$$E[X^2(t)] = R_X(t, t) = \int_0^t\int_0^t R_Y(t_1, t_2)e^{-a(2t - t_1 - t_2)}\,dt_1\,dt_2$$

Integration yields

$$E[X^2(t)] = 2\pi S_0\int_0^t\int_0^t(\beta + \gamma t_1)(\beta + \gamma t_2)\delta(t_2 - t_1)\exp[-a(2t - t_1 - t_2)]\,dt_1\,dt_2$$
$$= 2\pi S_0 e^{-2at}\left\{\frac{\beta^2}{2a}\bigl(e^{2at} - 1\bigr) + \frac{\beta\gamma}{2a^2}\bigl[e^{2at}(2at - 1) + 1\bigr] + \frac{\gamma^2}{4a^3}\bigl[e^{2at}(2a^2t^2 - 2at + 1) - 1\bigr]\right\} \qquad (9.104)$$
Note that for $\beta = 1/c$, $\gamma = 0$ we arrive at

$$E[X^2(t)] = \frac{\pi S_0}{ck}\bigl(1 - e^{-2kt/c}\bigr) \qquad (9.105)$$

and for $t \to \infty$, Eq. (9.105) coincides with the stationary solution as per Eq. (9.45).

Examples 9.8 and 9.9 are illustrations of determination of a nonstationary random response. Further examples for analysis of linear discrete systems can be found in the papers by Barnoski and Maurer, and Holman and Hart. Spectral analysis of nonstationary random functions is discussed by Priestley and by Bendat and Piersol.

9.3 RANDOM VIBRATION OF A MULTIDEGREE-OF-FREEDOM SYSTEM

The equation of motion of a system having several degrees of freedom is given by

$$[m]\{\ddot{X}\} + [c]\{\dot{X}\} + [k]\{X\} = \{F(t)\} \qquad (9.106)$$

where $\{X\}$ and $\{F\}$ are the vectors of the generalized displacements and generalized forces, respectively; $[m]$, $[c]$, and $[k]$ are the $n \times n$ mass, damping, and stiffness matrices, respectively; $\{\dot{X}(t)\}$ is the velocity vector and $\{\ddot{X}(t)\}$ the acceleration vector. We are given the mean vector function $\{m_F(t)\}$ and the cross-covariance matrix

$$[C_F(t_1, t_2)] = \bigl[C_{F_j F_k}(t_1, t_2)\bigr]_{n\times n}$$

of the generalized forces, where

$$C_{F_j F_k}(t_1, t_2) = E\bigl\{[F_j(t_1) - m_{F_j}(t_1)][F_k(t_2) - m_{F_k}(t_2)]\bigr\} \qquad (9.107)$$

We wish to find the mean vector $\{m_X(t)\}$ and the cross-covariance matrix

$$[C_X(t_1, t_2)] = \bigl[C_{X_j X_k}(t_1, t_2)\bigr]_{n\times n}$$

of the generalized displacements, where

$$C_{X_j X_k}(t_1, t_2) = E\bigl\{[X_j(t_1) - m_{X_j}(t_1)][X_k(t_2) - m_{X_k}(t_2)]\bigr\} \qquad (9.108)$$

We first confine ourselves to the deterministic problem of free undamped vibration.

9.3.1 Free Undamped Vibration. This is obtained by putting $[c] = [0]$, $\{F(t)\} = \{0\}$ in Eq. (9.106). Converting to lowercase notation for the deterministic displacement function, we have

$$[m]\{\ddot{x}\} + [k]\{x\} = \{0\} \qquad (9.109)$$

For free vibration we express the solution of Eq. (9.109) in the form

$$\{x\} = \{y\}\sin(\omega t + \alpha) \qquad (9.110)$$

where $\omega$ is a natural frequency and $\{y\}$ is the vibration amplitude.
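Substituting this form leads, in the next step, to the generalized eigenvalue problem $[k - \omega^2 m]\{y\} = \{0\}$, which is routinely solved numerically. A minimal sketch for a two-degree-of-freedom case (the matrices are arbitrary illustrative choices; `scipy.linalg.eigh` returns eigenvectors normalized so that $[v]^T[m][v] = [I]$, i.e., the normal modes introduced below):

```python
import numpy as np
from scipy.linalg import eigh

# Arbitrary 2-DOF mass and stiffness matrices (symmetric, positive definite)
m = np.array([[2.0, 0.0],
              [0.0, 1.0]])
k = np.array([[6.0, -2.0],
              [-2.0, 4.0]])

# Solve k v = w^2 m v; eigh returns the w^2 in ascending order, with
# eigenvectors satisfying v.T @ m @ v = I (mass-normalized normal modes)
w2, v = eigh(k, m)
omega = np.sqrt(w2)     # natural frequencies, omega_1 <= omega_2
```

For these matrices the characteristic equation $\det[k - \omega^2 m] = 2(\omega^2 - 2)(\omega^2 - 5) = 0$ gives $\omega_1^2 = 2$, $\omega_2^2 = 5$, and the returned modes diagonalize both $[m]$ and $[k]$.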
Substitution of (9.110) in (9.109) yields

$$[k - \omega^2 m]\{y\}\sin(\omega t + \alpha) = \{0\}$$

Since this equation has to hold for any $t$, we are left with

$$[k - \omega^2 m]\{y\} = \{0\}$$

This equation has nontrivial solutions if the determinant of the matrix $[k - \omega^2 m]$ vanishes, that is, if

$$\Delta(\omega^2) = \det[k - \omega^2 m] = 0 \qquad (9.111)$$

This equation, called the characteristic equation, generally yields $n$ positive roots $\omega_1^2, \omega_2^2, \ldots, \omega_n^2$, where $\omega_1^2 \le \omega_2^2 \le \cdots \le \omega_n^2$, and $\omega_1, \omega_2, \ldots, \omega_n$ are the system's natural frequencies. With each natural frequency $\omega_i$ we can associate the corresponding natural mode $\{y^{(i)}\}$, satisfying the homogeneous set of equations

$$[k - \omega_i^2 m]\{y^{(i)}\} = \{0\} \qquad (9.112)$$

which has (by Eq. 9.111) a nonunique solution, so that

$$\{y^{(i)}\} = \lambda_i\{v^{(i)}\}$$

$\lambda_i$ being arbitrary nonzero constants. We normalize the natural modes by setting

$$\{v^{(i)}\} = \frac{\{y^{(i)}\}}{\bigl[\{y^{(i)}\}^T[m]\{y^{(i)}\}\bigr]^{1/2}} \qquad (9.113)$$

so that

$$\{v^{(i)}\}^T[m]\{v^{(i)}\} = 1$$

The normalized natural modes are referred to as the normal modes. Note that the normal modes associated with different natural frequencies are orthogonal, that is,

$$\{v^{(i)}\}^T[m]\{v^{(j)}\} = 0, \qquad i \neq j$$

Consequently, the two latter equations can be put in condensed form as

$$\{v^{(i)}\}^T[m]\{v^{(j)}\} = \delta_{ij} \qquad (9.114)$$

where $\delta_{ij}$ is Kronecker's delta,

$$\delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases} \qquad (9.115)$$

If the natural frequencies happen to coincide, the corresponding natural modes are not necessarily orthogonal to each other, but can be orthogonalized through their linear combinations (see Example 9.11). Premultiplication of the equation obtained by substitution of Eq. (9.113) in (9.112) by $\{v^{(i)}\}^T$ yields

$$\{v^{(i)}\}^T[k]\{v^{(i)}\} = \omega_i^2 \qquad (9.116)$$

We now define a modal matrix $[v]$, whose $i$th column is the vector $\{v^{(i)}\}$:

$$[v] = \bigl[\{v^{(1)}\}\ \{v^{(2)}\}\ \cdots\ \{v^{(n)}\}\bigr]$$

Equations (9.114) and (9.116) then become, respectively,

$$[v]^T[m][v] = [I], \qquad [v]^T[k][v] = \mathrm{diag}\bigl[\omega_i^2\bigr] \qquad (9.117)$$

where $[I]$ is the identity matrix with elements $\delta_{ij}$ (a diagonal matrix with unities on the diagonal), and $\mathrm{diag}[\omega_i^2]$ the diagonal matrix of the natural frequencies squared.
Consider now the linear transformation

$$\{x\} = [v]\{q\} \qquad (9.118)$$

Since $[v]$ is a constant matrix, we have also $\{\ddot{x}\} = [v]\{\ddot{q}\}$, and Eq. (9.109) becomes

$$[m][v]\{\ddot{q}\} + [k][v]\{q\} = \{0\}$$

Premultiplying this equation by $[v]^T$ and with Eq. (9.117) in mind, we have

$$\{\ddot{q}\} + \mathrm{diag}\bigl[\omega_i^2\bigr]\{q\} = \{0\} \qquad (9.119)$$

For fixed $i$, Eq. (9.119) coincides with that of an undamped single-degree-of-freedom structure. The $q_i(t)$ are called the principal coordinates, and the transformation of the set of coupled equations of motion into an uncoupled one is referred to as the normal mode method.

9.3.2 Deterministic Response via the Normal Mode Method. We shall now deal with the deterministic version of Eq. (9.106). We assume, for the sake of simplicity, that the damping matrix $[c]$ is representable in the form

$$[c] = \alpha[m] + \beta[k] \qquad (9.120)$$

where $\alpha$ and $\beta$ are some nonnegative constants. For a treatment of nonproportional damping, consult, for example, Hurty and Rubinstein. Then, through the transformation (9.118), Eq. (9.106) becomes

$$[m][v]\{\ddot{q}\} + [c][v]\{\dot{q}\} + [k][v]\{q\} = \{f(t)\} \qquad (9.121)$$

or, premultiplying by $[v]^T$ and in view of Eq. (9.117), it takes the form

$$\{\ddot{q}\} + \mathrm{diag}\bigl[\alpha + \beta\omega_i^2\bigr]\{\dot{q}\} + \mathrm{diag}\bigl[\omega_i^2\bigr]\{q\} = [v]^T\{f(t)\}$$

where $\mathrm{diag}[\alpha + \beta\omega_i^2] = \alpha[I] + \beta\,\mathrm{diag}[\omega_i^2]$. Denoting $\alpha + \beta\omega_i^2 = 2\zeta_i\omega_i$, we obtain

$$\{\ddot{q}\} + \mathrm{diag}\bigl[2\zeta_i\omega_i\bigr]\{\dot{q}\} + \mathrm{diag}\bigl[\omega_i^2\bigr]\{q\} = [v]^T\{f(t)\} \qquad (9.122)$$

which represents $n$ uncoupled equations of motion

$$\ddot{q}_j + 2\zeta_j\omega_j\dot{q}_j + \omega_j^2 q_j = \sum_{k=1}^n v_k^{(j)}f_k(t) \qquad (9.123)$$

or, again denoting

$$\varphi_j(t) = \sum_{k=1}^n v_k^{(j)}f_k(t) \qquad (9.124)$$

the equations become

$$\ddot{q}_j + 2\zeta_j\omega_j\dot{q}_j + \omega_j^2 q_j = \varphi_j(t) \qquad (9.125)$$

This equation is analogous to Eq. (9.48) for a damped single-degree-of-freedom structure. For other particular cases in which the damping matrix becomes diagonal, the reader is referred to the paper by Caughey (1960).

Now we refer to Eqs. (9.22) and (9.26), where we replace $m \to 1$, $\omega_0 \to \omega_j$, $\zeta \to \zeta_j$, $\omega_d \to \omega_{dj} = \omega_j(1 - \zeta_j^2)^{1/2}$, $h(t) \to h_j(t)$, to yield

$$h_j(t) = \begin{cases} \dfrac{1}{\omega_j(\zeta_j^2 - 1)^{1/2}}\exp(-\zeta_j\omega_j t)\sinh\bigl[(\zeta_j^2 - 1)^{1/2}\omega_j t\bigr], & \zeta_j > 1 \\[4pt] t\exp(-\omega_j t), & \zeta_j = 1 \\[4pt] \dfrac{1}{\omega_{dj}}\exp(-\zeta_j\omega_j t)\sin(\omega_{dj}t), & \zeta_j < 1 \end{cases} \qquad (9.126)$$
  h_j(t) = (1/ω_{d,j}) exp(−ζ_jω_j t) sin(ω_{d,j} t),  ζ_j < 1,  ω_{d,j} = ω_j(1 − ζ_j²)^{1/2}    (9.126)

and, in accordance with Eq. (9.29),

  q_j(t) = ∫_{−∞}^t φ_j(τ) h_j(t − τ) dτ    (9.127)

The output functions x_l(t), l = 1, 2, …, n, are then found by making use of Eqs. (9.118) and (9.127):

  x_l(t) = Σ_{j=1}^n v_l^(j) q_j(t) = Σ_{j=1}^n v_l^(j) ∫_{−∞}^t φ_j(τ) h_j(t − τ) dτ    (9.128)

For inputs representable by Fourier integrals,

  f_k(t) = ∫_{−∞}^∞ F_k(ω)e^{iωt} dω,  F_k(ω) = (1/2π)∫_{−∞}^∞ f_k(t)e^{−iωt} dt    (9.129)

we have

  φ_j(t) = ∫_{−∞}^∞ Φ_j(ω)e^{iωt} dω,  Φ_j(ω) = (1/2π)∫_{−∞}^∞ φ_j(t)e^{−iωt} dt    (9.130)

where, following Eq. (9.124),

  Φ_j(ω) = Σ_{k=1}^n v_k^(j) F_k(ω)    (9.131)

The steady-state outputs of Eq. (9.123) are representable in the familiar form

  q_j(t) = ∫_{−∞}^∞ Q_j(ω)e^{iωt} dω,  Q_j(ω) = (1/2π)∫_{−∞}^∞ q_j(t)e^{−iωt} dt    (9.132)

and substitution of Eqs. (9.129) and (9.132) in Eq. (9.123) yields

  Q_j(ω) = H_j(ω)Φ_j(ω) = Σ_{k=1}^n v_k^(j) H_j(ω)F_k(ω)    (9.133)

where

  H_j(ω) = [ω_j² − ω² + 2iζ_jω_jω]^{−1}    (9.134)

and

  q_j(t) = ∫_{−∞}^∞ H_j(ω)Φ_j(ω)e^{iωt} dω = Σ_{k=1}^n ∫_{−∞}^∞ v_k^(j) H_j(ω)F_k(ω)e^{iωt} dω    (9.135)

The original generalized coordinates are found, as before, by applying Eq. (9.118):

  x_l(t) = Σ_{j=1}^n v_l^(j) q_j(t) = Σ_{j=1}^n Σ_{k=1}^n v_l^(j) v_k^(j) ∫_{−∞}^∞ H_j(ω)F_k(ω)e^{iωt} dω    (9.136)

which is another form of Eq. (9.128). Equations (9.128) and (9.136) can also be put in a different form. For this purpose we assume that in Eq. (9.128)

  f_k(t) = δ(t)δ_km,  k = 1, 2, …, n    (9.137)

that is, that unit impulse excitation is applied in the direction of the mth generalized coordinate:

  f_m(t) = δ(t),  f_1(t) = ⋯ = f_{m−1}(t) = f_{m+1}(t) = ⋯ = f_n(t) = 0    (9.138)

and denote the response, represented by the (l, m)th impulse response function, by g_lm(t). For φ_j(t) we have, from Eqs. (9.137) and (9.124),

  φ_j(t) = Σ_{k=1}^n v_k^(j) δ(t)δ_km = v_m^(j) δ(t)

Now we substitute this expression into Eq. (9.128) and obtain

  g_lm(t) = Σ_{j=1}^n v_l^(j) ∫_{−∞}^t v_m^(j) δ(τ)h_j(t − τ) dτ = Σ_{j=1}^n v_l^(j) v_m^(j) h_j(t)    (9.139)

In terms of the (l, m)th impulse response function, Eq. (9.128), the response due to the general excitation, reads

  x_l(t) = Σ_{k=1}^n ∫_0^t g_lk(t − τ)f_k(τ) dτ    (9.140)

On the other hand, as in Eq. (9.19), for f_k(t) as per Eq.
(9.137) we have

  F_k(ω) = δ_km/2π,  k = 1, 2, …, n    (9.141)

and, by Eq. (9.136) and the definition of the (l, m)th impulse response function,

  g_lm(t) = (1/2π) Σ_{j=1}^n v_l^(j) v_m^(j) ∫_{−∞}^∞ H_j(ω)e^{iωt} dω    (9.142)

Comparison of Eqs. (9.139) and (9.142) shows that

  h_j(t) = (1/2π)∫_{−∞}^∞ H_j(ω)e^{iωt} dω,  H_j(ω) = ∫_{−∞}^∞ h_j(t)e^{−iωt} dt    (9.143)

which is in perfect analogy with Eqs. (9.20) and (9.21). Denoting now the (l, m)th complex frequency response G_lm(ω) as

  G_lm(ω) = Σ_{j=1}^n v_l^(j) v_m^(j) H_j(ω)    (9.144)

Eq. (9.136) becomes

  x_l(t) = Σ_{k=1}^n ∫_{−∞}^∞ G_lk(ω)F_k(ω)e^{iωt} dω    (9.145)

whereas the relationship between g_lm(t) and G_lm(ω) is readily obtained by comparing Eqs. (9.142) and (9.144):

  g_lm(t) = (1/2π)∫_{−∞}^∞ G_lm(ω)e^{iωt} dω,  G_lm(ω) = ∫_{−∞}^∞ g_lm(t)e^{−iωt} dt    (9.146)

Now we define the impulse response matrix [g(t)] as follows:

  [g(t)] = [g_lm(t)]_{n×n}

and the matrix of the complex frequency responses [G(ω)]:

  [G(ω)] = [G_lm(ω)]_{n×n}

Equations (9.146) then can be put as

  [g(t)] = (1/2π)∫_{−∞}^∞ [G(ω)]e^{iωt} dω,  [G(ω)] = ∫_{−∞}^∞ [g(t)]e^{−iωt} dt    (9.147)

that is, a Fourier transform pair is formed, respectively, by the impulse response function matrix [g(t)] multiplied by 2π and the complex frequency response matrix [G(ω)]. Now, instead of Eqs. (9.140) and (9.145), we can operate with

  {x(t)} = ∫_{−∞}^t [g(t − τ)]{f(τ)} dτ,  {x(t)} = ∫_{−∞}^∞ [G(ω)]{F(ω)}e^{iωt} dω    (9.148)

In Eq. (9.148) the fact that [g(t − τ)] = [0] for t < τ is taken into account, in analogy with the single-degree-of-freedom system.

9.3.3 Random Response via the Normal Mode Method. We now recapitulate the original problem of the random response of a multi-degree-of-freedom system, Eq. (9.106). This linear system transforms the random vector function {F(t)} into another such function {X(t)}. In accordance with Eq.
(9.148), for the realizations {f(t)} and {x(t)} of {F(t)} and {X(t)}, respectively, {X(t)} is representable either as

  {X(t)} = ∫_{−∞}^t [g(t − τ)]{F(τ)} dτ    (9.149)

or as

  {X(t)} = ∫_0^∞ [g(τ)]{F(t − τ)} dτ

The mean vector function {m_X(t)} = E[{X(t)}] is readily obtained as

  {m_X(t)} = ∫_0^∞ [g(τ)]{m_F(t − τ)} dτ

For a nonstationary random vector function {F(t)}, this equation yields {m_X(t)} if {m_F(t)} is given. For a stationary function, {m_F(t)} = {c}, where {c} is a vector of constants, and

  {m_X(t)} = [∫_0^∞ [g(τ)] dτ]{c}    (9.150)

so that {m_X(t)} is also a vector of constants. Without loss of generality, we put {m_F(t)} = {0}, so that {m_X(t)} = {0}. The cross-covariance matrix then coincides with the cross-correlation matrix. [R_X(t₁, t₂)] is then

  [R_X(t₁, t₂)] = E[{X(t₁)}{X(t₂)}^T] = ∫_{−∞}^{t₁}∫_{−∞}^{t₂} [g(t₁ − τ₁)][R_F(τ₁, τ₂)][g(t₂ − τ₂)]^T dτ₁ dτ₂    (9.151)

where T denotes the transpose operation. [R_X(t₁, t₂)] can also be written as follows, in conjunction with the second of Eqs. (9.149):

  [R_X(t₁, t₂)] = ∫_0^∞∫_0^∞ [g(τ₁)][R_F(t₁ − τ₁, t₂ − τ₂)][g(τ₂)]^T dτ₁ dτ₂    (9.152)

Seeking the mean-square values of the random responses X_j(t), we denote the jth row of the matrix [g(t)] as

  ⌊g_j(t)⌋ = [g_j1(t) g_j2(t) ⋯ g_jn(t)]    (9.153)

and Eq. (9.152) yields

  E[X_j²(t)] = ∫_0^∞∫_0^∞ ⌊g_j(τ₁)⌋[R_F(t − τ₁, t − τ₂)]⌊g_j(τ₂)⌋^T dτ₁ dτ₂    (9.154)

If {F(t)} is a stationary vector random function, the cross correlations R_{F_jF_k}(t₁ − τ₁, t₂ − τ₂) in Eq. (9.152) are functions of the difference of the arguments. Hence,

  [R_F(t₁ − τ₁, t₂ − τ₂)] = [R_F(t₂ − t₁ − τ₂ + τ₁)]

Denoting t₂ − t₁ = τ, we have

  [R_X(t₁, t₂)] = ∫_0^∞∫_0^∞ [g(τ₁)][R_F(τ − τ₂ + τ₁)][g(τ₂)]^T dτ₁ dτ₂ = [R_X(τ)]    (9.155)

Again in perfect analogy with the single-degree-of-freedom system, the output of a stable linear multi-degree-of-freedom system is also stationary in the wide sense.
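The modal construction of the impulse response matrix, Eq. (9.139), can be sketched numerically. The 2-DOF values below are assumed for illustration; damping is omitted so that h_j(t) takes the simplest form. Two consequences worth checking are the reciprocity (symmetry) of [g(t)] and the initial conditions [g(0)] = [0], [ġ(0)] = [m]⁻¹.

```python
import numpy as np
from scipy.linalg import eigh, inv

# Hypothetical undamped 2-DOF system (assumed values, illustration only).
m = np.diag([2.0, 1.0])
k = np.array([[3.0, -1.0], [-1.0, 1.0]])
w2, v = eigh(k, m)                  # mass-normalized modes
omega = np.sqrt(w2)

def g(t):
    # Eq. (9.139): g_lm(t) = sum_j v_l^(j) v_m^(j) h_j(t),
    # with h_j(t) = sin(w_j t)/w_j for zeta_j = 0 (Eq. 9.126).
    h = np.sin(omega * t) / omega
    return (v * h) @ v.T

# Reciprocity: response at l to a unit impulse at m equals the response
# at m to a unit impulse at l, so [g(t)] is symmetric.
g07 = g(0.7)
sym_err = np.max(np.abs(g07 - g07.T))

# Since hdot_j(0) = 1 and the modes are mass-normalized,
# [gdot(0)] = [v][v]^T = [m]^{-1}.
gdot0 = v @ v.T
```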
Recalling the Wiener–Khintchine relationships (8.67), (8.74), and (8.78),

  [R_X(τ)] = ∫_{−∞}^∞ [S_X(ω)]e^{iωτ} dω,  [S_X(ω)] = (1/2π)∫_{−∞}^∞ [R_X(τ)]e^{−iωτ} dτ    (9.156)

where [S_X(ω)] is the cross-spectral density matrix of the displacements, we have

  [S_X(ω)] = [G(ω)][S_F(ω)][G*(ω)]^T    (9.157)

which generalizes Eq. (9.35). Bearing in mind Eq. (9.144), which reads in matrix notation as

  [G(ω)] = [v] diag(H_j(ω)) [v]^T    (9.158)

we obtain

  [S_X(ω)] = [v] diag(H_j(ω)) [v]^T [S_F(ω)] [v] diag(H_j*(ω)) [v]^T    (9.159)

This expression can be condensed if we note that, by Eq. (9.131), the following relationship is obtainable between the spectral densities of {Φ} and {F}:

  [S_Φ(ω)] = [v]^T[S_F(ω)][v]    (9.160)

and finally

  [S_X(ω)] = [v] diag(H_j(ω)) [S_Φ(ω)] diag(H_j*(ω)) [v]^T    (9.161)

The cross-correlation matrix becomes, in view of the first of Eqs. (9.156),

  [R_X(τ)] = [v] ∫_{−∞}^∞ diag(H_j(ω)) [S_Φ(ω)] diag(H_j*(ω)) e^{iωτ} dω [v]^T    (9.162)

At τ = 0 we have

  [R_X(0)] = [v] ∫_{−∞}^∞ diag(H_j(ω)) [S_Φ(ω)] diag(H_j*(ω)) dω [v]^T    (9.163)

For componentwise representation, we denote the jth and kth rows of [v], respectively, as

  ⌊v_j⌋ = [v_j^(1) v_j^(2) ⋯ v_j^(n)],  ⌊v_k⌋ = [v_k^(1) v_k^(2) ⋯ v_k^(n)]    (9.164)

Then

  E(X_jX_k) = R_{X_jX_k}(0) = ⌊v_j⌋ ∫_{−∞}^∞ diag(H_α(ω)) [S_Φ(ω)] diag(H_β*(ω)) dω ⌊v_k⌋^T    (9.165)

or

  E(X_jX_k) = Σ_{α=1}^n Σ_{β=1}^n v_j^(α) v_k^(β) E_{αβ}    (9.166)

where

  E_{αβ} = ∫_{−∞}^∞ S_{Φ_αΦ_β}(ω) H_α(ω) H_β*(ω) dω    (9.167)

Equation (9.166) can be rewritten as follows:

  E(X_jX_k) = Σ_{α=1}^n v_j^(α) v_k^(α) E_{αα} + Σ_{α=1}^n Σ_{β=1, β≠α}^n v_j^(α) v_k^(β) E_{αβ}    (9.168)

Since

  S_{Φ_αΦ_β}(ω) = S_{Φ_βΦ_α}*(ω)    (9.169)

we have also

  E_{αβ} = E_{βα}*    (9.170)

For j = k, we obtain the mean-square displacements. For fixed α and β, we can begin by summing the terms containing E_{αβ} and E_{βα} in Eq. (9.168), to give

  v_j^(α) v_j^(β) E_{αβ} + v_j^(β) v_j^(α) E_{βα} = 2 v_j^(α) v_j^(β) Re(E_{αβ})

where Re(E_{αβ}) denotes the real part of E_{αβ}. Consequently, Eq. (9.168) can be rewritten as

  E(|X_j|²) = Σ_{α=1}^n [v_j^(α)]² E_{αα} + 2 Σ_{α=1}^n Σ_{β=1}^{α−1} v_j^(α) v_j^(β) Re(E_{αβ})    (9.171)
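The equivalence of the direct relation (9.157) and its modal form (9.161) can be checked at any single frequency, since both are algebraic identities in ω. The sketch below uses an assumed 2-DOF system with proportional damping (all numbers illustrative); the white-noise excitation is taken on the first mass only.

```python
import numpy as np
from scipy.linalg import eigh, inv

# Hypothetical 2-DOF system with proportional damping (assumed values).
m = np.diag([2.0, 1.0])
k = np.array([[3.0, -1.0], [-1.0, 1.0]])
alpha, beta = 0.0, 0.04
c = alpha * m + beta * k              # Eq. (9.120)

w2, v = eigh(k, m)                    # mass-normalized modal matrix
omega_n = np.sqrt(w2)
zeta = (alpha + beta * w2) / (2 * omega_n)   # from alpha + beta*w_j^2 = 2 zeta_j w_j

S_F = np.array([[1.0, 0.0], [0.0, 0.0]])     # white noise on mass 1 (assumed)

w = 1.3                                       # arbitrary test frequency
H = 1.0 / (omega_n**2 - w**2 + 2j * zeta * omega_n * w)   # Eq. (9.134)

# Direct form: [G(w)] = [k + i w c - w^2 m]^{-1}, then Eq. (9.157).
G = inv(-w**2 * m + 1j * w * c + k)
S_direct = G @ S_F @ np.conj(G).T

# Modal form, Eqs. (9.160)-(9.161).
S_Phi = v.T @ S_F @ v
S_modal = v @ np.diag(H) @ S_Phi @ np.diag(np.conj(H)) @ v.T
```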

If the cross correlations are disregarded, Eq. (9.171) reduces to

  E(|X_j|²) ≈ Σ_{α=1}^n [v_j^(α)]² E_{αα}    (9.172)

9.4 ILLUSTRATION OF THE ROLE OF MODAL CROSS CORRELATIONS

Consider the symmetric two-degrees-of-freedom system shown in Fig. 9.9, in which ideal white-noise excitation F₁(t) with intensity S₀ is applied to the first mass only. In the limiting case e → 0 we should find the result (9.67) for E(|X₁|²), whereas for the mean-square value of the displacement X₂(t) we should find zero, as the response of an unexcited system. The desired mean-square values will be obtained by the normal mode method.

The differential equations governing the motion of the system are (see the free-body diagrams in Fig. 9.10b)

  mẌ₁ + (1 + e)cẊ₁ − ecẊ₂ + (1 + e)kX₁ − ekX₂ = F₁(t)
  mẌ₂ − ecẊ₁ + (1 + e)cẊ₂ − ekX₁ + (1 + e)kX₂ = 0    (9.177)

The mass, stiffness, and damping matrices are, respectively,

  [m] = [m 0; 0 m],  [k] = [k(1+e) −ke; −ke k(1+e)],  [c] = [c(1+e) −ce; −ce c(1+e)]    (9.178)

so that the coefficients α and β in Eq. (9.120) are

  α = 0,  β = c/k    (9.179)

Substituting the matrices [m] and [k] into Eq. (9.111), we arrive at the characteristic equation

  Δ(ω²) = det[k(1+e) − mω², −ke; −ke, k(1+e) − mω²] = (mω²)² − 2(1 + e)kmω² + [(1 + e)² − e²]k² = 0    (9.180)

The natural frequencies are

  ω₁ = (k/m)^{1/2},  ω₂ = [(k/m)(1 + 2e)]^{1/2}    (9.181)

Fig. 9.10. Two-degree-of-freedom symmetric system, illustrating the influence of the cross-correlation terms. (a) When e tends to zero, the natural frequencies of the system shift together toward the natural frequency of a single-degree-of-freedom system. (b) The derivation of Eqs. (9.177).

For the first natural mode we obtain from Eq. (9.112), on substituting ω₁²,

  [k(1+e) − mω₁², −ke; −ke, k(1+e) − mω₁²]{y₁^(1); y₂^(1)} = {0; 0}

which in turn yields

  {y^(1)} = A₁{1; 1}

A₁ being an arbitrary nonzero real number. For the second natural mode we obtain from Eq. (9.112), on substituting ω₂²,

  [k(1+e) − mω₂², −ke; −ke, k(1+e) − mω₂²]{y₁^(2); y₂^(2)} = {0; 0}

so that

  {y^(2)} = A₂{1; −1}

We now normalize the natural modes by letting, as in Eq.
(9.113),

  {v^(1)} = λ₁{1; 1},  {v^(2)} = λ₂{1; −1}

and construct the modal matrix. The first of Eqs. (9.117) yields λ₁² · 2m = λ₂² · 2m = 1, so that λ₁ = λ₂ = (2m)^{−1/2} and

  [v] = (2m)^{−1/2}[1 1; 1 −1]    (9.182)

and the matrix diag(ω_j²) is

  diag(ω_j²) = (k/m)[1 0; 0 1 + 2e]    (9.183)

The spectral matrix of the excitations in the original coordinates X₁(t) and X₂(t) reads

  [S_F(ω)] = [S₀ 0; 0 0]    (9.184)

and in the normal coordinates, as in Eq. (9.160),

  [S_Φ(ω)] = [v]^T[S_F(ω)][v] = (S₀/2m)[1 1; 1 1]    (9.185)

Before proceeding to determine [S_X(ω)], we first consider the expression

  diag(H_j(ω))[S_Φ(ω)]diag(H_j*(ω)) = (S₀/2m)[|H₁(ω)|²  H₁(ω)H₂*(ω); H₂(ω)H₁*(ω)  |H₂(ω)|²]    (9.186)

For ⌊v₁⌋ and ⌊v₂⌋ we have

  ⌊v₁⌋ = (2m)^{−1/2}[1 1],  ⌊v₂⌋ = (2m)^{−1/2}[1 −1]    (9.187)

and from Eq. (9.161) we obtain

  S_{X₁}(ω) = ⌊v₁⌋ diag(H_j(ω))[S_Φ(ω)]diag(H_j*(ω)) ⌊v₁⌋^T
           = (S₀/4m²){|H₁(ω)|² + |H₂(ω)|² + [H₁(ω)H₂*(ω) + H₁*(ω)H₂(ω)]}    (9.188)

  S_{X₂}(ω) = ⌊v₂⌋ diag(H_j(ω))[S_Φ(ω)]diag(H_j*(ω)) ⌊v₂⌋^T
           = (S₀/4m²){|H₁(ω)|² + |H₂(ω)|² − [H₁(ω)H₂*(ω) + H₁*(ω)H₂(ω)]}    (9.189)

Using the identity H₁(ω)H₂*(ω) + H₁*(ω)H₂(ω) = 2 Re H₁(ω)H₂*(ω), the expressions for S_{X₁}(ω) and S_{X₂}(ω) can be written as

  S_{X₁}(ω) = (S₀/4m²)[|H₁(ω)|² + |H₂(ω)|² + 2 Re H₁(ω)H₂*(ω)]
  S_{X₂}(ω) = (S₀/4m²)[|H₁(ω)|² + |H₂(ω)|² − 2 Re H₁(ω)H₂*(ω)]    (9.190)

The spectral densities of the velocities are

  S_{Ẋ₁}(ω) = ω²S_{X₁}(ω),  S_{Ẋ₂}(ω) = ω²S_{X₂}(ω)    (9.191)

For the mean-square values we obtain, using the corresponding spectral densities,

  E(|X₁|²) = (S₀/4m²){∫_{−∞}^∞ [|H₁(ω)|² + |H₂(ω)|²] dω + 2∫_{−∞}^∞ Re H₁(ω)H₂*(ω) dω}    (9.192)

  E(|Ẋ₁|²) = (S₀/4m²){∫_{−∞}^∞ ω²[|H₁(ω)|² + |H₂(ω)|²] dω + 2∫_{−∞}^∞ ω² Re H₁(ω)H₂*(ω) dω}    (9.193)

  E(|X₂|²) = (S₀/4m²){∫_{−∞}^∞ [|H₁(ω)|² + |H₂(ω)|²] dω − 2∫_{−∞}^∞ Re H₁(ω)H₂*(ω) dω}    (9.194)

  E(|Ẋ₂|²) = (S₀/4m²){∫_{−∞}^∞ ω²[|H₁(ω)|² + |H₂(ω)|²] dω − 2∫_{−∞}^∞ ω² Re H₁(ω)H₂*(ω) dω}    (9.195)

From Eq. (9.67) we have, with ω₀ → ω_j, ζ → ζ_j, and m → 1,

  ∫_{−∞}^∞ |H_j(ω)|² dω = π/(2ζ_jω_j³)    (9.196)

Now, to evaluate the integral

  2∫_{−∞}^∞ Re H₁(ω)H₂*(ω) dω
    = ∫_{−∞}^∞ 2{[(−iω)² + ω₁²][(iω)² + ω₂²] + 4ζ₁ζ₂ω₁ω₂(−iω)(iω)} / {|(iω)² + 2ζ₁ω₁(iω) + ω₁²|² |(iω)² + 2ζ₂ω₂(iω) + ω₂²|²} dω
(9.197)

we resort to Eq. (C.1) in Appendix C. We first note that the roots of the equation (iω)² + 2ζ₁ω₁(iω) + ω₁² = 0 have positive imaginary parts, that is, they lie in the upper half of the ω plane. Hence we denote

  L(iω) = [(iω)² + 2ζ₁ω₁(iω) + ω₁²][(iω)² + 2ζ₂ω₂(iω) + ω₂²]
        = (iω)⁴ + 2(ζ₁ω₁ + ζ₂ω₂)(iω)³ + (ω₁² + ω₂² + 4ζ₁ζ₂ω₁ω₂)(iω)² + 2(ζ₁ω₁ω₂² + ζ₂ω₂ω₁²)(iω) + ω₁²ω₂²

so that the a_k's in Eq. (C.2) have the form

  a₀ = 1,  a₁ = 2(ζ₁ω₁ + ζ₂ω₂),  a₂ = ω₁² + ω₂² + 4ζ₁ζ₂ω₁ω₂,  a₃ = 2ω₁ω₂(ζ₁ω₂ + ζ₂ω₁),  a₄ = ω₁²ω₂²

The numerator polynomial in Eq. (C.2) reads

  S₄(ω) = 2{[(−iω)² + ω₁²][(iω)² + ω₂²] + 4ζ₁ζ₂ω₁ω₂(−iω)(iω)}

and hence

  b₀ = 0,  b₁ = 2,  b₂ = −2(ω₁² + ω₂² − 4ζ₁ζ₂ω₁ω₂),  b₃ = 2ω₁²ω₂²

Substitution in Eq. (C.6) yields

  2∫_{−∞}^∞ Re H₁(ω)H₂*(ω) dω = 8π(ζ₁ω₁ + ζ₂ω₂) / {(ω₁² − ω₂²)² + 4[ζ₁ζ₂ω₁ω₂(ω₁² + ω₂²) + (ζ₁² + ζ₂²)ω₁²ω₂²]}    (9.198)

The mean-square displacements are

  E(|X₁|²) = (πS₀/4m²){1/(2ζ₁ω₁³) + 1/(2ζ₂ω₂³) + 8(ζ₁ω₁ + ζ₂ω₂)/[(ω₁² − ω₂²)² + 4(ζ₁ζ₂ω₁ω₂(ω₁² + ω₂²) + (ζ₁² + ζ₂²)ω₁²ω₂²)]}    (9.199)

  E(|X₂|²) = (πS₀/4m²){1/(2ζ₁ω₁³) + 1/(2ζ₂ω₂³) − 8(ζ₁ω₁ + ζ₂ω₂)/[(ω₁² − ω₂²)² + 4(ζ₁ζ₂ω₁ω₂(ω₁² + ω₂²) + (ζ₁² + ζ₂²)ω₁²ω₂²)]}    (9.200)

Note that the modal autocorrelation terms in these expressions are not affected by the cross correlations. Analogously, from Eq. (9.68) we have, with ω₀ → ω_j, ζ → ζ_j, and m → 1,

  ∫_{−∞}^∞ ω²|H_j(ω)|² dω = π/(2ζ_jω_j)    (9.201)

Using again Eq. (C.6), we arrive at the following expression for the cross-correlation term:

  2∫_{−∞}^∞ ω² Re H₁(ω)H₂*(ω) dω = 8πω₁ω₂(ζ₁ω₂ + ζ₂ω₁) / {(ω₁² − ω₂²)² + 4[ζ₁ζ₂ω₁ω₂(ω₁² + ω₂²) + (ζ₁² + ζ₂²)ω₁²ω₂²]}    (9.202)

The mean-square velocities are

  E(|Ẋ₁|²) = (πS₀/4m²){1/(2ζ₁ω₁) + 1/(2ζ₂ω₂) + 8ω₁ω₂(ζ₁ω₂ + ζ₂ω₁)/[(ω₁² − ω₂²)² + 4(ζ₁ζ₂ω₁ω₂(ω₁² + ω₂²) + (ζ₁² + ζ₂²)ω₁²ω₂²)]}    (9.203)
  E(|Ẋ₂|²) = (πS₀/4m²){1/(2ζ₁ω₁) + 1/(2ζ₂ω₂) − 8ω₁ω₂(ζ₁ω₂ + ζ₂ω₁)/[(ω₁² − ω₂²)² + 4(ζ₁ζ₂ω₁ω₂(ω₁² + ω₂²) + (ζ₁² + ζ₂²)ω₁²ω₂²)]}    (9.204)

Substituting the explicit expressions (9.181) for the natural frequencies, together with the damping coefficients ζ_j = cω_j/2k, we arrive at the following equations for the mean-square values in terms of the original parameters:

  E(|X₁|²) = (πS₀/4kc){1 + 1/(1 + 2e)² + 2(c²/km)/[e²/(1 + e) + (1 + 2e)c²/km]}    (9.205a)

  E(|X₂|²) = (πS₀/4kc){1 + 1/(1 + 2e)² − 2(c²/km)/[e²/(1 + e) + (1 + 2e)c²/km]}    (9.205b)

  E(|Ẋ₁|²) = (πS₀/4mc){1 + 1/(1 + 2e) + 2(c²/km)/[e²/(1 + 2e) + (1 + e)c²/km]}    (9.205c)

  E(|Ẋ₂|²) = (πS₀/4mc){1 + 1/(1 + 2e) − 2(c²/km)/[e²/(1 + 2e) + (1 + e)c²/km]}    (9.205d)

For e tending to zero, E(|X₁|²) → πS₀/kc and E(|Ẋ₁|²) → πS₀/mc; that is, we again have the results (9.67) and (9.68) for the single-degree-of-freedom system. Moreover, E(|X₂|²) → 0 and E(|Ẋ₂|²) → 0, since the second mass becomes a separate, unexcited system. These results are expected, as noted above.

The first two terms in Eqs. (9.205) are associated with the modal autocorrelations, and the third term with the modal cross correlation. Significantly, the contributions of the two for e = 1, c²/km = 0.01 are 96.72% and 3.28%, respectively. However, for e tending to zero, they tend to contribute equally, so that the error incurred by disregarding the cross correlations is 50%. With the cross correlations omitted we can no longer arrive at the results (9.67) and (9.68) associated with the single-degree-of-freedom system.

The percentage error η₁ in evaluating E(|X₁|²), defined as

  η₁ = {[E(|X₁|²)_exact − E(|X₁|²)_approximate]/E(|X₁|²)_exact} × 100%    (9.206)

where E(|X₁|²)_exact is given by the first of Eqs. (9.205) and E(|X₁|²)_approximate by Eq. (9.172),

  E(|X₁|²)_approximate = (πS₀/4kc)[1 + 1/(1 + 2e)²]

obtained by omitting the cross-correlation term, is plotted in Fig. 9.11.

Fig. 9.11. Percentage error associated with omission of the modal cross correlations in evaluation of E(|X₁|²) for different values of c²/km.
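The closed form (9.205a) can be checked against direct numerical integration of S₀|H₁(ω)|², the route of Eq. (9.215) below, with H₁ obtained by inverting the impedance matrix. All parameter values here are assumed for illustration.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative (assumed) parameters.
m, k, c, e, S0 = 1.0, 1.0, 0.1, 0.5, 1.0
gamma = c**2 / (k * m)

M = m * np.eye(2)
K = k * np.array([[1 + e, -e], [-e, 1 + e]])
C = (c / k) * K                     # proportional damping, Eq. (9.179)

def H1_abs2(w):
    # First column of the inverse impedance matrix gives H1, H2 (Eq. 9.214).
    H = np.linalg.solve(-w**2 * M + 1j * w * C + K, np.array([1.0, 0.0]))
    return abs(H[0])**2

w1, w2 = np.sqrt(k / m), np.sqrt((1 + 2 * e) * k / m)   # resonances, Eq. (9.181)
E_num = 2 * S0 * quad(H1_abs2, 0.0, 300.0, points=[w1, w2], limit=400)[0]

# Closed form, Eq. (9.205a).
E_closed = (np.pi * S0 / (4 * k * c)) * (
    1.0 + 1.0 / (1 + 2 * e)**2
    + 2 * gamma / (e**2 / (1 + e) + (1 + 2 * e) * gamma))
```

The `points` argument flags the two resonance peaks for the adaptive quadrature; the integrand is even in ω, hence the factor 2.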
The error incurred by omission of the cross-correlation terms is even larger for the second mass. The corresponding percentage error η₂, defined in analogy with Eq. (9.206), tends to infinity as e approaches zero. Indeed, when e → 0, E(|X₂|²)_exact → 0, whereas

  E(|X₂|²)_approximate = (πS₀/4kc)[1 + 1/(1 + 2e)²] → πS₀/2kc    (9.207)

and consequently η₂ → ∞. η₂ is plotted in Fig. 9.12. (Note the analogy with Example 6.12.)

Fig. 9.12. Percentage error associated with omission of the modal cross correlations in evaluation of E(|X₂|²) for different values of c²/km.

The reason for this rather dramatic contribution is revealed by Eq. (9.198) for the cross-correlation term: as e → 0, the natural frequencies crowd together, as is seen from Eq. (9.181), and in these circumstances the contribution of the cross-correlation term is of the same order of magnitude as that of its autocorrelation counterpart. By contrast, when the natural frequencies are far apart, the cross-correlation term is smaller than the autocorrelation terms by a factor of order ζ²ω₁ω₂/(ω₂ − ω₁)² and may be omitted. In other words, the cross-correlation term can be omitted if the following strong inequality holds:

  ζ₁ω₁ + ζ₂ω₂ ≪ |ω₂ − ω₁|    (9.208)

This qualitative statement is due to Bolotin.

Let us now consider what happens when the coupling coefficient e tends to infinity. The two-degrees-of-freedom system then degenerates into a single-degree-of-freedom system (Fig. 9.13).

Fig. 9.13. When e tends to infinity in the two-degrees-of-freedom system shown in Fig. 9.9, the two masses act in concert as a single one (a) under half the original force, and the system behaves as in (b).

The relevant differential equation reads

  mẌ₁ + cẊ₁ + kX₁ = (1/2)F₁(t)    (9.209)

that is, the resulting system behaves as a single-degree-of-freedom system with half the original force F₁(t).
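The limiting behavior just described can be checked directly on the closed forms: as e grows, Eqs. (9.205a) and (9.205c) should approach the rigid-coupling results of Eq. (9.210), πS₀/4kc and πS₀/4mc, while as e → 0 the displacement result should approach the single-mass value πS₀/kc. A sketch (parameter values assumed):

```python
import numpy as np

# Illustrative (assumed) parameters.
m, k, c, S0 = 1.0, 2.0, 0.3, 1.0
gamma = c**2 / (k * m)

def EX1(e):   # Eq. (9.205a)
    return (np.pi * S0 / (4 * k * c)) * (
        1 + 1 / (1 + 2 * e)**2
        + 2 * gamma / (e**2 / (1 + e) + (1 + 2 * e) * gamma))

def EV1(e):   # Eq. (9.205c)
    return (np.pi * S0 / (4 * m * c)) * (
        1 + 1 / (1 + 2 * e)
        + 2 * gamma / (e**2 / (1 + 2 * e) + (1 + e) * gamma))

limit_X = np.pi * S0 / (4 * k * c)   # rigid-coupling limits, Eq. (9.210)
limit_V = np.pi * S0 / (4 * m * c)
```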
Since the spectral density of F₁(t) is S₀, that of (1/2)F₁(t) is S₀/4, and Eqs. (9.67) and (9.68) yield, respectively, the mean-square displacement and velocity

  E(|X₁|²) = πS₀/4kc,  E(|Ẋ₁|²) = πS₀/4mc    (9.210)

These results are deducible from the first and third of Eqs. (9.205) when e → ∞. In this limiting case the cross-correlation term tends to zero.

For a relatively large value of e, when the natural frequencies are far apart and the cross-correlation term may be disregarded, the following interesting conclusions follow from Eqs. (9.205):

  E(|X₁|²) = E(|X₂|²),  E(|Ẋ₁|²) = E(|Ẋ₂|²)    (9.211)

implying that the mean-square displacements of the two masses are equal, although the excitation force is applied to only one of them! The same is the case with the mean-square velocities. An analogous, and equally unexpected at first glance, finding was arrived at by Crandall and Wittig for some continuous systems, and will be discussed in Chapter 10.

The symmetric distribution of the mean-square displacements and velocities, when the modal cross correlations may be disregarded, can be explained by the symmetry of the structure about the axis through the midpoint of the coupling spring. In fact, as is seen from the modal matrix [v] of Eq. (9.182), the first normal mode is associated with movement of both masses in the same direction with equal amplitudes, v₁^(1) = v₂^(1) (symmetric normal mode). The coupling spring then behaves as though it were free, while the masses behave as though they were uncoupled. The second normal mode is associated with movement of the masses in opposite directions with equal amplitudes, so that v₁^(2) = −v₂^(2) (antisymmetric normal mode). Equation (9.172) contains, however, only squares of the amplitudes corresponding to the symmetric and antisymmetric modes, and therefore

  [v₁^(1)]² = [v₂^(1)]²,  [v₁^(2)]² = [v₂^(2)]²

As a consequence, we arrive at Eq. (9.211) directly from Eq. (9.172).
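The near-symmetry (9.211) and the strict inequality that replaces it for finite coupling can both be seen numerically from Eqs. (9.205a) and (9.205b). Below, e = 2 and c²/km = 0.01 (assumed illustrative values): the two mean squares nearly coincide, yet E(|X₁|²) remains strictly the larger.

```python
import numpy as np

# Illustrative (assumed) parameters with c^2/km = 0.01.
m, k, c, S0 = 1.0, 1.0, 0.1, 1.0
gamma = c**2 / (k * m)
e = 2.0

auto = 1 + 1 / (1 + 2 * e)**2                              # autocorrelation part
cross = 2 * gamma / (e**2 / (1 + e) + (1 + 2 * e) * gamma)  # cross-correlation part
E1 = (np.pi * S0 / (4 * k * c)) * (auto + cross)   # Eq. (9.205a)
E2 = (np.pi * S0 / (4 * k * c)) * (auto - cross)   # Eq. (9.205b)

rel_diff = (E1 - E2) / E1    # small but strictly positive, cf. Eq. (9.212)
```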
It is worth emphasizing that such a symmetric response, despite the lack of symmetry in the excitation, is an approximate result. However small (but nonzero) the cross correlation,

  E(|X₁|²) > E(|X₂|²),  E(|Ẋ₁|²) > E(|Ẋ₂|²)    (9.212)

Only at e → ∞ do we have E(|X₁|²) = E(|X₂|²). However, already for e = 2 and c²/km ≪ 1, E(|X₁|²) ≈ E(|X₂|²), and they differ from the e → ∞ result only by about 4%. E(|Ẋ₁|²) also approximately equals E(|Ẋ₂|²), the difference from the corresponding e → ∞ result being about 20%. Note that approximate results like (9.211) are characteristic of a symmetric system; they are no longer valid for a nonsymmetric system (see Prob. 9.4).

Example 9.10

Having solved the problem of determining a probabilistic response by the normal mode approach, we will demonstrate how it can be evaluated directly, without recourse to this approach, for the example of the two-degrees-of-freedom system considered in Sec. 9.4. We first put in Eq. (9.177)

  F₁(t) = e^{iωt}    (9.213)

with a view to determining (as in Sec. 9.1) the complex frequency responses, or receptances, H₁(ω) and H₂(ω), so that

  X₁(t) = H₁(ω)e^{iωt},  X₂(t) = H₂(ω)e^{iωt}    (9.214)

Substitution of Eqs. (9.213) and (9.214) in (9.177) yields

  [−mω² + i(1 + e)cω + (1 + e)k]H₁(ω) + [−iecω − ek]H₂(ω) = 1
  [−iecω − ek]H₁(ω) + [−mω² + i(1 + e)cω + (1 + e)k]H₂(ω) = 0

with the solution

  H₁(ω) = [−mω² + i(1 + e)cω + (1 + e)k]/Δ(ω),  H₂(ω) = e(icω + k)/Δ(ω)

where

  Δ(ω) = m²ω⁴ − 2iω³(1 + e)cm − ω²[(1 + 2e)c² + 2(1 + e)km] + 2iω(1 + 2e)ck + (1 + 2e)k²

The second pair of frequency responses is obtained as follows:

  H_{Ẋ₁}(ω) = iωH₁(ω),  H_{Ẋ₂}(ω) = iωH₂(ω)

and the mean-square values are

  E(|X_j|²) = ∫_{−∞}^∞ |H_j(ω)|²S_F(ω) dω,  E(|Ẋ_j|²) = ∫_{−∞}^∞ ω²|H_j(ω)|²S_F(ω) dω

Since these quantities were already obtained in Sec. 9.4 by the normal mode approach, we shall deal here only with E(|X₁|²). For S_F(ω) = S₀,

  E(|X₁|²) = S₀∫_{−∞}^∞ |H₁(ω)|² dω = S₀∫_{−∞}^∞ {[−mω² + (1 + e)k]² + c²ω²(1 + e)²}/|Δ(ω)|² dω    (9.215)

This integral is evaluated according to Appendix C and yields a result coincident with Eq. (9.205a).
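The quartic expansion of Δ(ω) given in Example 9.10 can be verified against the determinant of the impedance matrix at an arbitrary frequency; the parameter values below are assumed, chosen only to exercise the identity.

```python
import numpy as np

# Illustrative (assumed) parameters and an arbitrary test frequency.
m, k, c, e = 2.0, 3.0, 0.4, 0.7
w = 1.9

# Impedance matrix of the coupled equations (9.177) under F1 = exp(i w t).
Z = np.array([
    [-m * w**2 + 1j * (1 + e) * c * w + (1 + e) * k, -1j * e * c * w - e * k],
    [-1j * e * c * w - e * k, -m * w**2 + 1j * (1 + e) * c * w + (1 + e) * k],
])
det_direct = np.linalg.det(Z)

# Expanded form of Delta(w) from Example 9.10.
delta = (m**2 * w**4
         - 2j * w**3 * (1 + e) * c * m
         - w**2 * ((1 + 2 * e) * c**2 + 2 * (1 + e) * k * m)
         + 2j * w * (1 + 2 * e) * c * k
         + (1 + 2 * e) * k**2)
```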
For further examples of direct evaluation, see the book by Crandall and Mark. This direct evaluation method accounts automatically for both modal auto- and cross correlations. For many-degrees-of-freedom systems, however, its realization becomes cumbersome, and use of the normal mode method is then advisable.

Example 9.11

Consider now a system with coincident natural frequencies (see Fig. 9.14). White-noise excitation is applied to the bottom mass. We are interested in the mean-square value R₁₁ = E(|Ẋ₁|²) and in the error incurred by omission of the cross-correlation term.

Fig. 9.14. Three-degrees-of-freedom system with two coincident natural frequencies.

The equations of motion are readily obtainable as Eq. (9.106) with

  [m] = diag(m, m, m),  [c] = diag(c, c, c),  [k] = [3k −k −k; −k 3k −k; −k −k 3k]    (9.216)

The natural frequencies are found from

  det[3k − mω², −k, −k; −k, 3k − mω², −k; −k, −k, 3k − mω²] = 0

which reduces to

  (4k − mω²)[(3k − mω²)(2k − mω²) − 2k²] = 0

The natural frequencies are

  ω₁² = k/m,  ω₂² = ω₃² = 4k/m    (9.217)

The natural mode associated with the first natural frequency is

  {y^(1)} = A₁[1 1 1]^T

whereas that with the second is

  {y^(2)} = A₂[1 1 −2]^T

The third mode can be found from the requirement that it be orthogonal to the first two. Denoting {y^(3)} = A₃[1 a b]^T, we have

  {y^(3)}^T[m]{y^(1)} = mA₃A₁(1 + a + b) = 0
  {y^(3)}^T[m]{y^(2)} = mA₃A₂(1 + a − 2b) = 0

so that b = 0 and a = −1. Therefore,

  {y^(3)} = A₃[1 −1 0]^T

The modal matrix becomes

  [v] = (6m)^{−1/2}[√2 1 √3; √2 1 −√3; √2 −2 0]

Substituting {X} = [v]{Q} in Eq. (9.106) and premultiplying the result by [v]^T, we obtain

  [v]^T[m][v]{Q̈} + [v]^T[c][v]{Q̇} + [v]^T[k][v]{Q} = [v]^T{F}    (9.218)

After manipulations, we find

  [v]^T[m][v] = [I],  [v]^T[c][v] = (c/m)[I],  [v]^T[k][v] = (k/m) diag(1, 4, 4)

Equations (9.218) finally become

  Q̈₁ + (c/m)Q̇₁ + (k/m)Q₁ = F₁(t)/(3m)^{1/2}
  Q̈₂ + (c/m)Q̇₂ + (4k/m)Q₂ = F₁(t)/(6m)^{1/2}
  Q̈₃ + (c/m)Q̇₃ + (4k/m)Q₃ = F₁(t)/(2m)^{1/2}    (9.219)

The correlation matrix of the generalized velocities Q̇₁(t), Q̇₂(t), Q̇₃(t), [r] = E({Q̇}{Q̇}^T), is readily obtainable as

  r₁₁ = πS₀/3c,  r₂₂ = πS₀/6c,  r₃₃ = πS₀/2c

and, for j ≠ k,

  r_jk = 4πS_jk ω_jω_k(ζ_jω_k + ζ_kω_j) / {(ω_j² − ω_k²)² + 4[ζ_jζ_kω_jω_k(ω_j² + ω_k²) + (ζ_j² + ζ_k²)ω_j²ω_k²]}    (9.220)

where S_jk is the cross-spectral density of the jth and kth modal excitations,

  S₁₂ = S₀/3√2m,  S₁₃ = S₀/√6m,  S₂₃ = S₀/2√3m

The frequency of the first mode is well separated from that of the second and third modes, so that r₁₂ and r₁₃ are negligible, whereas for the coincident frequencies ω₂ = ω₃ (with ζ₂ = ζ₃) Eq. (9.220) yields

  r₂₃ = πS₂₃/(2ζ₂ω₂) = πS₀/2√3c = r₃₃/√3

Now we return to the sought quantities:

  [R] = E({Ẋ}{Ẋ}^T) = E([v]{Q̇}([v]{Q̇})^T) = [v]E({Q̇}{Q̇}^T)[v]^T = [v][r][v]^T

the expression for R₁₁ being

  R₁₁ = (1/6m)(2r₁₁ + r₂₂ + 3r₃₃ + 2√2 r₁₂ + 2√6 r₁₃ + 2√3 r₂₃) = (1/6m)(2r₁₁ + r₂₂ + 3r₃₃ + 2r₃₃)

Omission of the cross-correlation term r₂₃ = r₃₃/√3 entails the error

  η = 2r₃₃/(2r₁₁ + r₂₂ + 5r₃₃) × 100% = 30%

PROBLEMS

9.1. A single-degree-of-freedom system (Eq. 9.48) is subjected to a random load (see figure) with exponential autocorrelation function R_F(τ) = παS₀e^{−α|τ|}.
(a) Find the mean-square values of the displacement, velocity, and acceleration.
(b) Check that for α → ∞, the values obtained in (a) coincide with those of a system under ideal white-noise excitation.

Problem 9.1

9.2. A cantilever with a concentrated mass attached to its tip is subjected to random loading with autocorrelation function in the form of ideal white noise with intensity S₀. The cantilever itself is massless. Geometric dimensions are indicated in the accompanying figure. Find the mean-square displacement and the mean-square velocity.

Problem 9.2

9.3. A road vehicle travels with uniform velocity v on a rough surface (see figure) and in the process is subjected to a time-varying displacement excitation. The roughness of the profile is a random function of the coordinate x, hence both the excitation and the response of the vehicle are random. Derive an expression for the response mean-square value if the autocorrelation function of the profile is R(x₁, x₂) = d² exp(−α|x₂ − x₁|).

Problem 9.3

9.4. Consider a nonsymmetric two-degrees-of-freedom system under ideal white-noise excitation F₁(t).
(a) Verify that the error incurred by disregarding the cross correlations in evaluating E(|X₁|²) reaches 40% when e tends to zero.
(b) Verify that at e = 1 the cross-correlation terms can still be omitted, but E(|X₁|²) differs from E(|X₂|²), unlike the example of the symmetric system considered in Sec. 9.4.

Problem 9.4

9.5. The system shown in the figure is subjected to random loading with the autocorrelation function

  R_F(τ) = d²e^{−α|τ|}[cos βτ + (α/β) sin β|τ|]

The masses are attached to a massless beam simply supported at its ends, with stiffness modulus EI. Find E(|X₁|²) and E(|X₂|²).

Problem 9.5

9.6. A beam is clamped at its ends, but free sliding is permitted in the axial direction. The beam itself is massless. The stiffness modulus of the beam between the masses is εEI, where ε is a nonnegative parameter. F₁(t) represents an ideal white-noise excitation with intensity S₀.
(a) Examine the dependence of E(|X₁|²) on the parameter ε.
(b) Verify that for ε → 0 the result of Prob. 9.2 is obtained.

Problem 9.6

9.7. Find E(|X₁|²) and E(|X₂|²) for the system shown in the figure, where R_F(τ) = e^{−α|τ|}. Use the approximate method for determining the mean-square values.

Problem 9.7

9.8. The system shown in the figure is subjected to ideal white-noise excitation.
(a) Verify that the modal matrix is

  [v] = A[1 1; 1 + √2  1 − √2]

9.9. An n-degrees-of-freedom system is subjected to ideal white-noise loading with intensity S₀. Each mass is connected both with the ground and with the other masses, the springs being identical with stiffness k. Damping proportional to the mass is provided by a dashpot attached to each mass, the damping coefficient being c.
(a) Verify that this system possesses n − 1 identical natural frequencies.
where the corresponding natural frequencies are ω₁² = (2 − √2)k/m, ω₂² = (2 + √2)k/m.
(b) Verify that E(|X₁|²) = E(|X₂|²) and E(|Ẋ₁|²) = E(|Ẋ₂|²).

[This completes Problem 9.8; for Problem 9.9:]
(b) Estimate E(|X₁|²) in the particular case n = 4 (for n = 3 the problem reduces to Example 9.11).

Problem 9.8

Problem 9.9

CITED REFERENCES

Barnoski, R. L., and Maurer, J. R., "Mean-Square Response of Simple Mechanical Systems to Nonstationary Random Excitation," ASME J. Appl. Mech., 36, 221–227 (1969).
Bendat, J. S., and Piersol, A. G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, New York, 1971, Chap. 9.
Bolotin, V. V., Statistical Methods in Structural Mechanics, State Publ. House for Building, Architecture and Building Materials, Moscow, 1965 (translated into English by S. Aroni, Holden-Day, San Francisco, 1969), p. 122.
Caughey, T. K., "Classical Normal Modes in Damped Linear Dynamic Systems," ASME J. Appl. Mech., 27, 269–271 (1960).
Caughey, T. K., and Stumpf, H. J., "Transient Response of a Dynamic System under Random Excitation," ASME J. Appl. Mech., 28, 563–566 (1961).
Crandall, S. H., and Mark, W. D., Random Vibration in Mechanical Systems, Academic Press, New York, 1963, p. 79.
Crandall, S. H., and Wittig, L., "Chladni's Patterns for Random Vibration of a Plate," in G. Herrmann and N. Perrone, Eds., Dynamic Response of Structures, Pergamon Press, New York, 1972, pp. 55–71.
Holman, R. E., and Hart, G. C., "Structural Response to Segmented Nonstationary Random Excitation," AIAA J., 10, 1473–1478 (1972).
Hurty, W. C., and Rubinstein, M. F., Dynamics of Structures, Prentice-Hall, Englewood Cliffs, NJ, 1964, pp. 313–337.
Meirovitch, L., Elements of Vibration Analysis, McGraw-Hill, New York, 1975.
Priestley, M. B., "Power Spectral Analysis of Nonstationary Random Processes," J. Sound Vibration, 6, 86–97 (1967).
Warburton, G. B., Dynamic Behaviour of Structures, 2nd ed., Pergamon Press, Oxford, 1976, p. 33.

RECOMMENDED FURTHER READING

Caughey, T. K., "Nonlinear Theory of Random Vibrations," Advan. Appl. Mech., 11, 209–253 (1971).
Corotis, R.
B., and Vanmarcke, E. H., "Time-Dependent Spectral Content of System Response," J. Eng. Mech. Div., Proc. ASCE, 101, 623–637 (1975).
Eringen, A. C., "Response of Tall Buildings to Random Earthquakes," Proc. 3rd U.S. Nat. Congr. Appl. Mech., ASME, 1958, pp. 141–151.
Hammond, J. K., "On the Response of Single and Multidegree of Freedom Systems to Non-Stationary Random Excitations," J. Sound Vibration, 7, 393–416 (1968).
Iwan, W. D., "Response of Multi-Degree-of-Freedom Yielding Systems," J. Eng. Mech. Div., Proc. ASCE, 94, 421–437 (1968).
Lin, Y. K., Probabilistic Theory of Structural Dynamics, McGraw-Hill, New York, 1967. Chap. 5: Linear Structures with Single Degree of Freedom, pp. 110–154; Chap. 6: Linear Structures with Finitely Many Degrees of Freedom, pp. 155–202; Chap. 8: Nonlinear Structures, pp. 253–292.
Madsen, P. H., and Krenk, S., "Stationary and Transient Response Statistics," The Danish Center for Applied Mathematics and Mechanics, Rept. No. 194, Oct. 1980.
Newland, D. E., An Introduction to Random Vibrations and Spectral Analysis, Longman, London, 1975. Chap. 6: Excitation-Response Relations for Linear Systems, pp. 53–66; Chap. 7: Transmission of Random Vibration, pp. 67–81.
Robson, J. D., "The Random Vibration Response of a System Having Many Degrees of Freedom," Aeronaut. Quart., 17, 21–30 (1966).
Sagirow, P., Stochastic Methods in the Dynamics of Satellites, Intern. Centre Mech. Sci., Udine, Course No. 57, Springer, Vienna, 1970.
Vanmarcke, E. H., "Some Recent Developments in Random Vibration," Appl. Mech. Rev., 32(10), 1197–1202 (1979).

chapter 10

Random Vibration of Continuous Structures

Having so far confined ourselves to discrete structures, it is natural now to proceed to continuous structures, in which excitation and response are generally random not only in time but also in space.
The solution via the normal mode method will be derived, and the effect of cross correlation between different modes will be shown again to be of marked importance.

10.1 RANDOM FIELDS

In Chapter 9 we referred to a family of random variables depending on a single deterministic parameter t as a random function X(t). In applications t is usually time, in which case X(t) is referred to as a random process. A family of random variables depending on more than one deterministic parameter will be called a random field. Examples of such functions of space and time are the ordinates of the sea surface, stresses induced by turbulent boundary-layer pressure, and so on. We will confine ourselves to at most four deterministic parameters: time t and a point in space r = xi for the one-dimensional case, r = xi + yj for the two-dimensional case, and r = xi + yj + zk for the most general, three-dimensional case. In analogy to the definition of a stationary function in the wide sense, we will consider a random field as homogeneous if its mathematical expectation is constant and the autocorrelation function*

  R_x(r₁, r₂; t₁, t₂) = E[x(r₁, t₁)x(r₂, t₂)]    (10.1)

depends only on the differences ρ = r₂ − r₁ and τ = t₂ − t₁:

  R_x(r₁, r₂; t₁, t₂) = R_x(r₂ − r₁, t₂ − t₁) = R_x(ρ, τ)    (10.2)

where

  ρ = r₂ − r₁ = (x₂ − x₁)i + (y₂ − y₁)j + (z₂ − z₁)k = ξ₁i + ξ₂j + ξ₃k    (10.3)

*In this chapter the random functions are denoted by lowercase symbols, whereas capitals are reserved for the corresponding random spectra.

For a random field that is stationary in the wide sense in time, we define the cross-spectral density

  S_x(ω, r₁, r₂) = (1/2π)∫_{−∞}^∞ R_x(τ, r₁, r₂)e^{−iωτ} dτ    (10.4)

where

  R_x(τ, r₁, r₂) = ∫_{−∞}^∞ S_x(ω, r₁, r₂)e^{iωτ} dω    (10.5)

For a random field that is both stationary in the wide sense in time and homogeneous in the wide sense in space,

  S_x(ω, κ) = (1/(2π)⁴)∫∫ R_x(τ, ρ)exp(−iκ·ρ − iωτ) dρ dτ    (10.6)

  R_x(τ, ρ) = ∫∫ S_x(ω, κ)exp(iκ·ρ + iωτ) dκ dω    (10.7)

where

  κ = κ₁i + κ₂j + κ₃k,  ρ = ξ₁i + ξ₂j + ξ₃k    (10.8)

and κ·ρ is an inner product,

  κ·ρ = κ₁ξ₁ + κ₂ξ₂ + κ₃ξ₃

For the one- and two-dimensional cases, the integrals in Eqs. (10.6) and (10.7) are double and triple, respectively.

Example 10.1

Consider a plate infinite in both directions under distributed loading q(t, x, y), where t denotes time and x and y the space coordinates of a point in the
)e7!8" dr (10.4) where (TET) = fs 5.(w, F158, eX ™ dw (10.5) For a random field that is both stationary in the wide sense in time and homogeneous in the wide sense in space, S,(w, &) > p)exp(—ik + p — iwr) dp dr (10.6) Ry(1,0) = ff ff S.C, wexplie+p + ior) dede (10.7) —b0 where k= Kit Kj + gk e=fi+6j+ &k (10.8) and «*p is an inner product, wep = Kb, + Habs + Waly For the one- and two-dimensional cases, the integrals in Eqs. (10.6) and (10.7) are double and triple, respectively. Example 10.1 Consider a plate infinite in both directions under distributed loading q(t, x, y), where ¢ denotes time and x and y the space coordinates of a point in the 386 = RANDOM VIBRATION OF CONTINUOUS STRUCTURES middle surface of the plate. Assume that g(t, x, y) is a homogeneous field with zero mathematical expectation. We represent the loading by its expansion alt.x,y) = ff f expli(ot + mx + e2y)}O(w, m1, Kp) do de, dey (10.9) The autocorrelation function (coincident with the autocovariance function at zero mathematical expectation) reads Reg ths Xt Ins tay Xay Ya) = ELaM(ths Ms i) ata» Xa» 2) | = fff fff et0%tovsive2) 0m) xexp[—i(wr, + Kx, + 2 y))] xexp[i(w't, + Kix, + «4y2)] X dw dx, diy dos! de' di’, (10.10) We note that this function depends on the differences + = t, — t),€ = x2 — x), &, = y) — 1, providing the following relation is valid: E[O*(0, ey, k2)O(w', Kj, &2)] = S,(00, 15 Hy) 800" — w(K} — m1) 8(1eg — a) (10.11) Then Ror, 1 ba) = fff S,(os 41, Wadexp[i(or + 018, + Kaf2)] do die dey (10.12) which is analogous to Eq. (10.7). The spectral density S,(w, «,, x.) becomes S,(, i, y) = i JL J Rao £1.) xexp[—i(w7 + 1€, + K2€)] drdé,dé, (10.13) The displacement of the plate w(t, x, y) is governed by the differential equation aw ga by) (10.14) otw atw atw —t =z | + ph (& ax2 ay? ay4} ? RANDOM FIELDS 387 where Eh? 12(1 — »?) 
h is the thickness of the plate, » Poisson’s ratio, p the mass density, and E Young’s modulus, treated as a complex quantity D (10.15) E=E,(1 + ip) (10.16) incorporating the structural damping effect. (Compare the treatment of the stiffness coefficient k in Example 9.6.) For the displacement w(t, x, y) we have the following expansion w= Jf fesolitoe + yx + Kay) Wo, y, 2) dw dn, dn, (10.17) Substitution of Eqs. (10.9) and (10.17) in (10.14) yields Q(e, te, &2) W(w, eK) = D(«? + 42)" — pho? (10.18) Thanks to Eqs. (10.11) and (10.18), the autocorrelation function of the dis- placements can be put in the following form: Fp expli(wr + m8 + k2£2)] exp[i(w Ry (1,812) = ff f SPACER BS Baba 5 (we, ya) deo de deg -3 |[D(x? +03) - phos| (10.19) For the autocorrelation function involving the time lag r only, we have R,(1r) = R,,(7,0, 0) t cu) -f pier —5,(w, Ky, ty) dw de, dre, =o |D(«? + 03)” - pha?| (10.20) Consider the case where the loading is represented by spacewise white noise, that is, R(t, b1s €2) = Ro(7)8(€1)5(E2) (10.21) 388 = RANDOM VIBRATION OF CONTINUOUS STRUCTURES Equation (10.13) then yields 1 se ~iwe S,(@, 1, 2) = S,(w) = raf Ralrde dr where S,(w) is a spectral density. Then Eq. (10.20) becomes RA(7) =f” S,(w)e"A(e) do dk, dk, Greece (10.22) ~ 2 |D( x? + 43)? — pho? Introducing the new variables k,=rcos@ =k ,=rsin@ z=r? then 7 dz 2m A(o) =47"a9 [°° ————___# ____ if f D?(1 + p?)z4 — 2phw*D,2? + (phw)? For evaluation of the integral we use the formula (Gradshteyn and Ryzhik, p. 293) t dz _ meos(a/2) a (4) ens b 0 at+bz*+cz* 2cd?sina c 2(ae)'”? and the final result (due to Pal’mov) is 1+ (14 py?) 2 A(w) = [recs ey]? = (10.23) 2V2 phlw*u|(D,ph)'? 2 ph|w*a|( D,ph)'/ where D, = #3E,/12(1 — v?) and p <1. The autocorrelation function be- comes nna [7 7S, (w)exp(iwr) des (10.24) -2 2ph|a%u|(D,ph)'”? and depends on the particular form of S,(w). As is seen from Eq. 
(10.24), the vibration intensity is a function of damping, mass density, and stiffness, and moreover decreases as these parameters increase.

10.2 NORMAL MODE METHOD

We illustrate this method on an example of a uniform beam under distributed random loading q(x, t), stationary in the wide sense in time, so that its mathematical expectation is independent of time and the autocorrelation function depends only on the time lag τ. The relevant differential equation reads

$$EI\frac{\partial^4 w}{\partial x^4} + c\frac{\partial w}{\partial t} + \rho A\frac{\partial^2 w}{\partial t^2} = q(x, t) \qquad (10.25)$$

where E is Young's modulus, I the moment of inertia, c the viscous damping coefficient, ρ the mass density, A the cross-sectional area, and w(x, t) the displacement. This equation is supplemented by the boundary conditions at the ends of the beam. For a free end, both the bending moment M_z and the shear force V_z vanish:

$$M_z = EI\frac{\partial^2 w}{\partial x^2} = 0, \qquad V_z = EI\frac{\partial^3 w}{\partial x^3} = 0 \qquad (10.26)$$

For a simply supported end, the displacement w and the bending moment M_z vanish:

$$w = 0, \qquad M_z = EI\frac{\partial^2 w}{\partial x^2} = 0 \qquad (10.27)$$

Finally, for a clamped end, the displacement and the slope of the displacement curve vanish:

$$w = 0, \qquad \frac{\partial w}{\partial x} = 0 \qquad (10.28)$$

The boundary conditions can be put as

$$P_i(w) = 0, \qquad \begin{cases} i = 1, 2 & \text{for } x = 0\\ i = 3, 4 & \text{for } x = l \end{cases} \qquad (10.29)$$

where l is the span of the beam and P_i(···) are some operators. For example, for a beam simply supported at its ends, we have

$$P_1(w) = P_3(w) = 1\cdot w = w, \qquad P_2(w) = P_4(w) = EI\frac{\partial^2 w}{\partial x^2} \qquad (10.30)$$

Additional conditions are the initial values at t = 0, which for convenience we take as zero:

$$w(x, t)\Big|_{t=0} = 0, \qquad \frac{\partial w(x, t)}{\partial t}\Big|_{t=0} = 0 \qquad (10.31)$$

10.2.1 Free Undamped Vibration

We first consider the free undamped vibration problem. Accordingly, we put c = 0 and q(x, t) = 0 in Eq.
(10.25), to obtain

$$EI\frac{\partial^4 w}{\partial x^4} + \rho A\frac{\partial^2 w}{\partial t^2} = 0$$

We put also w(x, t) = W(x, ω)e^{iωt}, which results in

$$EI\frac{d^4 W}{dx^4} - \rho A\omega^2 W = 0 \qquad (10.32)$$

This is an ordinary differential equation, with a solution of the form

$$W(x, \omega) = C_1\sin rx + C_2\cos rx + C_3\sinh rx + C_4\cosh rx \qquad (10.33)$$

where

$$r = \left(\frac{\rho A\omega^2}{EI}\right)^{1/4}$$

This solution must satisfy the boundary conditions (10.29), and since the C_i are functions only of ω we have the set

$$C_1 P_j(\sin rx) + C_2 P_j(\cos rx) + C_3 P_j(\sinh rx) + C_4 P_j(\cosh rx) = 0, \qquad j = 1, 2, 3, 4$$

The above are homogeneous equations of the system in terms of C₁, C₂, C₃, C₄, and a nontrivial solution is conditional on the following determinant vanishing:

$$\begin{vmatrix}
P_1(\sin rx) & P_1(\cos rx) & P_1(\sinh rx) & P_1(\cosh rx)\\
P_2(\sin rx) & P_2(\cos rx) & P_2(\sinh rx) & P_2(\cosh rx)\\
P_3(\sin rx) & P_3(\cos rx) & P_3(\sinh rx) & P_3(\cosh rx)\\
P_4(\sin rx) & P_4(\cos rx) & P_4(\sinh rx) & P_4(\cosh rx)
\end{vmatrix} = 0 \qquad (10.34)$$

For example, for the beam simply supported at its ends, we have, in accordance with Eq. (10.27),

$$\begin{vmatrix}
0 & 1 & 0 & 1\\
0 & -r^2 & 0 & r^2\\
s & c & S & C\\
-r^2 s & -r^2 c & r^2 S & r^2 C
\end{vmatrix} = 0 \qquad (10.35)$$

where s = sin rl, c = cos rl, S = sinh rl, and C = cosh rl. Evaluation of the determinant yields s = 0, which means rl = jπ, j = 1, 2, ..., with

$$\omega_j = \left(\frac{j\pi}{l}\right)^2\left(\frac{EI}{\rho A}\right)^{1/2} \qquad (10.36)$$

where ω_j is the jth natural frequency. The corresponding mode shape W(x, ω_j) turns out to be

$$W(x, \omega_j) = \sin\frac{j\pi x}{l} \qquad (10.37)$$

so that j is the number of half-waves over the span of the beam. The first three mode shapes for this case are shown in Fig. 10.1. As another example, consider the beam clamped at both its ends. Then, instead of Eq. (10.35) we have

$$\begin{vmatrix}
0 & 1 & 0 & 1\\
r & 0 & r & 0\\
s & c & S & C\\
rc & -rs & rC & rS
\end{vmatrix} = 0$$

Fig. 10.1. Mode shapes of beam simply supported at its ends.
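The j² scaling of the simply supported beam's natural frequencies in Eq. (10.36) is easy to check numerically. The sketch below uses illustrative beam data (steel-like values assumed here, not taken from the text):

```python
import math

# Natural frequencies of a simply supported uniform beam, Eq. (10.36):
#   omega_j = (j*pi/l)**2 * sqrt(E*I/(rho*A))
# Illustrative data (assumed): steel beam, SI units.
E = 2.1e11     # Young's modulus, Pa
I = 8.0e-6     # moment of inertia, m^4
rho = 7800.0   # mass density, kg/m^3
A = 1.0e-2     # cross-sectional area, m^2
l = 3.0        # span, m

def omega(j):
    """jth natural frequency (rad/s) of the simply supported beam."""
    return (j * math.pi / l) ** 2 * math.sqrt(E * I / (rho * A))

freqs = [omega(j) for j in range(1, 4)]
# The frequencies scale as j^2: omega_j / omega_1 = j**2.
ratios = [f / freqs[0] for f in freqs]
print(freqs, ratios)
```

The printed ratios confirm that the second and third modes lie at 4 and 9 times the fundamental frequency, a direct consequence of rl = jπ.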
which is equivalent to

$$1 - \cos rl\,\cosh rl = 0$$

The solutions of this equation are

$$r_1 l = 4.7300, \quad r_2 l = 7.8532, \quad r_3 l = 10.9956, \quad r_4 l = 14.1371, \quad r_5 l = 17.2787, \quad r_6 l = 20.4203, \quad r_j l \simeq \left(j + \tfrac{1}{2}\right)\pi \qquad (10.38)$$

with the natural frequencies

$$\omega_j = r_j^2\left(\frac{EI}{\rho A}\right)^{1/2} \qquad (10.39)$$

The appropriate mode shapes are

$$W(x, \omega_j) = \sin r_j x - \sinh r_j x + A_j(\cos r_j x - \cosh r_j x), \qquad A_j = \frac{\cos r_j l - \cosh r_j l}{\sin r_j l + \sinh r_j l} \qquad (10.40)$$

The first three mode shapes for this case are shown in Fig. 10.2. Hereinafter we will denote the mode shapes by ψ_j(x). Note that in accordance with Eq. (10.32) we have

$$EI\frac{d^4\psi_j(x)}{dx^4} - \rho A\omega_j^2\psi_j(x) = 0 \qquad (10.41)$$

Fig. 10.2. Mode shapes of beam with clamped ends.

We now seek the orthogonality condition for the different mode shapes:

$$\int_0^l \rho A\,\psi_j(x)\psi_k(x)\,dx = 0, \qquad j \neq k \qquad (10.42)$$

To do this, we multiply Eq. (10.41) by ψ_k(x) and integrate with respect to x over the span:

$$\omega_j^2\int_0^l \rho A\,\psi_j(x)\psi_k(x)\,dx = \int_0^l \psi_k(x)\,EI\frac{d^4\psi_j(x)}{dx^4}\,dx \qquad (10.43)$$

Double integration of the right-hand side of this equation by parts yields

$$\omega_j^2\int_0^l \rho A\,\psi_j(x)\psi_k(x)\,dx = \left[\psi_k(x)\,EI\frac{d^3\psi_j(x)}{dx^3} - \frac{d\psi_k(x)}{dx}\,EI\frac{d^2\psi_j(x)}{dx^2}\right]_0^l + \int_0^l EI\frac{d^2\psi_j(x)}{dx^2}\frac{d^2\psi_k(x)}{dx^2}\,dx \qquad (10.44)$$

Since the nonintegral terms vanish for all combinations of free, simply supported, and clamped ends at x = 0 and x = l, the latter equation reduces to

$$\omega_j^2\int_0^l \rho A\,\psi_j(x)\psi_k(x)\,dx = \int_0^l EI\frac{d^2\psi_j(x)}{dx^2}\frac{d^2\psi_k(x)}{dx^2}\,dx \qquad (10.45)$$

Analogously, we have for the kth natural frequency

$$\omega_k^2\int_0^l \rho A\,\psi_j(x)\psi_k(x)\,dx = \int_0^l EI\frac{d^2\psi_j(x)}{dx^2}\frac{d^2\psi_k(x)}{dx^2}\,dx \qquad (10.46)$$

Comparison of the last two equations indicates that, because of the nonequality of ω_j and ω_k, Eq. (10.42), the orthogonality condition, is valid.

10.2.2 Deterministic Response via the Normal Mode Method

Let us now consider the deterministic version of Eq. (10.25). To do this we expand the given distributed loading q(x, t) in a series in terms of the mode shapes of undamped free vibration:

$$q(x, t) = \sum_{j=1}^{\infty} q_j(t)\psi_j(x) \qquad (10.47)$$

Multiplying the above by ψ_k(x) and integrating over the span of the beam, we arrive at an expression which, with k replaced by j, becomes

$$q_j(t) = \frac{1}{\nu_j}\int_0^l q(x, t)\psi_j(x)\,dx, \qquad \nu_j = \int_0^l \psi_j^2(x)\,dx \qquad (10.48)$$

We also expand the displacement w(x, t) in a familiar series:

$$w(x, t) = \sum_{j=1}^{\infty} w_j(t)\psi_j(x) \qquad (10.49)$$

Substitution of Eqs. (10.47) and (10.49) in (10.25) yields

$$\sum_{j=1}^{\infty}\left[EI\,w_j(t)\frac{d^4\psi_j(x)}{dx^4} + c\frac{dw_j(t)}{dt}\psi_j(x) + \rho A\frac{d^2w_j(t)}{dt^2}\psi_j(x) - q_j(t)\psi_j(x)\right] = 0$$

With Eq. (10.41) in mind, we obtain

$$\sum_{j=1}^{\infty}\left[\rho A\frac{d^2w_j(t)}{dt^2} + c\frac{dw_j(t)}{dt} + \rho A\omega_j^2 w_j(t) - q_j(t)\right]\psi_j(x) = 0$$

and since this equation is valid for any x, the expression in square brackets vanishes for every j, and

$$\frac{d^2w_j(t)}{dt^2} + 2\beta_j\frac{dw_j(t)}{dt} + \omega_j^2 w_j(t) = \frac{1}{\rho A}q_j(t), \qquad j = 1, 2, \ldots \qquad (10.50)$$

where

$$2\beta_j = \frac{c}{\rho A} \qquad (10.51)$$

q_j(t) being generalized deterministic forces. Equation (10.50) for a beam has precisely the same structure as Eqs. (9.125) for a discrete system. Accordingly, the solution to Eqs. (10.50) may be put in the form

$$w_j(t) = \frac{1}{\rho A}\int_0^t q_j(\tau)\,h_j(t-\tau)\,d\tau \qquad (10.52)$$

where h_j(t) is the impulse response function associated with the jth mode as per Eq. (9.127), and Eq. (10.49) becomes

$$w(x, t) = \sum_{j=1}^{\infty}\frac{\psi_j(x)}{\rho A\nu_j}\int_0^t h_j(t-\tau)\,d\tau\int_0^l q(\xi, \tau)\psi_j(\xi)\,d\xi$$

or, denoting t − τ = λ, and noting that shifting the lower limit from 0 to −∞ does not affect the integral over λ,

$$w(x, t) = \sum_{j=1}^{\infty}\frac{\psi_j(x)}{\rho A\nu_j}\int_{-\infty}^{\infty} h_j(\lambda)\,d\lambda\int_0^l q(\xi, t-\lambda)\psi_j(\xi)\,d\xi \qquad (10.53)$$

For the generalized forces, representable in the form

$$q_j(t) = \int_{-\infty}^{\infty} Q_j(\omega)e^{i\omega t}\,d\omega, \qquad Q_j(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} q_j(t)e^{-i\omega t}\,dt \qquad (10.54)$$

we seek the displacement in an analogous manner:

$$w_j(t) = \int_{-\infty}^{\infty} W_j(\omega)e^{i\omega t}\,d\omega, \qquad W_j(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} w_j(t)e^{-i\omega t}\,dt \qquad (10.55)$$

Substituting Eqs. (10.54) and (10.55) in Eq. (10.50), we find

$$W_j(\omega) = H_j(\omega)Q_j(\omega) \qquad (10.56)$$

$$H_j(\omega) = \frac{1}{L_j(\omega)}, \qquad L_j(\omega) = \rho A(\omega_j^2 - \omega^2 + 2i\beta_j\omega) \qquad (10.57)$$

where H_j(ω) is the complex frequency response associated with the jth mode, and L_j(ω) the mechanical impedance of this mode.

10.2.3 Random Response via the Normal Mode Method
The formal solutions derived in the preceding section represent the response space-time function w(x, t) due to a particular excitation function q(x, t). In the case of random vibration, we regard the excitation as an ensemble of space-time functions, while the ensemble of response functions constitutes a random field depending on the excitation and the system. The mathematical expectation E[q(x, t)] = m_q(x, t) and the autocorrelation function

$$E[q(x_1, t_1)\,q(x_2, t_2)] = R_q(x_1, x_2; t_1, t_2)$$

of the force are supposed to be given. The problem consists in determining the mathematical expectation and the autocorrelation function of the displacements. The mathematical expectation of the displacements is

$$E[w(x, t)] = \sum_{j=1}^{\infty}\frac{\psi_j(x)}{\rho A\nu_j}\int_{-\infty}^{\infty} h_j(\lambda)\,d\lambda\int_0^l m_q(\xi, t-\lambda)\psi_j(\xi)\,d\xi$$

If the excitation is stationary, then

$$m_q(\xi, t-\lambda) = m_q(\xi)$$

that is, m_q is independent of the time coordinate, whence

$$E[w(x, t)] = \sum_{j=1}^{\infty}\frac{\psi_j(x)}{\rho A\nu_j}\int_{-\infty}^{\infty} h_j(\lambda)\,d\lambda\int_0^l m_q(\xi)\psi_j(\xi)\,d\xi$$

For random excitation, q_j(t), j = 1, 2, ..., is replaced by an infinite-dimensional vector, so that

$$E[q_j(t)] = \frac{1}{\nu_j}\int_0^l m_q(\xi)\psi_j(\xi)\,d\xi \qquad (10.58)$$

so that if m_q(x, t) is zero, so is also E[q_j(t)]. For simplicity, we assume the latter to be the case. The cross correlation between q_j(t₁) and q_k(t₂) is, for j, k = 1, 2, ...,

$$R_{q_jq_k}(t_1, t_2) = E[q_j(t_1)\,q_k(t_2)] = \frac{1}{\nu_j\nu_k}\int_0^l\!\!\int_0^l E[q(x_1, t_1)\,q(x_2, t_2)]\,\psi_j(x_1)\psi_k(x_2)\,dx_1\,dx_2$$
$$= \frac{1}{\nu_j\nu_k}\int_0^l\!\!\int_0^l R_q(x_1, x_2; t_1, t_2)\,\psi_j(x_1)\psi_k(x_2)\,dx_1\,dx_2$$

where x₁ and x₂ are dummy variables representing two distinct points of the beam, 0 ≤ x₁, x₂ ≤ l. The cross-spectral density of the generalized forces is, correspondingly,

$$S_{Q_jQ_k}(\omega) = \frac{1}{\nu_j\nu_k}\int_0^l\!\!\int_0^l S_q(x_1, x_2, \omega)\,\psi_j(x_1)\psi_k(x_2)\,dx_1\,dx_2 \qquad (10.68)$$

An equation for the displacement analogous to (10.63) follows from Eqs. (10.49) and (10.55):

$$w(x, t) = \sum_{j=1}^{\infty}\psi_j(x)\int_{-\infty}^{\infty} W_j(\omega)e^{i\omega t}\,d\omega$$

The corresponding autocorrelation function reads

$$R_w(x_1, x_2; t_1, t_2) = E\left[\sum_{j=1}^{\infty}\psi_j(x_1)\int_{-\infty}^{\infty} W_j^*(\omega_1)e^{-i\omega_1 t_1}\,d\omega_1\sum_{k=1}^{\infty}\psi_k(x_2)\int_{-\infty}^{\infty} W_k(\omega_2)e^{i\omega_2 t_2}\,d\omega_2\right]$$

With Eq.
(10.56), we have

$$R_w(x_1, x_2; t_1, t_2) = E\left[\sum_{j=1}^{\infty}\psi_j(x_1)\int_{-\infty}^{\infty} H_j^*(\omega_1)Q_j^*(\omega_1)e^{-i\omega_1 t_1}\,d\omega_1\sum_{k=1}^{\infty}\psi_k(x_2)\int_{-\infty}^{\infty} H_k(\omega_2)Q_k(\omega_2)e^{i\omega_2 t_2}\,d\omega_2\right]$$
$$= \sum_{j=1}^{\infty}\sum_{k=1}^{\infty}\psi_j(x_1)\psi_k(x_2)\int\!\!\int_{-\infty}^{\infty} H_j^*(\omega_1)H_k(\omega_2)\,E[Q_j^*(\omega_1)Q_k(\omega_2)]\exp(-i\omega_1 t_1 + i\omega_2 t_2)\,d\omega_1\,d\omega_2$$

Taking into account Eq. (10.64), we have

$$R_w(x_1, x_2; t_1, t_2) = \sum_{j=1}^{\infty}\sum_{k=1}^{\infty}\psi_j(x_1)\psi_k(x_2)\int_{-\infty}^{\infty} S_{Q_jQ_k}(\omega)H_j^*(\omega)H_k(\omega)e^{i\omega\tau}\,d\omega = R_w(x_1, x_2, \tau) \qquad (10.69)$$

so that the response also turns out to be stationary in the wide sense in time. For the cross-spectral density of the displacements, we have, in perfect analogy with Eq. (10.66),

$$R_w(x_1, x_2, \tau) = \int_{-\infty}^{\infty} S_w(x_1, x_2, \omega)e^{i\omega\tau}\,d\omega \qquad (10.70a)$$

$$S_w(x_1, x_2, \omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_w(x_1, x_2, \tau)e^{-i\omega\tau}\,d\tau \qquad (10.70b)$$

Therefore,

$$S_w(x_1, x_2, \omega) = \sum_{j=1}^{\infty}\sum_{k=1}^{\infty} S_{Q_jQ_k}(\omega)\,H_j^*(\omega)H_k(\omega)\,\psi_j(x_1)\psi_k(x_2) \qquad (10.71)$$

The cross-spectral density can be put in the form

$$S_w(x_1, x_2, \omega) = \sum_{j=1}^{\infty}\sum_{k=1}^{\infty}\frac{l^2 S_q(\omega)}{\nu_j\nu_k\,L_j^*(\omega)L_k(\omega)}\,A_{Q_jQ_k}(\omega)\,\psi_j(x_1)\psi_k(x_2), \qquad S_q(\omega) \equiv S_q(0, 0, \omega) \qquad (10.72)$$

where A_{Q_jQ_j}(ω) is called the joint acceptance and is defined as

$$A_{Q_jQ_j}(\omega) = \frac{S_{Q_jQ_j}(\omega)\,\nu_j^2}{l^2 S_q(\omega)} = \frac{1}{l^2}\int_0^l\!\!\int_0^l\frac{S_q(x_1, x_2, \omega)}{S_q(\omega)}\,\psi_j(x_1)\psi_j(x_2)\,dx_1\,dx_2 \qquad (10.73)$$

while A_{Q_jQ_k}(ω), j ≠ k, is called the cross acceptance and is defined as

$$A_{Q_jQ_k}(\omega) = \frac{S_{Q_jQ_k}(\omega)\,\nu_j\nu_k}{l^2 S_q(\omega)} = \frac{1}{l^2}\int_0^l\!\!\int_0^l\frac{S_q(x_1, x_2, \omega)}{S_q(\omega)}\,\psi_j(x_1)\psi_k(x_2)\,dx_1\,dx_2 \qquad (10.74)$$

The cross-spectral density at point x is given by

$$S_w(x, x, \omega) = \sum_{j=1}^{\infty}\frac{\psi_j^2(x)\,l^2 S_q(\omega)}{\nu_j^2\,|L_j(\omega)|^2}\,A_{Q_jQ_j}(\omega) + \sum_{j=1}^{\infty}\sum_{\substack{k=1\\ k\neq j}}^{\infty}\frac{\psi_j(x)\psi_k(x)\,l^2 S_q(\omega)}{\nu_j\nu_k\,L_j^*(\omega)L_k(\omega)}\,A_{Q_jQ_k}(\omega) \qquad (10.75)$$

The first sum is associated with the modal autocorrelations, identical modes being involved, and the second with the modal cross correlations, since nonidentical modes are involved. The mean-square value of the displacements is

$$d_w^2(x) = R_w(x, x, 0) = \int_{-\infty}^{\infty} S_w(x, x, \omega)\,d\omega = \sum_{j=1}^{\infty}\psi_j^2(x)\,d_{jj} + \sum_{j=1}^{\infty}\sum_{\substack{k=1\\ k\neq j}}^{\infty}\psi_j(x)\psi_k(x)\,d_{jk} \qquad (10.76)$$

where

$$d_{jj} = \int_{-\infty}^{\infty}\frac{l^2 S_q(\omega)\,A_{Q_jQ_j}(\omega)}{\nu_j^2\,|L_j(\omega)|^2}\,d\omega, \qquad d_{jk} = \int_{-\infty}^{\infty}\frac{l^2 S_q(\omega)\,A_{Q_jQ_k}(\omega)}{\nu_j\nu_k\,L_j^*(\omega)L_k(\omega)}\,d\omega \qquad (10.77)$$

Fig. 10.3. Change of variables (Sec. 10.3).
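For wideband excitation the modal integrals of Eq. (10.77) are dominated by the resonant peak of |L_j(ω)|⁻². With L_j(ω) = ρA(ω_j² − ω² + 2iβ_jω) and ideal white noise, the underlying frequency integral has the classical closed form π/(2β_jω_j²), divided by (ρA)². A quick numerical check (a sketch; the modal values chosen are arbitrary):

```python
import numpy as np

# Frequency integral behind the modal variance d_jj for white noise:
#   I_j = integral over omega of 1 / [(omega_j^2 - omega^2)^2 + 4 beta_j^2 omega^2]
# Closed form: pi / (2 * beta_j * omega_j**2).
omega_j, beta_j = 10.0, 0.5   # illustrative (assumed) modal values

w = np.linspace(-400.0, 400.0, 800_001)
f = 1.0 / ((omega_j**2 - w**2)**2 + 4.0 * beta_j**2 * w**2)
h = w[1] - w[0]
numeric = float(np.sum((f[:-1] + f[1:]) * 0.5 * h))  # trapezoid rule

exact = np.pi / (2.0 * beta_j * omega_j**2)
print(numeric, exact)
```

The truncation of the infinite frequency range at ±400 rad/s is harmless here because the integrand decays as ω⁻⁴; the two printed values agree to several significant digits.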
Of interest is also the span average s̄² of the mean-square values d_w²(x):

$$\bar{s}^2 = \frac{1}{l}\int_0^l d_w^2(x)\,dx = \sum_{j=1}^{\infty}\frac{d_{jj}}{l}\int_0^l\psi_j^2(x)\,dx + \sum_{j=1}^{\infty}\sum_{\substack{k=1\\ k\neq j}}^{\infty}\frac{d_{jk}}{l}\int_0^l\psi_j(x)\psi_k(x)\,dx \qquad (10.78)$$

The second sum vanishes identically, by orthogonality of the mode shapes (10.42), and we have

$$\bar{s}^2 = \sum_{j=1}^{\infty}\frac{\nu_j}{l}\,d_{jj} \qquad (10.79)$$

that is, s̄² depends on the joint acceptances only. A case where the exact mean-square value of the displacement is also independent of the cross acceptances is described in Sec. 10.4.

10.3 DETERMINATION OF JOINT AND CROSS ACCEPTANCES

While the double integrations in Eqs. (10.73) and (10.74) for the joint and cross acceptances mostly call for numerical procedures, in certain specific cases they can be realized in closed form, depending on the cross-spectral density S_q(x₁, x₂, ω) and the mode shapes ψ_j(x). We consider here one such common case, namely, where the external loading q(x, t) is a homogeneous random field, that is, the cross-spectral density is a function of the separation x₂ − x₁. Here the double integration can be reduced to a single one, irrespective of the mode shapes ψ_j(x). First of all we rewrite Eq. (10.74) as follows:

$$A_{Q_jQ_k}(\omega) = \frac{1}{l^2}\int_0^l\!\!\int_0^l\frac{S_q(x_2 - x_1, \omega)}{S_q(\omega)}\,\psi_j(x_1)\psi_k(x_2)\,dx_1\,dx_2$$

and introduce the new variables

$$z_1 = \frac{x_1 + x_2}{2}, \qquad z_2 = x_2 - x_1$$

(see Fig. 10.3). Then

$$dx_1\,dx_2 = \operatorname{abs}\frac{\partial(x_1, x_2)}{\partial(z_1, z_2)}\,dz_1\,dz_2 = dz_1\,dz_2$$

since the Jacobian of the transformation equals unity, and

$$A_{Q_jQ_k}(\omega) = K_1(\omega) + K_2(\omega)$$

where

$$K_1(\omega) = \frac{1}{l^2}\int_0^l\frac{S_q(z_2, \omega)}{S_q(\omega)}\,dz_2\int_{z_2/2}^{l - z_2/2}\psi_j\!\left(z_1 - \frac{z_2}{2}\right)\psi_k\!\left(z_1 + \frac{z_2}{2}\right)dz_1$$

$$K_2(\omega) = \frac{1}{l^2}\int_{-l}^0\frac{S_q(z_2, \omega)}{S_q(\omega)}\,dz_2\int_{-z_2/2}^{l + z_2/2}\psi_j\!\left(z_1 - \frac{z_2}{2}\right)\psi_k\!\left(z_1 + \frac{z_2}{2}\right)dz_1$$

Introducing further the new variables ξ = |z₂| and η = z₁, we obtain

$$K_1(\omega) = \frac{1}{l^2}\int_0^l\frac{S_q(\xi, \omega)}{S_q(\omega)}\,d\xi\int_{\xi/2}^{l-\xi/2}\psi_j\!\left(\eta - \frac{\xi}{2}\right)\psi_k\!\left(\eta + \frac{\xi}{2}\right)d\eta$$

$$K_2(\omega) = \frac{1}{l^2}\int_0^l\frac{S_q(-\xi, \omega)}{S_q(\omega)}\,d\xi\int_{\xi/2}^{l-\xi/2}\psi_j\!\left(\eta + \frac{\xi}{2}\right)\psi_k\!\left(\eta - \frac{\xi}{2}\right)d\eta$$

and finally, for S_q even in the separation,

$$A_{Q_jQ_k}(\omega) = \frac{1}{l^2}\int_0^l\frac{S_q(\xi, \omega)}{S_q(\omega)}\,M_{jk}(\xi)\,d\xi$$
$$M_{jk}(\xi) = \int_{\xi/2}^{l-\xi/2}\left[\psi_j\!\left(\eta - \frac{\xi}{2}\right)\psi_k\!\left(\eta + \frac{\xi}{2}\right) + \psi_j\!\left(\eta + \frac{\xi}{2}\right)\psi_k\!\left(\eta - \frac{\xi}{2}\right)\right]d\eta \qquad (10.80)$$

Example 10.2

Consider a beam simply supported at both its ends, so that

$$\psi_j(x) = \sin\frac{j\pi x}{l}$$

Equation (10.80) yields (ζ = ξ/l)

$$A_{Q_jQ_j}(\omega) = \int_0^1\frac{S_q(l\zeta, \omega)}{S_q(\omega)}\left[(1-\zeta)\cos j\pi\zeta + \frac{\sin j\pi\zeta}{j\pi}\right]d\zeta \qquad (10.81)$$
Similarly, for the cross acceptance,

$$A_{Q_jQ_k}(\omega) = \frac{2}{\pi(k^2 - j^2)}\int_0^1\frac{S_q(l\zeta, \omega)}{S_q(\omega)}\left(k\sin j\pi\zeta - j\sin k\pi\zeta\right)d\zeta, \qquad j \neq k \qquad (10.82)$$

Equation (10.81) is analogous to Powell's formula, and Eq. (10.82) to Lin's. From Equation (10.82) it follows that the modes symmetric about the half-span cross section x = l/2 of the beam (those with an odd number of half-waves) do not correlate with the antisymmetric ones (those with an even number of half-waves):

$$A_{Q_jQ_k}(\omega) = 0, \qquad j + k \text{ odd}$$

Very often, in applications, the cross-spectral density has the form

$$S_q(\xi, \omega) = S_q(\omega)\,e^{-A|\zeta|}\cos B\zeta, \qquad \zeta = \frac{\xi}{l} \qquad (10.83)$$

where A and B are positive nondimensional quantities. The acceptances then take closed forms, Eqs. (10.84) and (10.85): lengthy expressions in the nondimensional parameters

$$d_j = \frac{A}{j\pi}, \qquad e_j = \frac{B}{j\pi}, \qquad E_j = 1 + d_j^2 + e_j^2, \qquad R_j = E_j^2 - 4e_j^2$$

involving the factors e^{−A}sin B and 1 − (−1)^j e^{−A}cos B. As an example, Figs. 10.4 and 10.5 show A_{Q_jQ_j} and A_{Q_jQ_k} as functions of the nondimensional parameters e_j and d_j.

Fig. 10.4. Joint acceptance A_{Q_jQ_j} as a function of nondimensional parameters e_j and d_j.

Fig. 10.5. Cross acceptance A_{Q_jQ_k} as a function of nondimensional parameters e_j and d_j.

10.4 CASE CAPABLE OF CLOSED-FORM SOLUTION

Let us consider the case where the autocorrelation function of the loading is given as space-time white noise:

$$R_q(x_1, x_2; t_1, t_2) = R\,\delta(x_2 - x_1)\,\delta(t_2 - t_1) = R\,\delta(\zeta)\,\delta(\tau) \qquad (10.86)$$

where R is some positive constant, ζ the separation of the observation cross sections of q(x, t), and τ the time lag. The beam is simply supported at its ends, so that the mode shapes are as per Eq. (10.37).

The cross-spectral density S_q(x₁, x₂, ω) is, in accordance with Eq. (10.62),

$$S_q(x_1, x_2, \omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_q(x_1, x_2, \tau)\,e^{-i\omega\tau}\,d\tau = \frac{R}{2\pi}\,\delta(x_2 - x_1)\int_{-\infty}^{\infty}\delta(\tau)\,e^{-i\omega\tau}\,d\tau = \frac{R}{2\pi}\,\delta(x_2 - x_1)$$

and, by Eq. (10.68), with ν_j = ν_k = l/2,

$$S_{Q_jQ_k}(\omega) = \frac{4}{l^2}\cdot\frac{R}{2\pi}\int_0^l\!\!\int_0^l\delta(x_2 - x_1)\sin\frac{j\pi x_1}{l}\sin\frac{k\pi x_2}{l}\,dx_1\,dx_2 = \frac{R}{\pi l}\,\delta_{jk} \qquad (10.87)$$
where δ_jk is Kronecker's delta, so that the cross acceptances vanish identically and all joint acceptances equal 1/2l. Consequently the second sum in Eq. (10.75) vanishes, and we obtain for zero time lag, by Eqs. (10.75) and (10.70a),

$$R_w(x_1, x_2, 0) = \frac{R}{\pi l(\rho A)^2}\sum_{j=1}^{\infty}\sin\frac{j\pi x_1}{l}\sin\frac{j\pi x_2}{l}\int_{-\infty}^{\infty}\frac{d\omega}{(\omega_j^2 - \omega^2)^2 + 4\beta^2\omega^2}$$
$$= \frac{R}{\rho A l c}\sum_{j=1}^{\infty}\frac{1}{\omega_j^2}\sin\frac{j\pi x_1}{l}\sin\frac{j\pi x_2}{l} = \frac{Rl^3}{\pi^4 EIc}\sum_{j=1}^{\infty}\frac{1}{j^4}\sin\frac{j\pi x_1}{l}\sin\frac{j\pi x_2}{l}$$
$$= \frac{Rl^3}{2\pi^4 EIc}\sum_{j=1}^{\infty}\frac{1}{j^4}\left[\cos\frac{j\pi(x_1 - x_2)}{l} - \cos\frac{j\pi(x_1 + x_2)}{l}\right] \qquad (10.88)$$

bearing in mind Eq. (9.65) for the integral. Using the following summation formula (Gradshteyn and Ryzhik, p. 39):

$$\sum_{j=1}^{\infty}\frac{\cos jx}{j^4} = \frac{\pi^4}{90} - \frac{\pi^2 x^2}{12} + \frac{\pi x^3}{12} - \frac{x^4}{48}, \qquad 0 \le x \le 2\pi$$

CRANDALL'S PROBLEM

For the velocity integral we obtain

$$J'_{jk} = \frac{S_0}{(\rho A)^2}\,\frac{\cdots + \Phi'(\omega_j, \omega_k; \omega_c)}{(\omega_k^2 - \omega_j^2)^2 + 2\beta^2(\omega_j^2 + \omega_k^2) + \cdots} \qquad (10.102)$$

where Φ′(ω_j, ω_k; ω_c) = ⋯ is a lengthy closed-form expression involving logarithmic and inverse-tangent terms in ω_j, ω_k, ω_c, and β. For j = k, Eq. (10.101) is analogous to (9.56) for a single-degree-of-freedom system under band-limited white noise, whereas Eq. (10.102) is analogous to Eq. (9.62). As follows from Figs. 9.3 and 9.4, J_jk and J′_jk are very small at ω_j > ω_c. The conclusion is that only terms with ω_j ≤ ω_c and ω_k ≤ ω_c have to be taken into account in summations (10.97) and (10.99). The number N_e of modes in the excitation band is defined as the largest value of j such that ω_j ≤ ω_c. As a result, expressions (10.97) and (10.99) can be approximated as

$$E[w^2(x, t)] \simeq \sum_{j=1}^{N_e}\sum_{k=1}^{N_e}\psi_j(x)\psi_k(x)\,J_{jk}, \qquad E[v^2(x, t)] \simeq \sum_{j=1}^{N_e}\sum_{k=1}^{N_e}\psi_j(x)\psi_k(x)\,J'_{jk} \qquad (10.103)$$

Let us consider the expression for E[v²(x, t)] in more detail. It resolves into two sums,

$$E[v^2(x, t)] = s_1(x, t) + s_2(x, t)$$

Fig. 10.7.
Mean-square velocity E[v²(x, t)] and component sums s₁ and s₂ for a beam without elastic foundation; H = 0.01, K = 0, ω_c = 2π, β = 0.02.

where s₁ collects the terms with identical modes and s₂ those with nonidentical modes:

$$s_1(x, t) = \sum_{j=1}^{N_e}\psi_j^2(x)\,J'_{jj} = \sum_{j=1}^{N_e}J'_{jj}\sin^2\frac{j\pi x}{l}$$

$$s_2(x, t) = \sum_{j=1}^{N_e}\sum_{\substack{k=1\\ k\neq j}}^{N_e}\psi_j(x)\psi_k(x)\,J'_{jk} = \sum_{j=1}^{N_e}\sum_{\substack{k=1\\ k\neq j}}^{N_e}J'_{jk}\sin\frac{j\pi x}{l}\sin\frac{k\pi x}{l}$$
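The closed-form evaluation in Eq. (10.88) rested on the cosine series quoted above from Gradshteyn and Ryzhik; the identity is easy to confirm numerically:

```python
import math

# Verify:  sum_{j>=1} cos(j*x)/j**4
#        = pi**4/90 - pi**2*x**2/12 + pi*x**3/12 - x**4/48,  0 <= x <= 2*pi
def lhs(x, terms=100000):
    """Partial sum of the cosine series (tail beyond `terms` is ~1/(3*terms**3))."""
    return sum(math.cos(j * x) / j**4 for j in range(1, terms + 1))

def rhs(x):
    """Closed-form polynomial on [0, 2*pi]."""
    return (math.pi**4 / 90 - math.pi**2 * x**2 / 12
            + math.pi * x**3 / 12 - x**4 / 48)

xs = [0.0, 1.0, math.pi, 5.0]
errs = [abs(lhs(x) - rhs(x)) for x in xs]
print(errs)
```

At x = 0 the series reduces to ζ(4) = π⁴/90, and at x = π to −7π⁴/720, both of which the polynomial reproduces; the printed errors are at the level of floating-point noise.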