ESSENTIAL MATHEMATICS FOR ENGINEERS
AND SCIENTISTS
INDREK S. WICHMAN
Michigan State University
University Printing House, Cambridge CB2 8BS, United Kingdom
www.cambridge.org
DOI: 10.1017/9781108671354
2 Linear Transformations
2.1 Linear Transformations
2.2 Change of Basis for Linear Transformations
2.3 Projection Tensors
2.4 Orthogonal Projection
2.5 Invariant Subspaces
2.5.1 The Null Space
2.5.2 The Range
2.5.3 One-Dimensional Invariant Subspaces – A First Look
Exercises
References
Index
Preface
The title of this book states that the mathematical subject matter
covered herein is “essential.” This assertion itself requires
clarification. For what is the mathematics discussed here essential?
Why is the mathematics discussed here essential? For whom is the
mathematics discussed here essential (engineers and scientists,
clearly, but that must also be explained)?
To begin answering these complicated questions it is necessary
to discuss how and why they first arose.
The fact is that in the four decades between approximately 1980 and 2020 there was an explosion in the use of computers for the solution of engineering and scientific problems by techniques of computational modeling and simulation. Computers are also used for other purposes, ranging from entertainment (videos, cell phones, etc.), to the apprehension of fugitives and espionage (facial recognition), to large-scale data processing in health care and social statistics, to the increased financialization and economization of daily life. Simply put, the computer is now ubiquitous.
One of the downsides of increased sophistication is the associated loss of contact with fundamentals. In everyday life this is observed as the inability to repair one's car or to replace a heating, ventilation, and air-conditioning system in one's home.
Once computers were able to solve long-standing difficult problems
of engineering, what was left for the replaceable, interchangeable
engineer? The computer made it all so easy. It never asked for
vacations, bonuses or raises, and, best of all, unlike real-life
engineers it never made any mistakes. Furthermore, if engineers and
scientists are not needed, why should engineering and science even
be taught?
We were led in the face of such circumstances to ask a more
immediately relevant set of questions: What level of mathematical
sophistication is necessary for our engineers and scientists
(assuming they are still necessary) in an era where machine
computation occupies the prominent place? A slightly different
version of the same question is: How grounded should our students
be in the fundamentals of mathematics? One cannot help but notice
that the need to write one’s own code has seemingly withered away
as pre-packaged and commercial software programs became
universally available. And so it became legitimate to ask even more
questions: If a solution of some kind is always computable is there
any longer a need to worry whether the mathematical algorithms
and modeling assumptions are consistent? Is it worth one’s time to
ask whether the mathematical problem is well posed? Is there still a
need on the part of the engineer or scientist to deploy strict
algorithmic thinking during the problem-solving process or should his
or her role end with the problem statement?
We learned by subsequent experience through many
exaggerations, misstatements and mishaps that the consequences of
shorting human technical ability and expertise can range from the
comical to the harmful. Bridges collapsed, airplanes crashed, wind
tunnels and other large-scale experimental facilities were still
needed, and experimental diagnostics became even more
sophisticated and accurate. Hence there arose a distinct need for
oversight in the form of practicing engineers and scientists able to
provide a competent technical scrutiny of proposed “solutions” to
difficult technical problems. By necessity we were returned to asking
modern variants of age-old questions: what is a problem, and what
does it mean to solve it? We were forced to acknowledge that though we lived in an era technologically far superior to that in which much of our mathematics was developed, we had not yet outgrown the constraints it had imposed on us: everyday life requires mathematics as much as, or more than, it ever did.
With the change of scope and emphasis in modern engineering
practice — our nearly ubiquitous dependence on machine
computation — new and different ways were needed to present
essential mathematical knowledge to students of engineering and
the sciences. Even so, the path forward begins by casting a long
glance over one’s shoulder toward the solved problems from the
venerable past. How did these problems arise? What questions did
they seek to answer? How were the solutions generated? These
questions provide valuable clues to the construction and solution of
modern mathematical models. Although computers have altered the
way we do our mathematics, and how we construct solutions, and
how we consider the parts of a given problem in relation to the
whole, it was a change in form, not content. As one simple example,
the reader may reflect on the fact that though we are not about to
discard our software codes we can nevertheless be more vigilant
about testing them against established mathematical solutions. This
goes from being merely important to crucial when we extend these
codes and programs past the simpler problems for which they were
initially designed. Man’s reach exceeding his grasp, we seek the
means for closing that gap whenever and however possible.
The preceding discussion, of course, was centered on the two
questions, “why and for what is technical competence in advanced
mathematics essential?”
Concerning “essential for whom?”, the aim of this introductory
graduate-level book is to visit, with sharpened tools and perspective
honed by precedent and experience, a subset of the most important
— essential — topics of mathematics that the aspiring graduate level
engineer or scientist should know. Since this knowledge is
foundational, it builds upward and outward. Each new subject
matter, however, has a partial foundation of its own so the entire
assembly might be taken by analogy to resemble a Roman
aqueduct: a superstructure resting on separate but strongly linked
segments, or foundations. The mathematics of engineering science
builds upward firstly from a knowledge of exact mathematics
unencumbered (except for the occasional example) by
approximation, estimation and asymptotization. We believe that
although there is available an assortment of approximate and
asymptotic solution techniques for various classes of engineering
and applied science problems, extended coverage of them in the first
year of graduate study would amount to putting the cart before the
horse.
Key Features
Key features of this book include the following:
The order of the material is not “random” (to use the current jargon) or arbitrary. Linear Algebra should be studied first, followed by Complex Variables, and then ODEs and PDEs. There is a constant “build” as these topics are gradually developed until, at the end of Chapter 13 (Section 13.6), much of the intellectual content of the previous twelve chapters is brought together in a final example. Indeed, detailed worked examples serve as a
vehicle for introducing and advancing the theory. While
certain of these may seem discipline specific by virtue of their
particular formulation, they are in fact general with respect to
the presentation of the underlying key mathematical concept.
This is made clear in the narrative discussion. In addition to
demonstrating procedures and treating previously raised
issues, these examples serve to pose new questions. A
version of this is the introduction of the pseudo-inverse in
Section 3.5 which fulfills a specific need that, at first glance,
appears to be a loose end but is then revealed to be a
concept of fundamental importance. The continued follow-up
on issues raised by prior examples is a key feature of the text.
Linear Algebra
1
Linear Algebra and Finite-
Dimensional Vector Spaces
◈
In this and the following three chapters we study the subject of linear
algebra by applying general concepts to a series of informative and
successively more challenging examples. Linear algebra, by which we
typically mean the solution of finite-dimensional sets of linear algebraic
equations (as contrasted with nonlinear algebraic equations, or the solution
of “infinite-dimensional” differential equations), requires consideration of
dimensionality, linear independence, orthogonality, rank, and the careful
evaluation of special cases when one or more of these issues are either
violated or unfulfilled. For example, dimensionalities may not match, or the
actually relevant space may be a subspace of a larger space, or an original
set of spanning vectors may be geometrically skew, etc. As always, a major
concern in the solution of any mathematical problem is the existence of a
unique solution, and what meaning may be attached, if any, to cases where
that condition is violated.
After the notion of the dimension of a space has been established, and is
sufficiently well understood, no concept of linear algebra is more basic or
more important than linear independence. The vector space in which the
solution to our linear system of equations is sought must satisfy the
requirement of linear independence. Following that, the “span” of the vector
space must be examined. A “space” is “spanned” by a set of “linearly
independent” vectors whose components are non-overlapping. For example,
a three-dimensional space is “spanned” by the action of three perpendicular
unit vectors.1 Their principal advantage in any calculation is that, being orthogonal unit vectors, they have unit magnitude and no mutual overlap. One might imagine describing a space for which some overlap was
unavoidable, such as traversing an optimum trajectory across a mottled,
irregular surface, or some other process for which a constraint requires using
a set of spanning but non-orthogonal vectors.
The construction of vectors that properly “span” some special vector
subspace leads naturally to the Gram–Schmidt procedure for deriving sets of
orthogonal spanning vectors from any given non-orthogonal spanning set.
This elegant but also highly workmanlike technique (hence its appeal)
subtracts the overlap of the vector being considered with all previously
derived vectors, which were found by the same procedure, in order to ensure
that each newly added vector is orthogonal to all previous vectors.
Transforming the orthogonal vectors thus derived into orthogonal unit vectors
is the next logical step.
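The procedure described above can be sketched in a few lines of code. The following is a minimal pure-Python illustration (the function name and the particular input vectors are choices made for this sketch, not taken from the text): each candidate vector has its overlap with all previously derived vectors subtracted, and the result is then normalized, so the output is an orthonormal set.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent set (classical Gram-Schmidt).

    For each new vector, subtract its overlap (projection) onto every
    previously derived orthonormal vector, then scale to unit length.
    """
    basis = []
    for v in vectors:
        w = list(map(float, v))
        for q in basis:
            c = dot(w, q)  # overlap with an earlier orthonormal vector
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        if norm < 1e-12:
            raise ValueError("input vectors are linearly dependent")
        basis.append([wi / norm for wi in w])
    return basis

# Derive an orthonormal set from a non-orthogonal spanning set of R^3.
q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```

Each returned vector has unit magnitude and zero dot product with every other, which is exactly the "no mutual overlap" property emphasized above.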
Additional topics discussed are the transformation matrix for effecting a
change of basis, systematic methods for decomposing a vector space, and
other operations that collectively form the arsenal of a competent
computational analyst. The student who masters this material will be able to
read, with relative ease, books and articles that are focused on advanced
numerical analysis.
Assuming the usual rules of matrix addition and scalar multiplication, answer the following questions.
Q: Is the set of all square matrices a vector space?
A: No, the operation of addition, at least as originally defined, does
not allow two of these objects to be added together if the “squares”
have different sizes.
Q: Is the set of all matrices, such that the third row is twice the
second row, a vector space?
A: Yes, all the operations are defined, and the new matrix obtained via
such an operation continues to be a matrix in which the third
row is twice the second row.
Q: Is the set of all matrices, such that the third row consists of
entries that add up to 4, a vector space?
A: No. Although all of the operations are defined, the new matrices
obtained do not always satisfy the defining rule that the third row
consists of entries that add up to 4. Equivalently, the defining rule
does not allow the existence of a zero vector.
Q: Is the set of all matrices, such that the third row consists of
entries that add up to 0, a vector space?
A: Yes, all the operations are defined, and the new matrices obtained
continue to satisfy the defining rule that the third-row entries add up
to 0.
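The closure arguments in these answers can be checked numerically. Below is a minimal sketch using plain Python lists of lists as matrices; the shapes and entries are illustrative choices, not taken from the text.

```python
def third_row_sum(m):
    # Rows are zero-indexed, so m[2] is the third row.
    return sum(m[2])

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def scale(c, a):
    return [[c * x for x in row] for row in a]

# Two 3x2 matrices whose third rows each sum to 0 (entries illustrative).
A = [[1, 2], [3, 4], [5, -5]]
B = [[0, 1], [1, 0], [-2, 2]]

C = add(A, B)
assert third_row_sum(C) == 0            # closed under addition
assert third_row_sum(scale(7, A)) == 0  # closed under scalar multiplication

# By contrast, "third row sums to 4" fails: adding two such matrices
# gives a third-row sum of 8, and the zero matrix (sum 0) is not in the set.
D = [[0, 0], [0, 0], [1, 3]]
assert third_row_sum(add(D, D)) == 8
```

The failing case makes the zero-vector remark concrete: the defining rule "sums to 4" excludes the zero matrix, so the set cannot be a vector space.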
With this as our background, we shall often work in the standard vector space of column vectors. This vector space, called \(\mathbb{R}^n\), consists of objects \(\mathbf{x}\) of the form

\[ \mathbf{x} = (x_1, x_2, \ldots, x_n)^{\mathrm{T}}. \tag{1.1} \]

Given a set of vectors \(\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k\}\), the question of linear independence asks whether the equation

\[ c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k = \mathbf{0} \tag{1.2} \]

can be satisfied via the choice of the scalars \(c_1, \ldots, c_k\). Clearly, taking all of the scalars equal to zero, i.e., \(c_1 = 0, c_2 = 0, \ldots, c_k = 0\), will cause Eq. (1.2) to be true. If this is the only way in which Eq. (1.2) can be satisfied, then we say that the set \(\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k\}\) is linearly independent. Conversely, if there is any solution to Eq. (1.2) with at least one of the scalars nonzero, say \(c_i \neq 0\) for some fixed \(i\), then we say that the set \(\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}\) is linearly dependent. In this case we can write the vector \(\mathbf{v}_i\) (the one we have identified to have the nonzero \(c_i\)) as a linear combination of the others (because we can divide by \(c_i\) there is no problem solving Eq. (1.2) for \(\mathbf{v}_i\)). In contrast, if the set is linearly independent, then no vector in the set can be expressed as a linear combination of the others.
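In a numerical setting, linear independence is usually tested by computing the rank of the matrix whose rows are the given vectors: the set is independent exactly when the rank equals the number of vectors. A minimal pure-Python sketch follows; the function and the particular vectors are illustrative, not taken from the text.

```python
def rank(rows):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    m = [list(map(float, r)) for r in rows]
    r = 0
    for col in range(len(m[0])):
        # Find a row at or below position r with a nonzero entry in this column.
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > 1e-12), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # Eliminate this column from every other row.
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > 1e-12:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# A set is linearly independent iff rank equals the number of vectors.
v1, v2, v3 = [1, 0, 0], [1, 1, 0], [2, 1, 0]   # note v3 = v1 + v2
assert rank([v1, v2]) == 2        # independent
assert rank([v1, v2, v3]) == 2    # dependent: rank is less than 3
```

The dependent case shows the equivalence stated above: since v3 = v1 + v2, one vector is a linear combination of the others, and the rank falls short of the vector count.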
EXAMPLE 1.2.2
The case of two vectors is rather trivial because two vectors are linearly dependent if and only if they are multiples of each other (which the two vectors in the preceding example were not). Let us now modify the example by asking the same question for the set of three vectors consisting of the same two vectors along with a third vector.
EXAMPLE 1.2.3
(1.4)
The final column in the augmented matrix in each of the two examples
consisted only of zeros. It therefore remained zero during all of the
row operations, merely being carried along. This column, in other
words, was inessential to the solution procedure.
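This observation is easy to verify directly: apply a couple of elementary row operations to a small augmented matrix whose final column is zero. The matrix entries below are illustrative, not taken from the text.

```python
# Augmented matrix [A | 0] for a homogeneous system: final column all zeros.
aug = [[2.0, 4.0, 0.0],
       [1.0, 3.0, 0.0]]

# Typical row operations: scale a row, then subtract one row from another.
aug[0] = [x / 2.0 for x in aug[0]]
aug[1] = [a - b for a, b in zip(aug[1], aug[0])]

# The zero column is merely carried along: it remains zero throughout,
# since scaling zero or subtracting zero from zero gives zero.
assert all(row[-1] == 0.0 for row in aug)
```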