LLN Excel Subs 2024-6-21 1992867

Subtitle
OK.
cameras are rolling.
This is lecture fourteen, starting a new chapter.
Chapter about orthogonality.
What it means for vectors to be orthogonal.
What it means for subspaces to be orthogonal.
What it means for bases to be orthogonal.
So ninety degrees, this is a ninety-degree chapter.
So what does it mean --
let me jump to subspaces.
Because I've drawn here the big picture.
This is the 18.06 picture here.
And hold it down, guys.
So this is the picture and we know a lot about that picture
already.
We know the dimension of every subspace.
We know that these dimensions are r and n-r.
We know that these dimensions are r and m-r.
What I want to show now is what this figure is saying,
that the angle --
the figure is just my attempt to draw what I'm now going to say,
that the angle between these subspaces is ninety degrees.
And the angle between these subspaces is ninety degrees.
Now I have to say what does that mean?
What does it mean for subspaces to be orthogonal?
But I hope you appreciate the beauty of this picture,
that that those subspaces are going
to come out to be orthogonal.
Those two and also those two.
So that's like one point, one important point
to step forward in understanding those subspaces.
We knew what each subspace was like,
we could compute bases for them.
Now we know more.
Or we will in a few minutes.
OK.
I have to say first of all what does it mean for two vectors
to be orthogonal?
So let me start with that.
Orthogonal vectors.
The word orthogonal is -- is just another word
for perpendicular.
It means that in n-dimensional space
the angle between those vectors is ninety degrees.
It means that they form a right triangle.
It even means -- going way back to the Greeks -- that this triangle,
with a vector x, a vector y,
and a vector x+y -- of course that'll be the hypotenuse --
so what was it the Greeks figured out? And it's neat.
It's the fact that the --
so these are orthogonal, this is a right angle, if --
so let me put the great name down, Pythagoras,
I'm looking for -- what am I looking for?
I'm looking for the condition if you give me two vectors, how do
I know if they're orthogonal?
How can I tell two perpendicular vectors?
And actually you probably know the answer.
Let me write the answer down.
Orthogonal vectors, what's the test for orthogonality?
I take the dot product which I tend to write as x transpose y,
because that's a row times a column,
and that matrix multiplication just gives me the right thing,
that x1y1+x2y2 and so on, so these vectors are orthogonal
if this result x transpose y is zero.
That's the test.
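That test is easy to try in a few lines of code. Here is a minimal sketch in plain Python (an editorial illustration, not part of the lecture; the example vectors are the ones used later in the lecture):

```python
# Orthogonality test from the lecture: x and y are orthogonal
# exactly when the dot product x^T y = x1*y1 + x2*y2 + ... is zero.

def dot(x, y):
    """Dot product x^T y of two vectors given as lists."""
    return sum(xi * yi for xi, yi in zip(x, y))

def is_orthogonal(x, y):
    """The test: x^T y == 0."""
    return dot(x, y) == 0

# The lecture's example vectors:
print(is_orthogonal([1, 2, 3], [2, -1, 0]))  # True:  2 - 2 + 0 = 0
print(is_orthogonal([1, 2, 3], [1, 1, 1]))   # False: 1 + 2 + 3 = 6
```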
OK.
Can I connect that to other things?
I mean -- it's just beautiful that here we have we're in n
dimensions, we've got a couple of vectors,
we want to know the angle between them,
and the right thing to look at is the simplest thing that you
could imagine, the dot product.
OK.
Now why?
So I'm answering the question now why --
let's add some justification to this fact, that's the test.
OK, so Pythagoras would say we've got a right triangle,
if that length squared plus that length squared equals
that length squared.
OK, can I write it as x squared plus y squared equals
x plus y squared?
Now don't, please don't think that this is always true.
This is only going to be true in this --
it's going to be equivalent to orthogonality.
For other triangles of course, it's not true.
For other triangles it's not.
But for a right triangle somehow that fact
should connect to that fact.
Can we just make that connection?
What's the connection between this test for orthogonality
and this statement of orthogonality?
Well, I guess I have to say what is the length squared?
So let's continue on the board underneath with that equation.
Give me another way to express the length squared of a vector.
And let me just give you a vector.
The vector one, two, three.
That's in three dimensions.
What is the length squared of the vector x equal one, two,
three?
So how do you find the length squared?
Well, really you just, you want the length of that vector that
goes along one -- up two, and out three --
and we'll come back to that right triangle stuff.
The length squared is exactly x transpose x.
Whenever I see x transpose x, I know
I've got a number that's positive.
It's a length squared unless it --
unless x happens to be the zero vector,
that's the one case where the length is zero.
So right -- this is just x1 squared plus x2 squared plus
so on, plus xn squared.
So one -- in the example I gave you what was the length squared
of that vector one, two, three?
So you square those, you get one, four and nine,
you add, you get fourteen.
So the vector one, two, three has length squared fourteen.
So let me just put down a vector here.
Let x be the vector one, two, three.
Let me cook up a vector that's orthogonal to it.
So what's the vector that's orthogonal?
So right down here, x squared is one plus four plus nine,
fourteen. Let me take y equal two, minus one, zero --
we'll check in a moment that those two vectors are
orthogonal. The length of y squared is four plus one plus zero, five.
And x plus y is one and two making three,
two and minus one making one, three and zero making three,
and the length of this squared is nine plus one plus nine,
nineteen.
And sure enough, I haven't proved anything.
I've just like checked to see that my test x transpose
y equals zero, which is true, right?
Everybody sees that x transpose y is zero here?
That's maybe the main point.
That you should get really quick at doing x transpose y,
so it's just this plus this plus this and that's zero.
And sure enough, that clicks with fourteen plus five
agreeing with nineteen.
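That arithmetic can be checked numerically in a couple of lines (a sketch for illustration, using the lecture's x and y):

```python
# Pythagoras check for the lecture's example:
# x = (1,2,3), y = (2,-1,0), x + y = (3,1,3).
# For orthogonal vectors, ||x||^2 + ||y||^2 should equal ||x+y||^2.

def length_squared(v):
    return sum(vi * vi for vi in v)

x = [1, 2, 3]
y = [2, -1, 0]
xy = [a + b for a, b in zip(x, y)]

print(length_squared(x))   # 14
print(length_squared(y))   # 5
print(length_squared(xy))  # 19 = 14 + 5
```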
Now let me just do that in letters.
So that's x transpose x plus y transpose y.
And this is x plus y transpose, x plus y.
OK.
So I'm looking, again, this isn't always true.
I repeat.
This is going to be true when we have a right angle.
And let's just -- well, of course,
I'm just going to simplify this stuff here.
There's an x transpose x there.
And there's a y transpose y there.
And there's an x transpose y.
And there's a y transpose x.
I knew I could do that simplification because I'm just
doing matrix multiplication and I've just followed the rules.
OK.
So x transpose x-s cancel.
Y transpose y-s cancel.
And what about these guys?
What can you tell me about the inner product of x with y
and the inner product of y with x?
Is there a difference?
I think if we -- while we're doing real vectors,
which is all we're doing now, there isn't a difference,
there's no difference.
If I take x transpose y, and if I took y transpose x,
I would have the same x1y1 and x2y2
and x3y3 -- it would be the same.
So this is the same as that,
so I'll knock that guy out and say two of these.
So actually that's the --
this equation boiled down to this thing being zero.
Right?
Everything else canceled and this equation
boiled down to that one.
So that's really all I wanted.
I just wanted to check that Pythagoras for a right triangle
led me to this -- of course I cancel the two now.
No problem.
To x transpose y equals zero as the test.
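Written out, the algebra just done on the board is:

```latex
\|x+y\|^2 = (x+y)^T(x+y) = x^Tx + x^Ty + y^Tx + y^Ty
          = \|x\|^2 + 2\,x^Ty + \|y\|^2 ,
```

so the Pythagoras statement $\|x\|^2 + \|y\|^2 = \|x+y\|^2$ holds exactly when $2\,x^Ty = 0$, that is, when $x^Ty = 0$.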
Fair enough.
OK.
You knew it was coming.
The dot product of orthogonal vectors is zero.
It's just -- I just want to say that's really neat.
That it comes out so well.
All right.
Now what about -- so now I know
when two vectors are orthogonal.
By the way, what about if one of these guys is the zero vector?
Suppose x is the zero vector, and y is whatever.
Are they orthogonal?
Sure.
In math the one thing about math is you're
supposed to follow the rules.
So you're supposed to -- if x is the zero vector,
you're supposed to take the zero vector dotted with y
and of course you always get zero.
So just so we're all sure, the zero vector
is orthogonal to everybody.
But what I want to --
what I now want to think about is subspaces.
What does it mean for me to say that some subspace is
orthogonal to some other subspace?
So OK.
Now I've got to write this down.
So here's the definition: subspace
S is orthogonal to subspace, let's say, T.
So I've got a couple of subspaces.
And what should it mean for those guys to be orthogonal?
It's just sort of what's the natural extension
from orthogonal vectors to orthogonal subspaces?
Well, and in particular, let's think
of some orthogonal subspaces, like this wall.
Let's say in three dimensions.
So the blackboard extended to infinity, right, is a --
is a subspace, a plane, a two-dimensional subspace.
It's a little bumpy but anyway, it's a --
think of it as a subspace, let me take the floor as another
subspace.
Again, it's not a great subspace,
MIT only built it like so-so, but I'll
put the origin right here.
So the origin of the world is right there.
OK.
Thereby giving linear algebra its proper importance in this.
OK.
So there is one subspace, there's another one.
The floor.
And are they orthogonal?
What does it mean for two subspaces
to be orthogonal and in that special case
are they orthogonal?
All right.
Let's finish this sentence.
"What does it mean" means we have to know
what we're talking about
here.
So what would be a reasonable idea of orthogonal?
Well, let me put the right thing up.
It means that every vector in S, every vector in S,
is orthogonal to --
what am I going to say?
Every vector in T.
That's a reasonable and it's a good
and it's the right definition for two subspaces
to be orthogonal.
But I just want you to see, hey, what does that mean?
So answer the question about the --
the blackboard and the floor.
Are those two subspaces -- they're two-dimensional, right,
and we're in R^3.
It's like an xz plane or something and an xy plane.
Are they orthogonal?
Is every vector in the blackboard orthogonal
to every vector in the floor, starting from the origin
right there?
Yes or no?
I could take a vote.
Well we get some yeses and some noes.
No is the answer.
They're not.
You can tell me a vector in the blackboard
and a vector in the floor that are not orthogonal.
Well you can tell me quite a few, I guess.
Maybe like I could take some forty-five-degree guy
in the blackboard, and something in the floor,
they're not at ninety degrees, right?
In fact, even more, you could tell me
a vector that's in both the blackboard plane and the floor
plane, so it's certainly not orthogonal to itself.
So for sure, those two planes aren't orthogonal.
What would that be?
So what's a vector that's --
in both of those planes?
It's this guy running along the crack
there, in the intersection, the intersection.
A vector, you know --
if two subspaces meet at some vector, well then for sure
they're not orthogonal, because that vector is in one
and it's in the other, and it's not orthogonal to itself
unless it's zero.
So -- for me to say these two subspaces are orthogonal, first
of all I'm certainly saying that they don't intersect
in any nonzero vector.
But also I mean more than that -- just not intersecting
isn't good
enough.
So give me an example, oh, let's say in the plane, oh well,
when do we have orthogonal subspaces in the plane?
Yeah, tell me in the plane, so we don't --
there aren't that many different subspaces in the plane.
What have we got in the plane as possible subspaces?
The zero vector, real small.
A line through the origin.
Or the whole plane.
OK.
Now so when is a line through the origin orthogonal
to the whole plane?
Never, right, never.
When is a line through the origin orthogonal to the zero
subspace?
Always.
Right.
When is a line through the origin orthogonal
to a different line through the origin?
Well, that's the case that we all have a clear picture
of, they --
the two lines have to meet at ninety degrees.
They have only the -- so that's like this simple case
I'm talking about.
There's one subspace, there's the other subspace.
They only meet at zero.
And they're orthogonal.
OK.
Now.
So we now know what it means for two subspaces to be orthogonal.
And now I want to say that this is true for the row
space and the null space.
OK.
So that's the neat fact.
So row space is orthogonal to the null space.
Now how did I come up with that?
But you see it's great -- that means that these --
that these subspaces are just the right things,
they're just cutting the whole space up
into two perpendicular subspaces.
OK.
So why?
Well, what have I got to work with?
All I know is the null space.
The null space has vectors that solve Ax equals zero.
So this is a guy x.
x is in the null space.
Then Ax is zero.
So why is it orthogonal to the rows of A?
If I write down Ax equals zero, which
is all I know about the null space,
then I guess I want you to see that that's telling me,
just that equation right there is telling me
that the rows of A, let me write it out.
There's row one of A.
Row two.
Row m of A. That's A.
And it's multiplying x.
And it's producing zero.
OK.
Written out that way you'll see it.
So I'm saying that a vector in the row space
is perpendicular to this guy x in the null space.
And you see why?
Because this equation is telling you
that row one of A multiplying that's a dot product, right?
Row one of A dot product with this x is producing this zero.
So x is orthogonal to the first row.
And to the second row.
Row two of A, x is giving that zero.
Row m of A times x is giving that zero.
So x is --
the equation is telling me that x
is orthogonal to all the rows.
Right, it's just sitting there.
That's all we -- it had to be sitting there because we
didn't know anything more about the null space than this.
And now I guess to be totally complete here
I'd now check that x is orthogonal
to each separate row.
But what else strictly speaking do I have to do?
To show that those subspaces are orthogonal,
I have to take this x in the null space and show that
it's orthogonal to every vector in the row space,
every vector in the row space, so what --
what else is in the row space?
This row is in the row space, that row is in the row space,
they're all there, but it's not the whole row space,
right? We just have to remember -- what does it
mean, what is that word space telling us?
And what else is in the row space?
Besides the rows?
All their combinations.
So I really have to check that sure enough
if x is perpendicular to row one,
row two, all the different separate rows,
then also x is perpendicular to a combination of the rows.
And that's just matrix multiplication again.
You know, I have row one transpose x is zero,
so on, row two transpose x is zero,
so I'm entitled to multiply that by some c1, this by some c2,
I still have zeroes, I'm entitled to add,
so I have -- when I put that
together, that's, big parentheses, c1 row one plus c2 row two
and so on,
transpose x, is zero.
Right?
I just added the zeroes and got zero,
and I just added these following the rule.
No big deal.
The whole point was right sitting in that.
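The argument just given is easy to check numerically. Here is a sketch (my illustration; the matrix is the rank-one example that comes up later in the lecture, and the null space vector and the coefficients c1, c2 are chosen arbitrarily):

```python
# If Ax = 0, then x is orthogonal to every row of A, and therefore
# to every combination c1*row1 + c2*row2 of the rows.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2, 5],
     [2, 4, 10]]
x = [2, -1, 0]          # a null space vector: Ax = 0

# x is orthogonal to each separate row...
print([dot(row, x) for row in A])   # [0, 0]

# ...and to any combination of the rows (here c1 = 3, c2 = -7):
c1, c2 = 3, -7
combo = [c1 * a + c2 * b for a, b in zip(A[0], A[1])]
print(dot(combo, x))                # 0
```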
OK.
So if I come back to this figure now I'm like a happier person.
Because I have this --
I now see how those subspaces are oriented.
And these subspaces are also oriented.
Well, actually why is that orthogonality?
Well, it's the same statement for A transpose
that that one was for A.
So I won't take time to prove it again
because we've checked it for every matrix
and A transpose is just as good a matrix as A.
So we're orthogonal over there.
So we really have carved up this --
this was like carving up m-dimensional space
into two subspaces and this one was carving up
n-dimensional space into two subspaces.
And well, one more thing here.
One more important thing.
Let me move into three dimensions.
Tell me a couple of orthogonal subspaces in three dimensions
that somehow don't carve up the whole space,
there's stuff left there.
I'm thinking of a couple of orthogonal lines.
If I -- suppose I'm in three dimensions, R^3.
And I have one line, one one-dimensional subspace,
and a perpendicular one.
Could those be the row space and the null space?
Could those be the row space and the null space?
Could I be in three dimensions and have
a row space that's a line and a null space that's a line?
No.
Why not?
Because the dimensions aren't right.
Right?
The dimensions are no good.
The dimensions here, r and n-r, they add up to three,
they add up to n.
If I take --
just follow that example --
if the row space is one-dimensional,
suppose A is -- what's a good example in R^3?
I want a one-dimensional row space. Let me take the rows one, two,
five and two, four, ten.
What's the dimension of that row space?
One.
What's the dimension of the null space?
Tell me, what does the null space look like in that case?
The row space is a line, right?
One-dimensional, it's just a line through one, two, five.
Geometrically, what does the null space look like?
What's its dimension?
So here n is three, the rank r
is one, so the dimension of the null space --
so I'm looking at this x: x1, x2, x3.
To give zero.
So the dimension of the null space is we all know is two.
Right.
It's a plane.
And now actually we know, we see better, what plane is it?
What plane is it?
It's the plane that's perpendicular to one,
two, five.
Right?
We now see.
In fact the two, four, ten didn't actually
have any effect at all.
I could have just ignored that.
That didn't change the row space or the null space.
I'll just make that one equation.
Yeah.
OK.
Sure.
That's the easiest to deal with.
One equation.
Three unknowns.
And I want to ask --
what does the equation give me for the null space?
Back in September
you would have said it gives you a plane,
and you'd be completely right.
And the plane it gives you, the normal vector,
you remember in calculus, there was this dumb normal vector
called N.
Well there it is.
One, two, five.
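This example can be verified in a few lines (a sketch; the two special solutions below are the standard ones, my choice, not spelled out in the lecture):

```python
# The null space of the one-row matrix [1, 2, 5] is the plane
# x1 + 2*x2 + 5*x3 = 0: the plane through the origin with
# normal vector (1, 2, 5).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

normal = [1, 2, 5]
s1 = [-2, 1, 0]   # special solution: set x2 = 1, x3 = 0
s2 = [-5, 0, 1]   # special solution: set x2 = 0, x3 = 1

print(dot(normal, s1))  # 0
print(dot(normal, s2))  # 0
# Dimensions check out: rank 1 + null space dimension 2 = n = 3.
```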
OK.
What's the point I want to make here?
I want to make --
I want to emphasize that not only are the --
let me write it in words.
So I want to write the null space and the row space are
orthogonal, that's this neat fact, which we've --
we've just checked from Ax equals zero,
but now I want to say more because there's a little more
that's true.
Their dimensions add to the whole space.
So that's like a little extra information.
That it's not like I could have --
I couldn't have a line and a line in three dimensions.
Those don't add up -- one and one don't add to three.
So I used the word orthogonal complements in R^n.
And the idea of this word complement
is that the orthogonal complement of a row space
contains not just some vectors that are orthogonal to it,
but all.
So what does that mean?
That means that the null space contains all, not just
some but all, vectors that are perpendicular to the row space.
OK.
Really what I've done in this half of the lecture is just
notice some of the nice geometry that --
that we didn't pick up before because we didn't discuss
perpendicular vectors before.
But it was all sitting there.
And now we picked it up.
That these vectors are orthogonal complements.
And I guess I even call this part
one of the fundamental theorem of linear algebra.
The fundamental theorem of linear algebra
is about these four subspaces, so part one
is about their dimension, maybe I should call it part two now.
Their dimensions we got.
Now we're getting their orthogonality, that's part two.
And part three will be about bases for them.
Orthogonal bases.
So that's coming up.
OK.
So I'm happy with that geometry right now.
OK.
OK.
Now what's my next goal in this chapter?
Here's the main problem of the chapter.
The main problem of the chapter is -- so this is coming.
It's coming attraction.
This is the very last chapter that's about Ax=b.
I would like to solve that system of equations
when there is no solution.
You may say what a ridiculous thing to do.
But I have to say it's done all the time.
In fact it has to be done.
So the problem is: solve --
the best possible "solve," I'll put it in quotes -- Ax=b when there is no
solution.
And of course what does that mean?
b isn't in the column space.
And it's quite typical if this matrix A is rectangular if I --
maybe I have m equations and that's bigger than the number
of unknowns, then for sure the rank is not m,
the rank couldn't be m now, so there'll be a lot of right-hand
sides with no solution, but here's an example.
Some satellite is buzzing along.
You measure its position.
You make a thousand measurements.
So that gives you a thousand equations for the --
for the parameters that -- that give the position.
But there aren't a thousand parameters,
there's just maybe six or something.
Or you're measuring the -- you're doing questionnaires.
You're measuring resistances.
You're taking pulses.
You're measuring somebody's pulse rate.
There's just one unknown.
OK.
The pulse rate.
So you measure it once, OK, fine,
but if you really want to know it,
you measure it multiple times, but then the measurements have
noise in them, so there's --
the problem is that in many many problems
we've got too many equations and they've got
noise in the right-hand side.
So Ax=b I can't expect to solve it exactly right,
because I don't even know what --
there's a measurement mistake in b.
But there's information too.
There's a lot of information about x in there.
And what I want to do is like separate the noise, the junk,
from the information.
And so this is a straightforward linear algebra problem.
How do I solve, what's the best solution?
OK.
Now.
I want to say so that's like describes the problem
in an algebraic way.
I got some equations, I'm looking for the best solution.
Well, one way to find it is -- one way to start,
one way to find a solution is throw away equations until
you've got a nice, square invertible system and solve
that.
That's not satisfactory.
There's no reason in these measurements
to say these measurements are perfect
and these measurements are useless.
We want to use all the measurements
to get the best information, to get the maximum information.
But how?
OK.
Let me anticipate a matrix that's going to show up.
This A is typically rectangular.
But a matrix that shows up whenever you have --
well, chapter three was all about rectangular matrices.
And we know when this is solvable,
you could do elimination on it, right?
But I'm thinking, hey, you do elimination
and you get an equation zero equals some other nonzero.
I'm thinking -- elimination is really going to fail.
So that's our question.
Elimination will get us down to --
will tell us if there is a solution or not.
But I'm now thinking not.
So what are we going to do?
OK.
All right.
I want to tell you to jump ahead to the matrix that
will play a key role.
So this is the matrix that you want to understand
for this chapter four.
And it's the matrix A transpose A.
What's -- tell me some things about that matrix.
So A is this m by n matrix, rectangular, but now
I'm saying that the good matrix that shows up in the end
is A transpose A.
So tell me something about that.
What's the first thing you know about A transpose A?
It's square.
Right?
Square because this is m by n and this is n by m.
So the result is n by n.
Good.
Square.
What else?
It's symmetric.
Good.
It's symmetric.
Because you remember how to do that.
If we transpose that matrix A transpose A --
let's transpose it --
then that A comes first, transposed, and this A transpose comes second,
transposed, and transposing twice
brings it back to the same thing, so it's symmetric.
Good.
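In symbols, the transpose rule for a product gives the symmetry in one line:

```latex
(A^TA)^T \;=\; A^T\,(A^T)^T \;=\; A^TA .
```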
Now we now know how to ask more about a matrix.
I'm interested in is it invertible?
If not, what's its null space?
So I want to know about -- because you're going to see,
well, let me -- let me even, well I shouldn't do this,
but I will.
Let me tell you what equation to solve
when you can't solve that one.
The good equation comes from multiplying both sides
by A transpose, so the good equation that you get to
is this one.
A transpose Ax equals A transpose b.
That will be the central equation in the chapter.
So I think why not tell it to you.
Why not admit it right away.
OK.
I have to --
I should really give x a different name.
I want to indicate that this x isn't -- I mean, this x was
the solution to that equation if it existed,
but it probably didn't.
Now let me give this a different name, x hat.
Because I'm hoping this one will have a solution.
And I'm saying that it's my best solution.
I'll have to say what does best mean.
But that's going to be my -- my plan.
I'm going to say that the best solution solves this equation.
So you see right away why I'm so interested in this matrix
A transpose A.
And in its invertibility.
OK.
Now, when is it invertible?
OK.
Let me take a case, let me just do an example and then --
I'll just pick a matrix here.
Just so we see what A transpose A looks like.
So let me take a matrix A one, one, one, one, two, five.
Just to invent a matrix.
So there's a matrix A.
Notice that it has m equal three rows and n equal two columns.
Its rank is --
the rank of that matrix is two.
Right, yeah, the columns are independent.
Does Ax equal b?
If I look at Ax=b, so x is just x1 x2, and b is b1 b2 b3.
Do I expect to solve Ax=b?
What's -- no way, right?
I mean linear algebra's great, but solving
three equations with only two unknowns usually
we can't do it.
We can only solve it if this vector b is -- what?
I can solve that equation if that vector b1 b2 b3
is in the column space.
If it's a combination of those columns then fine.
But usually it won't be.
The combinations just fill up a plane
and most vectors aren't on that plane.
So what I'm saying is that I'm going to work
with the matrix A transpose A.
And I just want to figure out in this example what
A transpose A is.
So it's two by two.
The first entry is a three, the next entry is an eight,
this entry is --
what's that entry?
Eight, for sure.
We knew it had to be, and this entry
is, what's that now, getting out my trusty calculator, thirty,
is that right?
And is that matrix invertible?
Thirty.
There's an A transpose A.
And it is invertible, right?
Three, eight is not a multiple of eight, thirty --
and it's invertible.
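That computation is easy to reproduce (a sketch; the determinant check at the end is my addition, one way to confirm invertibility for a 2x2 matrix):

```python
# The lecture's example: A has columns (1,1,1) and (1,2,5).
# A^T A should come out to [[3, 8], [8, 30]], and be invertible.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

A = [[1, 1],
     [1, 2],
     [1, 5]]
AtA = matmul(transpose(A), A)
print(AtA)      # [[3, 8], [8, 30]]

det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
print(det)      # 90 - 64 = 26, nonzero, so invertible
```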
And that's the normal, that's what I expect.
So this is what I want to show.
So here's the final -- here's the key point.
The null space of A transpose A --
it's not going to be always invertible.
Tell me a matrix --
I have to say that I can't say A transpose A is always
invertible.
Because that's asking too much.
I mean what could the matrix A be, for example,
so that A transpose A was not invertible?
Well, it even could be the zero matrix.
I mean that's like extreme case.
Suppose I make this rank one --
suppose I change A to that.
Now I figure out A transpose A again and I get --
what do I get?
I get nine, I get nine of course and here I
get what's that entry?
Twenty-seven.
And is that matrix invertible?
No.
And why do I --
I knew it wouldn't be invertible anyway.
Because this matrix only has rank one.
And if I have a product of matrices of rank one,
the product is not going to have a rank bigger than one.
So I'm not surprised that the answer only has rank one.
And that's what I --
always happens, that the rank of A transpose A
comes out to equal the rank of A.
So, yes, so the null space of A transpose A
equals the null space of A, the rank of A transpose A
equals the rank of A.
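The dependent-columns case can be checked the same way (a sketch; I'm assuming the changed matrix has second column three times the first, which matches the nine and twenty-seven computed above):

```python
# Dependent columns: second column = 3 * first column, so A has rank one.
# Then A^T A = [[3, 9], [9, 27]] also has rank one and is singular.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

A = [[1, 3],
     [1, 3],
     [1, 3]]
AtA = matmul(transpose(A), A)
print(AtA)                                            # [[3, 9], [9, 27]]
print(AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0])  # determinant 0: singular
```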
So let's see -- as soon as I can -- why that's true.
But let's draw from that what the fact that I want.
This tells me that this square symmetric matrix is invertible
if --
so here's my conclusion.
A transpose A is invertible exactly when
this null space has only the zero vector.
Which means the columns of A are independent.
So A transpose A is invertible exactly
if A has independent columns.
That's the fact that I need about A transpose A.
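The "why" deferred a moment ago has a standard one-line argument (not spelled out in this lecture): for a real matrix A, if x is in the null space of A transpose A, then

```latex
A^{T}Ax = 0 \;\Rightarrow\; x^{T}A^{T}Ax = 0 \;\Rightarrow\; \|Ax\|^{2} = 0 \;\Rightarrow\; Ax = 0 ,
```

and the reverse implication (Ax = 0 forces A^T A x = 0) is immediate. So the two null spaces are equal, and since both matrices have n columns, equal null spaces force equal ranks.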
And then you'll see next time how A transpose A
enters everything.
Next lecture is actually a crucial one.
Here I'm preparing for it by getting us
thinking about A transpose A.
And its rank is the same as the rank of A,
and we can decide when it's invertible.
OK.
So I'll see you Friday.
Thanks.