
14: Orthonormal Bases and Complements - Mathematics


You may have noticed that we have only rarely used the dot product. Once a dot or inner product is available, lengths of and angles between vectors can be measured; very powerful machinery and results become available in this case.


Lecture 14: Orthogonal vectors and subspaces

Download the video from iTunes U or the Internet Archive.

Vectors are easier to understand when they're described in terms of orthogonal bases. In addition, the Four Fundamental Subspaces are orthogonal to each other in pairs. If A is a rectangular matrix, Ax = b is often unsolvable.

These video lectures of Professor Gilbert Strang teaching 18.06 were recorded in Fall 1999 and do not correspond precisely to the current edition of the textbook. However, this book is still the best reference for more information on the topics covered in each lecture.

Strang, Gilbert. Introduction to Linear Algebra. 5th ed. Wellesley-Cambridge Press, 2016. ISBN: 9780980232776.

Instructor/speaker: Prof. Gilbert Strang



Orthonormal bases on Reproducing Kernel Hilbert Spaces

Recall that a Hilbert space $\mathcal{H}$ is a reproducing kernel Hilbert space (RKHS) if the elements of $\mathcal{H}$ are functions on a certain set $X$ and for any $a\in X$, the linear functional $f\mapsto f(a)$ is bounded on $\mathcal{H}$. By the Riesz representation theorem, there exists an element $K_a\in\mathcal{H}$ such that $f(a) = \langle f, K_a\rangle$ for all $f\in\mathcal{H}$. The function $K(x,y) = K_y(x) = \langle K_y, K_x\rangle$ defined on $X\times X$ is called the reproducing kernel function of $\mathcal{H}$.

It is well known and easy to show that for any orthonormal basis $\{e_m\}_{m=1}^{\infty}$ of $\mathcal{H}$, we have the formula $$K(x,y) = \sum_{m=1}^{\infty} e_m(x)\overline{e_m(y)}, \tag{1}$$ where the convergence is pointwise on $X\times X$.

My question concerns the converse of the above statement.

Question: suppose $\{g_m\}_{m=1}^{\infty}$ is a sequence of functions in $\mathcal{H}$ such that $$K(x,y) = \sum_{m=1}^{\infty} g_m(x)\overline{g_m(y)} \tag{2}$$ for all $x,y\in X$. Is the sequence $\{g_m\}_{m=1}^{\infty}$ an orthonormal basis for $\mathcal{H}$?

As stated, the answer is clearly negative, since equation (1) can be rewritten as $$K(x,y) = \frac{e_1(x)}{\sqrt{2}}\,\overline{\frac{e_1(y)}{\sqrt{2}}}+\frac{e_1(x)}{\sqrt{2}}\,\overline{\frac{e_1(y)}{\sqrt{2}}}+\sum_{m=2}^{\infty}e_m(x)\overline{e_m(y)},$$ and clearly $\{e_1/\sqrt{2},\, e_1/\sqrt{2},\, e_2, \ldots\}$ is not an orthonormal basis for $\mathcal{H}$. So the following additional condition should be added: the sequence $\{g_m\}_{m=1}^{\infty}$ is linearly independent.
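To see this non-uniqueness concretely, here is a minimal numerical sketch in a finite-dimensional setting (functions on a finite set $X$ are just vectors, so $\mathcal{H}$ is $\mathbb{C}^n$); the dimension, random seed, and basis below are arbitrary illustrative choices.

```python
import numpy as np

# Finite-dimensional illustration of the observation above: the kernel
# K(x, y) = sum_m e_m(x) conj(e_m(y)) is reproduced both by an orthonormal
# basis and by the linearly dependent family {e_1/sqrt(2), e_1/sqrt(2), e_2, ...}.
n = 4
rng = np.random.default_rng(0)
# Random orthonormal basis of C^n (columns of Q), via a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
K = Q @ Q.conj().T                       # K(x, y) = sum_m e_m(x) conj(e_m(y))

# Replace e_1 by two copies of e_1/sqrt(2); the expansion still gives K.
G = np.column_stack([Q[:, 0] / np.sqrt(2), Q[:, 0] / np.sqrt(2), Q[:, 1:]])
K_alt = G @ G.conj().T

print(np.allclose(K, np.eye(n)))         # True: a full unitary Q gives K = I here
print(np.allclose(K, K_alt))             # True: same kernel, but G's columns are not an ONB
```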

The following proof suggests that the answer is affirmative. (For those who are familiar with the proof of the Moore–Aronszajn theorem in the theory of RKHS, the proof here looks similar.) Assume that we have (2) and that the sequence $\{g_m\}_{m=1}^{\infty}$ is linearly independent. Let $\mathcal{M}$ be the linear space spanned by the functions $\{g_m\}_{m=1}^{\infty}$. Define a sesquilinear form on $\mathcal{M}$ by $$\Big\langle \sum_{j} a_j g_j,\ \sum_{k} b_k g_k \Big\rangle_{\mathcal{M}} = \sum_{j} a_j\overline{b_j}.$$ Since $\{g_m\}_{m=1}^{\infty}$ is a linearly independent set, this is well defined. Note that $\{g_m\}_{m=1}^{\infty}$ is an orthonormal set with respect to $\langle\cdot,\cdot\rangle_{\mathcal{M}}$. For any $f\in\mathcal{M}$ and $x\in X$, we have $$f(x) = \sum_{m}\langle f, g_m\rangle_{\mathcal{M}}\, g_m(x).$$ The Cauchy–Schwarz inequality gives $$|f(x)| \leq \Big(\sum_{m}|\langle f, g_m\rangle_{\mathcal{M}}|^2\Big)^{1/2}\Big(\sum_{m}|g_m(x)|^2\Big)^{1/2} \leq \|f\|_{\mathcal{M}}\,\sqrt{K(x,x)}.$$ Let $\widetilde{\mathcal{H}}$ be the Hilbert space completion of $\mathcal{M}$. The standard argument shows that $\widetilde{\mathcal{H}}$ is an RKHS of functions on $X$. What is the kernel of $\widetilde{\mathcal{H}}$? Since $\{g_m\}_{m=1}^{\infty}$ is an orthonormal set and its span is dense in $\widetilde{\mathcal{H}}$, it is an orthonormal basis for $\widetilde{\mathcal{H}}$. The kernel of $\widetilde{\mathcal{H}}$ can then be computed as $\sum_{m=1}^{\infty} g_m(x)\overline{g_m(y)}$, which is the same as $K(x,y)$. Therefore, $\widetilde{\mathcal{H}}$ is the same as $\mathcal{H}$ (they consist of the same functions and the inner products on the two spaces are equal). Consequently, $\{g_m\}_{m=1}^{\infty}$ is an orthonormal basis for $\mathcal{H}$. This completes the proof.

Counterexample: On the other hand, there are counterexamples that provide a negative answer to the question in the infinite dimensional case.

What part of the above proof is incorrect? I have checked but could not figure out what went wrong.


Examples

  • The set of vectors {e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)} (the standard basis) forms an orthonormal basis of R³. Proof: A straightforward computation shows that the inner products of these vectors equal zero, ⟨e1, e2⟩ = ⟨e1, e3⟩ = ⟨e2, e3⟩ = 0, and that each of their magnitudes equals one, ‖e1‖ = ‖e2‖ = ‖e3‖ = 1. This means that {e1, e2, e3} is an orthonormal set. Every vector (x, y, z) in R³ can be expressed as a sum of the scaled basis vectors, (x, y, z) = x e1 + y e2 + z e3, so {e1, e2, e3} spans R³ and hence must be a basis. It may also be shown that the standard basis rotated about an axis through the origin or reflected in a plane through the origin still forms an orthonormal basis of R³ (a numerical check appears after this list).
  • Notice that an orthogonal transformation of the standard inner-product space (Rⁿ, ⟨·,·⟩) can be used to construct other orthonormal bases of Rⁿ.
  • The set {fn : n ∈ Z} with fn(x) = exp(2πinx) forms an orthonormal basis of the space of square-integrable functions on [0, 1], L²([0,1]), with respect to the 2-norm. This is fundamental to the study of Fourier series.
  • The set {eb : b ∈ B} with eb(c) = 1 if b = c and 0 otherwise forms an orthonormal basis of ℓ²(B).
  • Eigenfunctions of a Sturm–Liouville eigenproblem.
  • An orthogonal matrix is a matrix whose column vectors form an orthonormal set.
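The following is a quick numerical sanity check of the rotation example in the first bullet above; it is a minimal sketch, and the rotation axis and angle are arbitrary illustrative choices.

```python
import numpy as np

# Rotating the standard basis of R^3 about an axis through the origin
# yields another orthonormal basis.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation about the z-axis
basis = R @ np.eye(3)                                  # columns: rotated e1, e2, e3

# Orthonormality <=> the Gram matrix of the columns is the identity.
print(np.allclose(basis.T @ basis, np.eye(3)))         # True
```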

If B is an orthogonal basis of H, then every element x of H may be written as

x = Σ_{b∈B} (⟨x, b⟩ / ‖b‖²) b.

When B is orthonormal, this simplifies to

x = Σ_{b∈B} ⟨x, b⟩ b,

and the square of the norm of x can be given by

‖x‖² = Σ_{b∈B} |⟨x, b⟩|².

Even if B is uncountable, only countably many terms in this sum will be non-zero, and the expression is therefore well-defined. This sum is also called the Fourier expansion of x, and the formula is usually known as Parseval's identity.
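Below is a minimal finite-dimensional sketch of the Fourier expansion and Parseval's identity; the orthonormal basis and the vector x are random illustrative choices.

```python
import numpy as np

# Expand x in a random orthonormal basis of R^5 and check Parseval's identity.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # columns of Q: an orthonormal basis B
x = rng.standard_normal(5)

coeffs = Q.T @ x                                   # Fourier coefficients <x, b> for b in B
x_reconstructed = Q @ coeffs                       # x = sum_b <x, b> b

print(np.allclose(x, x_reconstructed))             # True: the Fourier expansion recovers x
print(np.isclose(np.sum(coeffs**2), x @ x))        # True: Parseval's identity
```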

If B is an orthonormal basis of H, then H is isomorphic to ℓ²(B) in the following sense: there exists a bijective linear map Φ : H → ℓ²(B) such that

⟨Φ(x), Φ(y)⟩ = ⟨x, y⟩

for all x and y in H.

Given a Hilbert space H and a set S of mutually orthogonal vectors in H, we can take the smallest closed linear subspace V of H containing S. Then S will be an orthogonal basis of V which may of course be smaller than H itself, being an incomplete orthogonal set, or be H, when it is a complete orthogonal set.

Using Zorn's lemma and the Gram–Schmidt process (or more simply well-ordering and transfinite recursion), one can show that every Hilbert space admits an orthonormal basis; [5] furthermore, any two orthonormal bases of the same space have the same cardinality (this can be proven in a manner akin to that of the proof of the usual dimension theorem for vector spaces, with separate cases depending on whether the larger basis candidate is countable or not). A Hilbert space is separable if and only if it admits a countable orthonormal basis. (This last statement can be proven without using the axiom of choice.)

In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given an orthogonal space, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a given basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any orthogonal basis to any other orthogonal basis.


Solutions for Chapter 6.B: Orthonormal Bases


  • 6.B.1: (a) Suppose θ ∈ R. Show that (cos θ, sin θ), (−sin θ, cos θ) and (cos θ, sin θ)…
  • 6.B.2: Suppose e1, …, em is an orthonormal list of vectors in V. Let v ∈ V…
  • 6.B.3: Suppose T ∈ L(R³) has an upper-triangular matrix with respect to the…
  • 6.B.4: Suppose n is a positive integer. Prove that 1/√(2π), cos x/√π, cos 2x/√π, …
  • 6.B.5: On P2(R), consider the inner product given by ⟨p, q⟩ = ∫₀¹ p(x)q(x)…
  • 6.B.6: Find an orthonormal basis of P2(R) (with inner product as in Exercise…
  • 6.B.7: Find a polynomial q ∈ P2(R) such that p(1/2) = ∫₀¹ p(x)q(x) dx for every…
  • 6.B.8: Find a polynomial q ∈ P2(R) such that ∫₀¹ p(x)(cos πx) dx = ∫₀¹ p(x)q…
  • 6.B.9: What happens if the Gram–Schmidt Procedure is applied to a list of ve…
  • 6.B.10: Suppose V is a real inner product space and v1, …, vm is a linearly…
  • 6.B.11: Suppose ⟨·,·⟩1 and ⟨·,·⟩2 are inner products on V such that ⟨v, w⟩1 = …
  • 6.B.12: Suppose V is finite-dimensional and ⟨·,·⟩1, ⟨·,·⟩2 are inner products…
  • 6.B.13: Suppose v1, …, vm is a linearly independent list in V. Show that t…
  • 6.B.14: Suppose e1, …, en is an orthonormal basis of V and v1, …, vn are ve…
  • 6.B.15: Suppose C_R[−1, 1] is the vector space of continuous real-valued func…
  • 6.B.16: Suppose F = C, V is finite-dimensional, T ∈ L(V), all the eigenval…
  • 6.B.17: For u ∈ V, let Φu denote the linear functional on V defined by (Φu)(v)…
Textbook: Linear Algebra Done Right (Undergraduate Texts in Mathematics)
Edition: 3
Author: Sheldon Axler
ISBN: 9783319110790


Back substitution: Upper triangular systems are solved in reverse order, x_n to x_1.

Cholesky factorization: A = CᵀC = (L√D)(L√D)ᵀ for positive definite A.

Cofactor: Remove row i and column j; multiply the determinant by (−1)^(i+j).

Condition number: cond(A) = c(A) = ‖A‖‖A⁻¹‖ = σ_max/σ_min. In Ax = b, the relative change ‖δx‖/‖x‖ is less than cond(A) times the relative change ‖δb‖/‖b‖. Condition numbers measure the sensitivity of the output to changes in the input.

Cramer's Rule: B_j has b replacing column j of A; x_j = det B_j / det A.

Diagonal matrix: d_ij = 0 if i ≠ j. Block-diagonal: zero outside square blocks D_ii.

Fast Fourier Transform (FFT): A factorization of the Fourier matrix F_n into ℓ = log₂ n matrices S_i times a permutation. Each S_i needs only n/2 multiplications, so F_n x and F_n⁻¹ c can be computed with nℓ/2 multiplications. Revolutionary.

Gram–Schmidt orthogonalization (A = QR): Independent columns in A, orthonormal columns in Q. Each column q_j of Q is a combination of the first j columns of A (and conversely, so R is upper triangular). Convention: diag(R) > 0.

Determinant: The big formula for det(A) has a sum of n! terms, the cofactor formula uses determinants of size n − 1, and volume of box = |det(A)|.

Left nullspace: Nullspace of Aᵀ = "left nullspace" of A because yᵀA = 0ᵀ.

Multiplicities: The algebraic multiplicity AM of λ is the number of times λ appears as a root of det(A − λI) = 0. The geometric multiplicity GM is the number of independent eigenvectors for λ (= dimension of the eigenspace).

Pivot columns: Columns that contain pivots after row reduction. These are not combinations of earlier columns. The pivot columns are a basis for the column space.

Reflection matrix: Unit vector u is reflected to Qu = −u. All x in the plane mirror uᵀx = 0 have Qx = x. Notice Qᵀ = Q⁻¹ = Q.

Rotation matrix: [cos θ  −sin θ; sin θ  cos θ] rotates the plane by θ and R⁻¹ = Rᵀ rotates back by −θ. Eigenvalues are e^(iθ) and e^(−iθ); eigenvectors are (1, ±i). Here c, s = cos θ, sin θ.
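A quick numerical check of the rotation-matrix entry above is given below; the angle θ is an arbitrary illustrative choice.

```python
import numpy as np

# The 2x2 rotation matrix has eigenvalues e^{i*theta} and e^{-i*theta},
# with eigenvectors proportional to (1, -i) and (1, i).
theta = 0.3
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])

eigvals = np.linalg.eigvals(R)
expected = np.array([np.exp(-1j * theta), np.exp(1j * theta)])
print(np.allclose(np.sort_complex(eigvals), np.sort_complex(expected)))              # True
print(np.allclose(R @ np.array([1, -1j]), np.exp(1j * theta) * np.array([1, -1j])))  # eigenvector check
```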


The Gram–Schmidt process

The Gram–Schmidt process then works as follows:

u1 = v1,                                     e1 = u1 / ‖u1‖
u2 = v2 − proj_{u1}(v2),                     e2 = u2 / ‖u2‖
u3 = v3 − proj_{u1}(v3) − proj_{u2}(v3),     e3 = u3 / ‖u3‖
⋮
uk = vk − Σ_{j=1}^{k−1} proj_{uj}(vk),       ek = uk / ‖uk‖

where proj_{u}(v) = (⟨v, u⟩ / ⟨u, u⟩) u denotes the orthogonal projection of v onto the line spanned by u.

The sequence u1, …, uk is the required system of orthogonal vectors, and the normalized vectors e1, …, ek form an orthonormal set. The calculation of the sequence u1, …, uk is known as Gram–Schmidt orthogonalization, while the calculation of the sequence e1, …, ek is known as Gram–Schmidt orthonormalization as the vectors are normalized.

Geometrically, this method proceeds as follows: to compute ui, it projects vi orthogonally onto the subspace U generated by u1, …, ui−1 , which is the same as the subspace generated by v1, …, vi−1 . The vector ui is then defined to be the difference between vi and this projection, guaranteed to be orthogonal to all of the vectors in the subspace U.

The Gram–Schmidt process also applies to a linearly independent countably infinite sequence {vi}i. The result is an orthogonal (or orthonormal) sequence {ui}i such that for every natural number n, the algebraic span of v1, …, vn is the same as that of u1, …, un.

If the Gram–Schmidt process is applied to a linearly dependent sequence, it outputs the 0 vector on the ith step, assuming that vi is a linear combination of v1, …, vi−1 . If an orthonormal basis is to be produced, then the algorithm should test for zero vectors in the output and discard them because no multiple of a zero vector can have a length of 1. The number of vectors output by the algorithm will then be the dimension of the space spanned by the original inputs.
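Below is a minimal sketch of this behaviour, assuming a simple textbook-style implementation; the function name, tolerance, and test vectors are illustrative, not from the original article.

```python
import numpy as np

# Gram-Schmidt applied to a linearly dependent list produces a (numerically)
# zero vector at the offending step, which must be discarded before normalizing.
def gram_schmidt(vectors, tol=1e-12):
    orthonormal = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for q in orthonormal:
            u -= (q @ u) * q           # subtract the projection onto each earlier q
        norm = np.linalg.norm(u)
        if norm > tol:                 # discard zero vectors (dependent inputs)
            orthonormal.append(u / norm)
    return orthonormal

vs = [np.array([1.0, 2.0, 0.0]),
      np.array([2.0, 4.0, 0.0]),      # dependent on the first: projects to zero
      np.array([0.0, 1.0, 1.0])]
basis = gram_schmidt(vs)
print(len(basis))                      # 2 = dimension of the span of the inputs
```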

Euclidean space

Consider a pair of linearly independent vectors v1 and v2 in R² (with the conventional inner product).

Performing Gram–Schmidt yields an orthogonal pair of vectors u1 and u2.

We check that the vectors u1 and u2 are indeed orthogonal by computing their dot product, noting that if the dot product of two vectors is 0 then they are orthogonal.

For non-zero vectors, we can then normalize the vectors by dividing each by its norm.
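Since the article's original example vectors are not reproduced above, here is a worked sketch with illustrative choices v1 = (3, 1) and v2 = (2, 2).

```python
import numpy as np

# Gram-Schmidt on two vectors in R^2.
v1 = np.array([3.0, 1.0])
v2 = np.array([2.0, 2.0])

u1 = v1
u2 = v2 - (u1 @ v2) / (u1 @ u1) * u1    # subtract the projection of v2 onto u1

print(u1 @ u2)                           # 0.0: u1 and u2 are orthogonal
e1, e2 = u1 / np.linalg.norm(u1), u2 / np.linalg.norm(u2)
print(e1, e2)                            # the orthonormal pair
```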

Denote by GS(v1, …, vk) the result of applying the Gram–Schmidt process to a collection of vectors v1, …, vk. This map has the following properties:

  • It is continuous
  • It is orientation preserving in the sense that or(v1, …, vk) = or(GS(v1, …, vk))
  • It commutes with orthogonal maps g, in the sense that GS(g(v1), …, g(vk)) = g(GS(v1, …, vk))

When this process is implemented on a computer, the vectors u_k are often not quite orthogonal, due to rounding errors. For the Gram–Schmidt process as described above (sometimes referred to as "classical Gram–Schmidt") this loss of orthogonality is particularly bad; therefore, it is said that the (classical) Gram–Schmidt process is numerically unstable.

The Gram–Schmidt process can be stabilized by a small modification; this version is sometimes referred to as modified Gram–Schmidt or MGS. This approach gives the same result as the original formula in exact arithmetic and introduces smaller errors in finite-precision arithmetic. Instead of computing the vector uk as

uk = vk − proj_{u1}(vk) − proj_{u2}(vk) − ⋯ − proj_{u(k−1)}(vk),

it is computed as

uk^(1) = vk − proj_{u1}(vk),
uk^(i) = uk^(i−1) − proj_{ui}(uk^(i−1)) for i = 2, …, k − 1,
uk = uk^(k−1),

so that each projection is subtracted from the running vector rather than from the original vk.
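The following small sketch contrasts the two variants on an ill-conditioned matrix; the implementations and the Hilbert-style test matrix are illustrative choices, not the article's original listing.

```python
import numpy as np

def classical_gs(V):
    """Classical Gram-Schmidt: subtract all projections of the ORIGINAL column."""
    U = np.zeros_like(V)
    for k in range(V.shape[1]):
        u = V[:, k] - U[:, :k] @ (U[:, :k].T @ V[:, k])
        U[:, k] = u / np.linalg.norm(u)
    return U

def modified_gs(V):
    """Modified Gram-Schmidt: subtract each projection from the RUNNING vector."""
    U = np.zeros_like(V)
    for k in range(V.shape[1]):
        u = V[:, k].copy()
        for j in range(k):
            u -= (U[:, j] @ u) * U[:, j]
        U[:, k] = u / np.linalg.norm(u)
    return U

# Ill-conditioned test matrix (a Hilbert-like matrix); an illustrative choice.
n = 10
V = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

for name, gs in [("classical", classical_gs), ("modified", modified_gs)]:
    U = gs(V)
    err = np.linalg.norm(U.T @ U - np.eye(n))
    print(f"{name:9s} Gram-Schmidt: loss of orthogonality = {err:.2e}")
# The classical variant typically shows a much larger departure from the identity.
```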


A short MATLAB routine can implement the Gram–Schmidt orthonormalization for Euclidean vectors: the vectors v1, …, vk (columns of a matrix V, so that V(:,j) is the jth vector) are replaced by orthonormal vectors (columns of U) which span the same subspace.

The cost of this algorithm is asymptotically O(nk²) floating point operations, where n is the dimensionality of the vectors (Golub & Van Loan 1996, §5.2.8).


The result of the Gram–Schmidt process may be expressed in a non-recursive formula using determinants.

Here D0 = 1 and, for j ≥ 1, Dj is the Gram determinant: the determinant of the j × j matrix of inner products ⟨vi, vl⟩ of v1, …, vj.

Note that the expression for uk is a "formal" determinant, i.e. the matrix contains both scalars and vectors; the meaning of this expression is defined to be the result of a cofactor expansion along the row of vectors.

The determinant formula for Gram–Schmidt is computationally slower (exponentially slower) than the recursive algorithms described above; it is mainly of theoretical interest.

Other orthogonalization algorithms use Householder transformations or Givens rotations. The algorithms using Householder transformations are more stable than the stabilized Gram–Schmidt process. On the other hand, the Gram–Schmidt process produces the j th orthogonalized vector after the j th iteration, while orthogonalization using Householder reflections produces all the vectors only at the end. This makes only the Gram–Schmidt process applicable for iterative methods like the Arnoldi iteration.

Yet another alternative is motivated by the use of Cholesky decomposition for inverting the matrix of the normal equations in linear least squares. Let V be a full column rank matrix whose columns need to be orthogonalized. The matrix V*V is Hermitian and positive definite, so it can be written as V*V = LL* using the Cholesky decomposition. The lower triangular matrix L with strictly positive diagonal entries is invertible. Then the columns of the matrix U = V(L⁻¹)* are orthonormal and span the same subspace as the columns of the original matrix V. The explicit use of the product V*V makes the algorithm unstable, especially if the product's condition number is large. Nevertheless, this algorithm is used in practice and implemented in some software packages because of its high efficiency and simplicity.
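A minimal sketch of this Cholesky-based orthogonalization follows; the test matrix is an arbitrary illustrative choice (real-valued, so the conjugate transpose is just the transpose).

```python
import numpy as np

# Factor V*V = L L* and take U = V (L^{-1})*.
rng = np.random.default_rng(3)
V = rng.standard_normal((6, 3))              # full column rank (generically)

L = np.linalg.cholesky(V.T @ V)              # V*V = L L*, L lower triangular
U = V @ np.linalg.inv(L).T                   # U = V (L^{-1})*

print(np.allclose(U.T @ U, np.eye(3)))       # True: columns of U are orthonormal
# U spans the same column space as V: projecting V onto span(U) recovers V.
print(np.allclose(U @ (U.T @ V), V))         # True
```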

In quantum mechanics there are several orthogonalization schemes with characteristics better suited for certain applications than original Gram–Schmidt. Nevertheless, it remains a popular and effective algorithm for even the largest electronic structure calculations. [3]


2. The transpose and the inverse of an orthogonal matrix are equal.

For any invertible square matrix, we know that

Q⁻¹Q = QQ⁻¹ = I,

and from the first property (the columns of Q are orthonormal, so QᵀQ = I), we know that

QᵀQ = I,

so we can conclude from both facts that

Qᵀ = Q⁻¹.

3. The determinant of an orthogonal matrix has value +1 or −1.

To verify this, let's find the determinant of the square of an orthogonal matrix: det(QᵀQ) = det(I) = 1, while det(QᵀQ) = det(Qᵀ) det(Q) = det(Q)², so det(Q)² = 1 and hence det(Q) = ±1.
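A quick numerical check of these properties is shown below, with Q built from the QR factorization of a random matrix (an illustrative choice).

```python
import numpy as np

# Check the defining property and properties 2 and 3 for an orthogonal matrix Q.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

print(np.allclose(Q.T @ Q, np.eye(4)))          # property 1: Q^T Q = I
print(np.allclose(Q.T, np.linalg.inv(Q)))       # property 2: Q^T = Q^{-1}
print(np.isclose(abs(np.linalg.det(Q)), 1.0))   # property 3: det(Q) = +1 or -1
```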


Subsection 6.2.2 Computing Orthogonal Complements

Since any subspace is a span, the following proposition gives a recipe for computing the orthogonal complement of any subspace. However, below we will give several shortcuts for computing the orthogonal complements of other common kinds of subspaces, in particular null spaces. To compute the orthogonal complement of a general subspace, it is usually best to rewrite the subspace as the column space or null space of a matrix, as in this important note in Section 2.6.
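Here is a minimal sketch of this recipe, using the standard fact that the orthogonal complement of W = Span{v1, …, vm} in Rⁿ is the null space of the matrix whose rows are v1, …, vm; the helper function and example vectors are illustrative, not from the text.

```python
import numpy as np

# Orthogonal complement of Span{rows} via the null space of the row matrix,
# computed from the SVD.
def orthogonal_complement(rows):
    A = np.atleast_2d(np.array(rows, dtype=float))
    _, s, Vt = np.linalg.svd(A)
    rank = np.sum(s > 1e-12 * (s[0] if s.size else 1.0))
    return Vt[rank:].T          # columns form an orthonormal basis of W-perp

# Example: W spanned by (1, 1, 1) and (1, 0, -1) inside R^3.
A = np.array([[1, 1, 1], [1, 0, -1]])
basis_perp = orthogonal_complement(A)
print(basis_perp.ravel())                   # a vector orthogonal to both spanning vectors
print(np.allclose(A @ basis_perp, 0))       # True
```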


Comment.

This is one of the midterm 2 exam problems for Linear Algebra (Math 2568) in Autumn 2017.

One common mistake is just to normalize the vectors by dividing them by their length $\sqrt{3}$.
The resulting vectors have length $1$, but they are not orthogonal.

Another mistake is that you just changed the numbers in the vectors so that they are orthogonal.
The issue here is that if you change the numbers randomly, then the new vectors might no longer belong to the subspace $W$.

The point of the Gram-Schmidt orthogonalization is that the process converts any basis for $W$ to an orthogonal basis for $W$.
The above solution didn’t use the full formula of the Gram-Schmidt orthogonalization. Of course, you may use the formula in the exam but you must remember it correctly.


Math 307 - Linear Algebra, Fall 2015

Textbook: We'll be using a draft of the imaginatively named Linear Algebra by myself and Mark Meckes. The textbook is posted in Blackboard. It is there for the use of students in this course; please do not distribute it.

All course information is posted here; Blackboard is used only for posting the textbook and for grades. (See Dave Noon's take on Blackboard.)

About this course: Math 307 is a theoretical course in linear algebra, geared primarily for students majoring in mathematics, mathematics and physics, and applied mathematics. (Although everyone is welcome, if you're not a math major, then depending on your interests and goals you may wish to consider taking Math 201 instead.) The major topics are linear systems of equations, matrices, vector spaces, linear transformations, and inner product spaces.

This is the official course description:

Saying that this is a theoretical course means that students will be expected to read and write proofs. If you don't yet feel comfortable with that, Math 305 (Introduction to Advanced Mathematics) is a course which is specifically designed to help ease the transition from calculus to proof-based math classes. Here is a self-diagnostic which you may find useful; I am happy to discuss it with you in office hours.

Even if you do feel comfortable with reading and writing proofs, I strongly suggest you read and work through this tutorial on proof comprehension.

Topics and rough schedule: We will cover essentially all of the book. The schedule will be roughly as follows:

Topics | Book chapter | Weeks
Linear systems, spaces, and maps | 1 | 1-4
Linear independence and bases | 2 | 5-7
Inner products | 3 | 8-11
Determinants and the characteristic polynomial | 4 | 12-14

Attendance: You're supposed to come. (To every class.)

Reading and group quizzes: We wrote the book to be read, by you! The reading and the lectures are complementary, and it's important to do both. Before each class, please read the section to be covered in the next lecture (we'll go through the book in order; I'll announce any exceptions in class). You will be placed in a group of four at the beginning of the semester; each class will start with a short group quiz based on the material you read in preparation for class.

Homework Problems: How much you work on the homework problems is probably the single biggest factor in determining how much you get out of the course. If you are having trouble with the problems, please come ask for help; you will learn much more (and probably get a rather better grade) if you figure out all of the homework problems, possibly with help in office hours or from your classmates, than if you do them alone when you can and skip the ones you can't. Students are welcome to work together on figuring out the homework, but you should write up the solutions on your own.

Each lecture has specific homework problems associated to it, as listed in the chart below. I strongly suggest doing the homework the same day as the corresponding lecture, or the next day at the latest (see in particular the figure I passed out on the first day of class titled "The value of rehearsal after a lecture"). Homework will be collected weekly.

The homework is meant to be a mix of relatively straightforward exercises and really tough problems. Don't worry too much if you find some of it hard, but do continue to struggle with it; that's the way you learn.

The next stage after the struggle of figuring out a problem is writing down a solution; you learn a lot here, too. Think of the homework as writing assignments. Keep in mind that what you turn in should be solutions: polished English prose with well-reasoned, complete arguments. I should be able to give your solutions to another student who has never thought about the problems (or did, but didn't figure them out), and she should be able to read and understand them.

Individual quizzes: There will be five hour-long quizzes throughout the term. These are closed book, closed notes. The tentative dates are: (all Fridays) September 11, October 2, October 23, November 13, December 4.

  • Group quizzes 5% (the lowest three will be dropped)
  • Homework 25%
  • Individual quizzes 50%
  • Final exam 20%

A couple articles worth reading:

Forget What You Know About Good Study Habits appeared in the Times in Fall 2010. It offers some advice about studying based on current pedagogical research.

Teaching and Human Memory, Part 2 from The Chronicle of Higher Education in December 2011. Its intended audience is professors, but I think it's worth it for students to take a look as well.

Investigating and Improving Undergraduate Proof Comprehension, Fall 2015. This is a fascinating description of attempts to help undergraduates improve at understanding and learning from proofs; it is the source of the tutorial on proof comprehension linked above. Again, it's really written with professors in mind, but you'll learn a lot by reading it.

Assignments: Homework is posted below.

Lecture | Group quiz | Reading for next time | Problems | Due Date
M 8/24 | none | Sec. 1.1, 1.2 | pdf | 8/28
W 8/26 | pdf | Sec. 1.3 | pdf | 8/28
F 8/28 | pdf | Sec. 1.4 | pdf | 9/4
M 8/31 | pdf | Sec. 1.5 | pdf | 9/4
W 9/2 | pdf | Sec. 1.6 | pdf | 9/4
F 9/4 | pdf | Sec. 1.6 | pdf | 9/11
W 9/9 | pdf | Sec. 1.7 | pdf | 9/11
M 9/14 | pdf | Sec. 1.7 | pdf | 9/18
W 9/16 | pdf | Sec. 1.8 | pdf | 9/18
F 9/18 | pdf | Sec. 1.9 | pdf | 9/25
M 9/21 | pdf | Sec. 1.10 | pdf | 9/25
W 9/23 | pdf | Sec. 2.1 | pdf | 9/25
F 9/25 | pdf | Sec. 2.2 | pdf | 10/2
M 9/28 | pdf | Sec. 2.3 | pdf | 10/2
W 9/30 | pdf | Sec. 2.4 | pdf | 10/2
M 10/5 | pdf | Sec. 2.5 | pdf | 10/9
W 10/7 | pdf | Sec. 2.5 | pdf | 10/9
F 10/9 | pdf | Sec. 2.6 | pdf | 10/16
M 10/12 | pdf | Sec. 2.6, 3.1 | pdf | 10/16
W 10/14 | pdf | Sec. 3.1 | pdf | 10/16
F 10/16 | pdf | Sec. 3.2 | pdf | 10/23
M 10/19 | Fall break | | |
W 10/21 | pdf | Sec. 3.2 | pdf | 10/23
M 10/26 | pdf | Sec. 3.3 | pdf | 10/30
W 10/28 | pdf | Sec. 3.4 | pdf | 10/30
F 10/30 | pdf | Sec. 3.5 | pdf | 11/6
M 11/2 | pdf | Sec. 3.5 | pdf | 11/6
W 11/4 | pdf | Sec. 3.6 | pdf | 11/6
F 11/6 | pdf | Sec. 3.6 | pdf | 11/13
M 11/9 | pdf | Sec. 3.7 | pdf | 11/13
M 11/16 | pdf | Sec. 4.1 | pdf | 11/20
W 11/18 | pdf | Sec. 4.2 | pdf | 11/20
F 11/20 | pdf | Sec. 4.2 | pdf | 11/25 (Wednesday!)
M 11/23 | pdf | Sec. 4.3 | pdf | 11/25 (Wednesday!)
W 11/25 | pdf | Sec. 4.4 | pdf | 12/2 (Wednesday!)
M 11/30 | pdf | Sec. 4.4 | pdf | 12/2 (Wednesday!)

The fifth quiz will be Friday, December 4 in class. The quiz will last all 50 minutes of lecture and is closed-notes, closed-book, with no calculators allowed.

The quiz will focus on sections 3.7–4.4 of the book.

  • orthogonal projection
  • algebraically closed
  • Schur decomposition
  • multilinear
  • alternating
  • determinant of a matrix
  • determinant of a linear map
  • Laplace expansion
  • characteristic polynomial
  • multiplicity of a root
  • multiplicity of an eigenvalue

The fourth quiz will be Friday, November 13 in class. The quiz will last all 50 minutes of lecture and is closed-notes, closed-book, with no calculators allowed.

The quiz will focus on sections 3.4–3.6 of the book.

  • QR decomposition
  • conjugate transpose
  • symmetric matrix
  • Frobenius inner product
  • Frobenius norm
  • norm (of a normed space)
  • operator norm
  • isometry
  • unitary
  • orthogonal matrix
  • orthogonal complement
  • singular value decomposition (all versions!)
  • singular values
  • adjoint
  • self-adjoint
  • Hermitian
  • normal map or matrix

The third quiz will be Friday, October 23 in class. The quiz will last all 50 minutes of lecture and is closed-notes, closed-book, with no calculators allowed.

The quiz will focus on material since the previous quiz, i.e., sections 2.4–3.2 of the book.

  • rank (of a linear map or of a matrix)
  • nullity
  • coordinate representation
  • matrix of a linear map with respect to bases
  • diagonalizable map
  • change of basis matrix
  • similar matrices
  • diagonalizable matrix
  • invariant
  • trace
  • inner product
  • norm (in an inner product space)
  • orthogonal
  • orthonormal
  • orthonormal basis

The second quiz will be Friday, October 2 in class. The quiz will last all 50 minutes of lecture and is closed-notes, closed-book, with no calculators allowed.

The quiz will focus on material since the previous quiz, i.e., sections 1.7–2.3 of the book.

  • identity matrix
  • diagonal matrix
  • matrix of an operator
  • matrix product
  • transpose
  • inverse matrix
  • invertible matrix
  • elementary matrices
  • range
  • column space
  • kernel
  • eigenspace
  • linearly dependent
  • linearly independent
  • finite dimensional
  • infinite dimensional
  • basis
  • standard basis
  • dimension

The first quiz will be Friday, September 11 in class. The quiz will last all 50 minutes of lecture and is closed-notes, closed-book, with no calculators allowed.

The quiz will cover all the course material covered through September 9, including section 1.6 of the book.

