
7: Eigenvalues and Eigenvectors


In this chapter we study linear operators $T: V \to V$ on a finite-dimensional vector space $V$. For example, quantum mechanics is largely based upon the study of eigenvalues and eigenvectors of operators on finite- and infinite-dimensional vector spaces.


Chapter 7: Eigenvalues and Eigenvectors

It was shown in Chapter 6 that provided the eigenvalues and eigenvectors of a system can be found, it is possible to transform the coordinates of the system from local or global coordinates to coordinates consisting of normal or 'principal' modes.

Depending on the damping, the eigenvalues and eigenvectors of a system can be real or complex, as discussed in Chapter 6. However, real eigenvalues and eigenvectors, derived from the undamped equations of motion, can be used in most practical cases, and will be assumed here, unless stated otherwise.

In Example 6.3, we used a very basic 'hand' method to demonstrate the derivation of the eigenvalues and eigenvectors of a simple 2-DOF system: solving the characteristic equation for its roots and substituting these back into the equations to obtain the eigenvectors. In this chapter, we look at methods that can be used with larger systems.
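To make this concrete, here is a minimal Python sketch of the undamped eigenproblem, using a hypothetical mass matrix M and stiffness matrix K (illustrative values only, not those of Example 6.3). The undamped equations of motion $M\ddot{x} + Kx = 0$ lead to the generalized eigenproblem $Kv = \omega^2 Mv$, whose eigenvalues give the natural frequencies and whose eigenvectors give the mode shapes.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical undamped 2-DOF system (illustrative values only, not
# the matrices from Example 6.3): M x'' + K x = 0 leads to the
# generalized eigenproblem K v = (omega^2) M v.
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])            # mass matrix
K = np.array([[ 6.0, -2.0],
              [-2.0,  4.0]])          # stiffness matrix

# eigh solves the symmetric generalized problem, so the eigenvalues
# omega^2 are real and the eigenvectors (mode shapes) are M-orthogonal.
omega_sq, modes = eigh(K, M)

print("natural frequencies (rad/s):", np.sqrt(omega_sq))
print("mode shapes (columns):\n", modes)
```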


First, choose the matrix size you want to enter. You will see a randomly generated matrix to give you an idea of what your output will look like.

Then, enter your own numbers in the boxes that appear. You can enter integers or decimals. (More advanced entry and output is in the works, but not available yet.)

On a keyboard, you can use the tab key to easily move to the next matrix entry box.

Click calculate when ready.

The output will involve real and/or complex eigenvalues and eigenvector entries.

You can change the precision (number of significant digits) of the answers, using the pull-down menu.

Eigenvalues and eigenvectors calculator

NOTE 1: The eigenvector output you see here may not be the same as what you obtain on paper. Remember, any scalar multiple of an eigenvector is still an eigenvector. The convention used here is that eigenvectors are scaled so that the final entry is 1.

NOTE 2: The larger matrices involve a lot of calculation, so expect the answer to take a bit longer.

NOTE 3: Eigenvectors are usually column vectors, but the larger ones would take up a lot of vertical space, so they are written horizontally with a "T" superscript (denoting the transpose of the vector).

NOTE 4: When there are complex eigenvalues, there's always an even number of them, and they always appear as a complex conjugate pair, e.g. 3 + 5i and 3 − 5i.

NOTE 5: When there are eigenvectors with complex elements, there's always an even number of such eigenvectors, and the corresponding elements always appear as complex conjugate pairs. (It may take some manipulating by multiplying each element by a complex number to see this is so in some cases.)
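For readers who want to reproduce these conventions offline, here is a rough sketch using NumPy. The calculator's own implementation is not published, so this is only illustrative, with NOTE 1's scaling convention applied by hand.

```python
import numpy as np

# Rough offline sketch of the calculator's output conventions; the
# calculator's internals are not published, so this is illustrative.
A = np.array([[2.0, -4.0],
              [-1.0, -1.0]])          # any square matrix will do

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

for lam, v in zip(eigvals, eigvecs.T):
    # NOTE 1's convention: rescale so the final entry is 1
    # (any nonzero scalar multiple is still an eigenvector).
    if not np.isclose(v[-1], 0.0):
        v = v / v[-1]
    print(f"eigenvalue {lam:.6g}: eigenvector {np.round(v, 6)}")
```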


Eigenvalues and Eigenvectors

We review here the basics of computing eigenvalues and eigenvectors. Eigenvalues and eigenvectors play a prominent role in the study of ordinary differential equations and in many applications in the physical sciences. Expect to see them come up in a variety of contexts!

Definitions

Let $A$ be an $n \times n$ matrix. The number $\lambda$ is an eigenvalue of $A$ if there exists a non-zero vector $\mathbf{v}$ such that $A\mathbf{v} = \lambda\mathbf{v}$. In this case, the vector $\mathbf{v}$ is called an eigenvector of $A$ corresponding to $\lambda$.

Computing Eigenvalues and Eigenvectors

We can rewrite the condition $A\mathbf{v} = \lambda\mathbf{v}$ as
$$(A - \lambda I)\mathbf{v} = \mathbf{0},$$
where $I$ is the $n \times n$ identity matrix. Now, in order for a non-zero vector $\mathbf{v}$ to satisfy this equation, $A - \lambda I$ must not be invertible.

Otherwise, if $A - \lambda I$ had an inverse, then
$$\begin{aligned} (A - \lambda I)^{-1}(A - \lambda I)\mathbf{v} &= (A - \lambda I)^{-1}\mathbf{0} \\ \mathbf{v} &= \mathbf{0}. \end{aligned}$$
But we are looking for a non-zero vector $\mathbf{v}$. That is, the determinant of $A - \lambda I$ must equal 0. We call $p(\lambda) = \det(A - \lambda I)$ the characteristic polynomial of $A$. The eigenvalues of $A$ are simply the roots of the characteristic polynomial of $A$.

Example

Let $A = \begin{bmatrix} 2 & -4 \\ -1 & -1 \end{bmatrix}$. Then
$$\begin{aligned} p(\lambda) &= \det \begin{bmatrix} 2-\lambda & -4 \\ -1 & -1-\lambda \end{bmatrix} \\ &= (2-\lambda)(-1-\lambda) - (-4)(-1) \\ &= \lambda^2 - \lambda - 6 \\ &= (\lambda - 3)(\lambda + 2). \end{aligned}$$
Thus, $\lambda_1 = 3$ and $\lambda_2 = -2$ are the eigenvalues of $A$.

To find eigenvectors $\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$ corresponding to an eigenvalue $\lambda$, we simply solve the system of linear equations given by
$$(A - \lambda I)\mathbf{v} = \mathbf{0}.$$

Example

The matrix $A = \begin{bmatrix} 2 & -4 \\ -1 & -1 \end{bmatrix}$ of the previous example has eigenvalues $\lambda_1 = 3$ and $\lambda_2 = -2$. Let's find the eigenvectors corresponding to $\lambda_1 = 3$. Let $\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$. Then $(A - 3I)\mathbf{v} = \mathbf{0}$ gives us
$$\begin{bmatrix} 2-3 & -4 \\ -1 & -1-3 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$
from which we obtain the duplicate equations
$$\begin{aligned} -v_1 - 4v_2 &= 0 \\ -v_1 - 4v_2 &= 0. \end{aligned}$$
If we let $v_2 = t$, then $v_1 = -4t$. All eigenvectors corresponding to $\lambda_1 = 3$ are multiples of $\begin{bmatrix} -4 \\ 1 \end{bmatrix}$, and thus the eigenspace corresponding to $\lambda_1 = 3$ is given by the span of $\begin{bmatrix} -4 \\ 1 \end{bmatrix}$. That is, $\left\{ \begin{bmatrix} -4 \\ 1 \end{bmatrix} \right\}$ is a basis of the eigenspace corresponding to $\lambda_1 = 3$.

Repeating this process with $\lambda_2 = -2$, we find that
$$\begin{aligned} 4v_1 - 4v_2 &= 0 \\ -v_1 + v_2 &= 0. \end{aligned}$$
If we let $v_2 = t$, then $v_1 = t$ as well. Thus, an eigenvector corresponding to $\lambda_2 = -2$ is $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and the eigenspace corresponding to $\lambda_2 = -2$ is given by the span of $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$. $\left\{ \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}$ is a basis for the eigenspace corresponding to $\lambda_2 = -2$.
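As a quick numerical check of these hand computations, here is a short NumPy sketch; note that the library may return the eigenvalues in a different order and scales eigenvectors to unit length.

```python
import numpy as np

A = np.array([[2.0, -4.0],
              [-1.0, -1.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # 3 and -2, though possibly in a different order

# eig returns unit-length eigenvectors, i.e. scalar multiples of the
# hand-computed [-4, 1] and [1, 1]; the defining property still holds.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```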

In the following example, we see a two-dimensional eigenspace.

Example

Let $A = \begin{bmatrix} 5 & 8 & 16 \\ 4 & 1 & 8 \\ -4 & -4 & -11 \end{bmatrix}$. Then
$$p(\lambda) = \det \begin{bmatrix} 5-\lambda & 8 & 16 \\ 4 & 1-\lambda & 8 \\ -4 & -4 & -11-\lambda \end{bmatrix} = (1-\lambda)(\lambda + 3)^2$$
after some algebra! Thus, $\lambda_1 = 1$ and $\lambda_2 = -3$ are the eigenvalues of $A$. Eigenvectors $\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$ corresponding to $\lambda_1 = 1$ must satisfy
$$\begin{aligned} 4v_1 + 8v_2 + 16v_3 &= 0 \\ 4v_1 + 8v_3 &= 0 \\ -4v_1 - 4v_2 - 12v_3 &= 0. \end{aligned}$$

Letting $v_3 = t$, we find from the second equation that $v_1 = -2t$, and then from the first that $v_2 = -t$. All eigenvectors corresponding to $\lambda_1 = 1$ are multiples of $\begin{bmatrix} -2 \\ -1 \\ 1 \end{bmatrix}$, and so the eigenspace corresponding to $\lambda_1 = 1$ is given by the span of $\begin{bmatrix} -2 \\ -1 \\ 1 \end{bmatrix}$. $\left\{ \begin{bmatrix} -2 \\ -1 \\ 1 \end{bmatrix} \right\}$ is a basis for the eigenspace corresponding to $\lambda_1 = 1$.

Eigenvectors corresponding to $\lambda_2 = -3$ must satisfy
$$\begin{aligned} 8v_1 + 8v_2 + 16v_3 &= 0 \\ 4v_1 + 4v_2 + 8v_3 &= 0 \\ -4v_1 - 4v_2 - 8v_3 &= 0. \end{aligned}$$

The equations here are just multiples of each other! If we let $v_3 = t$ and $v_2 = s$, then $v_1 = -s - 2t$. Eigenvectors corresponding to $\lambda_2 = -3$ have the form
$$s \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} + t \begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix}.$$
Thus, the eigenspace corresponding to $\lambda_2 = -3$ is two-dimensional and is spanned by $\begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix}$. $\left\{ \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix} \right\}$ is a basis for the eigenspace corresponding to $\lambda_2 = -3$.
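These exact eigenspaces can be confirmed with SymPy, whose eigenvects() method returns each eigenvalue together with its algebraic multiplicity and a basis for its eigenspace.

```python
from sympy import Matrix

A = Matrix([[ 5,  8,  16],
            [ 4,  1,   8],
            [-4, -4, -11]])

# eigenvects() returns, for each eigenvalue: the eigenvalue, its
# algebraic multiplicity, and a basis of its eigenspace (exactly).
for lam, mult, basis in A.eigenvects():
    print(f"lambda = {lam}, algebraic multiplicity = {mult}, "
          f"eigenspace dimension = {len(basis)}")
    for b in basis:
        print("  basis vector:", list(b))
```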

Notes

  • Eigenvalues and eigenvectors can be complex-valued as well as real-valued.
  • The dimension of the eigenspace corresponding to an eigenvalue is less than or equal to the multiplicity of that eigenvalue.
  • The techniques used here are practical for $2 \times 2$ and $3 \times 3$ matrices. Eigenvalues and eigenvectors of larger matrices are often found using other techniques, such as iterative methods.

Key Concepts

Let $A$ be an $n \times n$ matrix. The eigenvalues of $A$ are the roots of the characteristic polynomial
$$p(\lambda) = \det(A - \lambda I).$$
For each eigenvalue $\lambda$, we find eigenvectors $\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$ by solving the linear system
$$(A - \lambda I)\mathbf{v} = \mathbf{0}.$$
The set of all vectors $\mathbf{v}$ satisfying $A\mathbf{v} = \lambda\mathbf{v}$ is called the eigenspace of $A$ corresponding to $\lambda$.
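The Key Concepts above translate almost line by line into a short SymPy sketch, shown here for the $2 \times 2$ example matrix used earlier.

```python
from sympy import Matrix, symbols, solve, eye

lam = symbols('lambda')
A = Matrix([[2, -4],
            [-1, -1]])

# The eigenvalues are the roots of p(lambda) = det(A - lambda*I):
# here, -2 and 3.
p = (A - lam * eye(2)).det()
eigenvalues = solve(p, lam)

# For each eigenvalue, the eigenspace is the null space of A - lambda*I.
for ev in eigenvalues:
    basis = (A - ev * eye(2)).nullspace()
    print("lambda =", ev, " eigenspace basis:", [list(b) for b in basis])
```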


2 Answers

You don't need to find the characteristic polynomial of $M$ (or indeed the matrix $M$ at all) in order to find the eigenvalues and eigenvectors of $T$. You can work directly from the definition. If $\lambda$ is an eigenvalue of $T$ with associated eigenvector $A$, then by definition $A^T = \lambda A$. Taking transposes of both sides gives $A = \lambda A^T$. Substituting the first equation into this one, we obtain $A = \lambda^2 A$. Assuming $A$ is nonzero, which is required for any eigenvector, we conclude that the only possible eigenvalues are $\lambda = 1$ and $\lambda = -1$.

Now consider the two cases.

Case $\lambda = 1$: the first equation becomes $A^T = A$, so $A$ is an associated eigenvector if and only if it is nonzero and symmetric, i.e. of the form $A = \begin{bmatrix} a & b \\ b & c \end{bmatrix}$.

Note that there are three degrees of freedom (the values of $a$, $b$, and $c$), so this eigenspace has geometric multiplicity $3$.

Case $\lambda = -1$: the first equation becomes $A^T = -A$, so $A$ is an associated eigenvector if and only if it is nonzero and antisymmetric, i.e. of the form $A = \begin{bmatrix} 0 & d \\ -d & 0 \end{bmatrix}$.

Note that there is one degree of freedom (the value of $d$), so this eigenspace has geometric multiplicity $1$.
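As a numerical sanity check of this answer, the transpose map on $2 \times 2$ matrices can be represented as a $4 \times 4$ matrix acting on the flattened entries; the sketch below assumes the ordering $\operatorname{vec}(A) = (a_{11}, a_{12}, a_{21}, a_{22})$.

```python
import numpy as np

# The transpose map T(A) = A^T on 2x2 matrices, written as the 4x4
# matrix that acts on vec(A) = (a11, a12, a21, a22): transposition
# swaps the two off-diagonal entries and fixes the diagonal ones.
T = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

vals = np.linalg.eigvals(T)
print(np.sort(vals))   # [-1.  1.  1.  1.]: multiplicities 1 and 3
```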


Chapter 7: Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors occur frequently in engineering analysis. Consider, for example, that the variables of interest in the analysis of a linear system are $x_1$, $x_2$, and $x_3$ and that they are related by three linear, simultaneous differential equations with constant coefficients:

These may be solved for the derivatives

and then put into matrix form

The foregoing may be written, with $\dot{X}$ indicating the derivative of $X$ with respect to time, as

The solution to the system of differential equations begins with the determination of the so-called complementary function. The procedure is to make the set of equations homogeneous and then, knowing that exponential solutions exist, assume that the complementary function is of the form $x = Ce^{\lambda t}$, where $C$ is an arbitrary constant. Thus in

take $X_c = Ce^{\lambda t}$, where $C$ is a $3 \times 1$ column vector of arbitrary constants.

Then, with $\dot{X}_c = \lambda Ce^{\lambda t}$, it is observed that

which is in the form of eq. (7.1) and where the values of the $\lambda$'s must be determined.

The foregoing discussion describes what is called the eigenvalue or characteristic value problem. It occurs frequently in engineering analysis in all disciplines, and it does not derive exclusively from a set of differential equations.
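Here is a minimal Python sketch of this procedure, using a hypothetical $3 \times 3$ coefficient matrix rather than the chapter's own (unreproduced) equations: assuming distinct eigenvalues, the complementary function is a combination of terms $Ce^{\lambda t}$ built from the eigenpairs.

```python
import numpy as np

# Minimal sketch with a hypothetical 3x3 coefficient matrix A (the
# chapter's own equations are not reproduced above). For Xdot = A X,
# each eigenpair (lambda_k, v_k) contributes a term c_k v_k e^(lambda_k t).
A = np.array([[-2.0,  1.0,  0.0],
              [ 1.0, -3.0,  1.0],
              [ 0.0,  1.0, -2.0]])

lam, V = np.linalg.eig(A)          # eigenvalues, eigenvector columns
x0 = np.array([1.0, 0.0, 0.0])     # initial condition
c = np.linalg.solve(V, x0)         # expand x0 in the eigenvector basis

def x(t):
    # x(t) = sum_k c_k * exp(lambda_k * t) * v_k
    return (V * c) @ np.exp(lam * t)

print(x(0.0))   # recovers x0
print(x(1.0))   # decays, since this A has only negative eigenvalues
```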


Invertibility and Eigenvalues

So far we haven’t probed what it means for a matrix to have an eigenvalue of 0.

This happens if and only if the equation $A\mathbf{x} = 0\mathbf{x}$ has a nontrivial solution.

But that equation is equivalent to $A\mathbf{x} = \mathbf{0}$, which has a nontrivial solution if and only if $A$ is not invertible.

0 is an eigenvalue of $A$ if and only if $A$ is not invertible.

This draws an important connection between invertibility and zero eigenvalues.

So we have yet another addition to the Invertible Matrix Theorem!
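A two-line numerical illustration of this addition to the theorem, using a matrix that is visibly singular:

```python
import numpy as np

# The second row is twice the first, so A is not invertible...
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.det(A))        # 0.0 (up to rounding)

# ...and, as the fact above promises, 0 shows up as an eigenvalue.
print(np.linalg.eigvals(A))    # 0 and 5, possibly in a different order
```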


7: Eigenvalues and Eigenvectors

If you get nothing out of this quick review of linear algebra you must get this section. Without this section you will not be able to do any of the differential equations work that is in this chapter.

So, let’s start with the following. If we multiply an $n \times n$ matrix by an $n \times 1$ vector we will get a new $n \times 1$ vector back. In other words,
$$A\vec{\eta} = \vec{y}$$

What we want to know is if it is possible for the following to happen. Instead of just getting a brand new vector $\vec{y}$ out of the multiplication, is it possible instead to get the following?
$$A\vec{\eta} = \lambda \vec{\eta}$$

In other words, is it possible, at least for certain $\lambda$ and $\vec{\eta}$, to have matrix multiplication be the same as just multiplying the vector by a constant? Of course, we probably wouldn’t be talking about this if the answer was no. So, it is possible for this to happen, however, it won’t happen for just any value of $\lambda$ or $\vec{\eta}$. If we do happen to have a $\lambda$ and $\vec{\eta}$ for which this works (and they will always come in pairs) then we call $\lambda$ an eigenvalue of $A$ and $\vec{\eta}$ an eigenvector of $A$.

So, how do we go about finding the eigenvalues and eigenvectors for a matrix? Well, first notice that if $\vec{\eta} = \vec{0}$ then $A\vec{\eta} = \lambda\vec{\eta}$ is going to be true for any value of $\lambda$ and so we are going to make the assumption that $\vec{\eta} \ne \vec{0}$. With that out of the way let’s rewrite the equation a little.

$$\begin{aligned} A\vec{\eta} - \lambda \vec{\eta} &= \vec{0} \\ A\vec{\eta} - \lambda I \vec{\eta} &= \vec{0} \\ \left( A - \lambda I \right) \vec{\eta} &= \vec{0} \end{aligned}$$

Notice that before we factored out the $\vec{\eta}$ we added in the appropriately sized identity matrix. This is equivalent to multiplying things by a one and so doesn’t change the value of anything. We needed to do this because without it we would have had the difference of a matrix, $A$, and a constant, $\lambda$, and this can’t be done. We now have the difference of two matrices of the same size, which can be done.

So, with this rewrite we see that
$$\left( A - \lambda I \right) \vec{\eta} = \vec{0}$$
is equivalent to $A\vec{\eta} = \lambda\vec{\eta}$. In order to find the eigenvectors for a matrix we will need to solve a homogeneous system. Recall the fact from the previous section: we will either have exactly one solution ($\vec{\eta} = \vec{0}$) or we will have infinitely many nonzero solutions. Since we’ve already said that we don’t want $\vec{\eta} = \vec{0}$ this means that we want the second case.

Knowing this will allow us to find the eigenvalues for a matrix. Recall from this fact that we will get the second case only if the matrix in the system is singular. Therefore, we will need to determine the values of $\lambda$ for which we get
$$\det \left( A - \lambda I \right) = 0.$$

Once we have the eigenvalues we can then go back and determine the eigenvectors for each eigenvalue. Let’s take a look at a couple of quick facts about eigenvalues and eigenvectors.

If $A$ is an $n \times n$ matrix then $\det \left( A - \lambda I \right) = 0$ is an $n^{\text{th}}$ degree polynomial. This polynomial is called the characteristic polynomial.

To find eigenvalues of a matrix all we need to do is solve a polynomial. That’s generally not too bad provided we keep $n$ small. Likewise this fact also tells us that for an $n \times n$ matrix, $A$, we will have $n$ eigenvalues if we include all repeated eigenvalues.

If $\lambda_1, \lambda_2, \ldots, \lambda_n$ is the complete list of eigenvalues for $A$ (including all repeated eigenvalues) then,

    If $\lambda$ occurs only once in the list then we call $\lambda$ simple.

    If $\lambda$ occurs $k > 1$ times in the list then we say that $\lambda$ has multiplicity $k$, and an eigenvalue of multiplicity $k$ will have anywhere from 1 to $k$ linearly independent eigenvectors.
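As a quick numerical illustration of these facts, NumPy can produce both the degree-$n$ characteristic polynomial and the $n$ eigenvalues (with repeats) of an $n \times n$ matrix; the matrix below is a made-up example with a repeated eigenvalue.

```python
import numpy as np

# A 3x3 matrix chosen (hypothetically) to have a repeated eigenvalue.
A = np.array([[7.0, 1.0, 0.0],
              [0.0, 7.0, 0.0],
              [0.0, 0.0, 2.0]])

# np.poly gives the coefficients of the characteristic polynomial,
# which has degree n = 3 here (so 4 coefficients).
print(np.poly(A))

# Counting repeats, there are exactly n = 3 eigenvalues: 7, 7 and 2,
# so lambda = 7 has multiplicity 2 and lambda = 2 is simple.
print(np.linalg.eigvals(A))
```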

The usefulness of these facts will become apparent when we get back into differential equations since in that work we will want linearly independent solutions.

Let’s work a couple of examples now to see how we actually go about finding eigenvalues and eigenvectors.

The first thing that we need to do is find the eigenvalues. That means we need the following matrix,

In particular we need to determine where the determinant of this matrix is zero.

So, it looks like we will have two simple eigenvalues for this matrix, $\lambda_1 = -5$ and $\lambda_2 = 1$. We will now need to find the eigenvectors for each of these. Also note that according to the fact above, the two eigenvectors should be linearly independent.

To find the eigenvectors we simply plug each eigenvalue into $\left( A - \lambda I \right) \vec{\eta} = \vec{0}$ and solve. So, let’s do that.

$\lambda_1 = -5$:
In this case we need to solve the following system.

Recall that officially to solve this system we use the following augmented matrix.

Upon reducing down we see that we get a single equation

that will yield an infinite number of solutions. This is expected behavior. Recall that we picked the eigenvalues so that the matrix would be singular and so we would get infinitely many solutions.

Notice as well that we could have identified this from the original system. This won’t always be the case, but in the $2 \times 2$ case we can see from the system that one row will be a multiple of the other and so we will get infinite solutions. From this point on we won’t actually be solving systems in these cases. We will just go straight to the equation and we can use either of the two rows for this equation.

Now, let’s get back to the eigenvector, since that is what we were after. In general then the eigenvector will be any vector that satisfies the following,

To get this we used the solution to the equation that we found above.

We really don’t want a general eigenvector however, so we will pick a value for $\eta_2$ to get a specific eigenvector. We can choose anything (except $\eta_2 = 0$), so pick something that will make the eigenvector “nice”. Note as well that since we’ve already assumed that the eigenvector is not zero we must choose a value that will not give us zero, which is why we want to avoid $\eta_2 = 0$ in this case. Here’s the eigenvector for this eigenvalue.

Now we get to do this all over again for the second eigenvalue.

$\lambda_2 = 1$:
We’ll do much less work with this part than we did with the previous part. We will need to solve the following system.

Clearly both rows are multiples of each other and so we will get infinitely many solutions. We can choose to work with either row. We’ll run with the first to avoid having too many minus signs floating around. Doing this gives us,

Note that we can solve this for either of the two variables. However, with an eye towards working with these later on let’s try to avoid as many fractions as possible. The eigenvector is then,

Note that the two eigenvectors are linearly independent as predicted.

This matrix has fractions in it. That’s life so don’t get excited about it. First, we need the eigenvalues.

So, it looks like we’ve got an eigenvalue of multiplicity 2 here. Remember that the power on the factor in the characteristic polynomial will be the multiplicity.

Now, let’s find the eigenvector(s). This one is going to be a little different from the first example. There is only one eigenvalue so let’s do the work for that one. We will need to solve the following system,

So, the rows are multiples of each other. We’ll work with the first equation in this example to find the eigenvector.

Recall in the last example we decided that we wanted to make these as “nice” as possible and so should avoid fractions if we can. Sometimes, as in this case, we simply can’t so we’ll have to deal with it. In this case the eigenvector will be,

Note that by careful choice of the variable in this case we were able to get rid of the fraction that we had. This is something that in general doesn’t much matter if we do or not. However, when we get back to differential equations it will be easier on us if we don’t have any fractions so we will usually try to eliminate them at this step.

Also, in this case we are only going to get a single (linearly independent) eigenvector. We can get other eigenvectors by choosing different values of the free variable. However, each of these will be linearly dependent with the first eigenvector. If you’re not convinced of this try it. Pick some values for the free variable and get a different vector and check to see if the two are linearly dependent.

Recall from the fact above that an eigenvalue of multiplicity $k$ will have anywhere from 1 to $k$ linearly independent eigenvectors. In this case we got one. For most of the $2 \times 2$ matrices that we’ll be working with this will be the case, although it doesn’t have to be. We can, on occasion, get two.

So, we’ll start with the eigenvalues.

This doesn’t factor, so upon using the quadratic formula we arrive at,

In this case we get complex eigenvalues which are definitely a fact of life with eigenvalue/eigenvector problems so get used to them.

Finding eigenvectors for complex eigenvalues is identical to the previous two examples, but it will be somewhat messier. So, let’s do that.

$\lambda_1 = -1 + 5i$:
The system that we need to solve this time is

Now, it’s not super clear that the rows are multiples of each other, but they are. In this case we have,

This is not something that you need to worry about; we just wanted to make the point. For the work that we’ll be doing later on with differential equations we will just assume that we’ve done everything correctly and we’ve got two rows that are multiples of each other. Therefore, all that we need to do here is pick one of the rows and work with it.

We’ll work with the second row this time.

Now we can solve for either of the two variables. However, again looking forward to differential equations, we are going to need the “$i$” in the numerator, so solve the equation in such a way that this will happen. Doing this gives,

So, the eigenvector in this case is

As with the previous example we choose the value of the variable to clear out the fraction.

Now, the work for the second eigenvector is almost identical and so we’ll not dwell on that too much.

$\lambda_2 = -1 - 5i$:
The system that we need to solve here is

Working with the second row again gives,

The eigenvector in this case is

There is a nice fact that we can use to simplify the work when we get complex eigenvalues. We need a bit of terminology first however.

If we start with a complex number, $z = a + bi$,

then the complex conjugate of $z$ is $\bar{z} = a - bi$.

To compute the complex conjugate of a complex number we simply change the sign on the term that contains the “$i$”. The complex conjugate of a vector is just the conjugate of each of the vector’s components.

We now have the following fact about complex eigenvalues and eigenvectors.

If $A$ is an $n \times n$ matrix with only real entries and $\lambda_1 = a + bi$ is an eigenvalue with eigenvector $\vec{\eta}^{\,(1)}$, then $\lambda_2 = \overline{\lambda_1} = a - bi$ is also an eigenvalue and its eigenvector is the conjugate of $\vec{\eta}^{\,(1)}$.

This fact is something that you should feel free to use as you need to in our work.
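The fact is easy to check numerically. The sketch below uses a hypothetical real matrix with eigenvalues $-1 \pm 5i$; it is not the (unreproduced) matrix from the example above.

```python
import numpy as np

# Hypothetical real 2x2 matrix with eigenvalues -1 +/- 5i (not the
# matrix from the example above, which isn't reproduced here).
A = np.array([[-1.0,  5.0],
              [-5.0, -1.0]])

lam, V = np.linalg.eig(A)
print(lam)   # [-1.+5.j -1.-5.j]: a complex conjugate pair

# The eigenvector for the conjugate eigenvalue is the conjugate of
# the other eigenvector, just as the fact states.
print(np.allclose(V[:, 1], np.conj(V[:, 0])))   # True
```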

Now, we need to work one final eigenvalue/eigenvector problem. To this point we’ve only worked with $2 \times 2$ matrices and we should work at least one that isn’t $2 \times 2$. Also, we need to work one in which we get an eigenvalue of multiplicity greater than one that has more than one linearly independent eigenvector.

Despite the fact that this is a $3 \times 3$ matrix, it still works the same as the $2 \times 2$ matrices that we’ve been working with. So, start with the eigenvalues.

So, we’ve got a simple eigenvalue and an eigenvalue of multiplicity 2. Note that we used the same method of computing the determinant of a $3 \times 3$ matrix that we used in the previous section. We just didn’t show the work.

Let’s now get the eigenvectors. We’ll start with the eigenvector for the simple eigenvalue.

This time, unlike the $2 \times 2$ cases we worked earlier, we actually need to solve the system. So let’s do that.

Going back to equations gives,

So, again we get infinitely many solutions as we should for eigenvectors. The eigenvector is then,

Now, let’s do the other eigenvalue.

Okay, in this case it is clear that all three rows are the same and so there isn’t any reason to actually solve the system, since we can clear out the bottom two rows to all zeroes in one step. The equation that we get then is,

So, in this case we get to pick two of the values for free and will still get infinitely many solutions. Here is the general eigenvector for this case,

Notice the restriction this time. Recall that we only require that the eigenvector not be the zero vector. This means that we can allow one or the other of the two variables to be zero, we just can’t allow both of them to be zero at the same time!

What this means for us is that we are going to get two linearly independent eigenvectors this time. Here they are.

Now, when we talked about linearly independent vectors in the last section we only looked at $n$ vectors each with $n$ components. We can still talk about linear independence in this case however. Recall back when we did linear independence for functions we saw at the time that if two functions were linearly dependent then they were multiples of each other. Well, the same thing holds true for vectors. Two vectors will be linearly dependent if they are multiples of each other. In this case there is no way to get one of these eigenvectors by multiplying the other by a constant. Therefore, these two vectors must be linearly independent.

So, summarizing up, here are the eigenvalues and eigenvectors for this matrix.


Here is the most important definition in this text.

Definition

Let $A$ be an $n \times n$ matrix. An eigenvector of $A$ is a nonzero vector $v$ such that $Av = \lambda v$ for some scalar $\lambda$. The scalar $\lambda$ is called the eigenvalue of $A$ associated with the eigenvector $v$.

The German prefix “eigen” roughly translates to “self” or “own”. An eigenvector of $A$ is a vector that is taken to a multiple of itself by the matrix transformation $x \mapsto Ax$, which perhaps explains the terminology. On the other hand, “eigen” is often translated as “characteristic”; we may think of an eigenvector as describing an intrinsic, or characteristic, property of $A$.

Eigenvalues and eigenvectors are only for square matrices.

Eigenvectors are by definition nonzero. Eigenvalues may be equal to zero.

We do not consider the zero vector to be an eigenvector: since $A\mathbf{0} = \lambda\mathbf{0}$ for every scalar $\lambda$, the associated eigenvalue would be undefined.

If someone hands you a matrix $A$ and a vector $v$, it is easy to check if $v$ is an eigenvector of $A$: simply multiply $v$ by $A$ and check whether $Av$ is a scalar multiple of $v$. On the other hand, given just the matrix $A$, it is not obvious at all how to find the eigenvectors. We will learn how to do this in Section 5.2.


To summarize: an eigenvector of $A$ is a nonzero vector $v$ such that $v$ and $Av$ are collinear with the origin, that is, such that $v$ and $Av$ lie on the same line through the origin. In this case, $Av$ is a scalar multiple of $v$, and the eigenvalue is the scaling factor.

For matrices that arise as the standard matrix of a linear transformation, it is often best to draw a picture, then find the eigenvectors and eigenvalues geometrically by studying which vectors are not moved off of their line. For a transformation that is defined geometrically, it is not necessary even to compute its matrix to find the eigenvectors and eigenvalues.
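For instance (a sketch, not an example from this text): reflection across the line $y = x$ fixes vectors on that line and flips vectors perpendicular to it, so the eigenvalues $\pm 1$ and their eigenvectors can be read off geometrically and then confirmed numerically.

```python
import numpy as np

# Reflection across the line y = x. Vectors on the line are fixed
# (eigenvalue 1); vectors perpendicular to it are flipped (eigenvalue -1).
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])

lam, V = np.linalg.eig(R)
print(lam)   # 1 and -1
print(V)     # columns proportional to [1, 1] and [1, -1]
```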


3 Answers

Anyway, to solve this one, keep in mind what an eigenvector actually is. It is a non-zero vector which, after being multiplied by $A$, becomes a multiple of itself. Geometrically this means that an output vector needs to be parallel to its corresponding input vector.

It seems that the diagonal vectors $\pm \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ are the only vectors that work this way. Their outputs seem to have been scaled by a factor of 3 or 4. Let's say $\lambda = 3$.

Then $\pm \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ are eigenvectors with eigenvalue about $\lambda = 3$. So in fact, $\begin{bmatrix} c \\ c \end{bmatrix}$ are eigenvectors with eigenvalue about $\lambda = 3$ for any constant $c \ne 0$.

