11.1: Self-adjoint or hermitian operators - Mathematics

Let \(V\) be a finite-dimensional inner product space over \(\mathbb{C}\) with inner product \(\inner{\cdot}{\cdot}\). A linear operator \(T\in\mathcal{L}(V)\) is uniquely determined by the values of

\[ \inner{Tv}{w}, \quad \text{for all } v,w\in V. \]

This means, in particular, that if \(T,S\in\mathcal{L}(V)\) and

\[ \inner{Tv}{w} = \inner{Sv}{w} \quad \text{for all } v,w \in V, \]

then \(T=S\). To see this, take \(w\) to be the elements of an orthonormal basis of \(V\).

Definition 11.1.1. Given \(T\in\mathcal{L}(V)\), the adjoint (a.k.a. hermitian conjugate) of \(T\) is defined to be the operator \(T^*\in\mathcal{L}(V)\) for which

\[ \inner{Tv}{w} = \inner{v}{T^*w}, \quad \text{for all } v,w\in V. \]

Moreover, we call \(T\) self-adjoint (a.k.a. hermitian) if \(T=T^*\).

The uniqueness of \(T^*\) is clear from the previous observation.

Example 11.1.2. Let \(V=\mathbb{C}^3\), and let \(T \in \mathcal{L}(\mathbb{C}^3)\) be defined by \(T(z_1,z_2,z_3)=(2z_2+iz_3,iz_1,z_2)\). Then

\begin{align*}
\inner{(y_1,y_2,y_3)}{T^*(z_1,z_2,z_3)} &= \inner{T(y_1,y_2,y_3)}{(z_1,z_2,z_3)}\\
&= \inner{(2y_2+iy_3,iy_1,y_2)}{(z_1,z_2,z_3)}\\
&= 2y_2\overline{z_1} + iy_3 \overline{z_1} + iy_1\overline{z_2} + y_2 \overline{z_3}\\
&= \inner{(y_1,y_2,y_3)}{(-iz_2,\,2z_1+z_3,\,-iz_1)},
\end{align*}

so that \(T^*(z_1,z_2,z_3)=(-iz_2,2z_1+z_3,-iz_1)\). Writing the matrix for \(T\) in terms of the canonical basis, we see that

\[ M(T) = \begin{bmatrix} 0&2&i\\ i&0&0\\ 0&1&0 \end{bmatrix} \quad \text{and} \quad
M(T^*) = \begin{bmatrix} 0&-i&0\\ 2&0&1\\ -i&0&0 \end{bmatrix}. \]

Note that (M(T^*)) can be obtained from (M(T)) by taking the complex conjugate of each element and then transposing. This operation is called the conjugate transpose of (M(T)), and we denote it by ((M(T))^{*}).
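The conjugate-transpose rule is easy to check numerically. The following sketch (using numpy, with arbitrarily chosen test vectors) verifies both \(M(T^*)=(M(T))^*\) and the defining property of the adjoint for Example 11.1.2; `inner(a, b)` implements the convention \(\inner{a}{b}=\sum_i a_i\overline{b_i}\) used above.

```python
import numpy as np

# Matrix of T from Example 11.1.2 with respect to the canonical basis of C^3.
M_T = np.array([[0, 2, 1j],
                [1j, 0, 0],
                [0, 1, 0]])

# The matrix of the adjoint is the conjugate transpose of M(T).
M_T_star = M_T.conj().T

# It reproduces the formula T*(z1, z2, z3) = (-i z2, 2 z1 + z3, -i z1).
z = np.array([1 + 2j, 3 - 1j, 0.5j])
expected = np.array([-1j*z[1], 2*z[0] + z[2], -1j*z[0]])
assert np.allclose(M_T_star @ z, expected)

# The defining property <Tv, w> = <v, T*w> for the standard inner product.
inner = lambda a, b: np.vdot(b, a)   # <a, b> = sum_i a_i * conj(b_i)
v = np.array([1.0, 1j, 2 - 1j])
w = np.array([0.5, 1 + 1j, -2j])
assert np.isclose(inner(M_T @ v, w), inner(v, M_T_star @ w))
```

Note that `np.vdot(b, a)` conjugates its first argument, which is exactly the convention that the inner product is conjugate-linear in the second slot.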

We collect several elementary properties of the adjoint operation into the following proposition. You should provide a proof of these results for your own practice.

Proposition 11.1.3. Let \(S,T\in \mathcal{L}(V)\) and \(a\in \mathbb{F}\).

  1. \((S+T)^* = S^*+T^*\).
  2. \((aT)^* = \overline{a}\, T^*\).
  3. \((T^*)^* = T\).
  4. \(I^* = I\).
  5. \((ST)^* = T^* S^*\).
  6. \(M(T^*) = M(T)^*\).
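While the proofs are left as practice, the algebraic identities can be spot-checked on random complex matrices, where (by property 6) the adjoint is the conjugate transpose. A numerical sketch, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
adj = lambda A: A.conj().T   # matrix adjoint (conjugate transpose)

S = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
T = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
a = 2 - 3j

assert np.allclose(adj(S + T), adj(S) + adj(T))     # (S+T)* = S* + T*
assert np.allclose(adj(a*T), np.conj(a)*adj(T))     # (aT)* = a-bar T*
assert np.allclose(adj(adj(T)), T)                  # (T*)* = T
assert np.allclose(adj(np.eye(3)), np.eye(3))       # I* = I
assert np.allclose(adj(S @ T), adj(T) @ adj(S))     # (ST)* = T* S*, order reversed
```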

When \(n=1\), note that the conjugate transpose of a \(1\times 1\) matrix \(A\) is just the complex conjugate of its single entry. Hence, requiring \(A\) to be self-adjoint (\(A=A^*\)) amounts to saying that this sole entry is real. Because of the transpose, though, reality is not the same as self-adjointness when \(n > 1\), but the analogy does nonetheless carry over to the eigenvalues of self-adjoint operators.

Proposition 11.1.4. Every eigenvalue of a self-adjoint operator is real.

Proof. Suppose \(\lambda\in\mathbb{C}\) is an eigenvalue of \(T\) and that \(0\neq v\in V\) is a corresponding eigenvector such that \(Tv=\lambda v\). Then

\begin{align*}
\lambda \norm{v}^2 &= \inner{\lambda v}{v} = \inner{Tv}{v} = \inner{v}{T^*v}\\
&= \inner{v}{Tv} = \inner{v}{\lambda v} = \overline{\lambda} \inner{v}{v}
= \overline{\lambda} \norm{v}^2.
\end{align*}

Since \(v\neq 0\), we have \(\norm{v}^2\neq 0\), and so \(\lambda=\overline{\lambda}\).

Example 11.1.5. The operator \(T\in \mathcal{L}(V)\) defined by \(T(v) = \begin{bmatrix} 2 & 1+i \\ 1-i & 3 \end{bmatrix} v\) is self-adjoint, and it can be checked (e.g., using the characteristic polynomial) that the eigenvalues of \(T\) are \(\lambda=1,4\).

Let $\mathcal{L}$ be a second-order self-adjoint differential operator. Then $\mathcal{L}u(x)$ may be written as
\begin{equation}\label{eq:sl}
\mathcal{L}u(x)=\frac{d}{dx}\left[p(x)\frac{du}{dx}\right]+q(x)u(x),
\end{equation}
as we discussed here. Multiply \eqref{eq:sl} by $v^\ast$ ($v^\ast$ is the complex conjugate of $v$) and integrate:
\begin{align*}
\int_a^b v^\ast\mathcal{L}u\,dx&=\int_a^b v^\ast\frac{d}{dx}\left[p(x)\frac{du}{dx}\right]dx+\int_a^b v^\ast qu\,dx\\
&=\int_a^b v^\ast\, d\left[p(x)\frac{du}{dx}\right]+\int_a^b v^\ast qu\,dx\\
&=v^\ast p\frac{du}{dx}\bigg|_a^b-\int_a^b v^{\ast\prime} pu'\,dx+\int_a^b v^\ast qu\,dx.
\end{align*}
We may impose
\begin{equation}\label{eq:bc1}
v^\ast p\frac{du}{dx}\bigg|_a^b=0
\end{equation}
as a boundary condition. Integrating by parts once more,
\begin{align*}
-\int_a^b v^{\ast\prime} pu'\,dx&=-\int_a^b v^{\ast\prime} p\,du\\
&=-v^{\ast\prime} pu\bigg|_a^b+\int_a^b u(pv^{\ast\prime})'\,dx.
\end{align*}
We may also impose
\begin{equation}\label{eq:bc2}
-v^{\ast\prime} pu\bigg|_a^b=0
\end{equation}
as a boundary condition. Then
\begin{align*}
\int_a^b v^\ast\mathcal{L}u\,dx&=\int_a^b u(pv^{\ast\prime})'\,dx+\int_a^b v^\ast qu\,dx\\
&=\int_a^b u\,\mathcal{L}v^\ast\,dx.
\end{align*}

Definition. A self-adjoint operator $\mathcal{L}$ is called a Hermitian operator with respect to the functions $u(x)$ and $v(x)$ if
\begin{equation}\label{eq:hermitian}
\int_a^b v^\ast\mathcal{L}u\,dx=\int_a^b u\,\mathcal{L}v^\ast\,dx.
\end{equation}
That is, a self-adjoint operator $\mathcal{L}$ which satisfies the boundary conditions \eqref{eq:bc1} and \eqref{eq:bc2} is a Hermitian operator.
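The integration-by-parts argument can be checked symbolically with sympy. In the sketch below, the choices of $p$, $q$, and the trial functions on $[0,\pi]$ are arbitrary, made only so that $u$ and $v$ vanish at both endpoints, which forces both boundary terms to vanish:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Hypothetical sample data: u and v vanish at x = 0 and x = pi,
# so both boundary conditions are satisfied automatically.
p = 1 + x
q = x
u = sp.sin(x)
v = sp.sin(2*x)

L = lambda f: sp.diff(p*sp.diff(f, x), x) + q*f   # L f = (p f')' + q f

lhs = sp.integrate(sp.conjugate(v)*L(u), (x, 0, sp.pi))
rhs = sp.integrate(u*L(sp.conjugate(v)), (x, 0, sp.pi))

# Hermiticity: int v* L u dx = int u L v* dx
assert sp.simplify(lhs - rhs) == 0
```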

Hermitian Operators in Quantum Mechanics

In quantum mechanics, the differential operators need to be neither second-order nor real. For example, the momentum operator is given by $\hat p=-i\hbar\frac{d}{dx}$. Therefore we need an extended notion of Hermitian operators in quantum mechanics.

Definition. The operator $\mathcal{L}$ is Hermitian if
\begin{equation}\label{eq:qm-hermitian}
\int \psi_1^\ast\mathcal{L}\psi_2\, d\tau=\int(\mathcal{L}\psi_1)^\ast\psi_2\, d\tau.
\end{equation}
Note that \eqref{eq:qm-hermitian} coincides with the Hermitian condition for differential operators above if $\mathcal{L}$ is real. In terms of Dirac's braket notation, \eqref{eq:qm-hermitian} can be written as
$$\langle\psi_1|\mathcal{L}\psi_2\rangle=\langle\mathcal{L}\psi_1|\psi_2\rangle.$$

The adjoint operator $A^\dagger$ of an operator $A$ is defined by
\begin{equation}\label{eq:adjoint}
\int \psi_1^\ast A^\dagger \psi_2\, d\tau=\int(A\psi_1)^\ast\psi_2\, d\tau.
\end{equation}
Again in terms of Dirac's braket notation, \eqref{eq:adjoint} can be written as
$$\langle\psi_1|A^\dagger\psi_2\rangle=\langle A\psi_1|\psi_2\rangle.$$
If $A=A^\dagger$, then $A$ is said to be self-adjoint. Clearly, self-adjoint operators are Hermitian operators; however, the converse need not be true. Although we will not delve into this any deeper here, the difference is that Hermitian operators are always assumed to be bounded, while self-adjoint operators are not necessarily restricted to be bounded. That is, bounded self-adjoint operators are Hermitian operators. Physicists don't usually distinguish self-adjoint operators from Hermitian operators, and by "Hermitian operators" they often mean self-adjoint operators. In quantum mechanics, observables such as position, momentum, energy, and angular momentum are represented by (Hermitian) linear operators, and the measurements of observables are given by the eigenvalues of these operators. Physical observables are regarded as bounded and continuous, because measurements are made in a laboratory (hence bounded), and points of discontinuity are mathematical points, while nothing smaller than the Planck length can be observed. As is well known, any bounded linear operator defined on a Hilbert space is continuous.

For those who are interested: this may cause notational confusion, but in mathematics the complex conjugate $a^\ast$ is written $\bar a$ and the adjoint $a^\dagger$ is written $a^\ast$. Let $\mathcal{H}$ be a Hilbert space. By the Riesz Representation Theorem, it can be shown that for any bounded linear operator $a:\mathcal{H}\longrightarrow\mathcal{H}'$, there exists a unique bounded linear operator $a^\ast: \mathcal{H}'\longrightarrow\mathcal{H}$ such that
$$\langle a^\ast\eta|\xi\rangle=\langle\eta|a\xi\rangle$$ for all $\xi\in\mathcal{H}$, $\eta\in\mathcal{H}'$. This $a^\ast$ is defined to be the adjoint of the bounded operator $a$. The map $a\mapsto a^\ast$ defines an involution on $\mathcal{B}(\mathcal{H})$, the set of all bounded linear operators on $\mathcal{H}$, and $\mathcal{B}(\mathcal{H})$ with this involution becomes a C$^\ast$-algebra. In the mathematical formulation of quantum mechanics, observables are represented by self-adjoint operators of the form $a^\ast a$, where $a\in\mathcal{B}(\mathcal{H})$. Note that $a^\ast a$ is positive, i.e. its eigenvalues are non-negative.

Definition. The expectation value of an operator $\mathcal{L}$ is
$$\langle\mathcal{L}\rangle=\int \psi^\ast\mathcal{L}\psi\, d\tau.$$
$\langle\mathcal{L}\rangle$ corresponds to the result of a measurement of the physical quantity represented by $\mathcal{L}$ when the physical system is in a state described by $\psi$. The expectation value of an operator should be real, and this is guaranteed if the operator is Hermitian. To see this, suppose that $\mathcal{L}$ is Hermitian. Then
\begin{align*}
\langle\mathcal{L}\rangle^\ast&=\left[\int \psi^\ast\mathcal{L}\psi\, d\tau\right]^\ast\\
&=\int\psi\,\mathcal{L}^\ast\psi^\ast\, d\tau\\
&=\int(\mathcal{L}\psi)^\ast\psi\, d\tau\\
&=\int\psi^\ast\mathcal{L}\psi\, d\tau \quad(\mbox{$\mathcal{L}$ is Hermitian})\\
&=\langle\mathcal{L}\rangle.
\end{align*}
That is, $\langle\mathcal{L}\rangle$ is real.
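The extended notion matters precisely because operators like $\hat p$ are complex. The Hermiticity of $\hat p=-i\hbar\,d/dx$ can be verified symbolically; the wavefunctions below are hypothetical choices that vanish at the endpoints of $[0,\pi]$, so the boundary term from integration by parts drops out:

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)

# Hypothetical trial wavefunctions vanishing at x = 0 and x = pi.
psi1 = sp.sin(x)
psi2 = sp.sin(2*x)

p = lambda f: -sp.I*hbar*sp.diff(f, x)   # momentum operator -i hbar d/dx

lhs = sp.integrate(sp.conjugate(psi1)*p(psi2), (x, 0, sp.pi))
rhs = sp.integrate(sp.conjugate(p(psi1))*psi2, (x, 0, sp.pi))

# int psi1* (p psi2) = int (p psi1)* psi2: p is Hermitian in the extended sense
assert sp.simplify(lhs - rhs) == 0
```

Note that without the complex conjugation on the left factor the two integrals would differ, which is exactly why the real-operator definition had to be extended.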

There are three important properties of Hermitian (self-adjoint) operators: their eigenvalues are real, their eigenfunctions are orthogonal, and their eigenfunctions form a complete set.

Hermitian vs. self-adjoint operators

I'm gearing up for my Quals and need to make sure I understand the difference in case it gets asked. Would you guys agree with this distinction? An operator A is hermitian if <Au,v>=<u,Av> for all u,v in the domain of A. This doesn't necessarily mean A=A*, as the domain of A* could be larger than the domain of A. So if A is hermitian and D[A]=D[A*], then A is self-adjoint. Please correct me in even the smallest detail, as I would much rather hear how wrong I am from you guys than from my qualifying committee.

Similarly, a self-adjoint operator is by definition symmetric and everywhere defined.

I don't think this is correct. A self-adjoint operator need not be everywhere defined. If this were true, then all self-adjoint operators would also be bounded and, therefore, Hermitian.

OK, that makes sense. But are you saying you would define self-adjoint as symmetric and everywhere defined, or is it better to say that if an operator is symmetric and everywhere defined then it is also self-adjoint? Maybe this is unimportant, but I'd hate to give that definition and have one of my committee say "no, the definition is A=A*" or something else. As you can see, I'm kinda in that pre-quals panic mode.

I think in the mathematical physics community people usually use the term Hermitian for bounded operators with the following definition

<Au,v>=<u,Av> for all u,v in the Hilbert space.

For unbounded operators, the definition is more complicated because you have to define A*. This is defined as the operator whose domain is given by all v for which there exists z such that:

<Au,v>=<u,z> for all u in the domain of A.

Operator A is then self-adjoint if A=A*, which means D(A)=D(A*) and Av=A*v for all v in D(A).

I'm not sure how widespread these definitions are, but I'm sure that when you have unbounded operators, you need to use the second definition. I think we used to call operators satisfying your first definition symmetric, but I'm not sure about that. Physicists don't care about the distinction and define Hermitian operators by my first definition even for unbounded operators.

EDIT: I should also mention that this distinction is only important if your Hilbert space has infinitely many dimensions. Also, A has to be a densely defined operator (this should be part of the definition). In a finite-dimensional Hilbert space every densely defined operator can be uniquely extended to the whole Hilbert space, so one can take all operators as defined on the whole space. Both definitions are then equivalent. However, if you are on an infinite-dimensional Hilbert space and A is unbounded, then you cannot uniquely define it on the whole space and domains become important.

Hilbert-adjoint operator vs self-adjoint operator

Hi, while reading a comment by Dr Du, I looked up the definition of the Hilbert-adjoint operator, and it appears to be the same as that of a Hermitian operator.

This is OK, as it implies that ##T^*T=TT^*##; however, it appears that self-adjointness is different?

Please correct me if this is wrong. And it looks to me that an operator is defined to be self-adjoint if and only if it satisfies the rule of the inner product in a Hilbert space, where the conjugate transpose of the matrix of the operator is equal to the matrix itself.

So one can say that the first is a property which defines a particular symmetric aspect of the relationship of two operators, T and ##T^*##, while the latter defines a symmetric aspect of the action (or operation) of either of the operators separately, T or ##T^*##, on a mapping, and that both properties (the former and the latter) need not occur at the same time?

I am aware that much of this can be answered by looking at other threads, but this question compares these two critical properties, which beginners like me can misconceive, and thus may contribute to increasing the impact of the forum as a resource.


A Hermitian operator is one which satisfies

\[ \int_a^b v^\ast\mathcal{L}u\,dx=\int_a^b u\,\mathcal{L}v^\ast\,dx. \]

As shown in Sturm-Liouville theory, if \(\mathcal{L}\) is self-adjoint and satisfies the boundary conditions

\[ v^\ast p \frac{du}{dx}\bigg|_a^b = 0, \qquad v^{\ast\prime} p u\bigg|_a^b = 0, \]

then it is automatically Hermitian. Hermitian operators have real eigenvalues, orthogonal eigenfunctions, and the corresponding eigenfunctions form a complete set when \(\mathcal{L}\) is second-order and linear. In order to prove that eigenvalues must be real and eigenfunctions orthogonal, consider

\[ \mathcal{L}u_1 = \lambda_1 u_1. \]

Assume there is a second eigenvalue \(\lambda_2\) such that

\[ \mathcal{L}u_2 = \lambda_2 u_2, \qquad \mathcal{L}u_2^\ast = \lambda_2^\ast u_2^\ast. \]

Now multiply the first equation by \(u_2^\ast\) and the last by \(u_1\), integrate, and subtract:

\[ \int_a^b u_2^\ast\mathcal{L}u_1\,dx - \int_a^b u_1\mathcal{L}u_2^\ast\,dx = (\lambda_1-\lambda_2^\ast)\int_a^b u_2^\ast u_1\,dx. \]

But because \(\mathcal{L}\) is Hermitian, the left side vanishes:

\[ (\lambda_1-\lambda_2^\ast)\int_a^b u_2^\ast u_1\,dx = 0. \]

If the eigenvalues \(\lambda_1\) and \(\lambda_2\) are not degenerate, then \(\int_a^b u_2^\ast u_1\,dx = 0\), so the eigenfunctions are orthogonal. If the eigenvalues are degenerate, the eigenfunctions are not necessarily orthogonal. Now take \(u_1 = u_2\):

\[ (\lambda_1-\lambda_1^\ast)\int_a^b u_1^\ast u_1\,dx = 0. \]

The integral cannot vanish unless \(u_1\equiv 0\), so we have \(\lambda_1=\lambda_1^\ast\) and the eigenvalues are real.

Given Hermitian operators \(\tilde A\) and \(\tilde B\),

\[ \int\psi^\ast(\tilde A+\tilde B)\psi\,d\tau = \int(\tilde A\psi)^\ast\psi\,d\tau + \int(\tilde B\psi)^\ast\psi\,d\tau = \int\bigl((\tilde A+\tilde B)\psi\bigr)^\ast\psi\,d\tau, \]

so the sum of two Hermitian operators is again Hermitian. Because, for a Hermitian operator \(\tilde A\) with eigenvalue \(a\),

\[ a\int\psi^\ast\psi\,d\tau = \int\psi^\ast\tilde A\psi\,d\tau = \int(\tilde A\psi)^\ast\psi\,d\tau = a^\ast\int\psi^\ast\psi\,d\tau, \]

either \(\int\psi^\ast\psi\,d\tau=0\) or \(a=a^\ast\). But \(\int\psi^\ast\psi\,d\tau=0\) iff \(\psi=0\), so

\[ a=a^\ast \]

for a nontrivial eigenfunction. This means that \(a\) is real, namely that Hermitian operators produce real expectation values. Every observable must therefore have a corresponding Hermitian operator.
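A discretized analogue makes the two properties concrete: replacing \(\mathcal{L}=d^2/dx^2\) with its standard finite-difference matrix under Dirichlet boundary conditions gives a real symmetric (hence Hermitian) matrix whose eigenvalues are real and whose eigenvectors are orthonormal. A minimal numpy sketch (the grid size is an arbitrary choice):

```python
import numpy as np

# Finite-difference sketch of L = d^2/dx^2 on (0, 1) with Dirichlet
# boundary conditions: the resulting tridiagonal matrix is real symmetric.
n = 200
h = 1.0/(n + 1)
main = -2.0*np.ones(n)/h**2
off = np.ones(n - 1)/h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

vals, vecs = np.linalg.eigh(A)

# Real eigenvalues and orthonormal eigenvectors, as the theory predicts.
assert np.all(np.isreal(vals))
assert np.allclose(vecs.T @ vecs, np.eye(n))
# Each column really is an eigenvector: A v = lambda v.
assert np.allclose(A @ vecs[:, 0], vals[0]*vecs[:, 0])
```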

Self-Adjoint Linear Operators

Recall that if $V$ and $W$ are finite-dimensional nonzero inner product spaces and if $T \in \mathcal L(V, W)$, then the adjoint of $T$, denoted $T^*$, is the linear map $T^* : W \to V$ defined as follows: for a fixed $w \in W$, consider the linear functional $\varphi : V \to \mathbb{F}$ defined by $\varphi (v) = \langle T(v), w \rangle$, and define $T^* (w)$ to be the unique vector in $V$ such that $\langle T(v), w \rangle = \langle v, T^*(w) \rangle$.

If we now look only at linear operators, say $T \in \mathcal L (V)$, then $T : V \to V$ and $T^* : V \to V$, and in some cases we will have that $T = T^*$. These types of linear operators are special and are defined below.

Definition: Let $V$ be a finite-dimensional nonzero inner product space. Let $T \in \mathcal L (V)$. Then $T$ is said to be Self-Adjoint if $T = T^*$.

The term "Hermitian" is used interchangeably as opposed to "Self-Adjoint".

We have already seen one type of self-adjoint linear operator, namely the identity operator, since $I = I^*$.

For a more complex example, consider the linear operator $T \in \mathcal{L}(\mathbb{R}^2)$ defined by $T(x, y) = (2x + 3y, 3x + 2y)$ and consider the standard basis $\{ (1, 0), (0, 1) \}$ of $\mathbb{R}^2$. Note that $T(1, 0) = (2, 3)$ and $T(0, 1) = (3, 2)$, so the matrix $\mathcal M (T)$ with respect to this basis is:

\[ \mathcal M (T) = \begin{bmatrix} 2 & 3 \\ 3 & 2 \end{bmatrix}. \]

As we saw on The Matrix of the Adjoint of a Linear Map, the matrix of $T^*$ with respect to this basis can be obtained by taking the conjugate transpose of $\mathcal M (T)$. However, note that $\mathcal M (T)$ equals its own conjugate transpose, so $\mathcal M (T) = \mathcal M (T^*)$ and $T$ is self-adjoint.

We will now look at some basic properties of self-adjoint matrices.

Note that proposition 2 holds only if $a$ is a real number, since in the proof below we require that $a = \bar a$, which holds if and only if $a \in \mathbb{R}$.
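The worked example above can be sketched numerically; the matrix is real symmetric, hence equal to its conjugate transpose, and its eigenvalues come out real (here $5$ and $-1$):

```python
import numpy as np

M = np.array([[2.0, 3.0],
              [3.0, 2.0]])    # matrix of T(x, y) = (2x + 3y, 3x + 2y)

# Real symmetric, so it equals its conjugate transpose: T is self-adjoint.
assert np.array_equal(M, M.conj().T)

# Its eigenvalues are therefore real: lambda = -1 and 5.
vals = np.linalg.eigvalsh(M)
assert np.allclose(vals, [-1.0, 5.0])
```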


Most operators in quantum mechanics are of a special kind called Hermitian. This section lists their most important properties.

An operator is called Hermitian when it can always be flipped over to the other side if it appears in an inner product:

  • They always have real eigenvalues, not involving i. (But the eigenfunctions, or eigenvectors if the operator is a matrix, might be complex.) Physical values such as position, momentum, and energy are ordinary real numbers since they are eigenvalues of Hermitian operators.
  • Their eigenfunctions can always be chosen so that they are normalized and mutually orthogonal, in other words, an orthonormal set. This tends to simplify the various mathematics a lot.
  • Their eigenfunctions form a complete set. This means that any function can be written as some linear combination of the eigenfunctions. (There is a proof in the derivations for an important example.) In practical terms, it means that you only need to look at the eigenfunctions to completely understand what the operator does.

In the linear algebra of real matrices, Hermitian operators are simply symmetric matrices. A basic example is the inertia matrix of a solid body in Newtonian dynamics. The orthonormal eigenvectors of the inertia matrix give the directions of the principal axes of inertia of the body.

An orthonormal complete set of eigenvectors or eigenfunctions is an example of a so-called "basis." In general, a basis is a minimal set of vectors or functions that you can write all other vectors or functions in terms of. For example, the unit vectors i, j, and k are a basis for normal three-dimensional space. Every three-dimensional vector can be written as a linear combination of the three.

The following properties of inner products involving Hermitian operators are often needed, so they are listed here:

The first says that you can swap the two factors in the inner product if you take the complex conjugate. (It is simply a reflection of the fact that if you change the sides in an inner product, you turn it into its complex conjugate. Normally, that puts the operator at the other side, but for a Hermitian operator, it does not make a difference.) The second is important because ordinary real numbers typically occupy a special place in the grand scheme of things. (The fact that the inner product is real merely reflects the fact that if a number is equal to its complex conjugate, it must be real; if there were an i in it, the number would change by a complex conjugate.)

Hermitian operators can be flipped over to the other side in inner products.

Hermitian operators have only real eigenvalues.

Hermitian operators have a complete set of orthonormal eigenfunctions (or eigenvectors).

A matrix is defined to convert any vector into …. Verify that … and … are orthonormal eigenvectors of this matrix, with eigenvalues 2, respectively 4.

A matrix is defined to convert any vector into the vector …. Verify that … and … are orthonormal eigenvectors of this matrix, with eigenvalues 2, respectively 0.

Show that the operator … is a Hermitian operator, but … is not.

Generalize the previous question by showing that any complex constant comes out of the right hand side of an inner product unchanged, but out of the left hand side as its complex conjugate.

As a result, a number is only a Hermitian operator if it is real: if it is complex, the two expressions above are not the same.

Show that an operator such as …, corresponding to multiplying by a real function, is a Hermitian operator.

Show that the operator … is not a Hermitian operator, but … is, assuming that the functions on which they act vanish at the ends of the interval on which they are defined. (Less restrictively, it is only required that the functions are periodic; they must return to the same value at … that they had at ….)

Show that if … is a Hermitian operator, then so is …. As a result, under the conditions of the previous question, … is a Hermitian operator too. (And so is just …, of course, but … is the one with the positive eigenvalues, the squares of the eigenvalues of ….)

A complete set of orthonormal eigenfunctions of … on the interval … that are zero at the end points is the infinite set of functions ….

Check that these functions are indeed zero at the end points, that they are indeed orthonormal, and that they are eigenfunctions of … with the positive real eigenvalues ….

Completeness is a much more difficult thing to prove, but they are complete. The completeness proof in the notes covers this case.

A complete set of orthonormal eigenfunctions of the operator … that are periodic on the interval … is the infinite set of functions ….

Check that these functions are indeed periodic and orthonormal, and that they are eigenfunctions of … with the real eigenvalues ….

Completeness is a much more difficult thing to prove, but they are complete. The completeness proof in the notes covers this case.

3.41.2. Spectrum

To obtain the spectrum of the operator …, we need to solve the following eigenvalue problem:

Those values of … for which the solutions … can be normalized belong to the discrete part of the spectrum; the … are called eigenvalues and the solutions eigenvectors. Those values of … for which … can only be normalized to a delta function:

belong to the continuous part of the spectrum (note that in this case …).

Eigenvectors belonging to the continuous part of the spectrum obey the completeness relation:

Eigenvectors belonging to the discrete part obey the following completeness relation:

The sum or integral runs over the whole spectrum (if the spectrum contains both discrete and continuous parts, we simply combine sums and integrals).

The spectrum of a self-adjoint operator is real, because

The eigenvectors are orthogonal:

So for … we get …; for …, the … is equal to 1 if … belongs to the discrete spectrum and we get:

or it is normalized as a delta function if it belongs to the continuous part:

As such, the eigenvectors of a self-adjoint operator are complete and orthogonal in the above sense. Thus any function from the space can be expanded into the series:
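For an operator with a purely discrete spectrum, the orthogonality and completeness relations reduce to familiar matrix statements. A numpy sketch, with an arbitrary random Hermitian matrix standing in for the self-adjoint operator:

```python
import numpy as np

# A random Hermitian matrix stands in for a self-adjoint operator with a
# purely discrete spectrum.
rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
A = (B + B.conj().T)/2

vals, vecs = np.linalg.eigh(A)

# Real spectrum.
assert np.all(np.isreal(vals))

# Orthogonality: <m|n> = delta_mn.
assert np.allclose(vecs.conj().T @ vecs, np.eye(4))

# Discrete completeness relation: sum_n |n><n| = identity.
P = sum(np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(4))
assert np.allclose(P, np.eye(4))
```

Completeness is what licenses the expansion of any vector in the eigenbasis: applying the identity `P` to a vector reproduces it as a sum of its components along the eigenvectors.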


     All the space’s a stage,
and all functionals and operators merely players!

All our previous considerations were only a preparation of the stage, and now the main actors come forward to perform a play. Vector spaces are not so interesting while we consider them in statics; what really makes them exciting is their transformations. The natural first step is to consider transformations which respect both the linear structure and the norm.

6.1  Linear operators

Definition    A linear operator T between two normed spaces X and Y is a mapping T : X → Y such that T(λv + µu) = λT(v) + µT(u). The kernel ker T and the image Im T of a linear operator are defined by
    ker T = { x ∈ X : Tx = 0 },      Im T = { y ∈ Y : y = Tx, for some x ∈ X }.

As usual we are also interested in connections with the second (topological) structure. For a linear operator T the following are equivalent:

  1. T is continuous on X;
  2. T is continuous at the point 0;
  3. T is a bounded linear operator.

Proof. The proof essentially follows the proof of the similar earlier theorem.

6.2  Orthoprojections

Here we will use the orthogonal complement, see the earlier section on orthogonality, to introduce a class of linear operators: orthogonal projections. Despite (or rather due to) their extreme simplicity, these operators are among the most frequently used tools in the theory of Hilbert spaces.

Corollary   (of Thm. 23, about Orthoprojection)    Let M be a closed linear subspace of a Hilbert space H. There is a linear map P_M from H onto M (the orthogonal projection or orthoprojection) such that
    P_M² = P_M,      ker P_M = M⊥,      P_{M⊥} = I − P_M.     (34)

Proof. Let us define P_M(x) = m where x = m + n is the decomposition from the previous theorem. The linearity of this operator follows from the fact that both M and M⊥ are linear subspaces. Also P_M(m) = m for all m ∈ M and the image of P_M is M. Thus P_M² = P_M. Also if P_M(x) = 0 then x ⊥ M, i.e. ker P_M = M⊥. Similarly P_{M⊥}(x) = n where x = m + n, and P_M + P_{M⊥} = I.
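The corollary can be illustrated concretely in R⁴. The subspace below is an arbitrary choice for the sketch; the orthoprojection is built from an orthonormal basis of M obtained by QR factorisation:

```python
import numpy as np

# M = span of the two columns below (an arbitrary 2-dimensional subspace of R^4).
M_basis = np.array([[1.0, 0.0],
                    [1.0, 1.0],
                    [0.0, 1.0],
                    [0.0, 0.0]])
Q, _ = np.linalg.qr(M_basis)   # orthonormal basis of M
P = Q @ Q.T                    # the orthoprojection P_M

assert np.allclose(P @ P, P)             # P_M^2 = P_M

m = M_basis @ np.array([2.0, -1.0])      # an element of M
assert np.allclose(P @ m, m)             # P_M fixes M

n = np.array([1.0, -1.0, 1.0, 0.0])      # orthogonal to both columns of M_basis
assert np.allclose(P @ n, 0)             # ker P_M = M-perp
assert np.allclose((np.eye(4) - P) @ n, n)   # P_{M-perp} = I - P_M
```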

6.3   B ( H ) as a Banach space (and even algebra)

Proof. The proof repeats the proof of the earlier theorem, which is a particular case of the present theorem for Y = ℂ; see the corresponding example.

Proof. Clearly (ST)x = S(Tx) ∈ Z, and

    || STx || ≤ || S || · || Tx || ≤ || S || · || T || · || x ||,

which implies the norm estimate || ST || ≤ || S || · || T || once we take the supremum over || x || ≤ 1.

Proof. It is an induction on n, with the trivial base n = 1 and the step following from the previous theorem.

Definition    Let T ∈ B(X, Y). We say T is an invertible operator if there exists S ∈ B(Y, X) such that
     ST = I_X      and      TS = I_Y.
Such an S is called the inverse operator of T.
  1. For an invertible operator T : X → Y we have ker T = {0} and Im T = Y.
  2. The inverse operator is unique (if it exists at all). (Assume existence of S and S′, then consider the operator STS′.)
Examples:
  1. The zero operator is never invertible, except in the pathological case X = Y = {0}.
  2. The identity operator I_X is the inverse of itself.
  3. A linear functional is not invertible unless it is non-zero and X is one-dimensional.
  4. An operator ℂⁿ → ℂᵐ is invertible if and only if m = n and the corresponding square matrix is non-singular, i.e. has non-zero determinant. The right shift S is not invertible on ℓ² (it is one-to-one but is not onto). But the left shift operator T(x₁, x₂, …) = (x₂, x₃, …) is its left inverse, i.e. TS = I but ST ≠ I since ST(1, 0, 0, …) = (0, 0, …). T is not invertible either (it is onto but not one-to-one), however S is its right inverse.
  5. The operator of multiplication M_w is invertible if and only if w⁻¹ ∈ C[a, b], and the inverse is M_{w⁻¹}. For example M_{1+t} is invertible on L²[0,1] and M_t is not.
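The shift example in item 4 can be sketched on finite prefixes of sequences, a truncated stand-in for ℓ² (the zero padding at the tail keeps truncation effects out of the picture):

```python
# Right shift S: (x1, x2, ...) -> (0, x1, x2, ...)
S = lambda seq: [0] + seq[:-1]
# Left shift T: (x1, x2, ...) -> (x2, x3, ...)
T = lambda seq: seq[1:] + [0]

x = [1, 2, 3, 4, 0, 0]            # padded with zeros at the tail

assert T(S(x)) == x               # TS = I: T is a left inverse of S
assert S(T(x)) != x               # ST != I: it kills the first coordinate
assert S(T(x)) == [0, 2, 3, 4, 0, 0]
```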

6.4  Adjoints

Theorem    Let H and K be Hilbert spaces and T ∈ B(H, K). Then there exists an operator T* ∈ B(K, H) such that
      ⟨ Th, k ⟩_K = ⟨ h, T*k ⟩_H      for all h ∈ H, k ∈ K.
Such a T* is called the adjoint operator of T. Also T** = T and || T* || = || T ||.

Proof. For any fixed k ∈ K the expression h ↦ ⟨ Th, k ⟩_K defines a bounded linear functional on H. By the Riesz–Fréchet lemma there is a unique y ∈ H such that ⟨ Th, k ⟩_K = ⟨ h, y ⟩_H for all h ∈ H. Define T*k = y; then T* is linear. Moreover
      || T*k ||² = ⟨ T*k, T*k ⟩_H = ⟨ T T*k, k ⟩_K ≤ || T || · || T*k || · || k ||,
which implies || T*k || ≤ || T || · || k ||, consequently || T* || ≤ || T ||. The opposite inequality follows from the identity || T || = || T** ||.

  1. For operators T₁ and T₂ show that
          (T₁T₂)* = T₂*T₁*,     (T₁ + T₂)* = T₁* + T₂*,     (λT)* = λ̄ T*.
  2. If A is an operator on a Hilbert space H then (ker A)⊥ is the closure of Im A*.
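The theorem and the exercises can be spot-checked numerically, with the conjugate transpose playing the role of T* between H = C⁴ and K = C³ (the random data below is an arbitrary choice for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(3, 4)) + 1j*rng.normal(size=(3, 4))   # T in B(H, K)
T_star = T.conj().T                                        # T* in B(K, H)

h = rng.normal(size=4) + 1j*rng.normal(size=4)
k = rng.normal(size=3) + 1j*rng.normal(size=3)

inner = lambda a, b: np.vdot(b, a)   # <a, b>, linear in the first slot

assert np.isclose(inner(T @ h, k), inner(h, T_star @ k))   # <Th, k> = <h, T*k>
assert np.allclose(T_star.conj().T, T)                     # T** = T
# ||T*|| = ||T|| (spectral norms agree, since singular values coincide).
assert np.isclose(np.linalg.norm(T_star, 2), np.linalg.norm(T, 2))
```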

6.5  Hermitian, unitary and normal operators

To appreciate the next Theorem the following exercise is useful:

  1. For x ∈ H we have || x || = sup{ |⟨ x, y ⟩| : y ∈ H such that || y || = 1 }.
  2. For T ∈ B(H) we have
          || T || = sup{ |⟨ Tx, y ⟩| : x, y ∈ H such that || x || = || y || = 1 }.     (35)

The next theorem says that, for a Hermitian operator T, the supremum in (35) may be taken over the "diagonal" x = y only.

Proof. If Tx = 0 for all x ∈ H, both sides of the identity are 0. So we suppose that ∃ x ∈ H for which Tx ≠ 0.

We see that |⟨ Tx, x ⟩| ≤ || Tx || · || x || ≤ || T || · || x ||², so sup_{|| x || = 1} |⟨ Tx, x ⟩| ≤ || T ||. To get the inequality the other way around, we first write s := sup_{|| x || = 1} |⟨ Tx, x ⟩|. Then for any x ∈ H, we have |⟨ Tx, x ⟩| ≤ s || x ||². Now

    ⟨ T(x + y), x + y ⟩ = ⟨ Tx, x ⟩ + ⟨ Tx, y ⟩ + ⟨ Ty, x ⟩ + ⟨ Ty, y ⟩ = ⟨ Tx, x ⟩ + 2ℜ⟨ Tx, y ⟩ + ⟨ Ty, y ⟩

(because T being Hermitian gives ⟨ Ty, x ⟩ = ⟨ y, Tx ⟩ = the complex conjugate of ⟨ Tx, y ⟩) and, similarly,

    ⟨ T(x − y), x − y ⟩ = ⟨ Tx, x ⟩ − 2ℜ⟨ Tx, y ⟩ + ⟨ Ty, y ⟩.

Subtracting the two identities, we get

    4ℜ⟨ Tx, y ⟩ = ⟨ T(x + y), x + y ⟩ − ⟨ T(x − y), x − y ⟩ ≤ s ( || x + y ||² + || x − y ||² ) = 2s ( || x ||² + || y ||² )

by the parallelogram identity.

Now, for x ∈ H such that Tx ≠ 0, we put y = || Tx ||⁻¹ || x || Tx. Then || y || = || x || and when we substitute into the previous inequality, we get

    4 || Tx || · || x || = 4ℜ⟨ Tx, y ⟩ ≤ 4s || x ||².

So || Tx || ≤ s || x || and it follows that || T || ≤ s, as required.

  1. If D : ℓ² → ℓ² is a diagonal operator such that D e_k = λ_k e_k, then D* e_k = λ̄_k e_k, and D is unitary if and only if |λ_k| = 1 for all k.
  2. The shift operator S satisfies S*S = I but SS* ≠ I; thus S is not unitary.

The following are equivalent: (1) U is unitary; (2) U is a surjection and an isometry, i.e. || Ux || = || x || for all x ∈ H; (3) U is a surjection and preserves the inner product, i.e. ⟨ Ux, Uy ⟩ = ⟨ x, y ⟩ for all x, y ∈ H.

Proof. 1⇒2. Clearly unitarity of an operator implies its invertibility and hence surjectivity. Also

    || Ux ||² = ⟨ Ux, Ux ⟩ = ⟨ x, U*Ux ⟩ = ⟨ x, x ⟩ = || x ||².

2⇒3. Using the polarisation identity (cf. polarisation in equation (12)) with T = U*U and T = I, the isometry property || Ux || = || x || implies ⟨ U*Ux, y ⟩ = ⟨ x, y ⟩, i.e. ⟨ Ux, Uy ⟩ = ⟨ x, y ⟩.

3⇒1. Indeed ⟨ U*Ux, y ⟩ = ⟨ x, y ⟩ implies ⟨ (U*U − I)x, y ⟩ = 0 for all x, y ∈ H, so U*U = I. Since U is surjective, for any y ∈ H there is x ∈ H such that y = Ux. Then, using the already established fact U*U = I, we get U*y = U*Ux = x, so U* is a two-sided inverse of U and U is unitary.
16.10.2018. Chapter 1: Review of analysis. Measure theory: measurable sets, measurable functions, Lebesgue integration, Monotone Convergence, Dominated Convergence, Fatou's lemma, Brezis-Lieb refinement of Fatou's lemma, Approximation of integrable functions by continuous functions with compact support L^p spaces: definition of L^p norm, completeness of the norm (L^p spaces are Banach spaces), Hölder's inequality, dual space of L^p.

19.10.2018. L^p spaces (continued): weak convergence, Banach-Alaoglu theorem (weak compactness of bounded sequences), Banach-Steinhaus theorem (uniform boundedness principle, without proof). Convolution, Young inequality, approximation by convolution.

23.10.2018. Hardy-Littlewood-Sobolev inequality. Fourier transform: Plancherel theorem, inverse transform, Fourier transform of convolution, Fourier transform of derivatives. Sobolev space H^m(R^d). Hilbert space: orthogonality, Parseval's identity, Riesz representation theorem, weak convergence.

26.10.2018. Chapter 2: Principles of quantum mechanics. Postulates of quantum mechanics: states, observables, measurement, dynamics. Why do we need quantum mechanics? Strange observations and non-commutativity of observables, Einstein-Podolsky-Rosen (EPR) paradox, Bell's inequality. Formal similarities to classical mechanics. Here are the lecture notes.

30.10.2018. Mathematical formulation of quantum mechanics. Heisenberg's and Hardy's uncertainty principles. Proof of the stability of the hydrogen atom using Hardy's inequality. Chapter 3: Sobolev spaces. Distribution theory: test functions and distributions, locally integrable functions are distributions, fundamental lemma of the calculus of variations, weak (distributional) derivatives. Two equivalent definitions of the Sobolev space H^m(R^d). Smooth functions with compact support are dense in H^m(R^d).

2.11.2018. Sobolev inequalities for H^1(R^d): Scaling argument, Fourier transform of 1/|x|^s, standard Sobolev inequality for d>=3 (proof using Hardy-Littlewood-Sobolev inequality), application to the stability of hydrogen atom, Sobolev inequality in low dimensions.

6.11.2018. Sobolev embedding theorem: weak convergence in H^1(R^d), heat kernel, H^1 weak-convergence implies L^p strong-convergence in bounded sets. Sobolev inequalities/embeddings for H^s. Green function of Laplacian, mean-value theorem for harmonic functions, Newton's theorem.

9.11.2018. Derivative of |f| and diamagnetic inequality. Application of Sobolev embedding theorem: existence of ground state for hydrogen atom. Chapter 4: Spectral theorem. Bounded operators, compact operators, adjoint of an operator. Spectral theorem for compact operators.

13.11.2018. Proof of spectral theorem for compact operators. Definition of resolvent and spectrum. Basic properties of spectrum of bounded self-adjoint operators. Continuous functional calculus for bounded self-adjoint operators.

16.11.2018. States and observables in C*-algebra abstract setting. Spectral properties of hermitian, unitary, projection, and positive operators. Gel'fand isomorphism.

20.11.2018. Riesz-Markov representation theorem. Spectral measure. Zorn's lemma. Spectral theorem for bounded self-adjoint operator (multiplication operator version). Bounded functional calculus. Spectral theorem for bounded normal operators.

23.11.2018. Gelfand isomorphism (continued), representations in Hilbert space, the GNS construction. Lecture notes.

27.11.2018. Unbounded operators: densely defined domain, extension, adjoint operator, symmetric operator, self-adjoint operator, resolvent and spectrum. Spectral theorem for unbounded self-adjoint operators (Multiplication operator version). Functional calculus.

30.11.2018. Chapter 5: Self-adjoint extensions. Closure method. Essentially self-adjoint operators. Kato-Rellich method. Applications to Schrödinger operators.

4.12.2018. Operators bounded from below. Quadratic form. Friedrichs extension. Chapter 6: Quantum dynamics. Stone theorem (strong solution version).

7.12.2018. Symmetries and unitary evolution. Density matrix and entropy. Lecture notes.

11.12.2018. Stone theorem (weak solution and strongly continuous one-parameter unitary group). Three fundamental questions in quantum mechanics: self-adjointness, spectral properties and scattering properties.

14.12.2018. Chapter 7: Bound states. Discrete spectrum and essential spectrum. Weyl's criterion for spectrum. Perturbation by relatively compact operators. Application to Schroedinger operator.

21.12.2018: lecture moved to 11.1.2019

8.1.2019. Chapter 8: Scattering theory. Overview I: physical motivation, potential scattering, RAGE theorem, scattering operators, asymptotic completeness, stationary scattering theory, Lippmann–Schwinger equation. Lecture notes.

11.1.2019. Bound states (continued): Min-max principle, existence of (in)finitely many bound states of Schrödinger operators, exponential decay of bound states

15.1.2019. CLR inequality on the number of bound states. Scattering theory (continued): Space localization of bound states, kernel of free Schrödinger dynamics, RAGE theorem for free Schrödinger dynamics

18.1.2019. Proof of RAGE theorem in general case

22.1.2019. Scattering theory overview (2): Asymptotic completeness - guide through the proof by Enss, Coulomb scattering, S-matrix, cross-section. Lecture notes (with consistent signs for the wave-operators)

25.1.2019. Detailed proof of existence of wave operators and asymptotic completeness for short range interactions using Cook's method.

29.1.2019. Kernel equation of wave operators. Chapter 9: Many-body quantum theory. Tensor product and many-body Hilbert space, Kato theorem on self-adjointness, HVZ theorem on essential spectrum. An overview on Kato's work by B. Simon (Sections 7 and 13 are particularly relevant to what we discussed in the course).

1.2.2019. Zhislin theorem for existence of bound states of atoms. Particle statistics: bosons and fermions. Pauli exclusion principle. The ground state energy of non-interacting systems. Density functional theories.

5.2.2019. Chapter 10: Quantum entropy. Overview: states vs. density matrices, desired properties of entropy, a probabilistic argument for von Neumann entropy, sub-additivity of entropy. Lecture notes