Chapter 5.2 Subspace Iteration, Simultaneous Iteration and the QR Algorithm
Power Method: let A be simple (diagonalizable), with eigenvalues |λ_1| > |λ_2| ≥ ... ≥ |λ_n|;
then the normalized iterates q, Aq/||Aq||, A^2 q/||A^2 q||, ... converge in direction to the dominant eigenvector v_1, with convergence ratio |λ_2/λ_1| (provided q has a nonzero component along v_1).
If S and T are two subspaces of F^n,
define 1. d(x, T) = min_{t in T} ||x - t||   2. d(S, T) = max_{x in S, ||x||=1} d(x, T).
[Thm 5.2.2] Let A be simple, with linearly independent eigenvectors v_1, ..., v_n
and eigenvalues λ_1, ..., λ_n satisfying |λ_1| ≥ ... ≥ |λ_k| > |λ_{k+1}| ≥ ... ≥ |λ_n|
for some k. Let T_k = span{v_1, ..., v_k} and U_k = span{v_{k+1}, ..., v_n}.
Let S be any k-dimensional subspace of F^n such that S ∩ U_k = {0}.
Then there exists a constant C s.t. d(A^m S, T_k) ≤ C |λ_{k+1}/λ_k|^m for all m.
Thus A^m S → T_k linearly with conv. ratio |λ_{k+1}/λ_k|. (Recall: x_m → x^* linearly if ||x_{m+1} - x^*|| ≤ c ||x_m - x^*|| with c < 1.)
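A minimal numerical sketch of the k = 1 case (the power method); the 3×3 test matrix, starting vector, and iteration count below are illustrative choices only:

```python
# Power method sketch: converges to the dominant eigenvector
# with ratio |lambda_2 / lambda_1|.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])
q = np.array([1.0, 1.0, 1.0])
q /= np.linalg.norm(q)

for m in range(50):
    w = A @ q                      # apply A
    q = w / np.linalg.norm(w)      # normalize to avoid overflow/underflow
    lam = q @ A @ q                # Rayleigh quotient estimates lambda_1

evals = np.linalg.eigvalsh(A)      # exact eigenvalues (A is symmetric here)
print("power-method estimate:", lam)
print("largest eigenvalue   :", evals[-1])
print("expected conv. ratio |lambda_2/lambda_1|:", abs(evals[-2] / evals[-1]))
```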
Subspace Iteration
If S is a k-dimensional subspace of F^n, subspace iteration forms S, AS, A^2 S, ...;
by Thm 5.2.2, A^m S → T_k linearly, with convergence ratio |λ_{k+1}/λ_k|.
(Note: in the expansion below, c_1, ..., c_k are not all equal to 0; this is the condition S ∩ U_k = {0}.)
If the v_i's are eigenvectors of A and q = c_1 v_1 + ... + c_n v_n,
then A^m q = c_1 λ_1^m v_1 + ... + c_n λ_n^m v_n, because Aq = c_1 λ_1 v_1 + ... + c_n λ_n v_n,
and so
A^m q = λ_1^m ( c_1 v_1 + c_2 (λ_2/λ_1)^m v_2 + ... + c_n (λ_n/λ_1)^m v_n ).
Suppose that |λ_1| > |λ_2| and c_1 ≠ 0; then, apart from the scale factor λ_1^m, A^m q → c_1 v_1, and applying the same argument to a basis of S gives A^m S → T_k. A quick check of this expansion is sketched below.
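A quick check of the eigenvector expansion above: for a diagonalizable A built from a chosen eigenvector matrix V and eigenvalues λ (both made up for illustration), A^m q computed directly agrees with the sum of c_i λ_i^m v_i.

```python
# Verify A^m q = c1*lambda_1^m v1 + ... + cn*lambda_n^m vn for a simple matrix.
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((4, 4))            # columns v1,...,v4 (assumed independent)
lam = np.array([3.0, 1.5, 0.8, 0.2])       # |lambda_1| > |lambda_2| >= ...
A = V @ np.diag(lam) @ np.linalg.inv(V)    # a simple (diagonalizable) matrix

c = np.array([1.0, 2.0, -1.0, 0.5])        # q = c1 v1 + ... + c4 v4 with c1 != 0
q = V @ c

m = 30
Amq = np.linalg.matrix_power(A, m) @ q
direct = V @ (lam**m * c)                  # sum of c_i lambda_i^m v_i
print(np.allclose(Amq, direct))            # True: the two expressions agree
print(np.linalg.norm(Amq / lam[0]**m - c[0] * V[:, 0]))  # -> 0 as m grows
```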
QR Algorithm
A_0 = A;
A_m = Q_{m+1} R_{m+1} (QR factorization), A_{m+1} = R_{m+1} Q_{m+1}.
Then A_{m+1} = Q_{m+1}^* A_m Q_{m+1}, a unitary similarity; under the assumptions below the A_m converge to upper triangular form. Compare Schur: Q^* A Q = T = D + N (in the complex field);
Q^T A Q = quasi-upper-triangular, with 1×1 and 2×2 diagonal blocks (in the real field). A bare-bones iteration is sketched below.
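A sketch of the unshifted QR iteration just defined (practical implementations first reduce A to Hessenberg form and use shifts; the symmetric test matrix here is an arbitrary choice):

```python
# Unshifted QR iteration: A_m = Q R, A_{m+1} = R Q = Q^* A_m Q.
import numpy as np

A = np.array([[5.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 1.0]])
Am = A.copy()
for m in range(200):
    Q, R = np.linalg.qr(Am)   # A_m = Q R
    Am = R @ Q                # A_{m+1} = R Q (a unitary similarity of A_m)

print(np.round(Am, 6))                 # (nearly) upper triangular; diagonal ~ eigenvalues
print(np.sort(np.diag(Am)))
print(np.sort(np.linalg.eigvalsh(A)))  # compare with the true eigenvalues
```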
Recall that
if X = [X_1, X_2] is nonsingular, with X_1 ∈ F^{n×k},
and the columns of X_1 span an invariant subspace of A,
then B = X^{-1} A X = [ B_11  B_12 ; 0  B_22 ] is block upper triangular. For the QR iteration, the relevant invariant subspaces are the T_k, and Thm 5.2.2 below controls how fast they are found.
QR Algorithm
[Thm 5.2.2] A is simple, with eigenvalues satisfying |λ_1| ≥ ... ≥ |λ_k| > |λ_{k+1}| ≥ ... ≥ |λ_n|
for some k.
Assume S is a k-dimensional subspace with S ∩ U_k = {0};
then there exists a constant C s.t. d(A^m S, T_k) ≤ C |λ_{k+1}/λ_k|^m,
i.e., A^m S → T_k linearly with conv. rate |λ_{k+1}/λ_k|. Note: 1. If |λ_i| > |λ_{i+1}| for all i, then the conclusion holds for every i,
with ratio |λ_{i+1}/λ_i|, provided that the corresponding subspace condition holds. 2. If q_1, q_2, ... are the orthonormalized v_1, v_2, ..., then < v_1 > = < q_1 >, < v_1 , v_2 > = < q_1 , q_2 >, etc. 3. If the assumption holds for all k, the nested subspaces converge for all k simultaneously.
(The orthogonalization can be carried out by 1. the Gram-Schmidt process,
or 2. a QR factorization, e.g. via Householder reflectors; a Gram-Schmidt sketch follows.)
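A classical Gram-Schmidt sketch for the orthogonalization step (modified Gram-Schmidt or Householder QR is preferred numerically; the random input matrix is illustrative):

```python
# Classical Gram-Schmidt: orthonormalize the columns of X.
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the columns of X (assumed linearly independent)."""
    n, k = X.shape
    Q = np.zeros((n, k))
    for j in range(k):
        v = X[:, j].copy()
        for i in range(j):
            v -= (Q[:, i] @ X[:, j]) * Q[:, i]   # remove components along q_1..q_{j-1}
        Q[:, j] = v / np.linalg.norm(v)
    return Q

X = np.random.default_rng(1).standard_normal((5, 3))
Q = gram_schmidt(X)
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: columns are orthonormal
# span{q_1,...,q_i} = span{x_1,...,x_i} for each i (note 2 above)
```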
Simultaneous Iteration
Let S be a k-dimensional subspace of F^n with basis q_1, ..., q_k.
Assume that |λ_i| > |λ_{i+1}| for i = 1, ..., k.
Let S_i be the i-dimensional subspace spanned by q_1, ..., q_i (i = 1, ..., k).
If S_i ∩ U_i = {0}, then d(A^m S_i, T_i) ≤ C_i |λ_{i+1}/λ_i|^m → 0,
provided |λ_i| > |λ_{i+1}| (this means the iteration succeeds for that i).
Note: i can vary; just apply the above observation to each i separately.
Note: the iteration may succeed for some i and not for others; (*) gives conditions under which it succeeds for all i.
(*) If |λ_i| > |λ_{i+1}| and S_i ∩ U_i = {0} for all i = 1, ..., k,
then d(A^m S_i, T_i) → 0
is true for all i = 1, ..., k.
The m-th step of the QR algorithm gives A^m = Q^(m) R^(m), where the columns of Q^(m) = Q_1 Q_2 ... Q_m are ON and R^(m) = R_m ... R_1 is upper triangular.
Now, A_m = (Q^(m))^* A Q^(m).
A_m converges to upper triangular form, provided the hypotheses of (*) hold,
because the span of the first k columns of Q^(m) is exactly A^m S_k (simultaneous iteration started from e_1, ..., e_k), and if m is large
this span is close to the invariant subspace T_k.
If
m is large, the first k columns of Q^(m) are nearly a basis for T_k for each k, so A_m is nearly block upper triangular for every k at once. A sketch of simultaneous iteration follows.
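A sketch of simultaneous iteration on the full space, starting from the standard basis; it shows (Q^(m))^* A Q^(m) approaching upper triangular form. The test matrix and iteration count are illustrative:

```python
# Simultaneous (orthogonal) iteration: apply A to an orthonormal basis and
# re-orthonormalize; then Qm^* A Qm approaches upper triangular form.
import numpy as np

A = np.array([[4.0, 1.0, 0.2],
              [0.5, 2.0, 0.3],
              [0.1, 0.4, 1.0]])
n = A.shape[0]
Qm = np.eye(n)                       # start from the standard basis e_1,...,e_n
for m in range(300):
    Z = A @ Qm                       # apply A to each basis vector
    Qm, _ = np.linalg.qr(Z)          # re-orthonormalize (QR factorization)

Am = Qm.T @ A @ Qm                   # the matrix in the new orthonormal basis
print(np.round(Am, 6))               # subdiagonal entries are (nearly) zero
```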
We now relate 1. Simultaneous Iteration and 2. the QR Algorithm step by step.
Start simultaneous iteration with e_1, ..., e_n, which are the standard basis vectors.
If we apply A, then A e_i = a_i (Note: a_i is the i-th column of A).
Note: if a_1, ..., a_n are the column vectors of A,
and we orthonormalize a_1, ..., a_n,
then the q_i's are the orthonormalized vectors of the a_i's. Letting Q_1 = [q_1, ..., q_n], we have A = Q_1 R_1.
The unitary transformation of A is
A_1 = Q_1^* A Q_1 = R_1 Q_1. This constitutes the 1st step of the QR iteration.
Note: for the 2nd step, the relation of QR and Simultaneous Iteration is as follows.
The next simultaneous-iteration step applies A to q_1, ..., q_n and orthonormalizes: A Q_1 = Q'_2 R_2 (where Q'_2 is the matrix after orthogonalization).
A vector expressed as x in the original coordinate system of A is expressed as Q_1^* x in the coordinate system of A_1:
(the original coordinate system) x, (the new coordinate system) Q_1^* x;
original A, new A_1 = Q_1^* A Q_1.
In the new coordinates, A Q_1 = Q'_2 R_2 becomes Q_1^*(A Q_1) = (Q_1^* Q'_2) R_2, i.e., A_1 = Q_2 R_2 with Q_2 = Q_1^* Q'_2.
This is equivalent to applying A_1 to I (A_1 I = A_1) and equivalent to factoring A_1 = Q_2 R_2, i.e., to the 2nd QR step. A numerical check of this equivalence is sketched below.
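A numerical check of the relation described above: accumulating the QR-step factors gives A^m = Q^(m) R^(m) and A_m = (Q^(m))^* A Q^(m). The matrix and number of steps are illustrative:

```python
# Check the QR / simultaneous-iteration relation numerically.
import numpy as np

A = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])
m_steps = 5

Am = A.copy()
Qhat = np.eye(3)                  # Q^(m) = Q_1 Q_2 ... Q_m
Rhat = np.eye(3)                  # R^(m) = R_m ... R_1
for _ in range(m_steps):
    Q, R = np.linalg.qr(Am)       # one QR step: A_m = Q R, A_{m+1} = R Q
    Am = R @ Q
    Qhat = Qhat @ Q
    Rhat = R @ Rhat

print(np.allclose(Qhat @ Rhat, np.linalg.matrix_power(A, m_steps)))  # A^m = Q^(m) R^(m)
print(np.allclose(Am, Qhat.T @ A @ Qhat))                            # A_m = (Q^(m))^* A Q^(m)
```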
Field: e.g. Q, R, C.
Addition: 1. associative 2. commutative 3. an identity element exists 4. inverse elements exist.
Multiplication: 1. associative 2. commutative 3. an identity element exists 4. inverse elements exist (for nonzero elements).
Note: the field Q is minimal (every subfield of C contains Q).
Example 1: Z is not a subfield of C (nonzero integers need not have integer multiplicative inverses).
Example 2: Q is a subfield of C.
Vector Space [Def] A vector space consists of the following:
1. a field F of scalars; 2. a set V of objects, called vectors; 3. rules called vector addition and scalar multiplication. Rules that "Addition" must satisfy:
a. associative b. commutative c. identity element (0) d. inverse element (-x)
Example: the space of all m×n matrices (Addition: [A+B]_ij = A_ij + B_ij)
The relation of F and V (scalar multiplication) must satisfy:
e. 1·x = x  f. (c_1 c_2)x = c_1(c_2 x)  g. c(x + y) = cx + cy  h. (c_1 + c_2)x = c_1 x + c_2 x
We then say V is a vector space over F.
Subspace:
V: vector space.
If W is a subset of V, then W is a subspace of V
if W is itself a vector space
(it must satisfy the rules defined by V).
[Thm] A nonempty subset W of V is a subspace of V
if and only if cx + y ∈ W for all x, y ∈ W and all scalars c ∈ F.
Example: { A ∈ C^{n×n} : A is Hermitian } is not a subspace of C^{n×n} over C,
but it is a subspace over R (see the quick check below).
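A quick numpy check of this example: Hermitian matrices are closed under addition and under multiplication by real scalars, but not under multiplication by i. The two matrices below are arbitrary Hermitian examples:

```python
# Hermitian matrices: a subspace over R, but not over C.
import numpy as np

def is_hermitian(M):
    return np.allclose(M, M.conj().T)

A = np.array([[1.0, 2 + 1j],
              [2 - 1j, 3.0]])
B = np.array([[0.0, 1j],
              [-1j, 5.0]])

print(is_hermitian(A), is_hermitian(B))   # True True
print(is_hermitian(A + B))                # True  (closed under addition)
print(is_hermitian(2.5 * A))              # True  (real scalar)
print(is_hermitian(1j * A))               # False (complex scalar: not closed over C)
```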
Span [Def] Let S be a set of vectors in a vector space V.
The subspace spanned by S is defined to be the
intersection of all subspaces of V which contain S:
span(S) = ∩ W, where each W is a subspace of V that contains S
(so span(S) is itself a subspace, the smallest one containing S).
Bases and Dimension :
[Def] A set S of vectors in V (a vector space) is linearly dependent if there exist distinct vectors x_1, ..., x_m in S,
and scalars c_1, ..., c_m, not all of which are 0,
such that c_1 x_1 + ... + c_m x_m = 0.
A set which is not linearly dependent is called linearly independent.
Example: a set which contains the vector 0 must be linearly dependent.
[Def] Let V be a vector space. A basis for V is a
linearly indep. set of vectors in V which spans V.
Example: let A ∈ F^{m×n} and B ∈ F^{n×p},
with B = [ B_1 ... B_p ], where the B_i are column vectors.
AB = C, C = [ A B_1 ... A B_p ]: each column of C is A times the corresponding column of B.
Let γ_1, ..., γ_m denote the row vectors of C
and β_1, ..., β_n the row vectors of B.
AB = C also means γ_i = Σ_j A_ij β_j for all i: each row of C is a linear combination of the rows of B (a quick check is sketched below).
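A small check of both statements, with made-up matrices:

```python
# Rows of C = AB are combinations of rows of B; columns of C are A times columns of B.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [0.0, 1.0]])        # 3 x 2
B = np.array([[5.0, 6.0, 7.0],
              [8.0, 9.0, 1.0]])   # 2 x 3
C = A @ B                         # 3 x 3

# row i of C = sum_j A[i, j] * (row j of B)
print(np.allclose(C[0], A[0, 0] * B[0] + A[0, 1] * B[1]))   # True
# column j of C = A @ (column j of B)
print(np.allclose(C[:, 2], A @ B[:, 2]))                     # True
```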
Ordered Basis
Example: F^3 with the standard ordered basis {e_1, e_2, e_3}:
(x_1, x_2, x_3) = x_1 e_1 + x_2 e_2 + x_3 e_3.
Example: F^2 with the standard ordered basis B = { (1,0), (0,1) }:
(x_1, x_2)
= x_1 (1,0) + x_2 (0,1).
(*) Let B = {α_1, ..., α_n} and B' = {α'_1, ..., α'_n} be two ordered bases of V;
write [x]_B = (x_1, ..., x_n) and [x]_{B'} = (x'_1, ..., x'_n)
for the coordinates of a vector x relative to B and B'.
Since B is a basis, for all j the scalars P_ij exist (and are unique) with
α'_j = Σ_i P_ij α_i, where P = [ P_ij ].
Then x = Σ_j x'_j α'_j = Σ_j x'_j Σ_i P_ij α_i = Σ_i ( Σ_j P_ij x'_j ) α_i,
so [x]_B = P [x]_{B'}. Also P y = 0 if and only if y = 0 (the α'_j are independent), so P
is invertible, and
[x]_{B'} = P^{-1} [x]_B, where P = [ [α'_1]_B ... [α'_n]_B ]
(the j-th column of P is the B-coordinate vector of α'_j).
Thus, given the two ordered bases, P exists and is unique,
and we can convert coordinates in either direction; this is relation (*). (A small numerical example follows.)
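A concrete instance of (*), with B the standard basis of F^2 and B' an illustrative second basis:

```python
# Coordinates relative to two ordered bases: [x]_B = P [x]_B'.
import numpy as np

# B  = standard basis {e1, e2};  B' = {(1, 1), (1, -1)}  (columns of Bprime)
Bprime = np.array([[1.0, 1.0],
                   [1.0, -1.0]])
P = Bprime                     # since B is the standard basis, [alpha'_j]_B = alpha'_j

x_Bprime = np.array([2.0, 3.0])          # coordinates of x relative to B'
x_B = P @ x_Bprime                       # coordinates of x relative to B
print(x_B)                               # [ 5. -1.]  (= 2*(1,1) + 3*(1,-1))

# and back:  [x]_B' = P^{-1} [x]_B
print(np.linalg.solve(P, x_B))           # [2. 3.]
```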
Linear Transformation
[Def] T : V → W is a function
s.t. T(cx + y) = c T(x) + T(y) for all x, y ∈ V and all c ∈ F.
Then T is called a linear transformation from V into W.
Example: if T : R → R is linear, then 1. the graph of T must pass through the zero point (the origin); 2. T must be a linear function of the form T(x) = ax.
T(0) = 0 because T(0+0) = T(0) + T(0),
so T(0) = T(0) + T(0), and hence T(0) = 0.
Fact: rank(T) + nullity(T) = dim V.
[Thm] If T and U are linear transformations from V into W and c is a scalar, then cT + U is also a linear transformation.
L(V, W): the set of all linear transformations from V into W;
then L(V, W) is a vector space.
Note: dim L(V, W) = (dim V)(dim W).
[Def] If T : V → V, then T is called a linear operator on V.
T is invertible if there is a function U : W → V s.t.
TU = I = UT, U = T^{-1} (or T = U^{-1}).
T is invertible iff 1. T is 1-1 (null space = {0}) and 2. T is onto.
A is nonsingular (invertible) if A^{-1} exists; equivalently, Ax = 0 only for x = 0.
T is nonsingular iff Tx = 0 implies x = 0 (null space = {0}).
Is T 1-1 iff T is onto?
If dim V = dim W < ∞, yes,
because
if T is nonsingular (1-1), then nullity(T) = 0, so rank(T) = dim V = dim W, and T is
onto (by rank(T) + nullity(T) = dim V); the converse argument is the same. (A small numerical check follows.)
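A small numerical illustration of rank(T) + nullity(T) = dim V, counting the rank and a null-space basis independently via the SVD; the singular test matrix is an illustrative choice:

```python
# rank + nullity = dim V; a singular square matrix is neither 1-1 nor onto.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])          # row 2 = 2 * row 1, so A is singular

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))            # number of nonzero singular values
null_basis = Vt[s < 1e-10]               # rows spanning the null space
nullity = null_basis.shape[0]

print(rank, nullity, rank + nullity, A.shape[1])   # 2 1 3 3
print(np.allclose(A @ null_basis.T, 0))            # True: these really are null vectors
print(rank == A.shape[0], nullity == 0)            # False False: not onto, not 1-1
```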
The matrix of a linear transformation: let T : V → W be linear, where
B = {α_1, ..., α_n} is an ordered basis for V
and B' = {β_1, ..., β_m} is an ordered basis for W.
Then T α_j = Σ_i A_ij β_i, for all j.
A = ( A_ij ) is called the matrix of T relative to the pair
of ordered bases B and B'.
If x = Σ_j x_j α_j, then
Tx = T(Σ_j x_j α_j) = Σ_j x_j T α_j = Σ_j x_j Σ_i A_ij β_i = Σ_i ( Σ_j A_ij x_j ) β_i,
i.e., [Tx]_{B'} = A [x]_B.
(The matrix of a composition of linear transformations is the product of their matrices.) A sketch of building A column by column is given below.
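A sketch of building the matrix A column by column from [T α_j]_{B'}, then checking [Tx]_{B'} = A [x]_B; the map T and both bases here are made-up illustrations:

```python
# Matrix of a linear transformation relative to a pair of ordered bases.
import numpy as np

def T(x):                               # T : R^3 -> R^2, an illustrative linear map
    x1, x2, x3 = x
    return np.array([x1 + x3, 2.0 * x2])

Bv = [np.array([1.0, 0.0, 0.0]),        # ordered basis of V = R^3 (standard)
      np.array([0.0, 1.0, 0.0]),
      np.array([0.0, 0.0, 1.0])]
Bw = np.array([[1.0, 1.0],              # ordered basis of W = R^2 (as columns)
               [0.0, 1.0]])

# column j of A = [T(alpha_j)]_{B'}, i.e. coordinates of T(alpha_j) in the basis Bw
A = np.column_stack([np.linalg.solve(Bw, T(a)) for a in Bv])
print(A)

# check: [T x]_{B'} = A [x]_B for a test vector
x = np.array([1.0, 2.0, 3.0])           # coordinates relative to Bv (standard basis)
print(np.allclose(np.linalg.solve(Bw, T(x)), A @ x))   # True
```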
Example: V = F^2 over the field F,
T(x_1, x_2) = (x_1, 0), with the standard ordered basis B = {e_1, e_2}.
T e_1 = T(1, 0) = (1, 0) = 1·e_1 + 0·e_2,
T e_2 = T(0, 1) = (0, 0) = 0·e_1 + 0·e_2,
so [T]_B =
[ 1 0 ; 0 0 ]
(because the j-th column of [T]_B is the B-coordinate vector of T e_j).
Change of basis for linear operators:
let T be a linear operator on V, and let B = {α_1, ..., α_n} and B' = {α'_1, ..., α'_n} be ordered bases for V.
Because [x]_B = P [x]_{B'}, where the columns of P are the B-coordinate vectors of the α'_j (relation (*) above), and because [Tx]_B = [T]_B [x]_B, we have, changing coordinates by P,
[Tx]_{B'} = P^{-1} [Tx]_B = P^{-1} [T]_B [x]_B = P^{-1} [T]_B P [x]_{B'}; hence [T]_{B'} = P^{-1} [T]_B P. (A numerical check follows.)
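A numerical check of [T]_{B'} = P^{-1} [T]_B P, reusing the operator T(x_1, x_2) = (x_1, 0) from the example above; the basis B' is an illustrative choice:

```python
# Change of basis for a linear operator: [T]_B' = P^{-1} [T]_B P.
import numpy as np

T_B = np.array([[1.0, 0.0],     # [T]_B for T(x1, x2) = (x1, 0) in the standard basis
                [0.0, 0.0]])
P = np.array([[1.0, 1.0],       # columns are the B-coordinates of the B' basis vectors
              [1.0, -1.0]])

T_Bprime = np.linalg.solve(P, T_B @ P)    # P^{-1} [T]_B P
print(T_Bprime)

# check on a vector: computing T in B'-coordinates agrees with computing it in B
x_Bp = np.array([2.0, 3.0])                # [x]_{B'}
lhs = T_Bprime @ x_Bp                      # [T x]_{B'} via the B' matrix
rhs = np.linalg.solve(P, T_B @ (P @ x_Bp)) # convert to B, apply T, convert back
print(np.allclose(lhs, rhs))               # True
```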