
Chapter 5.2 Subspace Iteration , Simultaneous Iteration and the QR Algorithm

QR Algorithm $\left \{
\matrix{ A_0 = Q_1 R_1 \cr
R_1 Q_1 = A_1 \cr
}
\right . ~~~~
\left \{
\matrix{ A_1 = Q_2 R_2 \cr
R_2 Q_2 = A_2 \cr
}
\right . ~~~~ \cdots ~~~~
\Rightarrow \underbrace { {Q_k}^* \cdots {Q_2}^* {Q_1}^* A_0 Q_1 Q_2 \cdots Q_k}_{A_k}$

$A_k \underrightarrow { k \rightarrow \infty} T = D + N$ ( $T$ upper triangular , $D$ diagonal , $N$ strictly upper triangular )

Power Method : let $ A \in {I\!\!R}^{ n \times n }$ be simple ( i.e. $A$ has $n$ linearly independent eigenvectors )

$ \left \{
\matrix{ v_1 , \cdots , v_n ~~are~a~basis~of~{I\!\!R}^n \cr
\vert {\lambda}_1 \vert > \vert {\lambda}_2 \vert \geq \cdots \geq \vert {\lambda}_n \vert \cr
}
\right . $

$ q , Aq , A^2 q , \cdots , A^n q , \cdots $

then $ A^m q \underrightarrow { m \rightarrow \infty} v_1 $ in direction ( normalize each iterate , since otherwise $\Vert A^m q \Vert$ grows like $\vert {\lambda}_1 \vert^m$ )
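For concreteness , a minimal numpy sketch of this iteration ( the matrix , starting vector , and step count are made-up illustrations , not from the notes ; each iterate is normalized so it converges in direction to $v_1$ ) :

    import numpy as np

    def power_method(A, q, m=100):
        # iterate q, Aq, A^2 q, ..., normalizing each step so the
        # iterate converges in direction to the dominant eigenvector v_1
        for _ in range(m):
            q = A @ q
            q = q / np.linalg.norm(q)
        lam = q @ A @ q                 # Rayleigh quotient, approximates lambda_1
        return lam, q

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])          # eigenvalues 5 and 2
    lam, v = power_method(A, np.array([1.0, 0.0]))
    print(lam, v)                       # ~5 and the direction of v_1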

If ${\varphi}_1 , {\varphi}_2$ are two subspaces of $F^n$ ,

define 1. $dist ( s_1 , {\varphi}_2 ) = {\min}_{s_2 \in {\varphi}_2} {\Vert s_1 - s_2 \Vert}_2$ 2. $d ( {\varphi}_1 , {\varphi}_2 ) = {\max}_{ s_1 \in {\varphi}_1 , {\Vert s_1 \Vert}_2 = 1 } dist ( s_1 , {\varphi}_2 ) $
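A small numpy sketch of these two definitions , assuming $F^n = {I\!\!R}^n$ and that each subspace is given as the column span of a matrix ( the function names are illustrative ) : $dist$ uses the orthogonal projector $P_2$ onto ${\varphi}_2$ , and $d ( {\varphi}_1 , {\varphi}_2 ) = {\Vert ( I - P_2 ) P_1 \Vert}_2$ .

    import numpy as np

    def dist_point_to_subspace(s1, V2):
        # dist(s1, phi_2) = ||(I - P_2) s1||_2, with P_2 the orthogonal
        # projector onto phi_2 = column span of V2
        Q2, _ = np.linalg.qr(V2)
        return np.linalg.norm(s1 - Q2 @ (Q2.T @ s1))

    def dist_subspaces(V1, V2):
        # d(phi_1, phi_2) = max over unit s1 in phi_1 of dist(s1, phi_2)
        #                 = ||(I - P_2) P_1||_2  (spectral norm)
        Q1, _ = np.linalg.qr(V1)
        Q2, _ = np.linalg.qr(V2)
        P1, P2 = Q1 @ Q1.T, Q2 @ Q2.T
        return np.linalg.norm((np.eye(len(P1)) - P2) @ P1, 2)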

[Thm 5.2.2 ] $A \in F^{ n \times n }$ simple with linearly independent eigenvectors $v_1 , v_2 , \cdots , v_n$

with eigenvalues $\vert {\lambda}_1 \vert > \vert {\lambda}_2 \vert \geq \cdots \geq \vert {\lambda}_n \vert$ , and suppose $\vert {\lambda}_k \vert > \vert {\lambda}_{k+1} \vert $

for some k . Let $\jmath = < v_1 , \cdots , v_k > $ and $u = < v_{k+1} , \cdots , v_n > $

Let $\varphi $ be any k-dimensional subspace of $F^n$ such that $\varphi \cap u = \{ 0 \} $ .

Then there exists a constant $c$ s.t. $ dist ( A^m \varphi , \jmath ) \leq c {\vert {{{\lambda}_{k+1}} \over {{\lambda}_k} }\vert}^m ~,~ m = 0 , 1 , 2 , \cdots $

Thus , $ A^m \varphi \underrightarrow { m \rightarrow \infty } \jmath $ linearly with convergence ratio $\vert {{{\lambda}_{k+1}} \over {{\lambda}_k}} \vert $ . ( A sequence $\{ x_n \}$ , $x_n \underrightarrow { n \rightarrow \infty } x^* $ , converges linearly if $\Vert x_{n+1} - x^* \Vert \leq c \Vert x_n - x^* \Vert$ for some $c < 1$ . )

Subspace Iteration

$\jmath = < v_1 , v_2 , \cdots , v_k > ~~~~u = < v_{k+1} , \cdots , v_n > ~~~~\varphi \cap u = \{ 0 \}$

If $\varphi $ is a k-dimensional subspace

$\Rightarrow A^m \varphi \rightarrow \jmath $ linearly

$q \in F^n ~,~q \in \varphi$ ( Note : $ q \in \varphi \Rightarrow q \notin u \Rightarrow c_1 , \cdots , c_k$ are not all equal to 0 )

$q = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n $

If the $v_i$'s are eigenvectors with eigenvalues ${\lambda}_i$

$\Rightarrow A^m q = c_1 {\lambda}^m_1 v_1 + c_2 {\lambda}^m_2 v_2 + \cdots + c_n {\lambda}^m_n v_n $ because $Aq = A ( c_1 v_1 + c_2 v_2 + \cdots + c_n v_n )$

= $c_1 A v_1 + c_2 A v_2 + \cdots + c_n A v_n$

= $c_1 {\lambda}_1 v_1 + \cdots + c_n {\lambda}_n v_n$

Suppose that $\vert {\lambda}_k \vert > \vert {\lambda}_{k+1} \vert$

$\Rightarrow {{ A^m q } \over { {\lambda}^m_k }}
= c_1 {\left( {{{\lambda}_1} \over {{\lambda}_k}} \right)}^m v_1 + \cdots + c_k v_k + c_{k+1} {\left( {{{\lambda}_{k+1}} \over {{\lambda}_k}} \right)}^m v_{k+1} + \cdots + c_n {\left( {{{\lambda}_n} \over {{\lambda}_k}} \right)}^m v_n $

QR Algorithm

$A = A_0$

$ \left \{
\matrix{ A_0 = Q_0 R_0 \cr
R_0 Q_0 = A_1 \cr
}
\right . $ $ \left \{
\matrix{ A_1 = Q_1 R_1 \cr
R_1 Q_1 = A_2 \cr
}
\right . $ $ \cdots $

$ \left \{
\matrix{ A_m = Q_m R_m \cr
R_m Q_m = A_{m+1} \cr
} \right . $ $\Rightarrow A_{m+1} = ( Q^*_m \cdots Q^*_1 Q^*_0 ) A_0 ( Q_0 Q_1 \cdots Q_m )$

$A_{m+1} \underrightarrow { m \rightarrow \infty } Q^* A Q = T = D + N$ ( in the complex field )

$T = \left [ \matrix{ T_{11} & T_{12} & \cdots & T_{1p} \cr
& T_{22} & \cdots & \vdots \cr && \ddots & \vdots \cr
0 &&& T_{pp} \cr } \right ]$ ( in the real field , where the diagonal blocks $T_{ii}$ are $1 \times 1$ or $2 \times 2$ )
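A minimal numpy sketch of the unshifted QR iteration just described ( the symmetric test matrix is an arbitrary illustration ; for it the limit $T$ is diagonal ) :

    import numpy as np

    def qr_algorithm(A0, steps=200):
        A = A0.copy()
        for _ in range(steps):
            Q, R = np.linalg.qr(A)   # A_m = Q_m R_m
            A = R @ Q                # A_{m+1} = R_m Q_m = Q_m^* A_m Q_m
        return A                     # -> upper (quasi-)triangular T = D + N

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])
    T = qr_algorithm(A)
    print(np.diag(T))   # approximates the eigenvalues of A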

Recall that

If $X = [ X_1 , X_2 ]$   ,   $X_1 = [ v_1 , \cdots , v_k ]$   ,   $X_2 = [ v_{k+1} , \cdots , v_n ]$

then $B = X^{-1} A X = \left [ \matrix{ B_{11} & B_{12} \cr 0 & B_{22} \cr } \right ]$

$\lambda (B) = \lambda ( B_{11} )$ $\cup \lambda ( B_{22} )$

For $ k = 1 , \cdots , n-1$

${\varphi}_k = < q^0_1 , \cdots , q^0_k >$

${\jmath}_k = < v_1 , \cdots , v_k >$ $u_k = < v_{k+1} , \cdots , v_n >$

QR Algorithm

[Thm 5.2.2 ] A is simple , $\jmath = < v_1 , \cdots , v_k >$ , $u = < v_{k+1} , \cdots , v_n >$

$\vert {\lambda}_1 \vert >$ $\vert {\lambda}_2 \vert$ $\geq \cdots \geq$ $\vert {\lambda}_n \vert ~~,~~\vert {\lambda}_k \vert >$ $\vert {\lambda}_{k+1} \vert$ for some k

Assume $\varphi \cap u = \{ 0 \} $

then $\exists c $ s.t. $dist (A^m \varphi , \jmath )$ $\leq$ $c {\vert { {{\lambda}_{k+1}} \over {{\lambda}_k} }\vert}^m$

i.e. $ A^m \varphi \underrightarrow { m \rightarrow \infty } \jmath $ linearly with conv. rate $\vert {{{\lambda}_{k+1}} \over {{\lambda}_k}} \vert $

Note : 1. If $\vert {\lambda}_1 \vert > \vert {\lambda}_2 \vert > \cdots > \vert {\lambda}_k \vert > \cdots > \vert {\lambda}_n \vert$ then $ A^m {\varphi}_i \underrightarrow { m \rightarrow \infty } {\jmath}_i $ for all $i$

where ${\jmath}_i = < v_1 , \cdots , v_i > $ , provided that ${\varphi}_i \cap u_i = \{ 0 \}$ 2. $< v_1 > = < q_1 >$    $< v_1 , v_2 > = < q_1 , q_2 >$ 3. If the $q_i$ are the orthonormalization of the $v_i$ , so that $< v_1 , \cdots , v_n > = < q_1 , \cdots , q_n >$ , then $< v_1 , \cdots , v_k > = < q_1 , \cdots , q_k >$ holds for all $k$

( The orthonormalization can be carried out by 1. the Gram-Schmidt process

or 2. a QR factorization $A = [ v_1 , \cdots , v_n ] = QR $ , $ Q = [ q_1 , \cdots , q_n ]$ )
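A sketch of option 1 , classical Gram-Schmidt , assuming the $v_i$ are linearly independent columns of $A$ ( option 2 , np.linalg.qr , performs the same orthonormalization up to column signs ) :

    import numpy as np

    def gram_schmidt(A):
        # classical Gram-Schmidt on the columns v_1, ..., v_n of A;
        # <q_1, ..., q_k> = <v_1, ..., v_k> for every k
        n = A.shape[1]
        Q = np.zeros_like(A, dtype=float)
        for j in range(n):
            q = A[:, j].astype(float)
            for i in range(j):
                q = q - (Q[:, i] @ A[:, j]) * Q[:, i]   # subtract the q_i component
            Q[:, j] = q / np.linalg.norm(q)
        return Q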

Simultaneous Iteration

Let $\varphi $ be a k-dimensional subspace of $F^n$ with basis $q^0_1 , q^0_2 , \cdots , q^0_k $

Assume that $\varphi \cap u = \{ 0 \} $

$\Rightarrow A^m \varphi = < A^m q^0_1 , A^m q^0_2 ,\cdots , A^m q^0_k > $

Let ${\varphi}_i$ be the i-dimensional subspace spanned by $\{ q^0_1 , \cdots , q^0_i \} $

$\Rightarrow A {\varphi}_i = < A q^0_1 , \cdots , A q^0_i > = < q^1_1 , q^1_2 , \cdots , q^1_i > ~,~ \cdots ~,~ A^m {\varphi}_i = < A q^{m-1}_1 , \cdots , A q^{m-1}_i > = < q^m_1 , q^m_2 , \cdots , q^m_i >$

If ${\varphi}_i \cap u_i = \{ 0 \}$

$\Rightarrow A^m {\varphi}_i \underrightarrow { m \rightarrow \infty } {\jmath}_i $ , i.e. $< q^m_1 , \cdots , q^m_i > \underrightarrow { m \rightarrow \infty } < v_1 , \cdots , v_i >$ ( that is , the iteration succeeds for index $i$ exactly when ${\varphi}_i \cap u_i = \{ 0 \}$ )

Note : this holds for each $i$ separately ; just apply the above observation to the $i$ in question .

Note : success for some $i$ gives success for all $k \leq i$ .

(*) If ${\varphi}_k \cap u_k = \{ 0 \} $ , for all $k = 1 , 2 , \cdots , n-1$

then $A^m {\varphi}_k = < q^m_1 , q^m_2 , \cdots , q^m_k > \rightarrow {\jmath}_k = < v_1 , \cdots , v_k >$

true for all $k = 1 , 2 , \cdots , n-1$
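A minimal numpy sketch of simultaneous ( orthogonal ) iteration under these assumptions , starting from $q^0_i = e_i$ and re-orthonormalizing at every step so all the nested subspaces converge at once :

    import numpy as np

    def simultaneous_iteration(A, steps=200):
        n = A.shape[0]
        Q = np.eye(n)                    # start: q_i^0 = e_i
        for _ in range(steps):
            Q, _ = np.linalg.qr(A @ Q)   # orthonormalize A q_1^m, ..., A q_n^m
        return Q                         # <q_1, ..., q_k> ~ <v_1, ..., v_k> for every k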

The mth step of the QR algorithm

$A_{m+1} = {\hat Q}^*_m A {\hat Q}_m $ , where $\left \{
\matrix{ A_{m-1} = Q_{m-1} R_{m-1} \cr
R_{m-1} Q_{m-1} = A_m \cr
}
\right . $ and $F^{n \times n } \ni {\hat Q}_m = [ q^m_1 , q^m_2 , \cdots , q^m_n ] ~,~$ the $q^m_i $ are orthonormal

Now , $< q^m_1 , q^m_2 , \cdots , q^m_n > \underrightarrow { m \rightarrow \infty } < v_1 , \cdots , v_n >$

provided ${\varphi}_i \cap u_i = \{ 0 \} ~~,~~ \vert {\lambda}_i \vert > \vert {\lambda}_{i+1} \vert$

because if $m$ is large $\Rightarrow < q^m_1 , q^m_2 , \cdots , q^m_n > \approx < v_1 , \cdots , v_n >$

If $< q^m_1 , q^m_2 , \cdots , q^m_k > \equiv < v_1 , \cdots , v_k >$

$\Rightarrow A_{m+1} = \left [
\matrix{ A_{11} & A_{12} \cr
0 & A_{22} \cr
}
\right ]$

$A_m = {\hat Q}^*_m A {\hat Q}_m \approx \left [
\matrix{ A^{(m)}_{11} & A^{(m)}_{12} \cr
0 & A^{(m)}_{22} \cr
}
\right ]$

For large $m$ , the first $k$ columns of ${\hat Q}_m$ satisfy

$< q_1 , q_2 , \cdots , q_k > \approx < v_1 , \cdots , v_k > = {\jmath}_k$

1. Simultaneous Iteration : $A^m {\varphi}_k \underrightarrow { m \rightarrow \infty } {\jmath}_k ~~,~~{\varphi}_k \cap u_k = \{ 0 \} ~,~ k = 1 , 2 , \cdots , n-1$

2. QR Algorithm $\left \{
\matrix{ A_{m-1} = Q_m R_m \cr
R_m Q_m = A_m \cr
}
\right . \Rightarrow
A_m = ( Q^*_m \cdots Q^*_1 ) A ( Q_1 \cdots Q_m )
= {\hat Q}^* A {\hat Q}$

$q^0_1 = e_1 , q^0_2 = e_2 , \cdots , q^0_n = e_n $ , where the $e_i $ are the standard basis vectors

$A q^0_1 , A q^0_2 , \cdots , A q^0_n$ , i.e. $A e_1 , A e_2 , \cdots , A e_n $

If ${\hat Q}_0 = [ q^0_1 , \cdots , q^0_n ] = [ e_1 , \cdots , e_n ]$ then $ [ A q^0_1 , \cdots , A q^0_n ] = A {\hat Q}_0 = AI = {\hat Q}_1 R_1$ ( Note : ${\hat Q}_1 = [ q^1_1 , \cdots , q^1_n ]$ )

Note : If $A = [ a_1 , a_2 , \cdots , a_n ] $ , where the $a_i $ are the column vectors of $A$ ,

and $A = QR $ with $Q = [ q_1 , \cdots , q_n ]$ ,

then the $q_i$'s are the orthonormalized vectors of the $a_i$'s . Letting $Q_1 = {\hat Q}_1 $ , we have $A = Q_1 R_1$

The unitary transformation of A

$A_1 = Q^*_1 A {\hat Q}_1 = Q^*_1 A Q_1$

$\Rightarrow A_1 = Q^*_1 ( Q_1 R_1 ) Q_1 = R_1 Q_1$ . This constitutes the first step of the QR iteration

Note : the second step gives the relation between QR and Simultaneous Iteration

$A {\hat Q}_1 = B_2 = {\hat Q}_2 R_2 $ ( where ${\hat Q}_1 $ , as above , is the orthonormalization of $ A {\hat Q}_0$ )

because $A_1 = {\hat Q}^*_1 A {\hat Q}_1$ , a vector $x$ in the original coordinate system ( that of $A$ ) corresponds to ${\hat Q}^*_1 x $ in the new coordinate system ( that of $A_1$ )

original basis for $A$ : $\{ q^1_1 , \cdots , q^1_n \} $ ; in the new system it becomes $[ {\hat Q}^*_1 q^1_1 , {\hat Q}^*_1 q^1_2 , \cdots ,{\hat Q}^*_1 q^1_n ]$

= $[ e_1 , e_2 , \cdots , e_n ]$

$\Rightarrow$ $A {\hat Q}_1 = B_2 $ is equivalent to $A_1 I = A_1$ , and $B_2 = {\hat Q}_2 R_2$ is equivalent to $A_1 = Q_2 R_2$
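A quick numerical check of this correspondence , with an arbitrary illustrative matrix : accumulating the QR-algorithm factors gives ${\hat Q}$ , and ${\hat Q}^* A {\hat Q}$ reproduces the current iterate :

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])          # arbitrary test matrix

    Am, Q_hat = A.copy(), np.eye(2)
    for _ in range(5):
        Q, R = np.linalg.qr(Am)         # A_m = Q_m R_m
        Am = R @ Q                      # A_{m+1} = R_m Q_m
        Q_hat = Q_hat @ Q               # accumulate hat{Q} = Q_0 Q_1 ... Q_m

    print(np.allclose(Q_hat.T @ A @ Q_hat, Am))   # True: A_{m+1} = hat{Q}^* A hat{Q}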

Field : e.g. $I\!\!R$ , $C$

Addition : 1. associative 2. commutative 3. an identity element exists 4. inverse elements exist

Multiplication : 1. associative 2. commutative 3. an identity element exists 4. inverse elements exist ( for nonzero elements )

Note : $Q$ is the minimal field ( every subfield of $C$ contains $Q$ )

Example 1 : $Z$ ( the integers ) is not a subfield of $C$ ( nonzero integers need not have integer inverses )

Example 2 : $\{ x + \sqrt 2 \, y ~:~ x , y \in Q \}$ is a subfield of $C$

Vector Space [Def] A vector space consists of the following :

1. a field $F$ 2. a set of vectors $v$ 3. rules called vector addition and scalar multiplication . The addition rule must satisfy :

a. associative b. commutative c. unit element ( 0 ) d. inverse element ( $- \alpha$ )

Example : The space of all $m \times n$ matrices ( Addition : ${[ A + B ]}_{ij} = A_{ij} + B_{ij}$ )

The relation of F and v must satisfy :

e. $1 \alpha = \alpha$ f. $( c_1 c_2 ) \alpha = c_1 ( c_2 \alpha ) $ g. $ c ( \alpha + \beta ) = c \alpha + c \beta $ h. $ ( c_1 + c_2 ) \alpha = c_1 \alpha + c_2 \alpha$

Vector space over $F ( C , I\!\!R ,Q )$

Subspace :

v : vector space

If $w$ is a subset of $v$ , then $w$ is a subspace of $v$

if $w$ is itself a vector space

( under the operations defined by $v$ )

[Thm] A nonempty subset w of v is a subspace of v

if and only if $\alpha , \beta \in w ~,~ c \in F$ $\Rightarrow$ $c \alpha + \beta \in w$

Example : the set of all $n \times n$ complex Hermitian matrices ( $A \in C^{ n \times n }$ , $A^* = A$ ) is not a subspace over $C$

but it is a subspace over $I\!\!R$

( if $A$ is Hermitian , then ${( cA )}^* = \bar c A^* = \bar c A$ , which equals $cA$ only when $c$ is real )

Span [Def] Let $S$ be a set of vectors in a vector space $v$

The subspace spanned by $S$ is defined to be the

intersection of all subspaces of $v$ which contain $S$

${I\!\!R}^3 = span \{ ( 1,0,0 ) , ( 0,1,0 ) , ( 0,0,1 ) \} $

$span \, S \equiv {\cap}_{\alpha} w_{\alpha} $ , where each $w_{\alpha} $ is a subspace of $v$ with $w_{\alpha} \supseteq S$

Bases and Dimension :

[Def] A set $S$ of vectors in a vector space $v$ is linearly dependent if there exist distinct vectors ${\alpha}_1 , \cdots , {\alpha}_n$ in $S$ ,

and scalars $c_1 , c_2 , \cdots , c_n $ not all of which are 0 ,

such that $c_1 {\alpha}_1 + c_2 {\alpha}_2 + \cdots + c_n {\alpha}_n = 0 $

A set which is not linearly dependent is called linearly independent

Example : a set which contains the vector $0$ must be linearly dependent .

[Def] Let $v$ be a vector space . A basis for $v$ is a

linearly independent set of vectors in $v$ which spans $v$

Example : $span \{ ( 1,0,0 ) , ( 0,1,0 ) , ( 0,0,1 ) \} \equiv {I\!\!R}^3$

$B = [ B_1 , \cdots , B_p ] $ , where the $B_i$ are column vectors . Then $AB = [ A B_1 , A B_2 , \cdots , A B_p ]$

AB = C , C = $\left [ \matrix{ {\gamma}_1 \cr {\gamma}_2 \cr \vdots \cr {\gamma}_n } \right ]$

$ \{ {\gamma}_1 , \cdots , {\gamma}_n \} $ are row vectors of C

$ \{ {\beta}_1 , \cdots , {\beta}_n \} $ are row vectors of B

$AB = C \Rightarrow {\gamma}_i$ = ${\sum}^n_{j=1} A_{ij} {\beta}_j $ for all $i$
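A one-line numpy check of this row formula on an arbitrary illustrative pair $A , B$ :

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[5.0, 6.0], [7.0, 8.0]])
    C = A @ B
    i = 0
    gamma_i = sum(A[i, j] * B[j, :] for j in range(B.shape[0]))   # sum_j A_ij * beta_j
    print(np.allclose(C[i, :], gamma_i))   # True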

Ordered Basis

Example : ${I\!\!R}^3$ , $B = \{ ( 1,0,0 ) , ( 0,1,0 ) , ( 0,0,1 ) \}$

$\alpha = ( x_1 , x_2 , x_3 ) = {\sum}^3_{i=1} x_i {\varepsilon}_i$     $\alpha \leftrightarrow ( x_1 , x_2 , x_3 ) $

$ {[ \alpha ]}_{\beta} =
\left [ \matrix{ x_1 \cr x_2 \cr x_3 \cr } \right ]$

Example : ${I\!\!R}^2 , B = \{ (1,0) , (0,1) \}$

$\alpha = ( x_1 , x_2 ) ~~.~~ {[ \alpha ]}_{\beta}$ = $\left [ \matrix{ x_1 \cr x_2 \cr } \right ]$

$( x_1 , x_2 ) = x_1 ( 1 , 0 ) + x_2 ( 0 , 1 )$

(*) B' = $\{ ( \cos \theta , \sin \theta ) , ( - \sin \theta , \cos \theta ) \}$

${[ \alpha ]}_{{\beta}^{\prime}}$ = $( x'_1 , x'_2 )$

${[ \alpha ]}_{{\beta}^{\prime}}$ = $P {[ \alpha ] }_{\beta}$ : let $B = \{ {\alpha}_1 , \cdots , {\alpha}_n \}$ ,

$B'$ = $\{ {{\alpha}'}_1 , \cdots , {{\alpha}'}_n \}$

$\Rightarrow$ ${{\alpha}'}_j = {\sum}^n_{i=1} P_{ij} {\alpha}_i $ for all $j$ , and such $P_{ij}$ exist and are unique

$\Rightarrow {[ \alpha ]}_{{\beta}^{\prime}}$ = $P {[ \alpha ] }_{\beta}$ where $P = [ P_{ij} ]$

${x^'} = Px \Rightarrow {x^'} = 0$ if and only if x=0

$\Rightarrow P$ is invertible and $\left \{
\matrix{ {[ \alpha ]}_{{\beta}^{\prime}} = P {[ \alpha ] }_{\beta} \cr
{[ \alpha ]}_{{\beta}} = P^{-1} {[ \alpha ] }_{{\beta}^{\prime}} \cr
}
\right . $

${x'}_1 = x_1 \bigcirc + x_2 \bigcirc$

${x'}_2 = x_1 \bigcirc + x_2 \bigcirc$

$\left [ \matrix{ ( \cos \theta , \sin \theta ) = \cos \theta \, (1,0) + \sin \theta \, (0,1) \cr
( - \sin \theta , \cos \theta ) = - \sin \theta \, (1,0) + \cos \theta \, (0,1) \cr }
\right ]$

so ${[ \alpha ]}_{{\beta}^{\prime}}$ = $P {[ \alpha ] }_{\beta}$ where P = $\left [\matrix{ \cos \theta & - \sin \theta \cr \sin \theta & \cos \theta \cr
}\right ]$

$\left [ \matrix{ {x^{\prime}_1} \cr {x^{\prime}_2} \cr } \right ]$ = $\left [\matrix{ \cos \theta & - \sin \theta \cr \sin \theta & \cos \theta \cr
}\right ]$ $\left [ \matrix{ x_1 \cr x_2 \cr } \right ]$

$\Rightarrow$ $\left \{ \matrix{ {x'}_1 = x_1 \cos \theta - x_2 \sin \theta \cr
{x'}_2 = x_1 \sin \theta + x_2 \cos \theta \cr } \right . $
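A numeric illustration of this change of coordinates , following the notes' convention $x' = P x$ with the rotation matrix $P$ ( values are illustrative ) ; it also confirms $P^{-1} = P^T$ :

    import numpy as np

    theta = np.pi / 6
    P = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    x = np.array([1.0, 2.0])                     # [alpha]_beta
    x_prime = P @ x                              # [alpha]_beta' per the notes' convention
    print(np.allclose(np.linalg.inv(P), P.T))    # True: P^{-1} = P^T
    print(np.allclose(P.T @ x_prime, x))         # coordinates convert back exactly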

$T : v \rightarrow v $ : $ {[ T ]}_{\beta} \sim { [ T ] }_{{\beta}^{\prime}} $ ( the matrices of the same operator $T$ relative to different ordered bases are similar )

$ P^{-1} = \left [
\matrix{ \cos \theta & \sin \theta \cr
- \sin \theta & \cos \theta \cr
}
\right ]$

With $\beta = \{ {\varepsilon}_1 , {\varepsilon}_2 \}$ , ${[ \alpha ] }_{\beta}$ = $\left [ \matrix{ x_1 \cr x_2 \cr } \right ]$ $\Rightarrow$ $\left [ \matrix{ x_1 \cr x_2 \cr } \right ]$ = $\left [ \matrix{ \cos \theta & \sin \theta \cr
- \sin \theta & \cos \theta \cr } \right ]$ $\left [ \matrix{ y_1 \cr y_2 \cr } \right ] \Rightarrow$ the ${\beta}^{\prime}$ coordinates can be found from (*)

Linear Transformation

$T : v \rightarrow w $ is a function

s.t. $ T ( c \alpha + \beta ) = c T ( \alpha ) + T ( \beta ) $

Then T is called Linear Transformation from v into w .

Example : $T : I\!\!R \rightarrow I\!\!R $ linear , then 1. the graph of $T$ must pass through the origin 2. $T$ must be a linear function ( $T(x) = ax$ )

T (0) = 0 because T(0+0) = T(0) + T(0)

$\Rightarrow$ T(0) = T(0) + T(0) $\Rightarrow$ T(0) = 0

Fact : rank(T) + nullity(T) = dim v
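A quick numpy check of this fact on an arbitrary rank-deficient matrix ( the tolerance for "zero" singular values is an illustrative choice ) :

    import numpy as np

    T = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],    # = 2 * first row, so rank(T) = 2
                  [1.0, 0.0, 1.0]])
    n = T.shape[1]                                         # dim v
    rank = np.linalg.matrix_rank(T)
    nullity = sum(s < 1e-10 for s in np.linalg.svd(T)[1])  # near-zero singular values
    print(rank + nullity == n)                             # True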

[Thm] Let $U , T : v \rightarrow w $ be linear transformations , and define

$\left \{
\matrix{ ( U + T ) ( \alpha ) = U ( \alpha ) + T ( \alpha ) \cr
( c T ) ( \alpha ) = c T ( \alpha ) \cr
}
\right . $

L ( v , w ) : the set of all linear transformations from $v$ into $w$

then L ( v , w ) is a vector space .

Note : dim   L ( v , w ) = ( dim  v ) ( dim  w )

$E^{pq} ( {\alpha}_i ) = { \delta }_{iq} { \beta }_p = \left \{
\matrix{ 0 , ~~if~~i \ne q ~,~ i = 1 , \cdots , n \cr
{ \beta }_p , ~~if~~i = q \cr
}
\right . $

where $\{ {\alpha}_1 , \cdots , {\alpha}_n \}$ is an ordered basis of $v$

and $\{ {\beta}_1 , \cdots , {\beta}_m \}$ is an ordered basis of $w$ ( the $mn$ maps $E^{pq}$ form a basis of L ( v , w ) )

[Def] $ T : v \rightarrow v$ , then T is called the linear operator on v

$ T : v \rightarrow v$ is invertible if there is a $U : v \rightarrow v $ s.t.

$TU = I = UT$ , $U = T^{-1}$ ( or $T = U^{-1}$ )

T is invertible iff 1. T is 1-1 $\Leftrightarrow$ null space = { 0 } 2. T is onto

$A$ is nonsingular ( invertible ) iff $A^{-1}$ exists iff $Ax = 0 \Rightarrow x = 0$

$T$ is nonsingular iff $T \gamma = 0 \Leftrightarrow \gamma = 0 $

Is $T$ 1-1 and onto ?

if $T \alpha = T \beta \Rightarrow \alpha = \beta $

because $T \alpha = T \beta \Leftrightarrow T \alpha - T \beta = 0 \Leftrightarrow T ( \alpha - \beta ) = 0$

T is nonsingular $\Rightarrow \alpha - \beta = 0 $

onto ( by rank(T) + nullity(T) = dim $v$ )

$\beta = \{ {\alpha}_1 , \cdots , {\alpha}_n \} ~~~{{\beta}^{\prime}} = \{ {{\alpha}'}_1 , \cdots , {{\alpha}'}_n \}$

$\left .
\matrix{ {[ \alpha ]}_{\beta } = P {[ \alpha ]}_{{\beta }^{\prime}} \cr
\Rightarrow {[ \alpha ]}_{{\beta }^{\prime}} = P^{-1} {[ \alpha ]}_{\beta } \cr
}
\right \}$ where $P_j = {[ {{\alpha}'}_j ]}_{\beta }$ , the $j$th column of $P$

$T : v \rightarrow w $

$\beta = \{ {\alpha}_1 , \cdots , {\alpha}_n \}$ ordered basis for $v$

${{\beta}^{\prime}} = \{ {\beta}_1 , \cdots , {\beta}_m \}$ ordered basis for $w$

$T {\alpha}_j = {\sum}^m_{i=1} A_{ij} {\beta}_i $ , for all $1 \leq j \leq n $

$A = ( A_{ij} )$ is called the matrix of $T$ relative to the pair

of ordered bases $\beta$ and ${\beta}^{\prime}$

If $\alpha \in v $ , then $\alpha = x_1 {\alpha}_1 + x_2 {\alpha}_2 + \cdots + x_n {\alpha}_n $

$T \alpha$ = $x_1 T {\alpha}_1 + x_2 T {\alpha}_2 + \cdots + x_n T {\alpha}_n $ = ${\sum}^n_{j=1} x_j ( T {\alpha}_j )$ = ${\sum}^n_{j=1} x_j ( {\sum}^m_{i=1} A_{ij} {\beta}_i )$ = ${\sum}^m_{i=1} ( {\sum}^n_{j=1} A_{ij} x_j ) {\beta}_i $

$\Rightarrow$ ${[ T \alpha ]}_{{\beta}^{\prime}}$ = $A {[ \alpha ]}_{ \beta } $
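A short numpy sketch of building $A$ and checking ${[ T \alpha ]}_{{\beta}^{\prime}} = A {[ \alpha ]}_{ \beta }$ ; the map $T$ , the bases , and the coordinates are made-up illustrations :

    import numpy as np

    def T(x):
        # a made-up linear map R^2 -> R^2 for illustration
        return np.array([x[0] + x[1], 2.0 * x[1]])

    alphas = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]   # beta, basis of v
    B_prime = np.column_stack([[1.0, 0.0], [1.0, 1.0]])      # beta', basis of w (columns)
    # column j of A holds the beta'-coordinates of T(alpha_j):
    A = np.column_stack([np.linalg.solve(B_prime, T(a)) for a in alphas])

    x = np.array([2.0, 3.0])                    # [alpha]_beta
    alpha = 2.0 * alphas[0] + 3.0 * alphas[1]   # the vector itself
    print(np.allclose(np.linalg.solve(B_prime, T(alpha)), A @ x))   # True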

$\matrix{ v & \underrightarrow {T} & w & \underrightarrow {U} & z \cr
\vdots && \vdots && \vdots \cr
\beta && {\beta}^{\prime} && {\beta}^{\prime \prime} \cr
}$

$\left .
\matrix{ {[T]}_{ \beta \rightarrow {\beta}^{\prime} } = A \cr
{[U]}_{ {\beta}^{\prime} \rightarrow {\beta}^{\prime \prime} } = B \cr
}
\right \}$ $UT : v \rightarrow z ~,~ {[UT]}_{ \beta \rightarrow {{\beta}^{\prime \prime}}} = B \cdot A$

( the multiplication $BA \leftrightarrow $ the composition of linear transformations )

${ [ T ] }_{\beta} = A$ ( when ${\beta}^{\prime} = \beta $ )

$ \Rightarrow { [ T \alpha ] }_{\beta} = { [ T ] }_{\beta} { [ \alpha ] }_{ \beta } $

Example : $T : F^2 \rightarrow F^2 , F $ field

T ( x1 , x2 ) = ( x1 , 0 ) , $\beta = \{ {\varepsilon}_1 , {\varepsilon}_2 \} , {[T]}_{\beta} = ? $

$ T {\varepsilon}_1 $= T ( 1 , 0 ) = ( 1 , 0 ) = $1 \cdot {\varepsilon}_1 + 0 \cdot {\varepsilon}_2$ $ T {\varepsilon}_2 = T ( 0 , 1 ) = ( 0 , 1 ) =$ $0 \cdot {\varepsilon}_1 + 1 \cdot {\varepsilon}_2 $

$ {[T]}_{\beta} = A = $ $\left [ \matrix{ 1 & 0 \cr 0 & 0 \cr } \right ] $

$ T {\alpha}_j$ = ${\sum}^n_{i=1} A_{ij} {\alpha}_i ~,~ 1 \leq j \leq n $

$ { [ UT ] }_{\beta}$ = ${[U]}_{\beta} {[T]}_{\beta}$    ( composition $\leftrightarrow$ product , as above )

$\Rightarrow {[T]}^{-1}_{\beta}$ = ${[ T^{-1} ]}_{\beta} $

( because $UT = I = TU \Rightarrow {[ UT ]}_{\beta} = I = {[U]}_{\beta} {[T]}_{\beta} \Rightarrow {[U]}_{\beta}$ = ${[T]}^{-1}_{\beta} )$

$ T : v \rightarrow v$ linear operators

$\beta = \{ {\alpha}_1 , \cdots , {\alpha}_n \}$

${\beta}^{\prime}$ = $ \{ {{\alpha}'}_1 , \cdots , {{\alpha}'}_n \} $ are ordered bases for $v$

We have ${[ \alpha ] }_{\beta}$ = $P {[ \alpha ]}_{{ \beta }^{\prime}}$ , where $P_j = { [{{\alpha}'}_j] }_{\beta} $ ( $P_j$ : the $j$th column vector of $P$ ) , and ${ [ T \alpha ] }_{\beta}$ = ${ [ T ] }_{\beta} { [ \alpha ] }_{ \beta }$ = ${ [ T ] }_{\beta} P { [ \alpha ] }_{ {\beta}^{\prime} } $ . Now apply (*) with $\alpha$ replaced by $T \alpha $ :

$\Rightarrow$ ${ [ T \alpha ] }_{\beta}$ = $P {[ T \alpha ]}_{{ \beta }^{\prime}}$ $\Rightarrow$ ${ [T] }_{\beta} P {[ \alpha ]}_{{ \beta }^{\prime}}$ = $P {[ T \alpha ]}_{{ \beta }^{\prime}}$ = $P {[ T ]}_{{ \beta }^{\prime}} {[ \alpha ]}_{{ \beta }^{\prime}}$ $\Rightarrow$ $P^{-1} {[ T ]}_{ \beta } P$ = ${[ T ]}_{{ \beta }^{\prime}} $
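A numeric check of this similarity relation , reusing the rotation matrix as the change of basis and the projection $T ( x_1 , x_2 ) = ( x_1 , 0 )$ from the earlier example ( all values illustrative ) ; similar matrices must share eigenvalues :

    import numpy as np

    theta = np.pi / 4
    P = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # columns: [alpha'_j]_beta
    T_beta = np.array([[1.0, 0.0],
                       [0.0, 0.0]])                   # [T]_beta for T(x1, x2) = (x1, 0)
    T_beta_prime = np.linalg.inv(P) @ T_beta @ P      # P^{-1} [T]_beta P = [T]_beta'
    # similar matrices share eigenvalues (here 1 and 0, both real)
    print(np.allclose(np.sort(np.linalg.eigvals(T_beta).real),
                      np.sort(np.linalg.eigvals(T_beta_prime).real)))   # True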



 