Together with the fundamental notion of vector space, the linear transformation is another basic concept of linear algebra: it is the carrier of the linearity property from one vector space to another.
The large-scale use of this notion in geometry therefore requires a detailed treatment of the subject.
1. Definition and General Properties
Let V and W be two vector spaces over the commutative field K.
1.1 Definition.
A function T: V → W with the following properties:
1) T(x + y) = T(x) + T(y), ∀x, y ∈ V
2) T(αx) = αT(x), ∀x ∈ V, ∀α ∈ K
is called a linear transformation (linear map, linear operator or vector space morphism).
For the image T(x) under a linear transformation T, the notation Tx is also used sometimes.
1.2 Corollary.
The map T: V → W is a linear transformation if and only if
T(αx + βy) = αT(x) + βT(y), ∀x, y ∈ V, ∀α, β ∈ K. (1.1)
The proof is immediate. Condition (1.1) shows that a map T: V → W is a linear transformation if and only if the image of a linear combination of vectors is the same linear combination of the images of those vectors.
Examples. 1. The map T: Rⁿ → Rᵐ, T(x) = AX, with A ∈ Mm×n(R) and X = ᵗx, is a linear transformation. In the particular case n = m = 1, the map defined by T(x) = ax, a ∈ R, is linear.
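As a quick numerical illustration of Example 1, here is a minimal sketch (the matrix A and the test vectors are arbitrary choices) checking condition (1.1) for T(x) = AX:

```python
import numpy as np

# Arbitrary 2x3 real matrix defining T: R^3 -> R^2, T(x) = A x.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def T(x):
    return A @ x

x = np.array([1.0, 0.0, 2.0])
y = np.array([-1.0, 4.0, 1.0])
alpha, beta = 2.0, -3.0

# Condition (1.1): the image of a linear combination equals the
# same linear combination of the images.
print(np.allclose(T(alpha * x + beta * y),
                  alpha * T(x) + beta * T(y)))   # True
```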
2. If U ⊂ V is a vector subspace, then the map T: U → V defined by T(x) = x is a linear transformation, called the inclusion map. In general, the restriction of a linear transformation to a subset S ⊂ V is not a linear transformation; linearity is inherited only by vector subspaces.
3. The map T: C¹(a, b) → C⁰(a, b), T(f) = f′, is linear.
4. The map T: C⁰(a, b) → R, T(f) = ∫ₐᵇ f(t) dt, is linear.
5. If T: V → W is a bijective linear transformation, then T⁻¹: W → V is a linear transformation.
The set of linear transformations from the vector space V to the vector space W is denoted by L(V, W).
If we define on the set L(V, W) the operations:
(T₁ + T₂)(x) := T₁(x) + T₂(x), ∀x ∈ V
(αT)(x) := αT(x), ∀x ∈ V, ∀α ∈ K,
then L(V, W) acquires a K-vector space structure.
A vector space isomorphism is a bijective linear transformation T: V → W.
If W = V, then a linear map T: V → V is named an endomorphism of the vector space V, and the set of endomorphisms is denoted by End(V). For two endomorphisms T₁, T₂ ∈ End(V) we can define
(T₁ ∘ T₂)(x) := T₁(T₂(x)), ∀x ∈ V.
This operation is named the product of the transformations T₁ and T₂, written shortly T₁T₂.
A bijective endomorphism T: V → V is named an automorphism of the vector space V, and the set of automorphisms is denoted by Aut(V).
The set of automorphisms of a vector space is stable with respect to the product of endomorphisms and forms a group GL(V) ⊂ End(V), also called the linear group of the vector space V.
If W = K, then a linear map T: V → K is named a linear form, and the set V* = L(V, K) of all linear forms on V is a K-vector space named the dual of the vector space V.
If V is a finite-dimensional Euclidean vector space, then its dual V* has the same dimension and is identified with V.
1.3 Theorem.
If T: V → W is a linear transformation, then:
a) T(0_V) = 0_W and T(−x) = −T(x), ∀x ∈ V.
b) The image T(U) ⊂ W of a vector subspace U ⊂ V is also a vector subspace.
c) The inverse image T⁻¹(W′) ⊂ V of a vector subspace W′ ⊂ W is also a vector subspace.
d) If the vectors x₁, x₂, …, xₙ ∈ V are linearly dependent, then the vectors T(x₁), T(x₂), …, T(xₙ) ∈ W are also linearly dependent.
Proof. a) Setting α = 0 and then α = −1 in the relation T(αx) = αT(x), we obtain T(0_V) = 0_W and T(−x) = −T(x).
Henceforth we will drop the indices of the two zero vectors.
b) For u, v ∈ T(U) there exist x, y ∈ U such that u = T(x) and v = T(y). Since U ⊂ V is a vector subspace, for x, y ∈ U and α, β ∈ K we have αx + βy ∈ U, and with relation (1.1) we obtain
αu + βv = αT(x) + βT(y) = T(αx + βy) ∈ T(U).
c) If x, y ∈ T⁻¹(W′), then T(x), T(y) ∈ W′, and for α, β ∈ K we have αT(x) + βT(y) = T(αx + βy) ∈ W′ (W′ being a vector subspace), so αx + βy ∈ T⁻¹(W′).
d) We apply the transformation T to the linear dependence relation λ₁x₁ + λ₂x₂ + … + λₙxₙ = 0 and, using a), we obtain the linear dependence relation λ₁T(x₁) + λ₂T(x₂) + … + λₙT(xₙ) = 0.
1.4 Consequence.
If T: V → W is a linear transformation, then:
a) The set Ker T = T⁻¹({0}) ⊂ V, called the kernel of the linear transformation T, is a vector subspace.
b) The image Im T = T(V) ⊂ W of the linear transformation T is a vector subspace.
c) If T(x₁), T(x₂), …, T(xₙ) ∈ W are linearly independent, then the vectors x₁, x₂, …, xₙ ∈ V are also linearly independent.
1.5 Theorem.
A linear transformation T: V → W is injective if and only if Ker T = {0}.
Proof. The injectivity of the linear transformation T combined with the general property T(0) = 0 implies Ker T = {0}.
Conversely, if Ker T = {0} and T(x) = T(y), then using linearity we obtain T(x − y) = 0 ⟹ x − y ∈ Ker T, that is x = y, so T is injective.
The nullity of the operator T is the dimension of the kernel Ker T.
The rank of the operator T is the dimension of the image Im T.
1.6 Theorem (rank theorem).
If the vector space V is finite-dimensional, then the vector space Im T is also finite-dimensional, and we have the relation:
dim Ker T + dim Im T = dim V.
Proof. Denote n = dim V and s = dim Ker T. For s ≥ 1, consider a base {e₁, …, eₛ} of Ker T and complete it to a base B = {e₁, …, eₛ, eₛ₊₁, …, eₙ} of the entire vector space V. The vectors eₛ₊₁, eₛ₊₂, …, eₙ represent a base of a complementary subspace of the subspace Ker T.
For any y ∈ Im T there is x = x₁e₁ + … + xₙeₙ ∈ V such that y = T(x). Since T(e₁) = T(e₂) = … = T(eₛ) = 0, we obtain
y = T(x) = xₛ₊₁T(eₛ₊₁) + … + xₙT(eₙ),
which means that T(eₛ₊₁), T(eₛ₊₂), …, T(eₙ) generate the subspace Im T.
We must prove that the vectors T(eₛ₊₁), …, T(eₙ) are linearly independent. From
λₛ₊₁T(eₛ₊₁) + … + λₙT(eₙ) = 0 ⟹ T(λₛ₊₁eₛ₊₁ + … + λₙeₙ) = 0,
which means that λₛ₊₁eₛ₊₁ + … + λₙeₙ ∈ Ker T. Moreover, as the subspace Ker T has only the null vector in common with the complementary subspace, we obtain
λₛ₊₁eₛ₊₁ + … + λₙeₙ = 0 ⟹ λₛ₊₁ = … = λₙ = 0,
that is, T(eₛ₊₁), …, T(eₙ) are linearly independent. Therefore the image subspace Im T is finite-dimensional, and moreover
dim Im T = n − s = dim V − dim Ker T.
For s = 0 (Ker T = {0} and dim Ker T = 0), we consider a base B = {e₁, …, eₙ} of the vector space V and, by the same reasoning, we obtain that T(e₁), T(e₂), …, T(eₙ) represents a base of the space Im T, meaning dim Im T = dim V. (q.e.d.)
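As a numerical check of the rank theorem, here is a minimal sketch (the matrix below is an arbitrary example whose third row is the sum of the first two):

```python
import numpy as np
from scipy.linalg import null_space

# Matrix of a transformation T: R^4 -> R^3 (third row = first + second).
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])

dim_V = A.shape[1]                     # n = dim V = 4
dim_im = np.linalg.matrix_rank(A)      # rank of T: dim Im T = 2
dim_ker = null_space(A).shape[1]       # number of base vectors of Ker T = 2

print(dim_ker + dim_im == dim_V)       # True: dim Ker T + dim Im T = dim V
```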
The linear dependence of a vector system is preserved by a linear transformation, whereas linear independence is generally not preserved. The conditions under which the linear independence of a vector system is maintained are given by the following theorem:
1.7 Theorem.
If V is an n-dimensional vector space and T: V → W a linear transformation, then the following statements are equivalent:
1) T is injective;
2) The image of any linearly independent system of vectors e₁, e₂, …, eₚ ∈ V (p ≤ n) is a system of vectors T(e₁), T(e₂), …, T(eₚ) which are also linearly independent.
Proof. 1) ⟹ 2) Let T be an injective linear transformation, e₁, e₂, …, eₚ linearly independent vectors, and T(e₁), T(e₂), …, T(eₚ) the images of these vectors. For λᵢ ∈ K,
λ₁T(e₁) + λ₂T(e₂) + … + λₚT(eₚ) = 0 ⟹ T(λ₁e₁ + λ₂e₂ + … + λₚeₚ) = 0
⟹ λ₁e₁ + λ₂e₂ + … + λₚeₚ ∈ Ker T (T injective)
⟹ λ₁e₁ + λ₂e₂ + … + λₚeₚ = 0 ⟹ λ₁ = λ₂ = … = λₚ = 0,
therefore the vectors T(e₁), …, T(eₚ) are linearly independent.
2) ⟹ 1) Suppose there is a vector x ≠ 0 whose image is T(x) = 0. If B = {e₁, …, eₙ} ⊂ V is a base, then there exist xᵢ ∈ K, not all null, such that x = x₁e₁ + … + xₙeₙ. Applying T we obtain 0 = T(x) = x₁T(e₁) + … + xₙT(eₙ), a nontrivial dependence relation among the images of the linearly independent base vectors, contradicting 2). Hence Ker T = {0} and T is injective.
1.8 Consequence.
If V and W are two finite-dimensional vector spaces and T: V → W is a linear transformation, then:
1) If T is injective and {e₁, …, eₙ} is a base of V, then {T(e₁), …, T(eₙ)} is a base of Im T.
2) Two isomorphic vector spaces have the same dimension.
The proof follows directly from Theorems 1.6 and 1.7.
2. The Matrix of a Linear Transformation
Let V and W be two vector spaces over the field K.
2.1 Theorem.
If B = {e₁, e₂, …, eₙ} is a base of the vector space V and w₁, w₂, …, wₙ are n arbitrary vectors of W, then there is a unique linear transformation T: V → W with the property T(eᵢ) = wᵢ, i = 1, …, n.
Proof. Let x ∈ V be a vector written in base B as x = x₁e₁ + … + xₙeₙ, and define T: V → W by T(x) = x₁w₁ + … + xₙwₙ; in particular T(eᵢ) = wᵢ. The map T is linear: for α, β ∈ K, the vector
αx + βy = (αx₁ + βy₁)e₁ + … + (αxₙ + βyₙ)eₙ
has the image given by T
T(αx + βy) = (αx₁ + βy₁)w₁ + … + (αxₙ + βyₙ)wₙ = αT(x) + βT(y).
Let us suppose there is another linear transformation T′: V → W with the property T′(eᵢ) = wᵢ, i = 1, …, n. Then for any x ∈ V,
T′(x) = T′(x₁e₁ + … + xₙeₙ) = x₁T′(e₁) + … + xₙT′(eₙ) = x₁w₁ + … + xₙwₙ = T(x),
which proves the uniqueness of the linear transformation T.
If the vectors w₁, …, wₙ are linearly independent, then the linear transformation T defined in Theorem 2.1 is injective.
Theorem 2.1 states that a linear transformation T: V → W, dim V = n, is perfectly determined once its values on the vectors of a base B ⊂ V are known.
Let Vₙ and Wₘ be two K-vector spaces of dimensions n and m respectively, and let T: Vₙ → Wₘ be a linear transformation. If B = {e₁, …, eₙ} is a fixed base in Vₙ and B′ = {f₁, …, fₘ} is a fixed base in Wₘ, then the linear transformation T is uniquely determined by the values T(eⱼ) ∈ Wₘ. For j = 1, …, n, these values decompose in base B′ as
T(eⱼ) = a₁ⱼf₁ + a₂ⱼf₂ + … + aₘⱼfₘ. (2.1)
The coefficients aᵢⱼ ∈ K, i = 1, …, m, j = 1, …, n, determine a matrix.
2.2 Definition.
The matrix A ∈ Mm×n(K) whose elements are given by relation (2.1) is called the matrix associated to the linear transformation T with regard to the pair of bases B and B′.
2.3 Theorem.
If x = x₁e₁ + … + xₙeₙ ∈ Vₙ has the image y = T(x) = y₁f₁ + … + yₘfₘ ∈ Wₘ, then the coordinates of the two vectors are linked by
yᵢ = aᵢ₁x₁ + aᵢ₂x₂ + … + aᵢₙxₙ, i = 1, …, m. (2.2)
Proof. Indeed, T(x) = T(x₁e₁ + … + xₙeₙ) = Σⱼ xⱼT(eⱼ) = Σⱼ xⱼ(Σᵢ aᵢⱼfᵢ) = Σᵢ(Σⱼ aᵢⱼxⱼ)fᵢ, resulting in relation (2.2).
If we denote X = ᵗ(x₁, x₂, …, xₙ) and Y = ᵗ(y₁, y₂, …, yₘ), then relation (2.2) can also be written as a matrix equation of the following form:
Y = AX. (2.3)
Equation (2.2) or (2.3) is called the equation of the linear transformation T with regard to the considered bases.
Remarks. 1. If L(Vₙ, Wₘ) is the set of all linear transformations from Vₙ to Wₘ, Mm×n(K) is the set of all matrices of type m×n, and B and B′ are two fixed bases in Vₙ and Wₘ respectively, then the correspondence Ψ: L(Vₙ, Wₘ) → Mm×n(K), Ψ(T) = A, which associates to a linear transformation T its matrix A relative to the two fixed bases, is a vector space isomorphism. Consequently, dim L(Vₙ, Wₘ) = m·n.
2. This isomorphism has the following properties: Ψ(T₁ ∘ T₂) = Ψ(T₁)·Ψ(T₂), whenever the composition T₁ ∘ T₂ exists; T: Vₙ → Vₙ is invertible if and only if the associated matrix A, with regard to some base of Vₙ, is invertible.
Let Vₙ be a K-vector space and T ∈ End(Vₙ). Considering different bases in Vₙ, we can associate different square matrices to the linear transformation T. Naturally, one question arises: when do two square matrices represent the same endomorphism? The answer is given by the following theorem.
Two matrices A and A′, associated to the same endomorphism T ∈ End(Vₙ) relative to the bases B and B′ of Vₙ, are linked by the relation A′ = W⁻¹AW, where W is the passage matrix from B to B′; two such matrices are called similar.
Proof. Let B = {e₁, …, eₙ} and B′ = {e′₁, …, e′ₙ} be two bases in Vₙ and W = (wᵢⱼ) the passage matrix from base B to base B′, hence e′ⱼ = Σᵢ wᵢⱼeᵢ. If A = (aᵢⱼ) is the matrix associated to T relative to base B, T(eⱼ) = Σᵢ aᵢⱼeᵢ, and A′ = (a′ᵢⱼ) the matrix relative to base B′, T(e′ⱼ) = Σᵢ a′ᵢⱼe′ᵢ, then on the one hand
T(e′ⱼ) = T(Σₖ wₖⱼeₖ) = Σₖ wₖⱼT(eₖ) = Σᵢ(Σₖ aᵢₖwₖⱼ)eᵢ,
and on the other hand
T(e′ⱼ) = Σₖ a′ₖⱼe′ₖ = Σᵢ(Σₖ wᵢₖa′ₖⱼ)eᵢ.
Out of the two expressions we obtain WA′ = AW; W being a nondegenerate matrix, the relation A′ = W⁻¹AW results.
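A minimal numerical sketch of this change-of-base relation (the matrices A and W below are arbitrary, W invertible); similar matrices share the invariants discussed in the remarks that follow:

```python
import numpy as np

# Matrix of an endomorphism T in the base B (arbitrary example).
A = np.array([[2., 1.],
              [0., 3.]])
# Passage matrix from base B to a base B' (any invertible matrix).
W = np.array([[1., 1.],
              [1., 2.]])

A_prime = np.linalg.inv(W) @ A @ W          # A' = W^{-1} A W

# Similar matrices have the same determinant, trace and eigenvalues.
print(np.isclose(np.linalg.det(A), np.linalg.det(A_prime)))   # True
print(np.isclose(np.trace(A), np.trace(A_prime)))             # True
print(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(A_prime)))
```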
Remarks. 1. The similarity relation is an equivalence relation on the set Mn(K). Every equivalence class corresponds to an endomorphism T ∈ End(Vₙ) and contains all the matrices associated to T relative to the bases of the vector space Vₙ.
2. Two similar matrices A and B = C⁻¹AC have the same determinant: det B = det(C⁻¹)·det A·det C = det A. Any two similar matrices also have the same rank, a number which represents the rank of the endomorphism T. Therefore the rank of an endomorphism does not depend on the chosen base of the vector space V (the rank of an endomorphism is invariant under a change of base).
3. Eigenvectors and Eigenvalues
Let V be an n-dimensional K-vector space and T ∈ End(V) an endomorphism. If we consider different bases in the vector space V, then to the same endomorphism T ∈ End(V) there correspond several different, similar matrices. Therefore we are interested in finding that base of V relative to which the associated matrix of the endomorphism T has the simplest form, the canonical form; in this case the relations yᵢ = Σⱼ aᵢⱼxⱼ that define the endomorphism T also take the simplest form.
A vector x ∈ V, x ≠ 0, is called an eigenvector of T, with the corresponding eigenvalue λ ∈ K, if T(x) = λx.
The set of all eigenvalues of T is called the spectrum of the operator T and is denoted by σ(T). The equation T(x) = λx with x ≠ 0 is equivalent to x ∈ Ker(T − λI), where I is the identity endomorphism. If x is an eigenvector of T, then the vectors kx, k ∈ K, k ≠ 0, are also eigenvectors. The eigenvectors corresponding to the eigenvalue λ, together with the null vector, form the eigensubspace Sλ = Ker(T − λI).
The following properties hold: 1) to an eigenvector of T there corresponds a single eigenvalue; 2) eigenvectors corresponding to distinct eigenvalues are linearly independent; 3) Sλ is a vector subspace of V, invariant under T.
Proof. 1) Let x ≠ 0 be an eigenvector corresponding to the eigenvalue λ ∈ K. If there were another eigenvalue λ′ ∈ K corresponding to the same eigenvector, with T(x) = λ′x, it would result λx = λ′x ⟹ (λ − λ′)x = 0 ⟹ λ = λ′.
2) Let x₁, x₂, …, xₚ be eigenvectors corresponding to the distinct eigenvalues λ₁, λ₂, …, λₚ. We show by induction on p the linear independence of the considered vectors. For p = 1 and x₁ ≠ 0 (being an eigenvector), the set {x₁} is linearly independent. Supposing the property true for p − 1 vectors, we show it holds for p eigenvectors. Applying the endomorphism T to the relation k₁x₁ + k₂x₂ + … + kₚxₚ = 0, we obtain k₁λ₁x₁ + k₂λ₂x₂ + … + kₚλₚxₚ = 0. Subtracting the first relation multiplied by λₚ from the second, we obtain k₁(λ₁ − λₚ)x₁ + … + kₚ₋₁(λₚ₋₁ − λₚ)xₚ₋₁ = 0. The inductive hypothesis gives k₁ = k₂ = … = kₚ₋₁ = 0, and using this in the relation k₁x₁ + … + kₚ₋₁xₚ₋₁ + kₚxₚ = 0 we obtain kₚxₚ = 0 ⟹ kₚ = 0, that is, x₁, x₂, …, xₚ are linearly independent.
3) For any x, y ∈ Sλ and α, β ∈ K we have T(αx + βy) = αT(x) + βT(y) = αλx + βλy = λ(αx + βy), which means that Sλ is a vector subspace of V. For x ∈ Sλ we have T(x) = λx ∈ Sλ; consequently T(Sλ) ⊆ Sλ.
Proof. Let λ₁, λ₂ ∈ σ(T), λ₁ ≠ λ₂. Suppose there exists a vector x ∈ Sλ₁ ∩ Sλ₂, x ≠ 0. Then λ₁x = T(x) = λ₂x, hence (λ₁ − λ₂)x = 0 and λ₁ = λ₂, a contradiction; therefore eigensubspaces corresponding to distinct eigenvalues have only the null vector in common.
The matrix equation AX = λX can be written in the form (A − λI)X = 0 and is equivalent to a system of linear homogeneous equations, which admits solutions different from the trivial one if and only if
P(λ) = det(A − λI) = 0.
The polynomial P(λ) expands as
P(λ) = (−1)ⁿ[λⁿ − d₁λⁿ⁻¹ + … + (−1)ⁿdₙ], (3.4)
where dᵢ is the sum of the principal minors of order i of the matrix A.
Remarks. 1. The solutions of the characteristic equation det(A − λI) = 0 are the eigenvalues of the matrix A. If the field K is algebraically closed, then all roots of the characteristic equation lie in K, and the corresponding eigenvectors lie in the K-vector space Kⁿ. If K is not algebraically closed, e.g. K = R, the characteristic equation may also have complex roots, and the corresponding eigenvectors will belong to the complexified real vector space.
2. For any real symmetric matrix it can be proven that the eigenvalues are real.
3. Two similar matrices have the same characteristic polynomial. Indeed, if A and A′ are similar, A′ = C⁻¹AC with C nondegenerate, then
P′(λ) = det(A′ − λI) = det(C⁻¹AC − λI) = det[C⁻¹(A − λI)C] = det(C⁻¹)·det(A − λI)·det C = det(A − λI) = P(λ).
If A ∈ Mn(K) and P(x) = a₀xⁿ + a₁xⁿ⁻¹ + … + aₙ ∈ K[X], then the polynomial P(A) = a₀Aⁿ + a₁Aⁿ⁻¹ + … + aₙI is named a matrix polynomial.
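A small numerical sketch (arbitrary 2×2 matrix) of the characteristic polynomial and of the eigenvalue/eigenvector equations above:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

# Coefficients of det(A - lambda*I): here lambda^2 - 7*lambda + 10.
coeffs = np.poly(A)
print(np.roots(coeffs))            # eigenvalues: 5.0 and 2.0

# Eigenvectors: nontrivial solutions of (A - lambda*I) X = 0.
vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))    # True for each pair
```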
Theorem (Hamilton-Cayley). Every square matrix is a root of its own characteristic polynomial: P(A) = 0.
Proof. Let us consider P(λ) = det(A − λI) = a₀λⁿ + a₁λⁿ⁻¹ + … + aₙ. The adjugate of the matrix A − λI is a matrix polynomial of the form
(A − λI)* = Bₙ₋₁λⁿ⁻¹ + Bₙ₋₂λⁿ⁻² + … + B₁λ + B₀, Bᵢ ∈ Mn(K),
and satisfies the relation (A − λI)(A − λI)* = P(λ)I, hence
(A − λI)(Bₙ₋₁λⁿ⁻¹ + Bₙ₋₂λⁿ⁻² + … + B₁λ + B₀) = (a₀λⁿ + a₁λⁿ⁻¹ + … + aₙ)I.
By identifying the coefficients of the powers of λ, multiplying the resulting relations by Aⁿ, Aⁿ⁻¹, …, I respectively and adding them, we obtain P(A) = 0.
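A sketch verifying the Hamilton-Cayley relation for the same arbitrary matrix as above, and using it to compute the inverse:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
I = np.eye(2)

# Characteristic polynomial of A: lambda^2 - 7*lambda + 10.
# Hamilton-Cayley: A^2 - 7A + 10I = 0.
print(np.allclose(A @ A - 7 * A + 10 * I, 0))    # True

# Multiplying by A^{-1}: A - 7I + 10*A^{-1} = 0, hence
A_inv = (7 * I - A) / 10
print(np.allclose(A_inv @ A, I))                 # True
```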
Now let us consider an n-dimensional K-vector space Vₙ, a base B, and denote by A ∈ Mn(K) the matrix associated to the endomorphism T with regard to this base. The equation T(x) = λx is then equivalent to (A − λI)X = 0. The eigenvalues of the endomorphism T, if they exist, are the roots of the polynomial P(λ) in the field K, and the eigenvectors of T are the solutions of the matrix equation (A − λI)X = 0. Because of the invariance of the characteristic polynomial under a change of base in Vₙ, P(λ) depends only on the endomorphism T and not on the matrix representation of T in a given base. Therefore the names characteristic polynomial of the endomorphism T, for P(λ), and characteristic equation of the endomorphism T, for det(A − λI) = 0, are well justified.
4. An Endomorphism's Canonical Form
Let us consider the endomorphism T: Vₙ → Vₙ defined on the n-dimensional K-vector space Vₙ. If we consider two bases B and B′ of the vector space Vₙ and denote by A and A′ the matrices associated to the endomorphism T with respect to these bases, then A′ = W⁻¹AW, where W represents the passage matrix from base B to base B′. Knowing that the matrix associated to an endomorphism depends on the chosen base of the vector space Vₙ, we will determine that particular base with regard to which the associated matrix has the simplest, diagonal form.
An endomorphism T ∈ End(Vₙ) is called diagonalizable if there exists a base of Vₙ with respect to which its associated matrix is diagonal; this happens if and only if there exists a base of Vₙ formed of eigenvectors of T.
Proof. If T is diagonalizable, then there exists a base B′ = {e′₁, …, e′ₙ} with respect to which the associated matrix A′ = (a′ᵢⱼ) has diagonal form, meaning a′ᵢⱼ = 0 for i ≠ j. The action of T on the elements of base B′ is given by the relations T(e′ᵢ) = a′ᵢᵢe′ᵢ, i = 1, …, n, hence the vectors of B′ are eigenvectors of T.
Conversely, let {v₁, …, vₙ} be a base of Vₙ formed only of eigenvectors, that is T(vᵢ) = λᵢvᵢ, i = 1, …, n. From these relations we can construct the matrix associated to T in this base:
D = diag(λ₁, λ₂, …, λₙ),
where the scalars λᵢ ∈ K are not necessarily distinct.
In the context of the previous theorem, the matrices from the similarity class corresponding to a diagonalizable endomorphism T, obtained for the different bases of the vector space Vₙ, are called diagonalizable.
An eigenvalue λ ∈ K, as a root of the characteristic equation P(λ) = 0, has a multiplicity order named its algebraic multiplicity, and the dimension dim Sλ of the corresponding eigensubspace is named the geometric multiplicity of the eigenvalue λ.
The geometric multiplicity of an eigenvalue never exceeds its algebraic multiplicity.
Proof. Let the eigenvalue λ₀ ∈ K have the algebraic multiplicity m ≤ n, and let p = dim Sλ₀ be the dimension of the corresponding eigensubspace. If p = n, then we have n linearly independent eigenvectors, therefore a base of Vₙ with respect to which the matrix associated to the endomorphism T has diagonal form, its main diagonal carrying the eigenvalue λ₀. In this case the characteristic polynomial is written in the form P(λ) = (−1)ⁿ(λ − λ₀)ⁿ, resulting in m = n = p.
If p < n, we complete a base {e₁, …, eₚ} of the eigensubspace Sλ₀ to a base of the vector space Vₙ. The action of the operator T on the elements of this base is given by T(eᵢ) = λ₀eᵢ, i = 1, …, p, and T(eⱼ) = Σᵢ aᵢⱼeᵢ for j = p + 1, …, n. With regard to this base, the characteristic polynomial of T has the form P(λ) = (λ₀ − λ)ᵖ·Q(λ), meaning that (λ − λ₀)ᵖ divides P(λ); therefore p ≤ m. q.e.d.
3.13 Theorem. An endomorphism T ∈ End(Vₙ) is diagonalizable if and only if all its eigenvalues belong to the field K and the geometric multiplicity of each eigenvalue equals its algebraic multiplicity, dim Sλᵢ = mᵢ.
Proof. If T is diagonalizable, then there is a base B ⊂ Vₙ formed only of eigenvectors, with regard to which the associated matrix has diagonal form. Under these conditions the characteristic polynomial is written in the form
P(λ) = (−1)ⁿ(λ − λ₁)^m₁(λ − λ₂)^m₂ … (λ − λₚ)^mₚ,
with λᵢ ∈ K the eigenvalues of T with the multiplicity orders mᵢ; each λᵢ appears on the diagonal mᵢ times, so dim Sλᵢ = mᵢ.
Conversely, if λᵢ ∈ K and dim Sλᵢ = mᵢ, i = 1, …, p, then by uniting the bases of the eigensubspaces Sλ₁, …, Sλₚ we obtain m₁ + … + mₚ = n linearly independent eigenvectors, that is, a base of Vₙ. Because the matrix A associated to T in this base is diagonal, we conclude that the endomorphism T is diagonalizable.
Practically, the steps required for diagonalizing an endomorphism T are the following (a numerical sketch follows this list):
1. We write the matrix A associated to the endomorphism T with regard to a given base of the vector space Vₙ.
2. We solve the characteristic equation det(A − λI) = 0, determining the eigenvalues λ₁, …, λₚ with the corresponding multiplicity orders m₁, m₂, …, mₚ.
3. We use the result of Theorem 3.13 and distinguish the following cases:
I) If λᵢ ∈ K, i = 1, …, p:
a) If dim Sλᵢ = mᵢ for every i, then T is diagonalizable. We can check the result by constructing the matrix T having as columns the coordinates of the eigenvectors (the diagonalizing matrix); it represents the passage matrix from the base considered initially to the base formed of eigenvectors, and the matrix associated to T with regard to the latter base is the diagonal matrix D = T⁻¹AT.
b) If there is some λᵢ ∈ K such that dim Sλᵢ < mᵢ, then T is not diagonalizable.
II) If some λᵢ ∉ K, then T is not diagonalizable. The problem of T's diagonalization can be reconsidered only if the K-vector space V is regarded as a vector space over an extension of the field K.
Let Vₙ be a K-vector space and T ∈ End(Vₙ) an endomorphism defined on Vₙ. If A ∈ Mn(K) is the matrix associated to T with regard to a base of Vₙ, then A can be diagonalized only if the conditions of Theorem 3.13 are fulfilled.
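A minimal sketch of the diagonalization procedure for an arbitrary 2×2 example with distinct eigenvalues:

```python
import numpy as np

A = np.array([[5., 4.],
              [1., 2.]])

# Step 2: eigenvalues (here 6 and 1, both simple) and eigenvectors.
vals, vecs = np.linalg.eig(A)

# Step 3 I a): the diagonalizing matrix has the eigenvectors as columns;
# D = T^{-1} A T is diagonal, with the eigenvalues on the main diagonal.
T = vecs
D = np.linalg.inv(T) @ A @ T
print(np.round(D, 10))          # diag(6, 1), up to rounding
```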
If the eigenvalues of T belong to the field K, λᵢ ∈ K, but for some of them the geometric multiplicity is different from the algebraic multiplicity, dim Sλᵢ < mᵢ, then T is not diagonalizable, and we look instead for the Jordan canonical form. For λ ∈ K, the square matrices having λ on the main diagonal, the entry 1 directly above the main diagonal and 0 elsewhere are named Jordan cells attached to the scalar λ, of order 1, 2, 3, …, n.
A Jordan cell of order p attached to the eigenvalue λ ∈ K, whose algebraic multiplicity is m ≥ p, corresponds to the linearly independent vectors e₁, e₂, …, eₚ which satisfy the following relations:
T(e₁) = λe₁
T(e₂) = λe₂ + e₁
T(e₃) = λe₃ + e₂
. . . . . . . . . . .
T(eₚ) = λeₚ + eₚ₋₁
The vector e₁ is an eigenvector, and the vectors e₂, e₃, …, eₚ are called principal vectors.
Remarks. 1. The diagonal form of a diagonalizable endomorphism is a particular case of the Jordan canonical form, having all Jordan cells of order one.
2. The Jordan canonical form is not unique: the order of the Jordan cells on the main diagonal depends on the chosen order of the principal vectors and eigenvectors in the given base. However, the number of Jordan cells, equal to the number of linearly independent eigenvectors, and their orders are uniquely determined.
The following theorem can be proven:
Practically, for determining the Jordan canonical form we proceed as follows (a sketch follows this list):
1. We write the matrix A associated to the endomorphism T with regard to the given base.
2. We solve the characteristic equation det(A − λI) = 0, determining the eigenvalues λ₁, …, λₚ with their multiplicity orders m₁, …, mₚ.
3. We find the eigensubspaces Sλᵢ.
4. We calculate the number of Jordan cells, separately for each eigenvalue λᵢ, given by dim Sλᵢ.
5. For the eigenvalues with dim Sλᵢ < mᵢ we determine the corresponding principal vectors by solving the systems (A − λᵢI)X₁ = v, …, (A − λᵢI)Xₚ = Xₛ. Keeping in mind the compatibility conditions and the general form of the eigenvectors and of the principal vectors, we determine, by giving arbitrary values to the parameters, the mᵢ linearly independent vectors associated to λᵢ.
6. We write the base of the vector space Vₙ by uniting the systems of mᵢ linearly independent vectors, i = 1, …, p.
7. Using the matrix T having as columns the coordinates of the base vectors (the eigenvectors followed by their associated principal vectors, in this order), we obtain the matrix J = T⁻¹AT, the Jordan canonical form, which contains on the main diagonal the Jordan cells, in the order in which the corresponding eigenvectors and principal vectors (if they exist) appear in the constructed base. Each Jordan cell has order equal to the number of vectors in the system formed by an eigenvector and its associated principal vectors.
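A minimal sketch of this procedure, using sympy's jordan_form on a matrix that is not diagonalizable (the eigenvalue 2 has algebraic multiplicity 2 but a one-dimensional eigensubspace):

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [0, 2]])

# jordan_form returns T and J with A = T * J * T^{-1},
# i.e. J = T^{-1} A T as in step 7.
T, J = A.jordan_form()
sp.pprint(J)                  # a single Jordan cell of order 2 for lambda = 2
print(T.inv() * A * T == J)   # True
```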
5. Linear Transformations on Euclidean Vector Spaces
The properties of linear transformations defined on an arbitrary vector space hold on Euclidean vector spaces as well. The scalar product that defines the Euclidean structure permits the introduction of particular classes of linear transformations.
5.1. Orthogonal transformations
Let V and W be two Euclidean R-vector spaces. Without danger of confusion, we denote the scalar products on the two vector spaces by the same symbol ⟨ , ⟩. A linear transformation T: V → W is called orthogonal if it preserves the scalar product, ⟨T(x), T(y)⟩ = ⟨x, y⟩, ∀x, y ∈ V.
Examples. 1. The identity transformation T: V → V, T(x) = x, is an orthogonal transformation.
2. The transformation T: V → V which associates to each vector x ∈ V its opposite, T(x) = −x, is an orthogonal transformation.
A linear transformation T is orthogonal if and only if it preserves the norm, ‖Tx‖ = ‖x‖.
Proof. If T is orthogonal, then ⟨Tx, Ty⟩ = ⟨x, y⟩, which for x = y becomes ⟨Tx, Tx⟩ = ⟨x, x⟩ ⟹ ‖Tx‖² = ‖x‖² ⟹ ‖Tx‖ = ‖x‖.
Conversely, using the relation ⟨a, b⟩ = ½(‖a + b‖² − ‖a‖² − ‖b‖²), we obtain
⟨Tx, Ty⟩ = ½(‖Tx + Ty‖² − ‖Tx‖² − ‖Ty‖²) = ½(‖T(x + y)‖² − ‖Tx‖² − ‖Ty‖²) = ½(‖x + y‖² − ‖x‖² − ‖y‖²) = ⟨x, y⟩.
Any orthogonal transformation is injective.
Proof. If T is an orthogonal transformation, then ‖Tx‖ = ‖x‖, and under the hypothesis Tx = 0 it results that ‖x‖ = 0 ⟹ x = 0. Consequently the kernel is Ker T = {0}, meaning T is injective. Using Consequence 4.3 we find that the image of a linearly independent system under an orthogonal transformation is also a linearly independent system.
Indeed, d(Tx, Ty) = ‖Tx − Ty‖ = ‖T(x − y)‖ = ‖x − y‖ = d(x, y), so an orthogonal transformation also preserves distances. Any orthogonal transformation T different from the identity has T(0) = 0 as its only fixed point; otherwise T coincides with the identity transformation.
If W = V and T₁, T₂ are two orthogonal transformations on V, then their composition is also an orthogonal transformation. If T: V → V is also surjective, then T is invertible; moreover, T⁻¹ is an orthogonal transformation. Under these conditions, the set of all bijective orthogonal transformations of the Euclidean vector space V forms a group with respect to the operation of composition (product) of orthogonal transformations, named the orthogonal group of the Euclidean vector space V and denoted GO(V); it is a subgroup of the linear group GL(V).
Let us consider, in the finite-dimensional Euclidean vector spaces Vₙ and Wₘ, the orthonormal bases B = {e₁, …, eₙ} and B′ = {f₁, …, fₘ} respectively. With regard to these orthonormal bases, the transformation T is characterized by the matrix A = (aᵢⱼ),
T(eⱼ) = Σᵢ aᵢⱼfᵢ.
The bases B and B′ being orthonormal, ⟨eᵢ, eⱼ⟩ = δᵢⱼ, i, j = 1, …, n, and ⟨fᵢ, fⱼ⟩ = δᵢⱼ, i, j = 1, …, m. Let us evaluate the scalar product of the images of the vectors of B:
⟨T(eᵢ), T(eⱼ)⟩ = ⟨Σₖ aₖᵢfₖ, Σₗ aₗⱼfₗ⟩ = Σₖ aₖᵢaₖⱼ.
Using T's orthogonality property, ⟨T(eᵢ), T(eⱼ)⟩ = δᵢⱼ, we have
ᵗA·A = Iₙ. (5.3)
Conversely, if T is a linear transformation characterized, with regard to the orthonormal bases B and B′, by a matrix A satisfying ᵗA·A = Iₙ, let x = (x₁, x₂, …, xₙ) and y = (y₁, y₂, …, yₙ) be two vectors of the space Vₙ. Calculating the scalar product of the images of these vectors,
⟨T(x), T(y)⟩ = ⟨Σⱼ xⱼT(eⱼ), Σₖ yₖT(eₖ)⟩ = ΣⱼΣₖ xⱼyₖ⟨T(eⱼ), T(eₖ)⟩ = Σⱼ xⱼyⱼ = ⟨x, y⟩,
meaning T is an orthogonal transformation. We have thus proven the following theorem:
A linear transformation T is orthogonal if and only if its associated matrix A, with regard to a pair of orthonormal bases, satisfies ᵗA·A = Iₙ (A is an orthogonal matrix).
Taking into consideration that det ᵗA = det A, it results that if A is an orthogonal matrix, then det A = ±1. The subset of orthogonal matrices having the property det A = 1 forms a subgroup, denoted SO(n; R), of the group of orthogonal matrices of order n, named the special orthogonal group. An orthogonal transformation characterized by a matrix A ∈ SO(n; R) is also called a rotation.
Remarks. 1. Since an orthogonal transformation is injective, if B ⊂ Vₙ is a base, then its image through the orthogonal transformation T: Vₙ → Wₘ is a base of Im T.
2. Any orthogonal transformation between two Euclidean vector spaces of the same dimension is an isomorphism of Euclidean vector spaces.
3. The matrix associated to the composition of two orthogonal transformations T₁, T₂: Vₙ → Vₙ with regard to an orthonormal base is the product of the matrices associated to the transformations T₁ and T₂. Thus, the orthogonal group GO(Vₙ) of the Euclidean vector space Vₙ is isomorphic with the multiplicative group GO(n; R) of the orthogonal matrices of order n.
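A minimal numerical sketch of relation (5.3) for a plane rotation, an element of SO(2; R):

```python
import numpy as np

phi = np.pi / 6
A = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

print(np.allclose(A.T @ A, np.eye(2)))       # tA A = I_n, relation (5.3)
print(np.isclose(np.linalg.det(A), 1.0))     # det A = +1: a rotation

# The scalar product (hence norms and distances) is preserved:
x, y = np.array([1., 2.]), np.array([-3., 0.5])
print(np.isclose((A @ x) @ (A @ y), x @ y))  # True
```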
5.2 Symmetric linear transformations
Let V and W be two Euclidean R-vector spaces and T: V → W a linear transformation. An endomorphism T: V → V is called symmetric if ⟨T(x), y⟩ = ⟨x, T(y)⟩ and antisymmetric if ⟨T(x), y⟩ = −⟨x, T(y)⟩, for all x, y ∈ V.
Let Vₙ be a finite-dimensional Euclidean R-vector space and B ⊂ Vₙ an orthonormal base. If the endomorphism T: Vₙ → Vₙ is characterized by the real matrix A ∈ Mn(R), one can easily prove that to any symmetric (antisymmetric) endomorphism there corresponds a symmetric (antisymmetric) matrix with regard to an orthonormal base.
The eigenvalues of a symmetric endomorphism (of a real symmetric matrix) are real.
Proof. Let λ be an arbitrary root of the characteristic equation det(A − λI) = 0 and X = ᵗ(x₁, x₂, …, xₙ) an eigenvector corresponding to the eigenvalue λ, AX = λX. Denoting by X̄ the conjugate of X and multiplying the relation AX = λX on the left by ᵗX̄, we obtain ᵗX̄AX = λ·ᵗX̄X. Since A is symmetric and real, the first member of the equality is real, because it coincides with its own conjugate. Also, ᵗX̄X > 0 is a real number, and λ is the quotient of two real numbers, hence it is real.
Let the eigenvector v₁ ∈ Vₙ correspond to the eigenvalue λ₁ ∈ R, let S₁ be the subspace generated by v₁ and Vₙ₋₁ ⊂ Vₙ its orthogonal complement, Vₙ = S₁ ⊕ Vₙ₋₁.
The orthogonal complement Vₙ₋₁ is invariant under the symmetric endomorphism T.
Proof. If v₁ ∈ Vₙ is an eigenvector corresponding to the eigenvalue λ₁, then T(v₁) = λ₁v₁. For v ∈ Vₙ₋₁ the relation ⟨v₁, v⟩ = 0 is satisfied. Let us prove that T(v) ∈ Vₙ₋₁:
⟨v₁, T(v)⟩ = ⟨T(v₁), v⟩ = ⟨λ₁v₁, v⟩ = λ₁⟨v₁, v⟩ = 0.
Based on this result the next theorem can be easily proven.
Proof. Let T(v) = λᵢv and T(w) = λⱼw, with λᵢ ≠ λⱼ. Then ⟨w, T(v)⟩ = λᵢ⟨w, v⟩ and ⟨T(w), v⟩ = λⱼ⟨w, v⟩. Since T is symmetric, we have (λⱼ − λᵢ)⟨w, v⟩ = ⟨T(w), v⟩ − ⟨w, T(v)⟩ = 0. Thus λⱼ ≠ λᵢ ⟹ ⟨w, v⟩ = 0. q.e.d.
Indeed, the first m₁ vectors belong to the eigensubspace Sλ₁, the next m₂ to Sλ₂, and so on. With regard to the orthonormal base formed by these eigenvectors, the endomorphism T: Vₙ → Vₙ has canonical form. The matrix associated to the endomorphism T in this base is diagonal, having on the main diagonal the eigenvalues, each written as many times as its multiplicity order indicates; it is expressed in terms of the matrix A associated to the endomorphism T in the orthonormal base B through the relation D = ᵗW·A·W, where W is the orthogonal passage matrix from base B to the base formed of eigenvectors.
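A short sketch for an arbitrary real symmetric matrix: numpy's eigh returns real eigenvalues and an orthogonal matrix of eigenvectors, realizing D = ᵗW·A·W:

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 2., 0.],
              [0., 0., 3.]])       # real symmetric

vals, W = np.linalg.eigh(A)        # real eigenvalues, orthonormal eigenvectors
print(vals)                        # [1. 3. 3.]
print(np.allclose(W.T @ W, np.eye(3)))           # W is orthogonal
print(np.allclose(W.T @ A @ W, np.diag(vals)))   # D = tW A W
```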
5.3 Isometric Transformations on Punctual Euclidean Spaces
Let E = (E, V, φ) be a punctual Euclidean space, E being the support set, V the director vector space and φ the affine structure function. If E is endowed with a certain geometrical structure, and a map f: E → E satisfies certain conditions referring to this structure, then f is named a geometrical transformation. Denoting by S(E) the group of transformations of the set E with regard to the operation of composition of functions, and by G a subgroup of it, the pair (E, G) is named a geometric space, or a space with a fundamental group. A subset of points F ⊂ E is named a figure of the geometric space (E, G).
Two figures F₁, F₂ ⊂ E are called congruent if there is a transformation g ∈ G such that g(F₁) = F₂.
Let E₂ = (E₂, V₂, φ) be a two-dimensional punctual Euclidean space. The map rα: E₂ → E₂, rα(P) = P′, with the following properties: d(O, P) = d(O, P′) and ∠POP′ = α, is called the rotation of center O and angle α. The associated linear transformation T: V₂ → V₂ rotates the corresponding position vectors by the angle α.
In the geometry of Euclidean spaces we are foremost interested in those geometrical transformations which preserve certain properties of the figures in the considered space. In other words, we will consider certain subgroups of the affinity group of the Euclidean space which govern these transformations.
A map f: E₃ → E₃ which preserves the distance between points, d(f(A), f(B)) = d(A, B) for any points A, B ∈ E₃, is called an isometry. If the punctual space E₃ is the space of Euclidean geometry, then an isometry f: E₃ → E₃ is a bijective map, hence a geometrical transformation, named an isometric transformation. The set of isometric transformations forms, with regard to the operation of composition of maps, a subgroup Izo E, called the group of isometries.
The geometrical transformations given in the previous examples (central symmetry, axial symmetry, translation and rotation) are isometric transformations.
Let us consider the Euclidean plane PE = (E₂, IzoE₂), a frame R = (O; i, j), and g: E₂ → E₂ an isometry with O as a fixed point. Consider the point A(1, 0) and an arbitrary point M(x, y), having the images A′(a, b) and M′(x′, y′) respectively. From the conditions d(O, A) = d(O, A′), d(O, M) = d(O, M′) and d(A, M) = d(A′, M′) we obtain the equation systems
a² + b² = 1, x′² + y′² = x² + y², (x′ − a)² + (y′ − b)² = (x − 1)² + y².
The general solution is:
x′ = ax − εby, y′ = bx + εay, a² + b² = 1, ε = ±1. (5.8)
Reciprocally, the formulae (5.8) represent an isometry. Indeed, if M₁ and M₂ are two arbitrary points and M₁′, M₂′ their images, then
d(M₁′, M₂′)² = (x₂′ − x₁′)² + (y₂′ − y₁′)² = (a² + b²)[(x₂ − x₁)² + (y₂ − y₁)²] = d(M₁, M₂)²,
hence d(M₁′, M₂′) = d(M₁, M₂).
The fixed points of the isometry characterized by the equations (5.8) are obtained by setting x′ = x and y′ = y in (5.8), hence
(a − 1)x − εby = 0, bx + (εa − 1)y = 0. (5.9)
The system (5.9) is a system of linear homogeneous equations and has the determinant D = (ε + 1)(1 − a). If ε = −1, the system (5.9) admits an infinity of fixed points, hence the equations (5.8) represent an axial symmetry. If ε = 1 and a ≠ 1, the isometry (5.8) admits only the origin as a fixed point and represents a rotation, and for ε = 1 and a = 1 the map (5.8) becomes the identity transformation.
The equations (5.8) can be written in matrix form, with the associated matrix
A = ( a  −εb
      b   εa ),
which is orthogonal. For ε = 1 (considering the identity map as the rotation of angle α = 0), the orthogonal matrix A has the property det A = 1, which means that the subgroup of rotations of the Euclidean plane is isomorphic with the special orthogonal group SO(2; R).
By composing the isometries having the origin as a fixed point with the translations which transform the origin into the point (x₀, y₀), we obtain the isometries of the Euclidean plane PE = (E₂, IzoE₂), characterized analytically by the equations:
x′ = ax − εby + x₀, y′ = bx + εay + y₀. (5.10)
Using the trigonometrical functions, with a = cos φ and b = sin φ, φ ∈ R, the equations (5.10) are written in the following form:
x′ = x cos φ − εy sin φ + x₀, y′ = x sin φ + εy cos φ + y₀.
For ε = 1, the transformations characterized by these equations, meaning those isometries obtained by composing a rotation and a translation (the movements of the plane), form a subgroup called the subgroup of movements.
In the three-dimensional punctual Euclidean space E = (E₃, V₃, φ), the isometries T: E₃ → E₃, T(x₁, x₂, x₃) = (y₁, y₂, y₃), are the transformations characterized by the equations yᵢ = Σⱼ aᵢⱼxⱼ + bᵢ, where the matrix A = (aᵢⱼ), i, j = 1, 2, 3, is orthogonal.
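A small sketch of equations (5.8)/(5.10), checking numerically that distances are preserved (the angle, the translation and the test points are arbitrary):

```python
import numpy as np

def isometry(x, y, a, b, eps, x0=0.0, y0=0.0):
    # Equations (5.10): eps = +1 gives a rotation composed with a
    # translation, eps = -1 an axial symmetry (here a^2 + b^2 = 1).
    return a * x - eps * b * y + x0, b * x + eps * a * y + y0

a, b = np.cos(np.pi / 4), np.sin(np.pi / 4)
M1, M2 = (1.0, 2.0), (4.0, -1.0)
P1 = isometry(*M1, a, b, eps=1.0, x0=3.0, y0=-2.0)
P2 = isometry(*M2, a, b, eps=1.0, x0=3.0, y0=-2.0)

d  = np.hypot(M2[0] - M1[0], M2[1] - M1[1])
d_ = np.hypot(P2[0] - P1[0], P2[1] - P1[1])
print(np.isclose(d, d_))    # True: d(M1', M2') = d(M1, M2)
```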
Proposed problems
1. Find out which of the following maps are linear transformations: …
2. Show that the map T, with … fixed, is linear. Determine Ker T, Im T and verify the rank theorem.
3. Show that R³ = …
9. Show that the transformation … A transformation with this property is called a tangent structure.
10. Show that the transformation T: R²ⁿ → R²ⁿ defined by the relation T(x) = (xₙ₊₁, xₙ₊₂, …, x₂ₙ, −x₁, −x₂, …, −xₙ) has the property T² = −Id. A transformation with this property is called a complex structure.
11. Determine the linear transformation which maps the point (x₁, x₂, x₃) ∈ R³ to its symmetric with respect to the plane x₁ + x₂ + x₃ = 0, and show that the newly determined transformation is orthogonal.
12. Determine the eigenvectors and the eigenvalues of the endomorphism T: R³ → R³ characterized by the matrix A = …
13. Study the possibility of reduction to the canonical form and, where possible, find the diagonalizing matrix for the endomorphisms with the following associated matrices: A = …
14. Determine the Jordan canonical form for the following matrices: A = …
15. Determine an orthonormal base of the space R³ in which the endomorphism T: R³ → R³, T(x) = (−x₁ + x₂ − 4x₃, 2x₁ − 4x₂ − 2x₃, −4x₁ − 2x₂ − x₃), admits the canonical form.
16. Using the Hamilton-Cayley theorem, calculate A⁻¹ and P(A), where P(x) = x⁴ + x³ + x² + x + 1, for the following matrices: A = …