

Sunday, November 1, 2015

Linear Algebra MTH501 Lectures 31 to 45



Linear Algebra (MTH501)
VIRTUAL UNIVERSITY OF PAKISTAN
World Class Education at your Doorstep

LECTURE NOTES

Composed by:
IRFAN MUGHAL, BSc Computer Science

For more details, visit:
http://awesome2016.blogspot.ae
Lecture # 31
Eigenvalues and Eigenvectors

Eigenvectors and Linear Transformations
Goal: to understand the matrix factorization A = PDP^{-1} as a statement about linear transformations. (A transformation is the same thing as a mapping.)
The Matrix of a Linear Transformation
Let V be an n-dimensional vector space, W an m-dimensional vector space, and T a linear transformation from V to W. To associate a matrix with T, we choose bases B and C for V and W.

 
Given any x in V, the coordinate vector [x]_B is in R^n, and [T(x)]_C, the coordinate vector of its image, is in R^m.







The Matrix of a Linear Transformation
Let {b_1, …, b_n} be the basis B for V.
If x = r_1 b_1 + … + r_n b_n, then [x]_B = (r_1, …, r_n) and, by linearity,
T(x) = r_1 T(b_1) + … + r_n T(b_n),  so  [T(x)]_C = r_1 [T(b_1)]_C + … + r_n [T(b_n)]_C.
This equation can be written as
[T(x)]_C = M [x]_B,  where  M = [ [T(b_1)]_C  [T(b_2)]_C  …  [T(b_n)]_C ].
The matrix M is the matrix representation of T, called the matrix for T relative to the bases B and C.






Example 1
Suppose that B = {b_1, b_2} is a basis for V and C = {c_1, c_2, c_3} is a basis for W. Let T: V → W be a linear transformation whose values T(b_1) and T(b_2) are given as combinations of c_1, c_2, c_3.
Find the matrix M for T relative to B and C.
Linear Transformations from V into V
When W is the same as V and the basis C is the same as B, the matrix M is called the matrix for T relative to B, or simply the B-matrix for T.
The B-matrix of T: V → V satisfies [T(x)]_B = [T]_B [x]_B for all x in V.
Example 2
(a) Find the B-matrix for T, when B is the basis {1, t, t^2}.
(b) Verify that [T(p)]_B = [T]_B [p]_B for each p in P_2.
Linear Transformations on R^n
Theorem
Suppose A = PDP^{-1}, where D is a diagonal n x n matrix. If B is the basis for R^n formed from the columns of P, then D is the B-matrix of the transformation x ↦ Ax.
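A minimal NumPy sketch of this theorem; the matrices and vector below are illustrative assumptions, not the data from the lecture slides.

    import numpy as np

    P = np.array([[1.0, 1.0],
                  [1.0, -1.0]])          # columns of P form the basis B
    D = np.diag([3.0, 0.5])              # eigenvalues on the diagonal
    A = P @ D @ np.linalg.inv(P)         # A = P D P^{-1}

    x = np.array([2.0, 5.0])             # any vector in R^2
    x_B = np.linalg.solve(P, x)          # B-coordinates of x:   [x]_B = P^{-1} x
    Ax_B = np.linalg.solve(P, A @ x)     # B-coordinates of A x

    # The B-matrix of x -> Ax sends [x]_B to [Ax]_B, and it equals D:
    print(np.allclose(D @ x_B, Ax_B))    # True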


Example 3


Find a basis B for R^2 with the property that the B-matrix of T is a diagonal matrix.
Similarity of Matrix Representations

 
 











Similarity of two matrix representations: A = PCP^{-1}.
Lecture # 32
Eigenvalues and Eigenvectors
Complex Eigenvalues
Definition
A complex scalar λ satisfies det(A − λI) = 0 if and only if there is a nonzero vector x in C^n such that Ax = λx. We call λ a (complex) eigenvalue and x a (complex) eigenvector corresponding to λ.
Example 2
 
Find the Eigen-values of A, and find a basis for each Eigen-space.
Example 3

 
 













Real and Imaginary Parts of Vectors
Example 4
 
Note That
 
Eigenvalues and Eigenvectors of a Real Matrix That Acts on C^n

Examples
Example 7


 
 








Theorem
Let A be a real 2 x 2 matrix with complex eigenvalues λ = a − bi (b ≠ 0) and an associated eigenvector v in C^2. Then
A = PCP^{-1},  where  P = [Re v   Im v]  and  C = [[a, −b], [b, a]].
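A short numerical check of this factorization, with an assumed example matrix (not the one from the slides):

    import numpy as np

    A = np.array([[0.5, -0.6],
                  [0.75, 1.1]])              # assumed real matrix with complex eigenvalues

    eigvals, eigvecs = np.linalg.eig(A)
    lam = eigvals[0]                          # one complex eigenvalue
    v   = eigvecs[:, 0]                       # an associated complex eigenvector

    a, b = lam.real, -lam.imag                # write the eigenvalue as λ = a − bi
    P = np.column_stack([v.real, v.imag])     # P = [Re v  Im v]
    C = np.array([[a, -b],
                  [b,  a]])                   # rotation-scaling matrix

    print(np.allclose(P @ C @ np.linalg.inv(P), A))   # True: A = P C P^{-1}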
                                         
                                                                       



Lecture # 33
Eigenvalues and Eigenvectors
Discrete Dynamical Systems
Let a matrix A be diagonalizable, with n linearly independent eigenvectors v_1, …, v_n and corresponding eigenvalues λ_1, …, λ_n. For convenience, assume that the eigenvectors are arranged so that |λ_1| ≥ |λ_2| ≥ … ≥ |λ_n|. Since {v_1, …, v_n} is a basis for R^n, any initial vector x_0 can be written uniquely as
x_0 = c_1 v_1 + … + c_n v_n.
Since the v_i are eigenvectors,
x_1 = A x_0 = c_1 λ_1 v_1 + … + c_n λ_n v_n.
In general,
x_k = c_1 (λ_1)^k v_1 + … + c_n (λ_n)^k v_n   (k = 0, 1, 2, …).
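A small NumPy sketch of this eigenvector expansion for x_{k+1} = A x_k; the matrix and initial vector are assumptions chosen for illustration.

    import numpy as np

    A  = np.array([[0.95, 0.03],
                   [0.05, 0.97]])
    x0 = np.array([0.6, 0.4])

    lam, V = np.linalg.eig(A)        # columns of V are the eigenvectors v_1, ..., v_n
    c = np.linalg.solve(V, x0)       # weights: x0 = c_1 v_1 + ... + c_n v_n

    k = 10
    xk_direct = np.linalg.matrix_power(A, k) @ x0
    xk_eig    = V @ (c * lam**k)     # x_k = c_1 λ_1^k v_1 + ... + c_n λ_n^k v_n
    print(np.allclose(xk_direct, xk_eig))   # True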
 
Example 1
Observe
Example 2
Plot several trajectories of the dynamical system
            xk+1 = Axk, when
Example 3
Plot several typical solution of the equation
            xk+1 = Axk, when


Example 4
Plot several typical solution of the equation
            yk+1 = Dyk, where
Show that a solution {yk} is unbounded if its initial point is not on the x2-axis.
Change of Variables
Example 5
Show that the origin is a saddle point for solutions of          
xk+1 = Axk, where
Find the directions of greatest attraction and greatest repulsion.
Note
If A has two complex eigenvalues whose absolute value is greater than 1, then 0 is a repellor and iterates of x0 will spiral outward around the origin. If the absolute values of the complex eigenvalues are less than 1, the origin is an attractor and the iterates of x0 spiral inward toward the origin.
Example 6
Example 7
Suppose the survival rate of the young birds is 50%, and the stage-matrix A is as given.
What does the stage-matrix model predict about this bird?








Lecture # 34
Eigenvalues and Eigenvectors

System of Differential Equations
System as a Matrix Differential Equation
Superposition of Solutions
If u and v are solutions of x' = Ax, then cu + dv is also a solution, because
(cu + dv)' = cu' + dv' = cAu + dAv = A(cu + dv).
If A is n x n, then there are n linearly independent functions in a fundamental set, and each solution of x' = Ax is a unique linear combination of these n functions. That is, a fundamental set of solutions is a basis for the set of all solutions of x' = Ax.
Initial Value Problem
Example
The given system is said to be decoupled because the derivative of each function depends only on that function itself, not on some combination or "coupling" of both x_1(t) and x_2(t). The solutions of the equations are x_1(t) = c_1 e^{3t} and x_2(t) = c_2 e^{−5t}, for any constants c_1 and c_2.
Each solution of the given system can therefore be written in the form
x(t) = c_1 e^{3t} [1, 0]^T + c_2 e^{−5t} [0, 1]^T.
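A sketch of the general eigenfunction solution x(t) = c_1 e^{λ_1 t} v_1 + c_2 e^{λ_2 t} v_2 in NumPy; the matrix and initial vector below are assumed for illustration (they are not the circuit data of Example 1).

    import numpy as np

    A  = np.array([[1.0, 2.0],
                   [2.0, 1.0]])        # eigenvalues 3 and -1
    x0 = np.array([5.0, 4.0])

    lam, V = np.linalg.eig(A)
    c = np.linalg.solve(V, x0)          # x0 = c_1 v_1 + c_2 v_2

    def x(t):
        return V @ (c * np.exp(lam * t))

    # quick check that x(t) solves x' = A x: a finite-difference slope matches A x(t)
    t, h = 0.7, 1e-6
    slope = (x(t + h) - x(t - h)) / (2 * h)
    print(np.allclose(slope, A @ x(t), atol=1e-4))   # True (up to discretization error)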
Observe
Example 1
The circuit in figure 1 can be described by
where v1(t) and v2(t) are the voltages across the two capacitors at time t.

Suppose that resistor R1 is 2 ohms, R2 is 1 ohms, capacitor C1 is 2 farads, and capacitor C2 is 1 farad, and suppose that there is an initial charge of 5 volts on capacitor C1 and 4 volts on capacitor C2.
Find formulas for v1(t) and v2(t) that describe how the voltages change over time.
Example 2
Suppose a particle is moving in a planar force field and its position vector x satisfies x' = Ax and x(0) = x_0, where A and x_0 are given.
Solve this initial value problem and sketch the trajectory of the particle for t ≥ 0.

Decoupling a Dynamical System
Complex Eigenvalues

Example 3
where i_L is the current passing through the inductor L and v_C is the voltage drop across the capacitor C. Suppose R_1 is 5 ohms, R_2 is 0.8 ohm, C is 0.1 farad, and L is 0.4 henry.
Find formulas for iL and vc, if the initial current through the inductor is 3 amperes and the initial voltage across the capacitor is 3 volts.
Lecture # 35
Eigenvalues and Eigenvectors
Iterative Estimates for Eigenvalues
Power Method
Let a matrix A be diagonalizable, with n linearly independent eigenvectors v_1, …, v_n and corresponding eigenvalues λ_1, …, λ_n, arranged so that |λ_1| > |λ_2| ≥ … ≥ |λ_n| (so λ_1 is a strictly dominant eigenvalue).
Since {v_1, …, v_n} is a basis for R^n, any vector x can be written as x = c_1 v_1 + … + c_n v_n, and then A^k x = c_1 (λ_1)^k v_1 + … + c_n (λ_n)^k v_n.
Thus for large k, a scalar multiple of A^k x determines almost the same direction as the eigenvector c_1 v_1. Since positive scalar multiples do not change the direction of a vector, A^k x itself points almost in the same direction as v_1 or −v_1, provided c_1 ≠ 0.
Example 1
Then A has eigenvalues 2 and 1, and the eigenspace for the dominant eigenvalue 2 is the line through 0 and v_1.
For k = 0, …, 8, compute A^k x and construct the line through 0 and A^k x. What happens as k increases?
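A compact power-method sketch in NumPy. The matrix entries are an assumption chosen so that the eigenvalues are 2 and 1, matching Example 1.

    import numpy as np

    A = np.array([[1.8, 0.8],
                  [0.2, 1.2]])          # assumed matrix with dominant eigenvalue 2
    x = np.array([1.0, 1.0])

    for k in range(25):
        y  = A @ x
        mu = np.max(np.abs(y))          # scale factor: largest entry in absolute value
        x  = y / mu                     # keep the iterate from growing without bound

    print(mu)                           # approaches the dominant eigenvalue λ_1 = 2
    print(x)                            # approaches a corresponding eigenvector (up to scale)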
Example 2
Inverse Power Method
For almost all choices of x_0, the sequence {ν_k} approaches the eigenvalue λ of A closest to the chosen estimate α, and the sequence {x_k} approaches a corresponding eigenvector.
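A sketch of the inverse power method with a shift α; the matrix, shift, and starting vector below are assumptions for illustration.

    import numpy as np

    A     = np.array([[1.8, 0.8],
                      [0.2, 1.2]])
    alpha = 1.1                          # initial estimate of the eigenvalue we want
    x     = np.array([1.0, 0.0])

    for k in range(25):
        y  = np.linalg.solve(A - alpha * np.eye(2), x)   # solve (A - αI) y = x
        mu = y[np.argmax(np.abs(y))]                     # entry of y with largest magnitude
        nu = alpha + 1.0 / mu                            # eigenvalue estimate ν_k
        x  = y / mu

    print(nu)    # approaches the eigenvalue of A closest to α (here 1)
    print(x)     # approaches a corresponding eigenvector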
Example 4
How can you tell if a given vector x is a good approximation to an eigenvector of a matrix A; if it is, how would you estimate the corresponding eigenvalue?  
Experiment with
Revision (Segment V)
Definition
If A is an n x n matrix, then a scalar λ is called an eigenvalue of A if there is a nonzero vector x such that Ax = λx.
If λ is an eigenvalue of A, then every nonzero vector x such that Ax = λx is called an eigenvector of A corresponding to λ.
Theorem
If A is an n x n matrix and λ is a scalar, then the following statements are equivalent.
(i) λ is an eigenvalue of A.
(ii) λ is a solution of the characteristic equation det(A − λI) = 0.
(iii) The linear system (A − λI)x = 0 has nontrivial solutions.
Theorem
If A is a triangular matrix (upper triangular, lower triangular, or diagonal) then the eigenvalues of A are the entries on the main diagonal of A.
Theorem
If λ is an eigenvalue of a matrix A and x is a corresponding eigenvector, and if k is any positive integer, then λ^k is an eigenvalue of A^k and x is a corresponding eigenvector.
Characteristic Equation
Similarity
If A and B are n x n matrices, then A is similar to B if there is an invertible matrix P such that P^{-1}AP = B, or equivalently, A = PBP^{-1}.
Writing Q for P^{-1}, we have Q^{-1}BQ = A.
 So B is also similar to A, and we say simply that A and B are similar.
Theorem
If n x n matrices A and B are similar, then they have the same characteristic polynomial and hence the same eigenvalues (with the same multiplicities).

Diagonalization
Remark
A square matrix A is said to be diagonalizable if A is similar to a diagonal matrix, i.e. if
A = PDP^{-1} for some invertible matrix P and some diagonal matrix D.
Diagonalization Theorem
An n x n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
Theorem
An n x n matrix with n distinct eigenvalues is diagonalizable.
Eigenvectors and Linear Transformation
Matrix of LT
Let V and W be n-dimensional and m-dimensional vector spaces, and let T be a linear transformation from V to W. To associate a matrix with T, we choose bases B and C for V and W respectively.
Given any x in V, the coordinate vector [x]_B is in R^n, and [T(x)]_C, the coordinate vector of its image, is in R^m.

 
 







Connection between  [ x ]B and [T(x)]C
Let {b_1, …, b_n} be the basis B for V.
If x = r_1 b_1 + … + r_n b_n, then [T(x)]_C = r_1 [T(b_1)]_C + … + r_n [T(b_n)]_C.
This equation can be written as
[T(x)]_C = M [x]_B,  where  M = [ [T(b_1)]_C  …  [T(b_n)]_C ].
The matrix M is the matrix representation of T, called the matrix for T relative to the bases B and C.

 
 









LT from V into V
When W is the same as V and the basis C is the same as B, the matrix M is called the matrix for T relative to B or simply the B-matrix
Similarity of Matrix Representations

 
 











Similarity of two matrix representations: A = PCP^{-1}.
Complex Eigenvalues
Definition
A complex scalar λ satisfies det(A − λI) = 0
if and only if there is a nonzero vector x in C^n such that Ax = λx. We call λ a (complex) eigenvalue and x a (complex) eigenvector corresponding to λ.
Note That
                               
Discrete Dynamical System
Note
If A has two complex eigenvalues whose absolute value is greater than 1, then 0 is a repellor and iterates of x0 will spiral outward around the origin. If the absolute values of the complex eigenvalues are less than 1, the origin is an attractor and the iterates of x0 spiral inward toward the origin.
Applications to Differential Equations

System as a Matrix Differential Equation
Initial Value Problem
Observe















Lecture # 36
Eigenvalues and Eigenvectors

If A is an n x n matrix, then a scalar λ is called an eigenvalue of A if there is a nonzero vector x such that Ax = λx. If A is a triangular matrix, then the eigenvalues of A are the entries on the main diagonal of A. If λ is an eigenvalue of a matrix A and x is a corresponding eigenvector, and if k is any positive integer, then λ^k is an eigenvalue of A^k and x is a corresponding eigenvector.

Characteristic Equation
           
           
Similarity

If A and B are n x n matrices, then A is similar to B if there is an invertible matrix P such that P^{-1}AP = B, or equivalently, A = PBP^{-1}.
If n x n matrices A and B are similar, then they have the same characteristic polynomial and hence the same Eigenvalues.
Diagonalization
An n x n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
An n x n matrix with n distinct eigenvalues is diagonalizable.
Eigenvectors and Linear Transformation
Matrix of LT
Let V and W be n-dimensional and m-dimensional vector spaces, and let T be a linear transformation from V to W.
To associate a matrix with T, we choose bases B and C for V and W respectively.

Given any x in V, the coordinate vector [x]B  is in Rn and the [T(x)]C coordinate vector of its image, is in Rm



Connection between  [ x ]B and [T(x)]C
Let {b_1, …, b_n} be the basis B for V.
If x = r_1 b_1 + … + r_n b_n, then [T(x)]_C = r_1 [T(b_1)]_C + … + r_n [T(b_n)]_C.
This equation can be written as [T(x)]_C = M [x]_B, where M = [ [T(b_1)]_C … [T(b_n)]_C ].
The matrix M is the matrix representation of T, called the matrix for T relative to the bases B and C.

Similarity of Matrix Representations

Similarity of two matrix representations: A = PCP^{-1}.

Complex Eigenvalues
Definition
A complex scalar λ satisfies det(A − λI) = 0 if and only if there is a nonzero vector x in C^n such that Ax = λx. We call λ a (complex) eigenvalue and x a (complex) eigenvector corresponding to λ.
Note that
                                   
                                   
                                                           
Discrete Dynamical System
                       
Note
If A has two complex eigenvalues whose absolute value is greater than 1, then 0 is a repellor and iterates of x0 will spiral outward around the origin. If the absolute values of the complex eigenvalues are less than 1, the origin is an attractor and the iterates of x0 spiral inward toward the origin.
Applications to Differential Equations
System as a Matrix Differential Equation
                                   
Initial Value Problem
Observe
Revision (Segment III): Determinants
3 x 3 Determinant
Expansion
Minor of a Matrix
If A is a square matrix, then the minor of entry a_ij (called the ij-th minor of A) is denoted by M_ij and is defined to be the determinant of the submatrix that remains when the i-th row and j-th column of A are deleted.
Cofactor
The number
C_ij = (−1)^{i+j} M_ij
is called the cofactor of entry a_ij (or the ij-th cofactor of A).
Cofactor Expansion Across the First Row

Theorem
The determinant of an n x n matrix A can be computed by a cofactor expansion across any row or down any column.
The cofactor expansion across the i-th row is det A = a_i1 C_i1 + a_i2 C_i2 + … + a_in C_in.
The cofactor expansion down the j-th column is det A = a_1j C_1j + a_2j C_2j + … + a_nj C_nj.
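A deliberately naive Python sketch of cofactor expansion across the first row, purely to mirror the definition (it is O(n!) and only sensible for small matrices; the example matrix is an assumption).

    import numpy as np

    def det_cofactor(A):
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0.0
        for j in range(n):
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # delete row 0, column j
            total += (-1) ** j * A[0, j] * det_cofactor(minor)       # a_1j * C_1j
        return total

    A = [[1, 5, 0],
         [2, 4, -1],
         [0, -2, 0]]
    print(det_cofactor(A), np.linalg.det(np.array(A)))   # both are -2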
Theorem
If A is a triangular matrix, then det(A) is the product of the entries on the main diagonal of A.
Properties of Determinants
Theorem
Let A be a square matrix. If a multiple of one row of A is added to another row to produce a matrix B, then det B = det A. If two rows of A are  interchanged to produce B,
then det B = –det A. If one row of A is multiplied  by k to produce B, then
det B = k det A. 
Theorems
If A is an  n x n matrix, then
det AT = det A.
If A and B are  n x n matrices, then          
                   det (AB)=(det A )(det B)
Cramer's Rule, Volume, and Linear Transformations
Observe
For any n x n matrix A and any b in Rn, let Ai(b) be the matrix obtained from A by replacing column i by the vector b

Theorem (Cramer's Rule)
Let A be an invertible n x n matrix. For any b in R^n, the unique solution x of Ax = b has entries given by x_i = det A_i(b) / det A, for i = 1, 2, …, n.
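Cramer's rule as stated above, on a small assumed system (the entries of A and b are illustrative assumptions):

    import numpy as np

    A = np.array([[3.0, -2.0],
                  [-5.0, 4.0]])
    b = np.array([6.0, 8.0])

    detA = np.linalg.det(A)
    x = np.empty_like(b)
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                     # A_i(b): replace column i by b
        x[i] = np.linalg.det(Ai) / detA  # x_i = det A_i(b) / det A

    print(x, np.linalg.solve(A, b))      # the two answers agree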
Theorem
Let A be an invertible n x n matrix. Then A^{-1} = (1/det A) adj A, where adj A is the adjugate (classical adjoint) of A, the transpose of the matrix of cofactors.
Theorem
Let T: R^2 → R^2 be the linear transformation determined by a 2 x 2 matrix A. If S is a parallelogram in R^2, then
{area of T(S)} = |det A| · {area of S}.
If T is determined by a 3 x 3 matrix A, and if S is a parallelepiped in R^3, then
{volume of T(S)} = |det A| · {volume of S}.


Lecture # 37
Eigenvalues and Eigenvectors

Systems of Linear Equations
Examples of Linear Equations
Generally
An equation in n-variables or in n-unknowns
Linear System
A finite set of linear equations is called a system of linear equations or linear system. The variables in a linear system are called the unknowns.
Homogeneous Linear Equations
A linear equation is called homogeneous if its constant term is zero, i.e. if b = 0.
Theorem
Every system of linear equations has zero, one or infinitely many solutions; there are no other possibilities.
Row Reduction and Echelon Form

Echelon Form
- All nonzero rows are above any rows of all zeros.
- Each leading entry of a row is in a column to the right of the leading entry of the row above it.
- All entries in a column below a leading entry are zero.
Reduced Echelon Form
The leading entry in each nonzero row is 1.
Each leading 1 is the only non-zero entry in its column.
Theorem
Each matrix is row equivalent to one and only one reduced echelon matrix
Row Reduction Algorithm
Row Reduction Algorithm consists of four steps, and it produces a matrix in echelon form.
A fifth step produces a matrix in reduced echelon form.
STEP 1 
Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at the top.
STEP 2 
Select a nonzero entry in the pivot column as a pivot. If necessary, interchange rows to move this entry into the pivot position
STEP 3 
Use row operations to create zeros in all positions below the pivot
STEP 4 
Cover (ignore) the row containing the pivot position and cover all rows, if any, above it.
Apply steps 1–3 to the submatrix that remains.
Repeat the process until there are no more nonzero rows to modify.
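A compact Python sketch of the algorithm; for brevity this version scales each pivot to 1 and clears above and below it in the same pass, so the output is already in reduced echelon form. The test matrix is an assumption.

    import numpy as np

    def rref(M, tol=1e-12):
        A = np.array(M, dtype=float)
        rows, cols = A.shape
        pivot_row = 0
        for col in range(cols):
            # Steps 1-2: find a usable pivot in this column and swap it up.
            candidates = np.where(np.abs(A[pivot_row:, col]) > tol)[0]
            if candidates.size == 0:
                continue
            r = pivot_row + candidates[0]
            A[[pivot_row, r]] = A[[r, pivot_row]]
            # Scale the pivot to 1, then (step 3) clear every other entry in the column.
            A[pivot_row] = A[pivot_row] / A[pivot_row, col]
            for i in range(rows):
                if i != pivot_row:
                    A[i] -= A[i, col] * A[pivot_row]
            pivot_row += 1                 # step 4: continue with the submatrix below
            if pivot_row == rows:
                break
        return A

    print(rref([[0, 3, -6, 6, 4, -5],
                [3, -7, 8, -5, 8, 9],
                [3, -9, 12, -9, 6, 15]]))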

Vector Equations
Linear Combination
Definition
- If v_1, …, v_p are in R^n, then the set of all linear combinations of v_1, …, v_p is denoted by Span {v_1, …, v_p} and is called the subset of R^n spanned (or generated) by v_1, …, v_p.
- That is, Span {v_1, …, v_p} is the collection of all vectors that can be written in the form c_1 v_1 + c_2 v_2 + … + c_p v_p, with c_1, …, c_p scalars.
Vector and Parametric Equations of a Line
Vector and Parametric Equations of a Plane

Matrix Equations
Definition
Existence of Solutions
The equation Ax = b has a solution if and only if b is a linear combination of the columns of A.
Theorem
Let A be an  mxn matrix. Then the following statements are logically equivalent. That is, for a particular A, either they are all true statements or they are all false.
         For each b in Rm, the equation Ax = b has a solution.
         The columns of A Span Rm.
         A has a pivot position in every row.
Theorem
Let A be an  mxn matrix, u and v are vectors in Rn, and c is a scalar, then
                        1.         A ( u + v ) = Au + Av
                        2.         A (cu) = c A (u)
Solution Set of Linear Systems
Homogenous Linear Systems
A system of linear equations is said to be homogeneous if it can be written in the form Ax = 0, where A is an  mxn matrix and 0 is the zero vector in Rm

Trivial Solution
Such a system Ax = 0 always has at least one solution, namely, x = 0
(the zero vector in Rn).
This zero solution is usually called the trivial solution.

Non Trivial Solution
The homogeneous equation Ax = 0 has a nontrivial solution if and only if the equation has at least one free variable.

Solutions of Non-homogenous Systems
When a non-homogeneous linear system has many solutions, the general solution can be written in parametric vector form as one vector plus an arbitrary linear combination of vectors that satisfy the corresponding homogeneous system.

Parametric Form
The equation
x = p + tv  (t in R)      
describes the solution set of Ax = b in parametric vector form.

Linear Independence
Definition
An indexed set {v_1, …, v_p} in R^n is said to be linearly independent if the vector equation x_1 v_1 + x_2 v_2 + … + x_p v_p = 0
has only the trivial solution.
Important Fact
The columns of a matrix A are linearly independent if and only if the equation Ax = 0  has only the trivial solution.
Linear Transformations
A transformation T from R^n to R^m is a rule that assigns to each vector x in R^n a vector T(x) in R^m. The set R^n is called the domain of T, and R^m is called the co-domain of T.
The notation T: R^n → R^m
indicates that the domain of T is R^n and the co-domain is R^m. For x in R^n, the vector T(x) in R^m is called the image of x (under the action of T). The set of all images T(x) is called the range of T.

Definition
A transformation (or mapping) T is linear if:
         T(u + v) = T(u) + T(v)  for all u, v in the domain of T;
         T(cu) = cT(u)   for all u and all scalars c.

Further
If T is a linear transformation, then T(0) = 0, and
T(cu +dv) = cT(u) + dT(v)                 
for all vectors u, v in the domain of T and all scalars c, d.

Generally
The Matrix of a Linear Transformations
Every Linear Transformation from Rn to Rm is actually a matrix Transformation

Theorem
Let T: R^n → R^m
be a linear transformation. Then there exists a unique matrix A such that T(x) = Ax for all x in R^n.
Lecture # 38
Orthogonality and Least Squares

Inner Product, Length and Orthogonality
Inner Product
If u and v are vectors in R^n, then we regard u and v as n x 1 matrices. The transpose u^T is a 1 x n matrix, and the matrix product u^T v is a 1 x 1 matrix, which we write as a single real number (a scalar) without brackets. The number u^T v is called the inner product of u and v, and often it is written as u·v. This inner product is also referred to as a dot product.

Example 1
Compute u.v and v.u when
Solution
Theorem
Let u, v and w be vectors in Rn, and let c be a scalar. Then

Observe
Definition
The length (or norm) of v is the nonnegative scalar ‖v‖ defined by ‖v‖ = √(v·v), so that ‖v‖² = v·v.
Note
For any scalar c, ‖cv‖ = |c| ‖v‖.
Unit Vector
A unit vector is a vector whose length is 1. If we divide a nonzero vector v by its length ‖v‖, we obtain a unit vector u = v/‖v‖, because the length of u is 1.
The process of creating u from v is sometimes called normalizing v, and we say that u is in the same direction as v.
Example 2
Let v = (1,–2, 2, 0). Find a unit vector u in the same direction as v.
Example 3
Let W be the subspace of R2 spanned by
x = (2/3, 1).
Find a unit vector z that is a basis for W.
Definition
For u and v in R^n, the distance between u and v, written as dist(u, v), is the length of the vector u − v. That is, dist(u, v) = ‖u − v‖.
Example 4
Compute the distance between the vectors
            u = (7, 1) and v = (3, 2).
Solution
Continued
Figure 4 The distance between u and v
Example 5
If u = (u_1, u_2, u_3) and v = (v_1, v_2, v_3), then
dist(u, v) = ‖u − v‖ = √((u_1 − v_1)² + (u_2 − v_2)² + (u_3 − v_3)²).
Orthogonal Vectors
Definition
Two vectors u and v in R^n are orthogonal (to each other) if u·v = 0.
Note
The zero vector is orthogonal to every vector in R^n because 0^T v = 0 for all v.
Pythagorean Theorem
Two vectors u and v are orthogonal if and only if ‖u + v‖² = ‖u‖² + ‖v‖².
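A short NumPy sketch of these definitions, using the vectors of Examples 2 and 4 above:

    import numpy as np

    v = np.array([1.0, -2.0, 2.0, 0.0])
    u = v / np.linalg.norm(v)            # normalizing v: a unit vector in the same direction
    print(np.linalg.norm(u))             # 1.0

    a = np.array([7.0, 1.0])
    b = np.array([3.0, 2.0])
    print(np.dot(a, b))                  # the inner product a.b
    print(np.linalg.norm(a - b))         # dist(a, b) = ||a - b|| = sqrt(17)
    print(np.isclose(np.dot(a, b), 0))   # False: a and b are not orthogonal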
Orthogonal Complements
If a vector z is orthogonal to every vector in a subspace W of R^n, then z is said to be orthogonal to W. The set of all vectors z that are orthogonal to W is called the orthogonal complement of W and is denoted by W^⊥.
Remarks
(1) A vector x is in W^⊥ if and only if x is orthogonal to every vector in a set that spans W.
(2) W^⊥ is a subspace of R^n.
Theorem 3
Let A be an m x n matrix. The orthogonal complement of the row space of A is the null space of A, and the orthogonal complement of the column space of A is the null space of A^T: (Row A)^⊥ = Nul A and (Col A)^⊥ = Nul A^T.
Angles in Two and Three Dimensions
In R^2 and R^3, the inner product is related to the angle θ between two vectors by u·v = ‖u‖ ‖v‖ cos θ.
Lecture # 39
Orthogonality and Least Squares


Orthogonal Sets
Definition
A set of vectors {u_1, …, u_p} in R^n is said to be an orthogonal set if each pair of distinct vectors from the set is orthogonal, i.e. u_i · u_j = 0 whenever i ≠ j.
Example 1
Show that {u1, u2, u3} is an orthogonal set, where
Theorem
If S = {u1, …, up} is an orthogonal set of nonzero vectors in Rn, then S is linearly independent and hence is a basis for the subspace spanned by S.
Proof
If 0 = c_1 u_1 + … + c_p u_p for some scalars c_1, …, c_p, then taking the dot product of both sides with u_1 gives 0 = c_1 (u_1·u_1), because u_1·u_j = 0 for j ≠ 1.
Since u_1 is nonzero, u_1·u_1 is not zero and so c_1 = 0.
Similarly, c_2, …, c_p must be zero. Thus S is linearly independent.
Definition
An orthogonal basis for a subspace W of Rn is a basis for W that is also an orthogonal set.
Theorem
Let {u_1, …, u_p} be an orthogonal basis for a subspace W of R^n. Then each y in W has a unique representation as a linear combination of u_1, …, u_p.
In fact, if y = c_1 u_1 + … + c_p u_p, then c_j = (y·u_j)/(u_j·u_j) for j = 1, …, p.
Proof
Taking the dot product of y = c_1 u_1 + … + c_p u_p with u_1 gives y·u_1 = c_1 (u_1·u_1). Since u_1·u_1 is not zero, this equation can be solved for c_1. To find c_j for j = 2, …, p, compute y·u_j and solve for c_j.
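A sketch of the weight formula c_j = (y·u_j)/(u_j·u_j) in NumPy. The vectors below are an assumed orthogonal set (not necessarily the lecture's Example 1 data); you can confirm the orthogonality with the printed dot products.

    import numpy as np

    u1 = np.array([3.0, 1.0, 1.0])
    u2 = np.array([-1.0, 2.0, 1.0])
    u3 = np.array([-0.5, -2.0, 3.5])
    y  = np.array([6.0, 1.0, -8.0])

    print(np.dot(u1, u2), np.dot(u1, u3), np.dot(u2, u3))   # all 0: an orthogonal set
    for u in (u1, u2, u3):
        print(np.dot(y, u) / np.dot(u, u))                  # the weight of y on u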
Example 2
The set S = {u1, u2, u3} as in Ex.1 is an orthogonal basis for R3. Express the vector  y as a linear combination of the vectors in S, where
Solution
Orthogonal Projection
Example 3
Find the orthogonal projection of y onto u. Then write y as the sum of two orthogonal vectors, one in Span {u} and one orthogonal to u.
Example 4
Find the distance in Figure below from y to L.
Solution
The distance from y to L is the length of the perpendicular line segment from y to the orthogonal projection ŷ of y onto L. This length equals the length of y − ŷ. Thus the distance is ‖y − ŷ‖.
Orthonormal Sets
A set {u1, …, up} is an Orthonormal set if it is an orthogonal set of unit vectors.
Orthonormal Basis
If W is the subspace spanned by an orthonormal  set  {u1, …, up}, then it is an Orthonormal basis for W
Example
The simplest example of an Orthonormal set is the standard basis   {e1, …, en} for Rn. Any nonempty subset of {e1, …, en} is orthonormal, too.
Example 5
Show that {v1, v2, v3} is an orthonormal basis of R3,  where

Theorem
An m x n matrix U has orthonormal columns if and only if U^T U = I.
Theorem
Let U be an m x n matrix with orthonormal columns, and let x and y be in R^n. Then
(a) ‖Ux‖ = ‖x‖,  (b) (Ux)·(Uy) = x·y,  (c) (Ux)·(Uy) = 0 if and only if x·y = 0.
Example 6
Solution
Notice that U has orthonormal columns  and
Lecture # 40
Orthogonality and Least Squares
Orthogonal Projection
The orthogonal projection of a point in R2 onto a line through the origin has an important analogue in Rn.
Given a vector y and a subspace W of R^n, there is a vector ŷ in W such that
(1) ŷ is the unique vector in W for which y − ŷ is orthogonal to W, and
(2) ŷ is the unique vector in W closest to y.
We observe that whenever a vector y is written as a linear combination of vectors u_1, …, u_n in a basis of R^n, the terms in the sum for y can be grouped into two parts so that y can be written as y = z_1 + z_2, where z_1 is a linear combination of some of the u_i and z_2 is a linear combination of the rest of the u_i. This idea is particularly useful when {u_1, …, u_n} is an orthogonal basis.
Example 1
Let {u1, …, u5} be an orthogonal basis for R5 and let
Consider the subspace W = Span {u1, u2}, and write y as the sum of a vector z1 in W and a vector z2 in      . 
The Orthogonal Decomposition Theorem
Let W be a subspace of R^n. Then each y in R^n can be written uniquely in the form
y = ŷ + z,
where ŷ is in W and z is in W^⊥.
In fact, if {u_1, …, u_p} is any orthogonal basis of W, then
ŷ = (y·u_1)/(u_1·u_1) u_1 + … + (y·u_p)/(u_p·u_p) u_p,
and z = y − ŷ. The vector ŷ is called the orthogonal projection of y onto W and often is written as proj_W y.
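A NumPy sketch of the decomposition y = ŷ + z for an assumed orthogonal basis {u1, u2} of W (not necessarily the lecture's example):

    import numpy as np

    u1 = np.array([2.0, 5.0, -1.0])
    u2 = np.array([-2.0, 1.0, 1.0])      # u1.u2 = 0, so {u1, u2} is an orthogonal basis
    y  = np.array([1.0, 2.0, 3.0])

    y_hat = (np.dot(y, u1) / np.dot(u1, u1)) * u1 + (np.dot(y, u2) / np.dot(u2, u2)) * u2
    z     = y - y_hat

    print(y_hat)                          # the orthogonal projection of y onto W
    print(np.dot(z, u1), np.dot(z, u2))   # both ~0, so z is in W-perp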

Example 2
Observe that {u1, u2} is an orthogonal basis for W = Span {u1, u2}. Write y as the sum of a vector in W and a vector orthogonal to W.
Best Approximation Theorem
Let W be a subspace of R^n, y any vector in R^n, and ŷ the orthogonal projection of y onto W. Then ŷ is the closest point in W to y, in the sense that ‖y − ŷ‖ < ‖y − v‖ for all v in W distinct from ŷ. The vector ŷ in this theorem is called the best approximation to y by elements of W.
Example 3
and W = Span {u1, u2}, then the closest point in W to y is
Example 4
The distance from a point y in Rn to a subspace W is defined as the distance from y to the nearest point in W.
Find the distance from y to W = Span{u1, u2}, where
Theorem
If {u_1, …, u_p} is an orthonormal basis for a subspace W of R^n, then proj_W y = (y·u_1) u_1 + … + (y·u_p) u_p. If U = [u_1 … u_p], then proj_W y = U U^T y for all y in R^n.
Example 5
and W = Span{u1, u2}. Use the fact that u1 and u2 are orthogonal to compute projw y.
Solution
In this case y happens to be a linear combination of u1 and u2, so y is in W. The closest point in W to y is y itself.
Lecture # 41
Orthogonality and Least Squares

Gram-Schmidt Process
Example 1
Let W = Span {x1, x2}, where
Construct an orthogonal basis {v1, v2} for W.
Example 2
Then {x1, x2, x3} is linearly independent and thus, a basis for a subspace W of R4. Construct an orthogonal basis for W.
Theorem (Gram-Schmidt Process)
Given a basis {x_1, …, x_p} for a subspace W of R^n, define
v_1 = x_1,
v_2 = x_2 − (x_2·v_1)/(v_1·v_1) v_1,
…,
v_p = x_p − (x_p·v_1)/(v_1·v_1) v_1 − … − (x_p·v_{p−1})/(v_{p−1}·v_{p−1}) v_{p−1}.
Then {v_1, …, v_p} is an orthogonal basis for W. In addition, Span {v_1, …, v_k} = Span {x_1, …, x_k} for 1 ≤ k ≤ p.
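A direct NumPy sketch of these formulas; the basis vectors used for the test are an assumption.

    import numpy as np

    def gram_schmidt(X):
        """Columns of X are a basis x_1, ..., x_p; returns orthogonal columns v_1, ..., v_p."""
        X = np.array(X, dtype=float)
        V = np.zeros_like(X)
        for j in range(X.shape[1]):
            v = X[:, j].copy()
            for i in range(j):
                v -= (np.dot(X[:, j], V[:, i]) / np.dot(V[:, i], V[:, i])) * V[:, i]
            V[:, j] = v
        return V

    X = np.array([[1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [1.0, 1.0, 1.0],
                  [1.0, 1.0, 1.0]])
    V = gram_schmidt(X)
    print(np.round(V.T @ V, 10))   # off-diagonal entries are 0: the columns are orthogonal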
Orthonormal Bases
Example 3

Theorem
If A is an m x n matrix with linearly independent columns, then A can be factored as
A = QR
where Q is an m x n matrix whose columns form an orthonormal basis for Col A and R is an n x n upper triangular invertible matrix with positive entries on its diagonal.
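NumPy's built-in QR routine produces the factorization in this theorem (possibly with the signs of some columns of Q, and the corresponding rows of R, flipped). A quick check on an assumed matrix:

    import numpy as np

    A = np.array([[1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [1.0, 1.0, 1.0],
                  [1.0, 1.0, 1.0]])

    Q, R = np.linalg.qr(A)                    # Q: 4x3 with orthonormal columns, R: 3x3 upper triangular
    print(np.allclose(Q @ R, A))              # True
    print(np.allclose(Q.T @ Q, np.eye(3)))    # True: the columns of Q are orthonormal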

Proof of the Theorem
Example 4
Find a QR factorization of
Example 5
Let W = Span {x1, x2}, where
Construct an orthonormal basis for W.
Theorem
Given a basis {x_1, …, x_p} for a subspace W of R^n, define v_1 = x_1 and, for k = 2, …, p, v_k = x_k − (x_k·v_1)/(v_1·v_1) v_1 − … − (x_k·v_{k−1})/(v_{k−1}·v_{k−1}) v_{k−1}.
Then {v_1, …, v_p} is an orthogonal basis for W. In addition, Span {v_1, …, v_k} = Span {x_1, …, x_k} for 1 ≤ k ≤ p.
The Orthogonal Decomposition Theorem
Let W be a subspace of R^n. Then each y in R^n can be written uniquely in the form y = ŷ + z, where ŷ is in W and z is in W^⊥.
In fact, if {u_1, …, u_p} is any orthogonal basis of W, then
ŷ = (y·u_1)/(u_1·u_1) u_1 + … + (y·u_p)/(u_p·u_p) u_p,
and z = y − ŷ. The vector ŷ is called the orthogonal projection of y onto W and often is written as proj_W y.
Best Approximation Theorem
Let W be a subspace of R^n, y any vector in R^n, and ŷ the orthogonal projection of y onto W. Then ŷ is the closest point in W to y, in the sense that ‖y − ŷ‖ < ‖y − v‖ for all v in W distinct from ŷ. The vector ŷ in this theorem is called the best approximation to y by elements of W.
Lecture # 42
Orthogonality and Least Squares
Least Squares Problems
Best Approximation Theorem
Let W be a subspace of R^n, y any vector in R^n, and ŷ the orthogonal projection of y onto W.
Then ŷ is the closest point in W to y, in the sense that ‖y − ŷ‖ < ‖y − v‖ for all v in W distinct from ŷ. The vector ŷ in this theorem is called the best approximation to y by elements of W.
Least-Squares Problem
Inconsistent systems arise often in applications. When a solution of Ax = b is demanded and none exists, the best one can do is to find an x that makes Ax as close as possible to b. Let us take Ax as an approximation to b. The smaller the distance between b and Ax, given by ‖b − Ax‖, the better the approximation. The general least-squares problem is to find an x that makes ‖b − Ax‖ as small as possible. The term least-squares arises from the fact that ‖b − Ax‖ is the square root of a sum of squares.
Definition
If A is m x n and b is in R^m, a least-squares solution of Ax = b is an x̂ in R^n such that ‖b − Ax̂‖ ≤ ‖b − Ax‖ for all x in R^n.
Solution of the General Least-Squares Problem
Theorem
The set of least-squares solutions of Ax = b coincides with the nonempty set of solutions of the normal equations A^T A x = A^T b.
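A small sketch of solving the normal equations for an assumed inconsistent system (the entries of A and b are illustrative, not necessarily Example 1's data):

    import numpy as np

    A = np.array([[4.0, 0.0],
                  [0.0, 2.0],
                  [1.0, 1.0]])
    b = np.array([2.0, 0.0, 11.0])

    x_hat = np.linalg.solve(A.T @ A, A.T @ b)      # solve A^T A x = A^T b
    print(x_hat)
    print(np.linalg.lstsq(A, b, rcond=None)[0])    # NumPy's built-in least-squares answer agrees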
Example 1
Find a least-squares solution of the inconsistent system Ax = b for
Example 2
Find a least-squares solution of Ax = b for
Theorem
The matrix A^T A is invertible if and only if the columns of A are linearly independent. In this case, the equation Ax = b has only one least-squares solution x̂, and it is given by x̂ = (A^T A)^{-1} A^T b.
Example 3


Determine the least-squares error in the least-squares solution of Ax = b.
Example 4

Find a least-squares solution of Ax = b for
Theorem
Given an m x n matrix A with linearly independent columns, let A = QR be a QR factorization of A. Then for each b in R^m, the equation Ax = b has a unique least-squares solution, given by x̂ = R^{-1} Q^T b.
Example 5

Find the least-squares solution of Ax = b for
Lecture # 43
Orthogonality and Least Squares
Definition
An inner product on a vector space V is a function that, to each pair of vectors u and v in V, associates a real number ⟨u, v⟩ and satisfies the following axioms, for all u, v, w in V and all scalars c:
1. ⟨u, v⟩ = ⟨v, u⟩
2. ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
3. ⟨cu, v⟩ = c⟨u, v⟩
4. ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0.
A vector space with an inner product is called an inner product space.
Example 1
Fix any two positive numbers, say 4 and 5, and for vectors u = (u_1, u_2) and v = (v_1, v_2) in R^2, set
⟨u, v⟩ = 4 u_1 v_1 + 5 u_2 v_2.
Show that this defines an inner product on R^2.
Example 2
Let t_0, …, t_n be distinct real numbers. For p and q in P_n, define ⟨p, q⟩ = p(t_0)q(t_0) + p(t_1)q(t_1) + … + p(t_n)q(t_n); this is an inner product on P_n.
Example 3
Let V be P_2, with the inner product from Example 2, where t_0 = 0, t_1 = 1/2, and t_2 = 1. Let p(t) = 12t^2 and q(t) = 2t − 1. Compute ⟨p, q⟩ and ⟨q, q⟩.
Solution
Lengths, Distances and Orthogonality
Let V be an inner product space, with the inner product denoted by ⟨u, v⟩. Just as in R^n, we define the length or norm of a vector v to be the scalar ‖v‖ = √⟨v, v⟩.
A unit vector is one whose length is 1. The distance between u and v is ‖u − v‖. Vectors u and v are orthogonal if ⟨u, v⟩ = 0.
Example 4
Compute the lengths of the vectors in Ex 3
Solution
 
Example 5
Let V be P4 with the inner product in Ex 2 involving evaluation of polynomials at –2, –1, 0, 1, and 2, and view P2 as a subspace of V. Produce an orthogonal basis for P2 by applying the Gram-Schmidt process to the polynomials 1, t, and t2.
Best Approximation in  Inner Product Spaces
Example 6
Let V be P4 with the inner product in Ex 5 and let p0, p1, and p2 be the orthogonal basis for the subspace P2. Find the best approximation to p(t)=5-(1/2)t4 by polynomials in P2.
Cauchy-Schwarz Inequality
For all u, v in V, |⟨u, v⟩| ≤ ‖u‖ ‖v‖.
Triangle Inequality
For all u, v in V, ‖u + v‖ ≤ ‖u‖ + ‖v‖.
Proof
Inner-Product for C[a,b]
Example 7
For f, g in C[a, b], set ⟨f, g⟩ = ∫_a^b f(t) g(t) dt.
Show that this defines an inner product on C[a, b].
Example 8
Let V be the space C[0, 1] with the inner product ⟨f, g⟩ = ∫_0^1 f(t) g(t) dt.
Let W be the subspace spanned by the polynomials p_1(t) = 1, p_2(t) = 2t − 1, and p_3(t) = 12t^2. Use the Gram-Schmidt process to find an orthogonal basis for W.








Lecture # 44
Orthogonality and Least Squares
Applications of Inner Product Spaces
Definition
An inner product on a vector space V is a function that, to each pair of vectors u and v in V, associates a real number ⟨u, v⟩ and satisfies the four axioms listed in Lecture 43 (symmetry, additivity, homogeneity, and positivity) for all u, v, w in V and all scalars c.
A vector space with an inner product is called an inner product space.
Least Squares Line
Example 1
Find the equation y = β_0 + β_1 x of the least-squares line that best fits the data points (2, 1), (5, 2), (7, 3), (8, 3).
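A short NumPy sketch of this fit, using the data points of Example 1. The design matrix has a column of 1s and a column of the x-values, so the least-squares solution gives (β_0, β_1).

    import numpy as np

    x = np.array([2.0, 5.0, 7.0, 8.0])
    y = np.array([1.0, 2.0, 3.0, 3.0])

    X = np.column_stack([np.ones_like(x), x])       # rows: [1, x_i]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # beta = (b0, b1)
    print(beta)                                     # the fitted intercept and slope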
Weighted Least-Squares
Example 2
Find the least-squares line y = β_0 + β_1 x
that best fits the data (−2, 3), (−1, 5), (0, 5), (1, 4), (2, 3).
Suppose that the errors in measuring the y-values of the last two data points are greater than for the other points. Weight these data half as much as the rest of the data.
Trend Analysis of Data
Example 3
The simplest and most common use of trend analysis occurs when the points t0, …, tn can be adjusted so that they are evenly spaced and sum to zero.  Fit a quadratic trend function to the data (– 2, 3), (– 1, 5), (0, 5), (1, 4), and (2, 3).
Example 4
Let C[0, 2π] have the inner product ⟨f, g⟩ = ∫_0^{2π} f(t) g(t) dt, and let m and n be unequal positive integers. Show that cos mt and cos nt are orthogonal.
Solution


Example 5
Find the nth-order Fourier approximation to the function f(t) = t on the interval [0, 2π].
This expression for f(t) is called the Fourier series for f on [0, 2π]. The term a_m cos mt, for example, is the projection of f onto the one-dimensional subspace spanned by cos mt.
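A numerical sketch of the Fourier coefficients of f(t) = t, assuming the interval [0, 2π] and the integral inner product; the integrals are approximated with the trapezoid rule. (The closed-form values are a_k = 0 and b_k = −2/k for k ≥ 1.)

    import numpy as np

    N  = 20000
    t  = np.linspace(0.0, 2 * np.pi, N + 1)
    dt = t[1] - t[0]
    f  = t

    def integral(y):                          # trapezoid rule on the grid t
        return np.sum((y[:-1] + y[1:]) / 2.0) * dt

    a0 = integral(f) / np.pi
    print(round(a0 / 2, 4))                   # constant term a0/2 ~ pi
    for k in range(1, 4):
        ak = integral(f * np.cos(k * t)) / np.pi
        bk = integral(f * np.sin(k * t)) / np.pi
        print(k, round(ak, 4), round(bk, 4))  # ak ~ 0, bk ~ -2/k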

Example 6
Let q_1(t) = 1, q_2(t) = t, and q_3(t) = 3t^2 − 4. Verify that {q_1, q_2, q_3} is an orthogonal set in
C[−2, 2] with the inner product ⟨f, g⟩ = ∫_{−2}^{2} f(t) g(t) dt.
Solution

Example 7
Find the first-order and third-order Fourier approximations to
f (t) = 3 –2 sin t + 5 sin 2t – 6 cos 2t
Lecture # 45
Orthogonality and Least Squares

Matrix Algebra Vector Spaces  and Sum-up
Matrix Algebra
- Matrix Operations
- Inverse of a Matrix
- Characteristics of Invertible Matrices
- Partitioned Matrices
- Matrix Factorization
- Iterative Solutions of Linear Systems
Vector Spaces
- Vector Spaces and Subspaces
- Null Spaces, Column Spaces and Linear Transformations
- Linearly Independent Sets: Bases
- Coordinate Systems
- Dimension of a Vector Space
- Rank
- Change of Basis
- Applications to Difference Equations
Definition
Let V  be an arbitrary nonempty set of objects on which two operations are defined, addition and multiplication by scalars. If the following axioms are satisfied by all objects u, v, w in V and all scalars l and m, then we call V a vector space.
Axioms of Vector Space
For any set of vectors u, v, w in  V and scalars l, m, n:
1.                     u + v  is in V
2.                     u + v = v + u
3.         u + (v + w) = (u + v) + w
4.         There exist a zero vector 0 such that  0 + u = u + 0 = u
5.         There exist a vector –u in V such that  -u + u = 0 = u + (-u)
6.         (l u) is in V
7.         l (u + v)= l u + l v
8.         m (n u) = (m n) u = n (m u)
9.         (l + m) u = l u + m u
10.       1u = u where 1 is the multiplicative identity
Definition
A subset W of a vector space V is called a subspace of V if W itself is a vector space under the addition and scalar multiplication defined on V.
Theorem
If W is a set of one or more vectors from a vector space V, then W is subspace of V if and only if the following conditions hold:
(a) If u and v are vectors in W, then u + v is in W
(b) If k is any scalar and u is any vector in W, then k u is in W.
Definition
The null space of an m x n matrix A (Nul A) is the set of all solutions of the homogeneous equation Ax = 0:
            Nul A = {x: x is in R^n and Ax = 0}.

Definition
The column space of an  m x n matrix A (Col A) is the set of all linear combinations of the columns of A.
Theorem
The column space of an m x n matrix A is a subspace of R^m.
Theorem
A system of linear equations Ax = b is consistent if and only if b is in the column space of A.
Definition
A linear transformation T from V into W is a rule that assigns to each vector x in V a unique vector T (x) in W, such that
(i)  T (u + v) = T (u) + T (v)                 for all u, v in V, and
                       
(ii) T (cu) = c T (u) for all u in V and all scalars c
Definition
The kernel (or null space) of such a T is the set of all u in V such that T (u) = 0.
Definition
An indexed set of vectors {v_1, …, v_p} in V is said to be linearly independent if the vector equation
c_1 v_1 + c_2 v_2 + … + c_p v_p = 0        (1)
has only the trivial solution, c_1 = 0, c_2 = 0, …, c_p = 0.
Definition
The set {v1,…,vp} is said to be linearly dependent if (1) has a nontrivial solution, that is, if there are some weights, c1,…,cp, not all zero, such that (1) holds. In such a case, (1) is called a linear dependence relation among v1, … , vp.
Spanning Set Theorem
Let S = {v1, … , vp} be a set in V and let H = Span {v1, …, vp}.
            (a)        If one of the vectors in S, say vk, is a linear combination of the remaining vectors in S, then the set formed from S by removing vk still spans H.
            (b)        If H ≠ {0}, some subset of S is a basis for H.
Definition
Suppose the set B = {b_1, …, b_n} is a basis for V and x is in V. The coordinates of x relative to the basis B (or the B-coordinates of x) are the weights c_1, …, c_n such that x = c_1 b_1 + … + c_n b_n.
If c_1, c_2, …, c_n are the B-coordinates of x, then the vector [x]_B = (c_1, c_2, …, c_n) in R^n
is the coordinate vector of x (relative to B), or the B-coordinate vector of x. The mapping x ↦ [x]_B is the coordinate mapping (determined by B).
Definition
If V is spanned by a finite set, then V is said to be finite-dimensional, and the dimension of V, written as dim V, is the number of vectors in a basis for V. The dimension of the zero vector space {0} is defined to be zero.  If V is not spanned by a finite set, then V is said to be infinite-dimensional.
Theorem
The pivot columns of a matrix A form a basis for Col A.
The Basis Theorem
Let V be a p-dimensional vector space, p ≥ 1. Any linearly independent set of exactly p elements in V is automatically a basis for V. Any set of exactly p elements that spans V is automatically a basis for V.
The Dimensions of Nul A and Col A
The dimension of Nul A is the number of free variables in the equation Ax = 0.
The dimension of Col A is the number of pivot columns in A
Definition
The rank of A is the dimension of the column space of A. Since Row A is the same as Col AT, the dimension of the row space of A is the rank of AT. The dimension of the null space is sometimes called the nullity of A.
The Rank Theorem
The dimensions of the column space and the row space of an  m x n matrix A are equal. This common dimension, the rank of A, also equals the number of pivot positions in A and satisfies the equation
                        rank A + dim Nul A = n
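A small numerical check of the Rank Theorem; the matrix is an arbitrary assumption.

    import numpy as np

    A = np.array([[2.0, 5.0, -3.0, -4.0, 8.0],
                  [4.0, 7.0, -4.0, -3.0, 9.0],
                  [6.0, 9.0, -5.0, 2.0, 4.0],
                  [0.0, -9.0, 6.0, 5.0, -6.0]])

    rank = np.linalg.matrix_rank(A)
    n = A.shape[1]
    print(rank, n - rank)                        # rank A and dim Nul A sum to n = 5
    print(rank == np.linalg.matrix_rank(A.T))    # True: rank(A) = rank(A^T)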
Theorem
If A is an m x n, matrix, then
(a) rank (A) = the number of leading variables in the solution of Ax = 0
(b) nullity (A) = the number of parameters in the general solution of Ax = 0

Theorem
            If A is any matrix, then

  rank (A) = rank (AT)
Four Fundamental Matrix Spaces

Row space of A          
Column space of A
Null space of A          
Null space of AT
The Invertible Matrix Theorem
Let A be an n x n matrix. Then the following statements are each equivalent to the statement that A is an invertible matrix.                   
The columns of A form a basis of Rn.
            Col A = Rn
            dim Col A = n  
            rank A = n
            Nul A = {0}
            dim Nul A = 0

Theorem
Let B = {b_1, …, b_n} and C = {c_1, …, c_n} be bases of a vector space V. Then there is a unique n x n matrix P_{C←B} such that
[x]_C = P_{C←B} [x]_B.
The columns of P_{C←B} are the C-coordinate vectors of the vectors in the basis B. That is,
P_{C←B} = [ [b_1]_C  [b_2]_C  …  [b_n]_C ].
Observe
Definition
Given scalars a_0, …, a_n, with a_0 and a_n nonzero, and given a signal {z_k}, the equation
a_0 y_{k+n} + a_1 y_{k+n−1} + … + a_{n−1} y_{k+1} + a_n y_k = z_k     for all k
is called a linear difference equation (or linear recurrence relation) of order n.
For simplicity, a_0 is often taken equal to 1. If {z_k} is the zero sequence, the equation is homogeneous; otherwise, the equation is non-homogeneous.

Theorem
If a_n ≠ 0 and if {z_k} is given, the equation
y_{k+n} + a_1 y_{k+n−1} + … + a_{n−1} y_{k+1} + a_n y_k = z_k
for all k has a unique solution whenever y_0, …, y_{n−1} are specified.
Theorem
The set H of all solutions of the nth-order homogeneous linear difference equation
y_{k+n} + a_1 y_{k+n−1} + … + a_{n−1} y_{k+1} + a_n y_k = 0,
for all k is an n-dimensional vector space.
Reduction to Systems of First-Order Equations

A modern way to study a homogeneous nth-order linear difference equation is to replace it by an equivalent system of first order difference equations,
written in the form
xk+1 = Axk for k = 0, 1, 2, …
where the vectors x_k are in R^n and A is an n x n matrix.
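A sketch of this reduction for a third-order homogeneous equation y_{k+3} + a1 y_{k+2} + a2 y_{k+1} + a3 y_k = 0, with x_k = (y_k, y_{k+1}, y_{k+2}); the coefficients and initial values are assumptions.

    import numpy as np

    a1, a2, a3 = -1.0, 0.25, 0.5           # assumed coefficients of a 3rd-order equation
    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-a3, -a2, -a1]])         # companion matrix of the recurrence

    y = [1.0, 2.0, 3.0]                     # y_0, y_1, y_2 specified => unique solution
    x = np.array(y)
    for k in range(5):
        x = A @ x                           # x_{k+1} = A x_k
        y.append(x[-1])                     # the last entry of x_{k+1} is the new term

    # Direct recurrence for comparison: y_{k+3} = -a1*y_{k+2} - a2*y_{k+1} - a3*y_k
    z = [1.0, 2.0, 3.0]
    for k in range(5):
        z.append(-a1 * z[k + 2] - a2 * z[k + 1] - a3 * z[k])
    print(np.allclose(y, z))                # True: both give the same signal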




End of the lecture notes.
