Mathematics & Statistics
Texas Tech University
Kent Pearce

Department of Mathematics and Statistics
Texas Tech University
Lubbock, Texas 79409-1042
Voice: (806)742-2566 x 226
FAX: (806)742-1112
Email: kent.pearce@ttu.edu

Math 2360
Linear Algebra
Fall 2012
Larson, Ron
Linear Algebra
Cengage

Review Exam II
Section Content and Suggested Problems
Section 2.5
  • Stochastic Matrices
    • States
    • Matrix of transition probabilities
  • Least Squares Regression
    • Definition of least squares regression line
    • Matrix form for linear regression
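A quick illustration of both ideas, with numbers of my own (not from the text): one step of a 2-state Markov chain, and the matrix form of the normal equations for the least squares line.

    \[
      P = \begin{pmatrix} 0.9 & 0.2 \\ 0.1 & 0.8 \end{pmatrix}, \qquad
      x^{(1)} = P\,x^{(0)}
              = \begin{pmatrix} 0.9 & 0.2 \\ 0.1 & 0.8 \end{pmatrix}
                \begin{pmatrix} 0.5 \\ 0.5 \end{pmatrix}
              = \begin{pmatrix} 0.55 \\ 0.45 \end{pmatrix}
      % each column of P sums to 1, so P is stochastic; x^{(1)} is the state after one transition
    \]
    \[
      X^{T}X\,\mathbf{a} = X^{T}\mathbf{y}, \qquad
      X = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}, \quad
      \mathbf{a} = \begin{pmatrix} a_0 \\ a_1 \end{pmatrix}
      % normal equations whose solution gives the regression line y = a_0 + a_1 x
    \]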
Suggested Problems (Pages 95-97): 5, 7, 9, 35, 37, 39
Section 3.1
  • Determinant of 2x2 matrix
  • Minors and co-factors of a square matrix
  • Definition of the determinant: Row one co-factor expansion
  • Theorem 3.1: Computation of the determinant of a matrix by co-factor expansion along any row or any column
  • Alternate method for computing the determinant of a 3x3 matrix
  • Determinants of triangular matrices
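A minimal worked example (matrix mine): first-row co-factor expansion of a 3x3 determinant.

    \[
      \begin{vmatrix} 1 & 2 & 0 \\ 3 & 1 & 4 \\ 0 & 2 & 1 \end{vmatrix}
      = 1\begin{vmatrix} 1 & 4 \\ 2 & 1 \end{vmatrix}
      - 2\begin{vmatrix} 3 & 4 \\ 0 & 1 \end{vmatrix}
      + 0\begin{vmatrix} 3 & 1 \\ 0 & 2 \end{vmatrix}
      = 1(-7) - 2(3) + 0 = -13
      % signs alternate +, -, + across row one (checkerboard pattern of co-factors)
    \]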
Suggested Problems (Pages 110-111): 5-7, 19-21, 27-28, 33-34, 39-41, 47, 51
Section 3.2
  • Elementary Row Operations and Determinants
    • Type I: B obtained from A by interchanging two rows -> det(B) = -det(A)
    • Type II: B obtained from A by multiplying a row of A by a non-zero constant c -> det(B) = c det(A)
    • Type III: B obtained from A by adding a multiple of one row to a second row and replacing the second row with the sum -> det(B) = det(A)
  • Finding a determinant using elementary row operations
  • Determinants and elementary column operations
  • Theorem 3.4: Conditions that yield zero determinants
    • if A has a zero row (column)
    • if A has two identical rows (columns)
  • Finding determinants
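One small matrix (my own) run through all three row-operation rules:

    \[
      A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad \det A = -2;
      \qquad
      \det\begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix} = 2,      % Type I: rows swapped, sign flips
      \quad
      \det\begin{pmatrix} 3 & 6 \\ 3 & 4 \end{pmatrix} = -6,     % Type II: row one multiplied by 3
      \quad
      \det\begin{pmatrix} 1 & 2 \\ 5 & 8 \end{pmatrix} = -2      % Type III: row two plus 2(row one), det unchanged
    \]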
Suggested Problems (Pages 118-119): 25, 27, 29, 31, 33, 35, 39-42
Section 3.3
  • Determinant of a matrix product det(AB) = det(A) det(B)
  • Determinant of a scalar multiple of a matrix
  • Determinant of an invertible matrix
  • Determinant of the inverse of a matrix
  • Determinant of the transpose of a matrix
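The four rules in one line, for n x n matrices A and B (A invertible where needed), plus a 2x2 check of the scalar rule with my own numbers:

    \[
      \det(AB) = \det(A)\det(B), \quad
      \det(cA) = c^{n}\det(A), \quad
      \det(A^{-1}) = \frac{1}{\det(A)}, \quad
      \det(A^{T}) = \det(A)
    \]
    \[
      A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}: \quad
      \det(2A) = \begin{vmatrix} 2 & 4 \\ 6 & 8 \end{vmatrix} = -8 = 2^{2}\det(A)
      % the factor is c^n = 2^2, not c = 2
    \]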
Suggested Problems (Pages 125-127): 3, 9, 13, 17-20, 26-27, 37-39, 53, 55
Section 3.4
  • Adjoint of a matrix
    • Matrix of co-factors
    • Inverse of matrix via its adjoint
  • Cramer's Rule
  • Area in the plane
    • Area of triangle
    • Test for co-linearity
    • Two-point form of the equation of a line
  • Volume in space
    • Volume of a tetrahedron
    • Test for co-planarity
    • Three-point form of the equation of a plane
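A worked Cramer's rule example on a 2x2 system of my own, together with the 2x2 adjoint formula:

    \[
      \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}
      = \begin{pmatrix} 5 \\ 6 \end{pmatrix}: \qquad
      x = \frac{\begin{vmatrix} 5 & 2 \\ 6 & 4 \end{vmatrix}}{\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix}}
        = \frac{8}{-2} = -4, \qquad
      y = \frac{\begin{vmatrix} 1 & 5 \\ 3 & 6 \end{vmatrix}}{-2} = \frac{-9}{-2} = \frac{9}{2}
      % numerator: replace the unknown's column of A with the right-hand side
    \]
    \[
      \operatorname{adj}\begin{pmatrix} a & b \\ c & d \end{pmatrix}
      = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \qquad
      A^{-1} = \frac{1}{\det A}\operatorname{adj}(A)
    \]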
Suggested Problems (Pages 136-137): 1, 3, 17, 19, 25, 27, 37, 39, 41, 43, 45, 47, 49, 51, 53, 55, 57, 59
Section 4.1
  • Vectors in the plane and space
  • Vector addition
    • Geometric definition
      • Geometric interpretation
    • Algebraic definition
  • Scalar multiplication
    • Geometric definition
      • Geometric interpretation
    • Algebraic definition
  • Vector addition and scalar multiplication properties
    • binary operation addition
      • closed
      • commutative
      • associative
      • existence of an additive identity (called 0)
      • existence of additive inverse for each a (called -a)
    • scalar multiplication
      • closed
      • distributive property i
      • distributive property ii
      • associativity of scalar multiplication
      • unity property of scalar multiplication
  • Vectors in R^n
  • Vector addition and scalar multiplication
  • Vector addition and scalar multiplication properties
  • Properties of additive identity and additive inverse
  • Linear combination of vectors
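A one-line numerical example of a linear combination in R^3 (vectors chosen for illustration):

    \[
      2(1,0,2) - 3(0,1,1) = (2,0,4) + (0,-3,-3) = (2,-3,1)
      % so (2,-3,1) is a linear combination of (1,0,2) and (0,1,1)
    \]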
Suggested Problems (Pages 153-154): 7-10, 19-24, 37-38, 39-40, 45-46, 53-54
Section 4.2
  • Definition of a vector space
    • binary operation addition
      • closed
      • commutative
      • associative
      • existence of an additive identity (called 0)
      • existence of additive inverse for each a (called -a)
    • scalar multiplication
      • closed
      • distributive property i
      • distributive property ii
      • associativity of scalar multiplication
      • unity property of scalar multiplication
  • Standard examples of vector spaces
  • Properties of scalar multiplication
  • Testing sets with operations to determine whether they form vector spaces
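A typical non-example of my own for the closure axioms: W = {(x, y) in R^2 : x >= 0}, with the usual operations, is not closed under scalar multiplication, so it is not a vector space.

    \[
      (1,0) \in W, \qquad (-1)(1,0) = (-1,0) \notin W
      % one failed axiom is enough to rule out a vector space
    \]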
Suggested Problems (Pages 160-161): 13, 15, 17-18, 21, 23, 25-26, 29-30, 35
Section 4.3
  • Definition of a subspace
  • Testing subsets of vector spaces to determine whether they form a subspace
  • Theorem 4.6: The intersection of two subspaces is a subspace
  • Subspaces of R^2 and of R^3
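A sample subspace check of my own in R^3: W = {(x, y, 0) : x, y real} contains the zero vector and is closed under both operations, so it is a subspace.

    \[
      (x_1, y_1, 0) + (x_2, y_2, 0) = (x_1 + x_2,\; y_1 + y_2,\; 0) \in W, \qquad
      c\,(x, y, 0) = (cx,\; cy,\; 0) \in W
      % by contrast, the line y = x + 1 in R^2 is not a subspace: it misses (0,0)
    \]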
Suggested Problems (Pages 167-168): 7-12, 21, 23, 25, 29, 31, 33, 37, 39, 41
Section 4.4
  • Definition of linear combination
  • Definition of a spanning set
    • Testing sets to determine whether they span a vector space
  • Definition of the span of a set
  • Theorem 4.7: The span of a set is a subspace
  • Definition of linear independence and linear dependence
  • Testing sets to determine whether they are linearly independent
    • Sets containing two vectors
    • Subsets of R^n
    • Subsets of P_n
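A sample independence test in R^3 (vectors mine): set a linear combination equal to the zero vector and solve for the coefficients.

    \[
      c_1(1,2,3) + c_2(0,1,2) + c_3(0,0,1) = (0,0,0)
      \;\Longrightarrow\;
      c_1 = 0, \quad 2c_1 + c_2 = 0, \quad 3c_1 + 2c_2 + c_3 = 0
      \;\Longrightarrow\;
      c_1 = c_2 = c_3 = 0
      % only the trivial solution, so the set is linearly independent
    \]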
Suggested Problems (Pages 178-179): 1-2, 5-6, 11, 13, 15, 21, 23, 25, 29, 32, 35, 38, 41-43, 45, 46
Section 4.5
  • Definition of basis B
    • B is a spanning set
    • B is linearly independent
  • R^n: Standard basis & non-standard basis
  • P_n: Standard basis & non-standard basis
  • M_{m,n}: Standard basis & non-standard basis
  • Theorem 4.9: Uniqueness of basis representation
  • Theorem 4.10: Basis and linear dependence
  • Theorem 4.11: Every basis of a vector space V has the same number of elements
  • Definition of dimension of a vector space
  • Theorem 4.12: Basis tests in an n-dimensional vector space
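Standard bases and dimensions worth memorizing (these follow directly from the definitions):

    \[
      R^{n}: \{e_1, \dots, e_n\},\ \dim = n; \qquad
      P_{n}: \{1, x, \dots, x^{n}\},\ \dim = n + 1; \qquad
      M_{m,n}: \dim = mn
      % e.g. \{1, x, x^2\} is the standard basis of P_2, so dim(P_2) = 3
    \]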
Suggested Problems (Pages 187-188): 7-10, 15-18, 25-28, 33, 35, 37, 41, 43, 45
Section 4.6
  • Row space of a matrix A
  • Column space of matrix A
  • Theorem 4.13: Row-equivalent matrices have the same row space
  • Theorem 4.14: Mechanism for finding a basis for the row space of a matrix
    • Find a basis for the row space of a matrix A
      • Reduce A to RREF
      • The nonzero rows of the RREF form a basis for the row space of A
    • Find a basis for a subspace
  • Cold Hard Fact: The column space of a matrix is not preserved under elementary row operations
  • Find a basis for the column space of a matrix A
    • Apply elementary column operations to A
    • Take the transpose of A and apply elementary row operations to A^T
    • Elementary row operations on A preserve the dependency relationships among the columns of A
      • Reduce A to RREF
      • The columns of A that correspond to the columns of the RREF containing leading ones form a basis for the column space of A
  • Theorem 4.15: dim(row space of A) = dim(col space of A)
  • Definition of the rank of a matrix A
  • Theorem 4.16: The set of solutions of Ax = 0 forms a subspace of R^n
    • Definition of the null space of a matrix A
    • Definition of the nullity of a matrix A
  • Finding the nullspace of a matrix A
    • Finding a basis for the nullspace of a matrix A
  • Dimension theorem (Theorem 4.17): rank(A) + nullity(A) = n, the number of columns of A
  • Solutions of linear systems of equations
    • Solutions of the homogeneous equation Ax = 0
      • subspace of R^n
    • Solutions of the non-homogeneous equation Ax = b
      • Affine subset of R^n
  • Theorem 4.19: The system Ax = b is consistent if and only if b is in the column space of A
  • Equivalent conditions for a square matrix A to be invertible
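A compact example of my own tying rank, nullity, and the dimension theorem together:

    \[
      A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \end{pmatrix}
      \;\xrightarrow{\text{RREF}}\;
      \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
      \operatorname{rank}(A) = 1, \quad \operatorname{nullity}(A) = 2, \quad 1 + 2 = 3 = n
      % row space basis {(1,2,1)}; column space basis {(1,2)} (first column of A);
      % null space basis {(-2,1,0), (-1,0,1)} from solving x_1 + 2x_2 + x_3 = 0
    \]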
Suggested Problems (Pages 199-201): 7, 9, 11, 13, 15, 17, 21, 23, 29, 31, 33, 39, 41, 43, 57-58






Comments may be mailed to: kent.pearce@ttu.edu

Last modified on: Monday, 10-Aug-2015 12:47:28 CDT