Laplace expansion
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression for the determinant |B| of an n × n matrix B that is a weighted sum of the determinants of n sub-matrices of B, each of size (n−1) × (n−1). The Laplace expansion is of theoretical interest as one of several ways to view the determinant, as well as of practical use in determinant computation.
The i, j cofactor of B is the scalar $C_{ij}$ defined by

$$C_{ij} = (-1)^{i+j} M_{ij},$$

where $M_{ij}$ is the i, j minor of B, that is, the determinant of the (n − 1) × (n − 1) matrix that results from deleting the i-th row and the j-th column of B.
Then the Laplace expansion is given by the following theorem.
- Theorem. Suppose $B = [b_{ij}]$ is an n × n matrix and fix any $i, j \in \{1, 2, \dots, n\}$. Then its determinant $|B|$ is given by:

$$|B| = b_{i1} C_{i1} + b_{i2} C_{i2} + \cdots + b_{in} C_{in} = \sum_{j'=1}^{n} b_{ij'} C_{ij'} = \sum_{i'=1}^{n} b_{i'j} C_{i'j},$$

where the first sum is the expansion along the i-th row and the second is the expansion along the j-th column.
Examples
Consider the matrix

$$B = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}.$$

The determinant of this matrix can be computed by using the Laplace expansion along any one of its rows or columns. For instance, an expansion along the first row yields:

$$|B| = 1 \cdot \begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} - 2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 3 \cdot \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = 1 \cdot (-3) - 2 \cdot (-6) + 3 \cdot (-3) = 0.$$

Laplace expansion along the second column yields the same result:

$$|B| = -2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 5 \cdot \begin{vmatrix} 1 & 3 \\ 7 & 9 \end{vmatrix} - 8 \cdot \begin{vmatrix} 1 & 3 \\ 4 & 6 \end{vmatrix} = -2 \cdot (-6) + 5 \cdot (-12) - 8 \cdot (-6) = 0.$$
It is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.
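To make the expansion concrete, here is a minimal Python sketch (an illustration, not part of the original article) that computes a determinant by recursive cofactor expansion along the first row and checks it on the matrix above; the function name laplace_det is an arbitrary choice.

```python
def laplace_det(m):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        # Cofactor sign (-1)^(0+j) times entry times minor determinant.
        total += (-1) ** j * m[0][j] * laplace_det(minor)
    return total

B = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(laplace_det(B))  # 0, as computed above
```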
Proof
Suppose $B = [b_{ij}]$ is an n × n matrix and $i, j \in \{1, 2, \dots, n\}$. For clarity we also label the entries of $B$ that compose its i, j minor matrix $M_{ij}$ as

$$(a_{st}) \qquad \text{for } 1 \le s, t \le n - 1.$$
Consider the terms in the expansion of $|B|$ that have $b_{ij}$ as a factor. Each has the form

$$\operatorname{sgn}\tau \; b_{1,\tau(1)} \cdots b_{i,j} \cdots b_{n,\tau(n)} = \operatorname{sgn}\tau \; b_{ij} \, a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)}$$

for some permutation $\tau \in S_n$ with $\tau(i) = j$, and a unique and evidently related permutation $\sigma \in S_{n-1}$ which selects the same minor entries as $\tau$. Similarly, each choice of $\sigma$ determines a corresponding $\tau$; that is, the correspondence $\sigma \leftrightarrow \tau$ is a bijection between $S_{n-1}$ and $\{\tau \in S_n : \tau(i) = j\}$. The explicit relation between $\tau$ and $\sigma$ can be written as

$$\sigma = \begin{pmatrix} 1 & \cdots & i-1 & i & \cdots & n-1 \\ (\leftarrow_j)(\tau(1)) & \cdots & (\leftarrow_j)(\tau(i-1)) & (\leftarrow_j)(\tau(i+1)) & \cdots & (\leftarrow_j)(\tau(n)) \end{pmatrix},$$

where $(\leftarrow_j)$ is a temporary shorthand notation for the cycle $(n, n-1, \dots, j+1, j)$. This operation decrements every index larger than $j$, so that all indices fit in the set $\{1, 2, \dots, n-1\}$.
The permutation $\tau$ can be derived from $\sigma$ as follows. Define $\sigma' \in S_n$ by $\sigma'(k) = \sigma(k)$ for $1 \le k \le n-1$ and $\sigma'(n) = n$. Then, by the explicit relation above, $\tau$ is expressed in two-line notation as

$$\tau = \begin{pmatrix} 1 & \cdots & i-1 & i+1 & \cdots & n & i \\ (\leftarrow_j)^{-1}(\sigma(1)) & \cdots & (\leftarrow_j)^{-1}(\sigma(i-1)) & (\leftarrow_j)^{-1}(\sigma(i)) & \cdots & (\leftarrow_j)^{-1}(\sigma(n-1)) & j \end{pmatrix}.$$

Now, the operation which applies $(\leftarrow_i)$ first and then applies $\sigma'$ is (notice that applying $A$ before $B$ is equivalent to applying the inverse of $A$ to the upper row of $B$ in Cauchy's two-line notation)

$$\sigma' \circ (\leftarrow_i) = \begin{pmatrix} 1 & \cdots & i-1 & i+1 & \cdots & n & i \\ \sigma(1) & \cdots & \sigma(i-1) & \sigma(i) & \cdots & \sigma(n-1) & n \end{pmatrix},$$

where $(\leftarrow_i)$ is a temporary shorthand notation for the cycle $(n, n-1, \dots, i+1, i)$.

Similarly, the operation which applies $\sigma' \circ (\leftarrow_i)$ first and then applies $(\leftarrow_j)^{-1}$ is

$$(\leftarrow_j)^{-1} \circ \sigma' \circ (\leftarrow_i) = \begin{pmatrix} 1 & \cdots & i-1 & i+1 & \cdots & n & i \\ (\leftarrow_j)^{-1}(\sigma(1)) & \cdots & (\leftarrow_j)^{-1}(\sigma(i-1)) & (\leftarrow_j)^{-1}(\sigma(i)) & \cdots & (\leftarrow_j)^{-1}(\sigma(n-1)) & j \end{pmatrix}.$$

The above two expressions for $\tau$ are equal; thus

$$\tau = (\leftarrow_j)^{-1} \circ \sigma' \circ (\leftarrow_i),$$

where $(\leftarrow_j)^{-1}$ is the inverse of $(\leftarrow_j)$, which is the cycle $(j, j+1, \dots, n)$.
Thus

$$\operatorname{sgn}\tau = \operatorname{sgn}\big((\leftarrow_j)^{-1}\big)\, \operatorname{sgn}\sigma'\, \operatorname{sgn}\big((\leftarrow_i)\big).$$

Since the two cycles can be written as $n - j$ and $n - i$ transpositions, respectively, and $\operatorname{sgn}\sigma' = \operatorname{sgn}\sigma$,

$$\operatorname{sgn}\tau = (-1)^{2n - (i+j)} \operatorname{sgn}\sigma = (-1)^{i+j} \operatorname{sgn}\sigma.$$

And since the map $\sigma \leftrightarrow \tau$ is bijective,

$$|B| = \sum_{j=1}^{n} \sum_{\substack{\tau \in S_n \\ \tau(i) = j}} \operatorname{sgn}\tau \, b_{1,\tau(1)} \cdots b_{n,\tau(n)} = \sum_{j=1}^{n} (-1)^{i+j} b_{ij} \sum_{\sigma \in S_{n-1}} \operatorname{sgn}\sigma \, a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)} = \sum_{j=1}^{n} (-1)^{i+j} b_{ij} M_{ij} = \sum_{j=1}^{n} b_{ij} C_{ij},$$

from which the result follows.
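The key step above is the sign relation $\operatorname{sgn}\tau = (-1)^{i+j}\operatorname{sgn}\sigma$. The following Python sketch (an illustration, not part of the original proof) checks the bijection and the sign relation by brute force for a small n; the helper name sign is arbitrary.

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation given as a tuple of images of 0..len(p)-1."""
    s, seen = 1, [False] * len(p)
    for start in range(len(p)):
        if seen[start]:
            continue
        length, k = 0, start
        while not seen[k]:
            seen[k] = True
            k = p[k]
            length += 1
        s *= (-1) ** (length - 1)  # a cycle of length L is L-1 transpositions
    return s

n = 4
for i in range(n):
    for j in range(n):
        for tau in permutations(range(n)):
            if tau[i] != j:
                continue
            # sigma in S_{n-1}: delete position i and shift values larger than j down.
            sigma = tuple(v - (v > j) for k, v in enumerate(tau) if k != i)
            assert sign(tau) == (-1) ** (i + j) * sign(sigma)
print("sign relation verified for n =", n)
```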
Laplace expansion of a determinant by complementary minors
Laplace's cofactor expansion can be generalised as follows.
Example
Consider the matrix

$$A = [a_{rs}] = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{bmatrix}.$$

The determinant of this matrix can be computed by using Laplace's cofactor expansion along the first two rows as follows. First note that there are 6 sets of two distinct numbers in $\{1, 2, 3, 4\}$, namely $\{1,2\}, \{1,3\}, \{1,4\}, \{2,3\}, \{2,4\}, \{3,4\}$; let $S$ be the aforementioned set of pairs.
By defining the complementary cofactors to be
- $b_{\{j,k\}} = \begin{vmatrix} a_{1j} & a_{1k} \\ a_{2j} & a_{2k} \end{vmatrix}$,
- $c_{\{p,q\}} = \begin{vmatrix} a_{3p} & a_{3q} \\ a_{4p} & a_{4q} \end{vmatrix}$,
and the sign of their permutation to be
- $\varepsilon^{\{j,k\},\{p,q\}} = \operatorname{sgn}\begin{pmatrix} 1 & 2 & 3 & 4 \\ j & k & p & q \end{pmatrix}$ for $j < k$ and $p < q$.
The determinant of A can be written out as

$$|A| = \sum_{H \in S} \varepsilon^{H, H'}\, b_{H}\, c_{H'},$$

where $H'$ is the complementary set to $H$, that is, $H' = \{1, 2, 3, 4\} \setminus H$.

In our explicit example this gives us

$$|A| = b_{\{1,2\}} c_{\{3,4\}} - b_{\{1,3\}} c_{\{2,4\}} + b_{\{1,4\}} c_{\{2,3\}} + b_{\{2,3\}} c_{\{1,4\}} - b_{\{2,4\}} c_{\{1,3\}} + b_{\{3,4\}} c_{\{1,2\}}$$
$$= (-4)(-4) - (-8)(-8) + (-12)(-4) + (-4)(-12) - (-8)(-8) + (-4)(-4) = 16 - 64 + 48 + 48 - 64 + 16 = 0.$$
As above, it is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.
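For a quick numerical check of this two-row expansion, here is a short Python sketch (an illustration, not from the original article); the helper names det2 and perm_sign are arbitrary.

```python
from itertools import combinations

A = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]

def det2(m):
    # Determinant of a 2x2 matrix.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def perm_sign(p):
    # Sign of a permutation via its inversion count.
    inv = sum(1 for x in range(len(p)) for y in range(x + 1, len(p)) if p[x] > p[y])
    return (-1) ** inv

total = 0
for H in combinations(range(4), 2):                  # columns used with rows 1-2
    Hc = tuple(c for c in range(4) if c not in H)    # complementary columns
    b = det2([[A[r][c] for c in H] for r in (0, 1)])
    c_comp = det2([[A[r][c] for c in Hc] for r in (2, 3)])
    total += perm_sign(H + Hc) * b * c_comp
print(total)  # 0, in agreement with the expansion above
```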
General statement
Let $B = [b_{ij}]$ be an n × n matrix, $S$ the set of k-element subsets of $\{1, 2, \dots, n\}$, and $H \in S$ an element in it. Then the determinant of $B$ can be expanded along the k rows identified by $H$ as follows:

$$|B| = \sum_{L \in S} \varepsilon^{H,L}\, b_{H,L}\, c_{H,L},$$

where $\varepsilon^{H,L}$ is the sign of the permutation determined by $H$ and $L$, equal to $(-1)^{\left(\sum_{h \in H} h\right) + \left(\sum_{\ell \in L} \ell\right)}$, $b_{H,L}$ is the square minor of $B$ obtained by deleting from $B$ the rows and columns with indices not in $H$ and $L$ respectively, and $c_{H,L}$ (called the complement of $b_{H,L}$) is defined to be $b_{H',L'}$, with $H'$ and $L'$ being the complement of $H$ and $L$ respectively.
This coincides with the theorem above when $k = 1$. The same holds for any fixed $k$ columns.
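The general statement can be exercised directly. Below is a small Python sketch (illustrative; the function names are arbitrary) that expands a determinant along an arbitrary set of k rows and compares the result with a plain Leibniz-formula determinant.

```python
from itertools import combinations, permutations

def leibniz_det(m):
    # Reference determinant via the Leibniz formula (fine for small n).
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        term = (-1) ** inv
        for r in range(n):
            term *= m[r][p[r]]
        total += term
    return total

def submatrix(m, rows, cols):
    return [[m[r][c] for c in cols] for r in rows]

def laplace_k_rows(m, H):
    """Expand det(m) along the rows with (0-based) indices in H."""
    n = len(m)
    Hc = [r for r in range(n) if r not in H]
    total = 0
    for L in combinations(range(n), len(H)):
        Lc = [c for c in range(n) if c not in L]
        # Sign (-1)^(sum of 1-based row indices + sum of 1-based column indices).
        sign = (-1) ** (sum(h + 1 for h in H) + sum(c + 1 for c in L))
        total += sign * leibniz_det(submatrix(m, H, L)) * leibniz_det(submatrix(m, Hc, Lc))
    return total

M = [[2, 0, 1, 3],
     [1, 4, 0, 2],
     [3, 1, 2, 0],
     [0, 2, 5, 1]]
print(laplace_k_rows(M, (0, 1)), leibniz_det(M))  # the two values agree
```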
Computational expense
The Laplace expansion is computationally inefficient for high-dimension matrices, with a time complexity in big O notation of $O(n!)$. Alternatively, using a decomposition into triangular matrices as in the LU decomposition can yield determinants with a time complexity of $O(n^3)$.[1]
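As a sketch of the faster route (assuming NumPy and SciPy are available; this is an illustration, not drawn from the cited reference), the determinant can be read off an LU factorization as the product of the diagonal of U times the sign of the row permutation:

```python
import numpy as np
from scipy.linalg import lu_factor

def det_via_lu(a):
    """Determinant from an LU factorization in O(n^3) time."""
    lu, piv = lu_factor(a)
    # Each pivot entry that differs from its own index records one row swap.
    swaps = np.sum(piv != np.arange(len(piv)))
    return (-1) ** swaps * np.prod(np.diag(lu))

A = np.array([[2.0, 0, 1], [1, 4, 0], [3, 1, 2]])
print(det_via_lu(A), np.linalg.det(A))  # the two values agree
```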
References
- [1] Stoer, Bulirsch: Introduction to Numerical Mathematics.
- David Poole: Linear Algebra. A Modern Introduction. Cengage Learning 2005, ISBN 0-534-99845-3, pp. 265-267.
- Harvey E. Rose: Linear Algebra. A Pure Mathematical Approach. Springer 2002, ISBN 3-7643-6905-1, pp. 57-60.