
Linear Algebra. Shilov


Posts: 70

Notes, solutions, and commentary on Linear Algebra by Georgi Shilov.

Posts: 70
0 votes RE: Linear Algebra. Shilov

1.1 Number Fields

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.1.1 Field Axioms
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Number Systems ~ Number Fields

A number field is a set K of objects, called "numbers," that is closed under the four operations of arithmetic: applying any of them to elements of K yields an element of K.

By definition, number fields require strict type consistency across mappings:
each operation is a map K × K -> K, regardless of which function (operator) is applied.

Addition :

Any two objects in K can be combined via the addition operator to give a unique object in K.

Addition is commutative, α + β = β + α

Addition is associative, (α + β) + γ = α + (β + γ)

Addition has an identity, α + 0 = α (0 must be in K)

Every element of K has a negative: for each α there is a γ in K with α + γ = 0 (γ = -α)

Multiplication :

Any two objects in K can be combined via the multiplication operator to give a unique object in K.

Multiplication is commutative, αβ = βα

Multiplication is associative, α(βγ) = (αβ)γ

Multiplication has identity, α*1 = α (1 must be in K)

Every nonzero element of K has a reciprocal: for each α ≠ 0 there is a γ in K with αγ = 1 (γ = α⁻¹)

Addition & Multiplication relation :
Distributivity defined over K, α(β + γ) = αβ + αγ


All four arithmetic operations are maps that stay inside K: combining any two elements of K via +, -, ×, ÷ always lands back in K, that is,
K × K --(+,-,×,÷)--> K

A number field is the mathematical object, call it K, that satisfies the field axioms above.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.1.2 Concrete Fields
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Not all number systems are fields.

The integers lack reciprocals (1/2 is not an integer), so the integers are not a field.

Given that all fields must contain reciprocals (as well as sums, differences, and products), every number field contains a subfield isomorphic to the field of rational numbers.

Some number systems satisfy additional axioms beyond those required to define a field; in such cases we state that the field K has properties x, y, z.

Axiom of Order :
* for α β : K exactly one of the following must hold: α = β, β < α, or α < β
* for α β γ : K if α < β and β < γ then α < γ
* for α β γ : K if α < β then α + γ < β + γ
* for α β γ : K if α < β and 0 < γ then αγ < βγ

A field that satisfies the axiom of order is known as an ordered field.

Axiom of Completeness :
If S is a subset of K that is bounded above, then S has a least upper bound, that is, sup(S) exists.

Field of Rational Numbers :

Field of all objects p/q where p,q : Z and q ≠ 0.

Field of Real Numbers :

The field of real numbers satisfies, in addition to all the field axioms, the axioms of order and completeness.
As such, the real numbers are an ordered, complete field.

Field of Complex Numbers :
Field of objects a + ib where a and b are real numbers and i is the imaginary unit (i² = -1).
The complex numbers have a subtype that is isomorphic with the reals.

Posts: 70
0 votes RE: Linear Algebra. Shilov

1.2 Problems of the Theory of Systems of Linear Equations

General form of a system of linear equations,

a₁₁x₁ + a₁₂x₂ + . . . + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + . . . + a₂ₙxₙ = b₂
. . . . . . . . . . . . . . . . .
aₖ₁x₁ + aₖ₂x₂ + . . . + aₖₙxₙ = bₖ

x₁, x₂, . . ., xₙ are the unknowns, elements of the field K.

a₁₁, a₁₂, . . . , aₖₙ are coefficients, also elements of the field K.

The first index indicates the equation, the second index indicates the unknown the coefficient corresponds to.

b₁, b₂, . . . , bₖ are constant terms, also elements of the field K.

A system that has been solved is a system whose equations are all identities.

A solution to a system is a list c₁, c₂, . . . , cₙ that, when substituted for the list x₁, x₂, . . ., xₙ, turns every equation in the system into an identity.


Not all systems of equations are solvable.

A system with at least one solution is called Compatible.

A system with zero solutions is called Incompatible.

A system with a single unique solution is called determinate.

A system with multiple solutions is called indeterminate.

Basic questions in the study of Systems of Linear Equations :
* Is the system compatible or incompatible?
* If compatible, is the system determinate or indeterminate?
* If a system is compatible and determinate, what is its solution?
* If a system is compatible and indeterminate, what is the set of solutions?
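These classifications can be sketched in code for the 2×2 case (Python; the function name and the 2×2 restriction are my own, and the sketch assumes the coefficients are not all zero):

```python
def classify_2x2(a11, a12, a21, a22, b1, b2):
    """Classify the system a11*x1 + a12*x2 = b1, a21*x1 + a22*x2 = b2.

    Assumes the coefficients a11, a12, a21, a22 are not all zero.
    """
    D = a11 * a22 - a21 * a12
    if D != 0:
        return "compatible, determinate"    # exactly one solution
    # D == 0: the two left-hand sides are proportional; the system is
    # compatible only if the constant terms respect the same proportion.
    if a11 * b2 - a21 * b1 == 0 and a12 * b2 - a22 * b1 == 0:
        return "compatible, indeterminate"  # infinitely many solutions
    return "incompatible"                   # no solution

print(classify_2x2(1, 1, 2, 2, 1, 2))  # compatible, indeterminate
print(classify_2x2(1, 1, 2, 2, 1, 3))  # incompatible
```

The nonzero-determinant test anticipates section 1.3; for D = 0 the rows are proportional and compatibility depends on the constants.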

Posts: 70
0 votes RE: Linear Algebra. Shilov

1.3 Determinants of Order n

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.3.1 Matrix and Determinant
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A square matrix is an array of n² numbers aᵢⱼ (i, j = 1, 2, . . . , n) which belong to a field K.

The number of rows (and columns) of a square matrix is called its order.

aᵢⱼ are elements of the matrix.

i index indicates row, j index indicates column.

Elements a₁₁, a₂₂, . . . , aₙₙ form the principal diagonal of a matrix.

Definition of Determinant
-----------------------------------------------------------
For a square matrix A with elements aᵢⱼ, where i indexes rows and j indexes columns (i, j = 1, 2, ..., n), we define the determinant of A by
det||aᵢⱼ|| = D = ∑ (-1)^N(α(1), α(2), ..., α(n)) aₐ₍₁₎₁aₐ₍₂₎₂ . . . aₐ₍ₙ₎ₙ
where the sum runs over all permutations α of {1, 2, ..., n} and N counts inversions (defined below).

===========================================================
1.3.1.1 Fundamentals of Permutations
===========================================================

-----------------------------------------------------------
Permutations ::
-----------------------------------------------------------
A permutation is a function α:{1,2,...,n} -> {1,2,...,n}

A permutation of the set {1,2,...,n} is an ordering of the values of that set; that ordering can be a rearrangement.

Ex. A permutation, f : {1, 2, 3, 4, 5} -> {3, 5, 4, 1, 2}

Given that a permutation is the output of a function α, each permutation of n objects corresponds to some function; hence deriving the set of all permutations of n objects amounts to deriving all functions α.

The number of permutations of n objects is always n!.

Ex. Given the set {1,2,3,4,5,6}, possible permutations are 1 2 3 4 5 6 or 6 5 4 3 2 1 or 4 1 2 3 6 5, etc.

The function α determines the order of a permutation.

Ex. If the set is {1, 2, 3} and the permutation is 2 3 1, then α(1) = 2, α(2) = 3 and α(3) = 1. α maps a position to a value from the set; hence α(1) maps the first position in the ordering to the value 2 from the set {1,2,3}.

We can obtain additional permutations via the group operation of composition.

Consider f : {1,2,3,4,5} -> {3,5,4,1,2} and g : {1,2,3,4,5} -> {3,1,5,2,4}

Composition,
f∘g : {1, 2, 3, 4, 5} -> {f(g(1)), f(g(2)), f(g(3)), f(g(4)), f(g(5))} = {f(3), f(1), f(5), f(2), f(4)} = {4, 3, 2, 5, 1}
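The composition above can be checked mechanically. A minimal sketch (Python; I store one-row permutations as 1-indexed lists, so p[i - 1] is the image of i — the representation is my own choice):

```python
# Compose two permutations given in one-row notation.
# Permutations are 1-indexed lists: p[i - 1] is the image of i.

def compose(f, g):
    """Return f∘g, i.e. the permutation i -> f(g(i))."""
    return [f[g[i - 1] - 1] for i in range(1, len(f) + 1)]

f = [3, 5, 4, 1, 2]
g = [3, 1, 5, 2, 4]
print(compose(f, g))  # [4, 3, 2, 5, 1]
```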

-----------------------------------------------------------
Symmetric Groups ::
-----------------------------------------------------------

A symmetric group is the set of all permutations of n objects.

The symmetric group is denoted Sₙ, where n is the number of objects being permuted.

Given a set of n objects has n! permutations, the number of elements of Sₙ is n!.

The group operation of a symmetric group is composition.

Ex. S₄ is the set of 4! = 24 permutations of the set {1,2,3,4}

-----------------------------------------------------------
Two-Row Notation ::
-----------------------------------------------------------

The functional form α : {1,2,...,n} -> {1,2,...,n} is known as one-row notation.

Two-row notation stacks the domain over the images so that the mapping between individual values is explicit.

Two-row notation,
α: ( 1    2   ...   n  )
   (α(1) α(2) ... α(n))

Ex. π : {1,2,3,4} -> {4,2,3,1}

π: (1 2 3 4)
   (4 2 3 1)

-----------------------------------------------------------
Cycle Notation ::
-----------------------------------------------------------
A cycle is a kind of notation for permutations: a way of writing a permutation in which the only information we record is the chains of mappings between values of α.

The rule behind the mappings is that values that reference each other, in any way, are linked into the same chain.

Ex. Consider the permutation {1,2,3,4,5} -> {3,5,4,1,2},

α(1) = 3, 1 -> 3 ;; Now 3 becomes the position of domain,
α(3) = 4, 3 -> 4 ;; Now 4 becomes the position of domain,
α(4) = 1, 4 -> 1

Notice the rule is that any value pointed to becomes the position to find the next mapping, this continues until a value maps to a previously referenced value. This is considered all one large mapping in the following way,
1 -> 3 -> 4 -> 1

This set of mappings is written as a cycle (1 3 4), a cycle with 3 unique values is called a 3-cycle.

Two values have not been referenced yet, 2 and 5.
α(2) = 5 ;; Now 5 becomes the position of domain
α(5) = 2
2 -> 5 -> 2
(2 5) a cycle with 2 unique values is called a 2-cycle or transposition

All together, (1 3 4)(2 5)

We can start at different positions, the order of our cycles may change but the cycles themselves (3-cycle, 2-cycle, etc) always remain the same. Under this notation we only care about the mappings and how they relate to cycles.


Ex. {1 2 3 4 5 6}
    {3 4 2 6 5 1}
(1 3 2 4 6)(5)

A 1-cycle (a value that maps to itself) is called an identity cycle.

We can convert cycle notation back to two-row notation (or functional notation) by reversing the process and tracking the maps.
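The value-chasing procedure above can be sketched in code (Python; one-row permutations as 1-indexed lists and cycles as tuples — the names and representation are my own):

```python
def to_cycles(p):
    """Decompose a one-row permutation (1-indexed list) into disjoint cycles."""
    seen, cycles = set(), []
    for start in range(1, len(p) + 1):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:   # follow x -> p(x) until we close the loop
            seen.add(x)
            cycle.append(x)
            x = p[x - 1]
        cycles.append(tuple(cycle))
    return cycles

print(to_cycles([3, 5, 4, 1, 2]))     # [(1, 3, 4), (2, 5)]
print(to_cycles([3, 4, 2, 6, 5, 1]))  # [(1, 3, 2, 4, 6), (5,)]
```

Starting each new cycle at the smallest unvisited value gives a canonical ordering, reflecting the remark above that different starting points change only how the cycles are written, not the cycles themselves.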

-----------------------------------------------------------
Products of Cycles ::
-----------------------------------------------------------

Finding the product of cycles allows us to deal with cycles that have overlap.

We read through our cycles from right to left, matching values to positions by tracking their mappings; the goal is to reconstruct the permutation the product represents.

Ex. Consider the cycles, (3 4 1 5)(2 3 6 1)(3 1)
We note that we are trying to map the set {1,2,3,4,5,6} to the permutation we want to find.
α(1)
1 -> 3 -> 6 ;; α(1) = 6
α(2)
2 -> 3 -> 4 ;; α(2) = 4
α(3)
3 -> 1 -> 2 ;; α(3) = 2
α(4)
4 -> 1 ;; α(4) = 1
α(5)
5 -> 3 ;; α(5) = 3
α(6)
6 -> 1 -> 5 ;; α(6) = 5
α : {1,2,3,4,5,6} -> {6,4,2,1,3,5}
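The right-to-left evaluation above can be sketched as (Python; cycles as tuples of 1-indexed values, helper names my own):

```python
def apply_cycle(c, x):
    """Image of x under a single cycle c (tuple of values)."""
    if x in c:
        i = c.index(x)
        return c[(i + 1) % len(c)]
    return x  # values not in the cycle are fixed

def product_of_cycles(cycles, n):
    """Evaluate a product of cycles right-to-left, returning one-row notation."""
    result = []
    for x in range(1, n + 1):
        for c in reversed(cycles):   # the rightmost cycle acts first
            x = apply_cycle(c, x)
        result.append(x)
    return result

print(product_of_cycles([(3, 4, 1, 5), (2, 3, 6, 1), (3, 1)], 6))
# [6, 4, 2, 1, 3, 5]
```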

-----------------------------------------------------------
Permutations as Products of Transpositions ::
-----------------------------------------------------------

All permutations can be represented as a product of transpositions (2-cycles).

Every cycle can be expressed as a product of transpositions,
(a₁a₂a₃...aₙ₋₁aₙ) = (a₁aₙ)(a₁aₙ₋₁)(a₁aₙ₋₂)...(a₁a₃)(a₁a₂)

Ex. Consider α : {1,2,3,4,5,6} -> {6,4,2,1,3,5}
We convert to cycle notation,
(1 6 5 3 2 4)
Express as product of transpositions,
(1 4)(1 2)(1 3)(1 5)(1 6)

Ex. Consider π : {1,2,3,4,5} -> {3,5,4,1,2}
Convert to cycle notation,
(1 3 4)(2 5)
Express as product of transpositions,
(1 4)(1 3)(2 5)
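The decomposition formula can be sketched as (Python; a cycle is a tuple of 1-indexed values, the helper name is mine):

```python
def cycle_to_transpositions(cycle):
    """(a1 a2 ... an) = (a1 an)(a1 a(n-1)) ... (a1 a2)."""
    a1 = cycle[0]
    # pair a1 with the remaining values in reverse order
    return [(a1, a) for a in reversed(cycle[1:])]

print(cycle_to_transpositions((1, 6, 5, 3, 2, 4)))
# [(1, 4), (1, 2), (1, 3), (1, 5), (1, 6)]
```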

-----------------------------------------------------------
Inversions ::
-----------------------------------------------------------
An inversion takes place whenever x < y but α(x) > α(y).

Ex. Given the ordered sequence 2 3 1 we say that α(1) = 2, α(2) = 3, and α(3) = 1.
α(1) < α(2) and 1 < 2, hence by definition no inversion.
α(2) > α(3) and 2 < 3, hence by definition an inversion.
α(1) > α(3) and 1 < 3, hence a second inversion.

For each α(n) there is a number of later terms in the ordered sequence that are less than α(n); we call this count βₙ

Ex. Given an ordered sequence 3 1 2 where α(1) = 3, α(2) = 1, and α(3) = 2 we know α(1) > α(2) while 1 < 2 and
α(1) > α(3) while 1 < 3, hence β₁ = 2.

The number of inversions of a permutation α is defined as N(α(1), α(2), . . ., α(n)) = β₁ + β₂ + . . . + βₙ

Ex.Consider the ordered sequence 3 2 1 where α(1) = 3, α(2) = 2, and α(3) = 1.
α(1) > α(2) and 1 < 2 ; α(1) > α(3) and 1 < 3 ---> β₁ = 2, because two inversions take place with respect to position 1.
α(2) > α(3) and 2 < 3 ----> β₂ = 1, because one inversion takes place after position 2.
α(3) has no values that follow it, so β₃ = 0
N(α(1), α(2), α(3)) = β₁ + β₂ + β₃ = 2 + 1 + 0 = 3

The parity of a permutation is even when N(α(1),α(2),...,α(n)) is even and odd when N(α(1),α(2),...,α(n)) is odd.

Ex. The ordered sequence 3 2 1 has odd parity because N(α(1), α(2), α(3)) = β₁ + β₂ + β₃ = 2 + 1 + 0 = 3
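Counting inversions can be sketched directly from the definition (Python; the helper name is my own):

```python
def inversions(p):
    """N(p): number of pairs of positions i < j with p[i] > p[j]."""
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

print(inversions([3, 2, 1]))  # 3 -> odd parity
print(inversions([3, 1, 2]))  # 2 -> even parity
```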

Posts: 70
0 votes RE: Linear Algebra. Shilov

Continuation 1.3

-----------------------------------------------------------
Transposition and Parity Equivalence ::
-----------------------------------------------------------

A permutation is even if and only if it can be written as an even number of transpositions and odd if and only if it can be written as an odd number of transpositions.

Hence, the parity of a permutation in terms of inversions is equivalent to its parity in terms of its decomposition into transpositions.

------------------------------------------------------------
Permutations are Even or Odd ::
------------------------------------------------------------

All permutations are either even or odd.

An even permutation is one with a positive sign (+) and an odd permutation is one with a negative sign (-).

We denote the sign of a permutation α as sign(α)

sign(α) = +1 if the number of inversions of α is even.
sign(α) = -1 if the number of inversions of α is odd.

so, sign(α) = (-1)^N(α(1), ..., α(n))

Given that the parity of a permutation in terms of inversions is equivalent to its parity in terms of transpositions, the notion of sign can be redefined as...

sign(α) = (-1)ᵗ where t is the number of transpositions in a decomposition of α.

===========================================================
1.3.1.2 The Determinant
===========================================================

The definition of the determinant is a specific application of the theory of permutations and symmetric groups.

Now that we have all the pieces, let's state them formally and construct the determinant as presented by Shilov.

Let α ∈ Sₙ be a permutation in the symmetric group

Let N(α) be the number of inversions (i,j) of α, where i,j ∈ {1,2,...,n}

Every permutation α ∈ Sₙ can be written as a product of transpositions τᵢⱼ ∈ Sₙ

Let the permutation α be even or odd according to sign(α) = (-1)ᵗ, where t is the number of transpositions

From a square matrix A with elements aᵢⱼ, i,j ∈ {1,2,...,n}, we can define a product containing exactly one element from each row and each column, aₐ₍₁₎₁aₐ₍₂₎₂ . . . aₐ₍ₙ₎ₙ = Πaₐ₍ₙ₎ₙ, where α(n) indexes rows and n indexes columns.

Definition of Determinant
----------------------------
Given a square matrix A with elements aᵢⱼ, i,j ∈ {1,2,...,n}, the determinant of A is
det||aᵢⱼ|| = ∑_(α∈Sₙ) sign(α) Πaₐ₍ₙ₎ₙ
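A direct, deliberately naive transcription of this definition, summing over all n! permutations (Python, standard library only; the sign is computed from the inversion count):

```python
from itertools import permutations
from math import prod

def det(A):
    """det A = sum over permutations p of sign(p) * prod_i A[p(i)][i]."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        # sign via inversion count: (-1)^N
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = prod(A[p[i]][i] for i in range(n))  # one factor per column
        total += (-1) ** inv * term
    return total

print(det([[1, 2], [3, 4]]))  # -2
```

This is O(n!·n) work, which is exactly why the properties in section 1.4 matter.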

===========================================================
1.3.1.3 Determinants as Solutions to Linear Systems
===========================================================

Determinants can be used to solve systems of linear equations.

Consider,
a₁₁x₁ + a₁₂x₂ = b₁
a₂₁x₁ + a₂₂x₂ = b₂

       |b₁ a₁₂|
       |b₂ a₂₂|     b₁a₂₂ - b₂a₁₂
x₁ = ----------- = ----------------
       |a₁₁ a₁₂|    a₁₁a₂₂ - a₂₁a₁₂
       |a₂₁ a₂₂|

       |a₁₁ b₁|
       |a₂₁ b₂|      a₁₁b₂ - a₂₁b₁
x₂ = ----------- = ----------------
       |a₁₁ a₁₂|    a₁₁a₂₂ - a₂₁a₁₂
       |a₂₁ a₂₂|
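These formulas (Cramer's rule for the 2×2 case) can be sketched as (Python; Fraction keeps the arithmetic exact, and the sketch assumes the denominator determinant is nonzero):

```python
from fractions import Fraction

def det2(a11, a12, a21, a22):
    """Second-order determinant |a11 a12; a21 a22|."""
    return a11 * a22 - a21 * a12

def cramer2(a11, a12, a21, a22, b1, b2):
    """Solve a11*x1 + a12*x2 = b1, a21*x1 + a22*x2 = b2 (assumes D != 0)."""
    D = det2(a11, a12, a21, a22)
    x1 = Fraction(det2(b1, a12, b2, a22), D)   # b replaces column 1
    x2 = Fraction(det2(a11, b1, a21, b2), D)   # b replaces column 2
    return x1, x2

x1, x2 = cramer2(1, 1, 1, -1, 3, 1)  # x1 + x2 = 3, x1 - x2 = 1
print(x1, x2)  # 2 1
```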

Posts: 70
0 votes RE: Linear Algebra. Shilov

1.4 Properties of Determinants

Using the equation det||aᵢⱼ|| = ∑ sign(α) Πaₐ₍ₙ₎ₙ to directly compute determinants is inefficient, because you need to sum n! terms, each of which is a product of n factors.

To compute determinants efficiently we take advantage of its key properties.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.4.1 The Transposition Operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The transpose of a matrix A is denoted Aᵀ and is obtained by switching the rows and columns; that is, an m×n matrix A becomes an n×m matrix Aᵀ.

Geometrically, transposing the matrix of a determinant amounts to flipping it about its principal diagonal.

The determinant of matrix Aᵀ, det(Aᵀ) = ∑ sign(α)Πaₙₐ₍ₙ₎

The determinant of A is equal to the determinant Aᵀ, that is the value of the determinant is conserved under transpose operation.

One explanation for the conservation of value is that when the matrix is flipped about the principal diagonal, every segment that had a negative slope still has a negative slope, and of course you are still dealing with the same n² values.
So, given the diagonal elements are not changed by transposition, det(A) = det(Aᵀ)

A more formal definition can be given through properties of fields and the nature of transpositions of a symmetric group.

We need to show that ∑ sign(α) Πaₐ₍ₙ₎ₙ is equivalent to ∑ sign(α) Πaₙₐ₍ₙ₎

A square matrix must be defined over a field, as such we know that the commutativity of products is a property that holds,

Πaₐ₍ₙ₎ₙ = aₐ₍₁₎₁aₐ₍₂₎₂...aₐ₍ₙ₎ₙ = a₁ₐ₍₁₎a₂ₐ₍₂₎...aₙₐ₍ₙ₎ = Πaₙₐ₍ₙ₎

The square matrices A and Aᵀ are indexed by the same symmetric group Sₙ, since the sets of permutations appearing in the expansions of A and Aᵀ are the same.

As such, the parity of each permutation is conserved, because no permutation can be decomposed into both an even and an odd number of transpositions.

Hence, ∑ sign(α) Πaₐ₍ₙ₎ₙ = ∑ sign(α) Πaₙₐ₍ₙ₎
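A numeric spot check of det(A) = det(Aᵀ), reusing the permutation-sum definition (Python; the 3×3 matrix is arbitrary made-up data):

```python
from itertools import permutations
from math import prod

def det(A):
    """Permutation-sum determinant, sign via inversion count."""
    n = len(A)
    return sum(
        (-1) ** sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        * prod(A[p[i]][i] for i in range(n))
        for p in permutations(range(n))
    )

def transpose(A):
    """Swap rows and columns."""
    return [list(row) for row in zip(*A)]

A = [[2, 7, 1], [0, 3, 4], [5, 6, 8]]
print(det(A), det(transpose(A)))  # 125 125
```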

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.4.2 The Anti-Symmetry Property
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Determinants are anti-symmetric with respect to columns and rows.

The value of a determinant will change sign when two of its columns or two of its rows are interchanged.

Consider interchanging columns j and j+1: if the segment between these columns had a positive slope then it now has a negative slope, and vice versa.
The other segments remain the same; as such exactly one slope changes sign, which changes the sign of the determinant.
This argument of course also works if we interchange rows i and i+1 instead.

None of the terms actually change, so the absolute value of the determinant remains the same.

In terms of inversions, interchanging two columns or rows changes the parity of the number of inversions from even to odd or from odd to even, depending on the original parity of the permutation in question.

We can see why the parity of the inversions changes by taking a look at the permutations in the context of transpositions.

D = ∑ (-1)^N(α(1), α(2), ..., α(n)) aₐ₍₁₎₁aₐ₍₂₎₂ . . . aₐ₍ₙ₎ₙ

In two-row notation,
( 1    2    3   . . .   n  )
(α(1) α(2) α(3) . . . α(n))

Relative to itself, this is the product of identity cycles (1)(2)(3)...(n), which implies even parity.

Interchange columns 2 and 3,
D = ∑ (-1)^N(α(1), α(3), α(2), ..., α(n)) aₐ₍₁₎₁aₐ₍₂₎₃aₐ₍₃₎₂ . . . aₐ₍ₙ₎ₙ

In two-row notation,
( 1    2    3   . . .   n  )
(α(1) α(3) α(2) . . . α(n))

As transpositions, (1)(2 3)...(n), which reduces to (2 3) and implies odd parity.


As an actual computation, it can be seen that interchanging really just makes the products being subtracted change places,

|1 2|
|3 4| = 1(4) - 3(2) = 4 - 6 = -2

interchange columns,

|2 1|
|4 3| = 2(3) - 4(1) = 6 - 4 = 2

interchange rows,

|3 4|
|1 2| = 3(2) - 1(4) = 6 - 4 = 2

In summary, when rows or columns are interchanged, all of the permutations that had even parity become odd and all of the permutations that had odd parity become even. This changes the sign of the determinant.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.4.3 Identical Columns and Identical Rows
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A determinant with two identical columns is equal to zero.

A determinant with two identical rows is equal to zero.

We know that when two columns are transposed that the determinant D changes its sign, that is D becomes -D.
If two columns are identical and you interchange them, the determinant must by definition change sign; but given the columns are identical, the matrix has not actually changed.
The only way both can hold is if a determinant with identical columns is equal to zero.

D = -D if and only if D = 0

The same reasoning applies to the case of identical rows.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.4.4 Linear Property
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If all the elements of the jth column of a determinant D are linear combinations of two columns of numbers, i.e., if
aᵢⱼ = λbᵢ + μcᵢ, where i = 1, 2, ..., n and λ and μ are fixed numbers, then D is equal to a linear combination of two determinants.

That is, D = λD₁ + μD₂

Determinants D₁ and D₂ have the same columns as the determinant D except for the jth column; the jth column of D₁ consists of the numbers bᵢ, while the jth column D₂ consists of the numbers cᵢ.

This property can be extended to any number of terms.

For instance, expanding a 3x3 determinant along its first row (each entry of that row enters linearly),

        |a b c|
let A = |d e f|
        |g h i|

det(A) = a|e f| - b|d f| + c|d e| = a(ei - hf) - b(di - gf) + c(dh - ge)
          |h i|    |g i|    |g h|
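As a quick numeric check of the linear property D = λD₁ + μD₂ (Python; the matrix entries, λ, and μ are arbitrary made-up numbers):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[1][0] * m[0][1]

lam, mu = 2, 3
b, c = [1, 4], [5, 2]   # two columns of numbers (arbitrary)
fixed = [7, 9]          # the column shared by D, D1, and D2 (arbitrary)

# The second column of D is the linear combination lam*b + mu*c.
D  = det2([[fixed[0], lam * b[0] + mu * c[0]],
           [fixed[1], lam * b[1] + mu * c[1]]])
D1 = det2([[fixed[0], b[0]], [fixed[1], b[1]]])
D2 = det2([[fixed[0], c[0]], [fixed[1], c[1]]])

print(D == lam * D1 + mu * D2)  # True
```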

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.4.5 Common Factors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Any common factor of a column (or row) of a determinant can be factored out of the determinant.

This follows directly from the linear property of the determinant.

Given a square matrix whose jth column satisfies aᵢⱼ = λbᵢ, the determinant equals λ times the determinant with bᵢ in place of the jth column, because this is just the linear property with μ = 0.

Ex. |2 4|
    |6 9| = 2(9) - 6(4) = 18 - 24 = -6

Factoring 2 out of the first row and 3 out of the second,

2·3 |1 2|
    |2 3| = 6(1(3) - 2(2)) = 6(3 - 4) = 6(-1) = -6

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.4.6 Columns or Rows of zeros
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a column or row of a determinant consists entirely of zeros, then the determinant is zero.

If a column consists of all zeros, then every term in the determinant sum contains one factor from that column, i.e. a 0. Each term is a product, so each term equals 0, and hence so does the determinant.

This same line of reasoning applies to determinants that have rows made entirely of zero.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.4.7 Products of Determinants
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The value of a determinant is not changed by adding the elements of one column multiplied by an arbitrary number to the corresponding elements of another column.

The product of two determinants is, like each factor, a sum of signed products over permutations.

The key fact is that det(A)det(B) = det(AB)

Consider A = |a b| and B = |e f|
             |c d|         |g h|

A·B = |ae + bg af + bh|
      |ce + dg cf + dh|,  det(A·B) = adeh - adfg + bcfg - bceh

det(A) = ad - bc and det(B) = eh - fg

det(A)det(B) = (ad - bc)(eh - fg) = adeh - adfg + bcfg - bceh
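A quick check of det(A)det(B) = det(AB) on 2×2 matrices (Python; the entries are arbitrary):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[1][0] * m[0][1]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(det2(A) * det2(B), det2(matmul2(A, B)))  # 4 4
```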

Posts: 46
0 votes RE: Linear Algebra. Shilov

Ah yes, of course lmao

Posts: 180
0 votes RE: Linear Algebra. Shilov

Alice?

Posts: 70
0 votes RE: Linear Algebra. Shilov

1.5 Co-factors and Minors

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.5.1 Co-factors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Co-factors are a means of computing a determinant, as well as an alternative way of defining it.

The co-factor expansion is a recursive formula.

It can be considered more efficient than our previous definition; however, it is still computationally expensive.

Consider any jth column of the determinant D and let aᵢⱼ be any element of this column.
In det||aᵢⱼ|| = ∑ sign(α) Πaₐ₍ₙ₎ₙ each term has exactly one factor from that column.
Grouping together the terms that contain a particular element aᵢⱼ, we obtain partial sums.
The sum D = a₁ⱼA₁ⱼ + a₂ⱼA₂ⱼ + ... + aₙⱼAₙⱼ is known as the expansion of D with respect to the jth column, and the value Aᵢⱼ is called the co-factor of the element aᵢⱼ.

The same process can be carried out to obtain the expansion of D with respect to the ith row.
D = aᵢ₁Aᵢ₁ + aᵢ₂Aᵢ₂ + . . . + aᵢₙAᵢₙ

The sum of all the products of the elements of any column (or row) of the determinant D with the corresponding co-factors is equal to the determinant D itself.

det||aᵢⱼ|| = ∑aᵢⱼAᵢⱼ

The sum of all the products of the elements of a column (or row) of the determinant D with the co-factors of the corresponding elements of another column (or row) is equal to zero.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.5.2 Minors and the Computation of Co-factors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If we delete a row and column from a matrix with order n, we obtain a matrix of order n-1.

The determinant of this matrix of order n-1 is known as a minor of the original nth-order matrix.

We denote the minor as Mᵢⱼ.

We get the relation Aᵢⱼ = (-1)ⁱ⁺ʲMᵢⱼ

Hence computing a co-factor boils down to computing its corresponding minors.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.5.3 Computing the Determinant
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Computing with columns,

D = (-1)¹⁺ʲa₁ⱼM₁ⱼ + (-1)²⁺ʲa₂ⱼM₂ⱼ + . . . + (-1)ⁿ⁺ʲaₙⱼMₙⱼ

That is, det||aᵢⱼ|| = ∑ᵢ (-1)ⁱ⁺ʲaᵢⱼMᵢⱼ

Computing with rows,

D = (-1)ⁱ⁺¹aᵢ₁Mᵢ₁ + (-1)ⁱ⁺²aᵢ₂Mᵢ₂ + . . . + (-1)ⁱ⁺ⁿaᵢₙMᵢₙ

That is, det||aᵢⱼ|| = ∑ⱼ (-1)ⁱ⁺ʲaᵢⱼMᵢⱼ
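The row-expansion formula gives a compact recursive determinant (Python; 0-indexed, expanding along the first row — the helper names are my own):

```python
def minor(A, i, j):
    """Matrix A with row i and column j deleted (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Co-factor expansion along the first row: sum of (-1)^j * a[0][j] * M[0][j]."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

print(det([[2, 7, 1], [0, 3, 4], [5, 6, 8]]))  # 125
```

Note this still does O(n!) work in the worst case; its value here is that it mirrors the formula, not that it is fast.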

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1.5.4 Examples
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-----------------------------------------------------------
Third Order Determinant : :
-----------------------------------------------------------

A third order determinant can be expanded in six unique ways: three ways using columns and three ways using rows.

Expansion using first row,

|a₁₁ a₁₂ a₁₃|
|a₂₁ a₂₂ a₂₃| = a₁₁|a₂₂ a₂₃| - a₁₂|a₂₁ a₂₃| + a₁₃|a₂₁ a₂₂|
|a₃₁ a₃₂ a₃₃|      |a₃₂ a₃₃|      |a₃₁ a₃₃|      |a₃₁ a₃₂|

-----------------------------------------------------------
Triangular nth Order Determinant : :
-----------------------------------------------------------

     |a₁₁ 0   0   . . . 0  |
     |a₂₁ a₂₂ 0   . . . 0  |
Dₙ = |a₃₁ a₃₂ a₃₃ . . . 0  |
     | .   .   .  . . . .  |
     |aₙ₁ aₙ₂ aₙ₃ . . . aₙₙ|

Expanding Dₙ with respect to row one, we see that Dₙ = a₁₁Dₙ₋₁, where Dₙ₋₁ is the triangular determinant of order n-1,

       |a₂₂ 0   . . . 0  |
       |a₃₂ a₃₃ . . . 0  |
Dₙ₋₁ = | .   .  . . . .  |
       |aₙ₂ aₙ₃ . . . aₙₙ|

Expanding Dₙ₋₁ with respect to its first row gives Dₙ₋₁ = a₂₂Dₙ₋₂.

As we count down through the triangular determinants we finally obtain D = a₁₁a₂₂a₃₃...aₙₙ

Hence, a triangular determinant is equivalent to the product of its diagonal.
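A spot check that a triangular determinant equals the product of its diagonal, using the permutation-sum definition (Python; the entries are arbitrary):

```python
from itertools import permutations
from math import prod

def det(A):
    """Permutation-sum determinant, sign via inversion count."""
    n = len(A)
    return sum(
        (-1) ** sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        * prod(A[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

T = [[2, 0, 0],
     [7, 3, 0],
     [1, 4, 5]]
print(det(T), 2 * 3 * 5)  # 30 30
```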

 

Posts: 70
0 votes RE: Linear Algebra. Shilov

Example, Vandermonde

-----------------------------------------------------------
Vandermonde Matrix : :
-----------------------------------------------------------
The Vandermonde matrix Vₙ is an n×n matrix whose jth column is the vector (x₁ʲ⁻¹, x₂ʲ⁻¹, . . . , xₙʲ⁻¹) for
j = 1, 2, 3, . . . , n.


The most common use case for the Vandermonde Matrix is in interpolation.

Suppose we want to compute a polynomial pₙ₋₁(x) = aₙxⁿ⁻¹ + aₙ₋₁xⁿ⁻² + ... + a₁ of degree n-1 that interpolates the points {(xᵢ, yᵢ)} for i = 1, 2, ..., n.

We can compute the coefficients via VₙᵀA = f, where A = [a₁, a₂, ..., aₙ]ᵀ and f = [y₁, y₂, ..., yₙ]ᵀ.

We now take the determinant of Vₙ.

-----------------------------------------------------------
Vandermonde Determinant : :
-----------------------------------------------------------

det(Vₙ) = Π_(1<=i<m<=n) (xₘ - xᵢ)

The polynomial vanishes whenever two of the variables coincide: if one substitutes xᵢ for xₘ (i ≠ m), the determinant is 0 due to identical rows.

By the factor theorem, if a polynomial in xₙ vanishes at the values x₁, x₂, ..., xₙ₋₁, then v(x₁,...,xₙ) is divisible by the product (xₙ - x₁) . . . (xₙ - xₙ₋₁).

Repeating this for each variable, and using the fact that the polynomial ring is a factorial ring (unique factorization), the product of all the factors (xₘ - xᵢ) with i < m divides det(Vₙ).

Hence, v(x₁,...,xₙ) = a(x₁,...,xₙ)Π(xₘ - xᵢ)

The quantity a(x₁,...,xₙ) is the leading coefficient of the polynomial v(x₁,...,xₙ).

Each row i contributes total degree i-1 to every term, so every term in the determinant has degree 0 + 1 + ... + (n-1) = n(n-1)/2; the polynomial is homogeneous of that degree, and so is the product Π(xₘ - xᵢ).

As such the coefficient a(x₁,...,xₙ) is a constant.

Given the principal diagonal of Vₙ contributes the term x₂x₃² . . . xₙⁿ⁻¹ with coefficient 1, the constant a(x₁,...,xₙ) must equal 1.

Therefore, det(Vₙ) = Π_(1<=i<m<=n) (xₘ - xᵢ)
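A numeric check of the Vandermonde formula (Python; the nodes xs are arbitrary distinct integers, and det is the permutation-sum definition from earlier):

```python
from itertools import permutations
from math import prod

def det(A):
    """Permutation-sum determinant, sign via inversion count."""
    n = len(A)
    return sum(
        (-1) ** sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        * prod(A[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

xs = [2, 3, 5, 7]                                     # distinct nodes
V = [[x ** j for j in range(len(xs))] for x in xs]    # row i: 1, x_i, x_i^2, ...
lhs = det(V)
rhs = prod(xs[m] - xs[i]
           for i in range(len(xs)) for m in range(i + 1, len(xs)))
print(lhs == rhs)  # True
```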

last edit on 6/1/2022 7:57:11 PM