
Properties of linearly dependent and linearly independent matrix rows and columns.

Linear independence of matrix rows

Let a matrix A of size m × n be given.

Let us denote the rows of the matrix as follows: e1 = (a11, a12, …, a1n), e2 = (a21, a22, …, a2n), …, em = (am1, am2, …, amn).

Two rows are called equal if their corresponding elements are equal.

Let us introduce the operations of multiplying a row by a number and adding rows as operations carried out element by element: λek = (λak1, λak2, …, λakn); ek + es = (ak1 + as1, ak2 + as2, …, akn + asn).

Definition. A row e is called a linear combination of the rows e1, e2, …, ek of the matrix if it is equal to the sum of the products of these rows by arbitrary real numbers: e = λ1e1 + λ2e2 + … + λkek.

Definition. The rows of the matrix are called linearly dependent if there exist numbers λ1, λ2, …, λm, not all equal to zero at the same time, such that the linear combination of the rows of the matrix is equal to the zero row:

λ1e1 + λ2e2 + … + λmem = 0, where 0 = (0, 0, …, 0). (1.1)

Linear dependence of the rows of a matrix means that at least one row of the matrix is a linear combination of the rest.

Definition. If the linear combination of rows (1.1) is equal to the zero row only when all the coefficients λi are equal to zero, then the rows are called linearly independent.

Matrix rank theorem. The rank of a matrix is equal to the maximum number of its linearly independent rows (columns), through which all its other rows (columns) can be linearly expressed.

The theorem plays a fundamental role in matrix analysis, in particular, in the study of systems of linear equations.
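As a quick numerical illustration of this connection (not part of the original notes), linear dependence of rows can be checked by comparing the rank with the number of rows. The matrix below is a made-up example in which the third row equals e1 + 2·e2; numpy is assumed.

```python
import numpy as np

# Hypothetical example rows (not from the text): e3 = e1 + 2*e2,
# so the three rows are linearly dependent.
A = np.array([
    [1.0, 2.0, 3.0],
    [0.0, 1.0, 1.0],
    [1.0, 4.0, 5.0],
])

rank = np.linalg.matrix_rank(A)
print("rank =", rank)                 # 2 < 3 rows
print("rows dependent:", rank < A.shape[0])   # True
```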

6, 13, 14, 15, 16. Vectors. Operations on vectors (addition, subtraction, multiplication by a number), n-dimensional vectors. The concept of a vector space and its basis.

A vector is a directed segment with a starting point A and an end point B (it can be translated parallel to itself).

A vector can be denoted either by two uppercase letters (its start and end points) or by one lowercase letter with a bar or an arrow over it.

The length (or modulus) of a vector is the number equal to the length of the segment AB representing the vector.

Vectors lying on the same straight line or on parallel lines are called collinear.

If the beginning and the end of a vector coincide, such a vector is called the zero vector and is denoted by 0. The length of the zero vector is zero: |0| = 0.

1) The product of a vector a by a number λ is a vector λa whose length equals |λ|·|a|; its direction coincides with the direction of a if λ > 0 and is opposite to it if λ < 0.

2) The opposite vector -a is the product of the vector a by the number (-1), i.e. -a = (-1)·a.

3) The sum of two vectors a and b is the vector a + b whose beginning coincides with the beginning of a and whose end coincides with the end of b, provided that the beginning of b coincides with the end of a (the triangle rule). The sum of several vectors is defined similarly.



4) The difference of two vectors a and b is the sum of the vector a and the vector -b opposite to b.

Scalar product

Definition: The scalar product of two vectors is the number equal to the product of the lengths of these vectors by the cosine of the angle between them: (a, b) = |a|·|b|·cos φ.
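A small sketch of these vector operations in Python with numpy (the vectors are arbitrary examples, not taken from the text); it also checks that the "length times length times cosine" definition of the scalar product agrees with the componentwise sum of products.

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 2.0])

# Linear operations on vectors
s = a + b          # sum (triangle rule, componentwise)
d = a - b          # difference: a + (-b)
c = 2.5 * a        # multiplication by a number

# Scalar product: |a| * |b| * cos(phi) coincides with a1*b1 + a2*b2
cos_phi = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.dot(a, b))                                        # 11.0
print(np.linalg.norm(a) * np.linalg.norm(b) * cos_phi)     # 11.0
```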

n-dimensional vector and vector space

Definition. An n-dimensional vector is an ordered collection of n real numbers written as x = (x1, x2, …, xn), where xi is the i-th component of the vector x.

The concept of an n-dimensional vector is widely used in economics; for example, a certain set of goods can be characterized by the vector x = (x1, x2, …, xn), and the corresponding prices by the vector y = (y1, y2, …, yn).

- Two n-dimensional vectors are equal if and only if their corresponding components are equal, i.e. x = y if xi = yi, i = 1, 2, …, n.

- The sum of two vectors of the same dimension n is the vector z = x + y whose components are equal to the sums of the corresponding components of the summand vectors, i.e. zi = xi + yi, i = 1, 2, …, n.

- The product of a vector x by a real number λ is the vector λx whose components are equal to the products of λ by the corresponding components of x, i.e. (λx)i = λxi, i = 1, 2, …, n.

Linear operations on any vectors satisfy the following properties:



1) x + y = y + x (the commutative property of the sum);

2) (x + y) + z = x + (y + z) (the associative property of the sum);

3) α(βx) = (αβ)x (the associative property with respect to a numerical factor);

4) α(x + y) = αx + αy (the distributive property with respect to the sum of vectors);

5) (α + β)x = αx + βx (the distributive property with respect to the sum of numerical factors);

6) there exists a zero vector 0 such that x + 0 = x for any vector x (the special role of the zero vector);

7) for any vector x there exists an opposite vector -x such that x + (-x) = 0;

8) 1·x = x for any vector x (the special role of the numerical factor 1).

Definition. The set of vectors with real components, on which operations of vector addition and multiplication of a vector by a number satisfying the above eight properties (considered as axioms) are defined, is called a vector space.
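A brief sketch (assuming numpy; the vectors and scalars are arbitrary choices, not from the text) showing the componentwise operations of an n-dimensional vector space and spot-checking a few of the eight axioms numerically.

```python
import numpy as np

n = 4
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, -1.0, 2.0, 0.0])
alpha, beta = 2.0, -3.0

# Componentwise operations of R^n
z = x + y                  # z_i = x_i + y_i
w = alpha * x              # (alpha*x)_i = alpha * x_i

# Spot-check a few of the eight axioms
assert np.allclose(x + y, y + x)                          # 1) commutativity
assert np.allclose(alpha * (x + y), alpha*x + alpha*y)    # 4) distributivity over vectors
assert np.allclose((alpha + beta) * x, alpha*x + beta*x)  # 5) distributivity over numbers
assert np.allclose(x + np.zeros(n), x)                    # 6) zero vector
assert np.allclose(x + (-x), np.zeros(n))                 # 7) opposite vector
print("axioms hold on this example")
```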

Dimension and basis of vector space

Definition. A linear space is called n-dimensional if it contains n linearly independent vectors, while any n + 1 of its vectors are already dependent. In other words, the dimension of a space is the maximum number of linearly independent vectors it contains. The number n is called the dimension of the space and is denoted dim.

A collection of n linearly independent vectors of an n-dimensional space is called a basis.

7. Eigenvectors and eigenvalues of a matrix. The characteristic equation of a matrix.

Definition. A nonzero vector x is called an eigenvector of a linear operator A if there is a number λ such that Ax = λx.

The number λ is called an eigenvalue of the operator (of the matrix A) corresponding to the vector x.

This can be written in matrix form:

A·X = λ·X,

where X is the column matrix of the coordinates of the vector x, or in expanded form:

a11x1 + a12x2 + … + a1nxn = λx1,
a21x1 + a22x2 + … + a2nxn = λx2,
…
an1x1 + an2x2 + … + annxn = λxn.

Let us rewrite the system so that there are zeros on the right-hand sides:

(a11 - λ)x1 + a12x2 + … + a1nxn = 0,
a21x1 + (a22 - λ)x2 + … + a2nxn = 0,
…
an1x1 + an2x2 + … + (ann - λ)xn = 0,

or in matrix form: (A - λE)X = 0. The resulting homogeneous system always has the zero solution. For a nonzero solution to exist, it is necessary and sufficient that the determinant of the system vanish: |A - λE| = 0.

The determinant |A - λE| is a polynomial of degree n in λ. It is called the characteristic polynomial of the operator (of the matrix A), and the equation |A - λE| = 0 is the characteristic equation of the operator (of the matrix A).
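For illustration, a small numpy sketch: the 2×2 matrix below is hypothetical (it is not the matrix of the worked example that follows). The eigenvalues found as roots of the characteristic polynomial det(A - λE) = 0 agree with those returned by numpy's eigen-decomposition.

```python
import numpy as np

# Hypothetical 2x2 matrix (not the matrix of the example below)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Roots of the characteristic polynomial det(lambda*E - A) = 0 ...
char_poly = np.poly(A)            # coefficients of the characteristic polynomial
eigenvalues = np.roots(char_poly)
print(eigenvalues)                # [5. 2.]

# ... agree with numpy's eigen-decomposition: A @ v = lambda * v
vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```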

Example:

Find the eigenvalues and eigenvectors of the linear operator given by the matrix A.

Solution. We compose the characteristic equation |A - λE| = 0 and solve it, finding the eigenvalues of the linear operator.

We then find the eigenvector corresponding to each eigenvalue λ. To do this, we solve the matrix equation (A - λE)X = 0:

Writing this system out in coordinates and solving it, we express the coordinates of the eigenvector; setting the free coordinate to an arbitrary nonzero value, we obtain a family of eigenvectors of the linear operator corresponding to the given eigenvalue.

The eigenvector corresponding to the other eigenvalue is found similarly.

8. A system of m linear equations with n variables (general form). Matrix notation of such a system. Solution of a system (definition). Consistent and inconsistent, definite and indefinite systems of linear equations.

Solving a system of m linear equations with n unknowns

Systems of linear equations are widely used in economics.

The system of m linear equations with n variables has the form:

a11x1 + a12x2 + … + a1nxn = b1,
a21x1 + a22x2 + … + a2nxn = b2,
…
am1x1 + am2x2 + … + amnxn = bm,

where aij and bi (i = 1, 2, …, m; j = 1, 2, …, n) are arbitrary numbers called, respectively, the coefficients of the variables and the free terms of the equations.

Short notation: ai1x1 + ai2x2 + … + ainxn = bi (i = 1, 2, …, m).

Definition. A solution of the system is a set of values x1 = k1, x2 = k2, …, xn = kn such that, upon their substitution, each equation of the system turns into a true equality.

1) A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions.

2) A consistent system of equations is called definite if it has a unique solution, and indefinite if it has more than one solution.

3) Two systems of equations are called equivalent if they have the same set of solutions.

Let's write the system in matrix form:

Let us denote:

A = (aij), X = (x1, x2, …, xn)ᵀ, B = (b1, b2, …, bm)ᵀ, where

A is the matrix of coefficients of the variables (the matrix of the system), X is the column matrix of the variables, and B is the column matrix of the free terms.

Since the number of columns of the matrix A is equal to the number of rows of the matrix X, their product A·X is a column matrix. The elements of the resulting matrix are the left-hand sides of the initial system. By the definition of equality of matrices, the initial system can be written in the form A·X = B.

Cramer's theorem. Let Δ be the determinant of the matrix of the system A, and let Δj be the determinant of the matrix obtained from A by replacing the j-th column with the column of free terms. Then, if Δ ≠ 0, the system has a unique solution determined by the formulas:

xj = Δj / Δ, j = 1, 2, …, n (Cramer's formulas).
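A minimal sketch of Cramer's formulas in Python with numpy; the function name cramer_solve and the sample system are illustrative choices, not taken from the example below.

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's formulas: x_j = Delta_j / Delta (requires Delta != 0)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    delta = np.linalg.det(A)
    if np.isclose(delta, 0.0):
        raise ValueError("Delta = 0: Cramer's formulas are not applicable")
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b            # replace the j-th column with the column of free terms
        x[j] = np.linalg.det(Aj) / delta
    return x

# Hypothetical system (not the one from the example in the text)
A = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]
b = [4, 5, 6]
print(cramer_solve(A, b))        # matches np.linalg.solve(A, b)
```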

Example. Solve a system of equations using Cramer's formulas

Solution. The determinant of the system matrix is Δ ≠ 0; therefore, the system has a unique solution. We calculate Δ1, Δ2, Δ3, obtained by replacing, respectively, the first, second, and third columns of the matrix with the column of free terms:

By Cramer's formulas: x1 = Δ1/Δ, x2 = Δ2/Δ, x3 = Δ3/Δ.

9. The Gaussian method for solving a system of m linear equations with n variables. The concept of the Jordan-Gauss method.

The Gauss method is the method of successive elimination of variables.

The Gauss method consists in reducing the system of equations, by means of elementary transformations of rows and permutations of columns, to an equivalent system of stepwise (or triangular) form, from which all the variables are found successively, starting with the last (by number).

It is convenient to carry out the Gaussian transformations not on the equations themselves, but on the augmented matrix of their coefficients, obtained by appending the column of free terms B to the matrix A:

(A | B).

It should be noted that the Gauss method can be used to solve any system of equations of the form A·X = B.
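Below is a minimal sketch of the Gauss method for a square nondegenerate system, assuming numpy; the function gauss_solve and the test system are illustrative and are not taken from the example that follows.

```python
import numpy as np

def gauss_solve(A, b):
    """Gauss method: forward elimination to stepwise (triangular) form,
    then back substitution. A sketch for a square nondegenerate system."""
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])  # (A | B)
    n = M.shape[0]
    for k in range(n):
        # partial pivoting: put the largest pivot in position (k, k)
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]    # make zeros below the pivot
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                # back substitution, last variable first
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

# Hypothetical system (not the one in the example below)
A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print(gauss_solve(A, b))      # [ 2.  3. -1.]
```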

Example. Solve the system using the Gauss method:

Let us write out the augmented matrix of the system.

Step 1. Swap the first and second rows so that the leading element a11 becomes equal to 1.

Step 2. Multiply the elements of the first row by (-2) and (-1) and add them to the elements of the second and third rows, respectively, so that zeros appear under the leading element in the first column.

The following theorems are true for consistent systems of linear equations:

Theorem 1. If the rank of the matrix of a consistent system is equal to the number of variables, i.e. r = n, then the system has a unique solution.

Theorem 2. If the rank of the matrix of a consistent system is less than the number of variables, i.e. r < n, then the system is indefinite and has an infinite number of solutions.

Definition. A basic minor of a matrix is any nonzero minor whose order is equal to the rank of the matrix.

Definition. Those unknowns whose coefficients enter the basic minor are called basic (principal); the remaining unknowns are called free (non-principal).

To solve the system of equations in this case means to express the basic unknowns (the determinant composed of their coefficients is not equal to zero) in terms of the remaining, free unknowns.

Let us express the basic variables in terms of free ones.

From the second row of the resulting matrix, we express one basic variable;

from the first row, we express the other.

Together these expressions give the general solution of the system of equations.

Let k rows and k columns (k ≤ min(m, n)) be chosen arbitrarily in a matrix A of size m × n. The elements of the matrix at the intersection of the selected rows and columns form a square matrix of order k, whose determinant is called a minor Mk of order k of the matrix A.

The rank of a matrix A is the maximum order r of its nonzero minors, and any nonzero minor of order r is a basic minor. Notation: rank A = r. If rank A = rank B and the sizes of the matrices A and B coincide, then the matrices A and B are called equivalent. Notation: A ~ B.

The main methods for calculating the rank of a matrix are the bordering minors method and the method of elementary transformations.

The Bordering Minors Method

The essence of the bordering minors method is as follows. Suppose a nonzero minor of order k has already been found in the matrix. Then only those minors of order k + 1 are considered further that contain (i.e., border) this nonzero minor of order k. If all of them are equal to zero, then the rank of the matrix is k; otherwise, among the bordering minors of order k + 1 there is a nonzero one, and the whole procedure is repeated.
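A rough Python sketch of this procedure (the function name, tolerance, and test matrix are arbitrary choices, not from the text); it starts from a nonzero element and repeatedly looks for a nonzero bordering minor of the next order.

```python
import numpy as np

def rank_by_bordering_minors(A, tol=1e-10):
    """Bordering minors method: starting from a nonzero 1x1 minor, inspect only
    those minors of order k+1 that border the current nonzero minor of order k."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    idx = np.argwhere(np.abs(A) > tol)     # a nonzero first-order minor (an element)
    if idx.size == 0:
        return 0
    rows, cols = [idx[0][0]], [idx[0][1]]
    while True:
        found = False
        for i in range(m):
            if i in rows:
                continue
            for j in range(n):
                if j in cols:
                    continue
                r, c = rows + [i], cols + [j]
                if abs(np.linalg.det(A[np.ix_(r, c)])) > tol:  # nonzero bordering minor
                    rows, cols = r, c
                    found = True
                    break
            if found:
                break
        if not found:                       # all bordering minors are zero
            return len(rows)

A = np.array([[1, 2, 3], [2, 4, 6], [0, 1, 1]], float)
print(rank_by_bordering_minors(A), np.linalg.matrix_rank(A))   # 2 2
```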

Linear independence of rows (columns) of a matrix

The concept of the rank of a matrix is ​​closely related to the concept of linear independence of its rows (columns).

The rows e1, e2, …, ek of a matrix are called linearly dependent if there exist numbers λ1, λ2, …, λk, not all equal to zero, such that the equality

λ1e1 + λ2e2 + … + λkek = 0

holds. The rows of the matrix A are called linearly independent if this equality is possible only when all the numbers λ1 = λ2 = … = λk = 0.

The linear dependence and independence of the columns of the matrix A are determined in a similar way.

If some row (al) of the matrix A, where (al) = (al1, al2, …, aln), can be represented as (al) = λ1(a1) + λ2(a2) + … + λk(ak), then the row (al) is called a linear combination of the rows (a1), (a2), …, (ak).

The concept of a linear combination of columns is defined in a similar way. The following basic minor theorem is true.

Basic rows and basic columns are linearly independent. Any row (or column) of the matrix A is a linear combination of the basic rows (columns), that is, of the rows (columns) passing through the basic minor. Thus, the rank of the matrix A, rank A = k, is equal to the maximum number of linearly independent rows (columns) of the matrix A.

That is, the rank of a matrix is the order of the largest square submatrix of the given matrix whose determinant is nonzero. If the original matrix is not square, or if it is square but its determinant is zero, then square submatrices of lower order are formed from arbitrarily chosen rows and columns.

Apart from using determinants, the rank of a matrix can be found as the number of its linearly independent rows or columns; these two numbers always coincide. For example, if a 3 × 5 matrix has 3 linearly independent rows, then its rank is three.

Examples of finding the rank of a matrix

Using the bordering minors method, find the rank of the matrix

Solution. A nonzero second-order minor M2 is found, and a third-order minor M3 bordering M2 is also nonzero. However, both fourth-order minors bordering M3 are equal to zero. Therefore, the rank of the matrix A is 3, and a basic minor is, for example, the minor M3 indicated above.

The method of elementary transformations is based on the fact that elementary transformations of a matrix do not change its rank. Using these transformations, one can bring the matrix to a form in which all its elements, except a11, a22, …, arr (r ≤ min(m, n)), are equal to zero. This obviously means that rank A = r. Note that if a matrix of order n has the form of an upper triangular matrix, i.e. a matrix in which all elements below the main diagonal are equal to zero, then its determinant is equal to the product of the elements on the main diagonal. This property can be used when calculating the rank of a matrix by the method of elementary transformations: one reduces the matrix to triangular (stepwise) form and then, selecting the corresponding determinant, concludes that the rank of the matrix is equal to the number of nonzero elements on the main diagonal.
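A small sketch of this approach with numpy (the function name, tolerance, and test matrix are illustrative): it reduces a copy of the matrix to stepwise form by elementary row transformations and returns the number of nonzero rows.

```python
import numpy as np

def rank_by_elementary_transformations(A, tol=1e-10):
    """Reduce the matrix to stepwise form by elementary row transformations
    and count the nonzero rows; this number is the rank."""
    M = np.asarray(A, dtype=float).copy()
    m, n = M.shape
    row = 0
    for col in range(n):
        pivot = row + np.argmax(np.abs(M[row:, col]))      # choose a pivot in this column
        if abs(M[pivot, col]) < tol:
            continue                                       # no pivot here, next column
        M[[row, pivot]] = M[[pivot, row]]                  # swap rows
        M[row + 1:] -= np.outer(M[row + 1:, col] / M[row, col], M[row])  # zero out below
        row += 1
        if row == m:
            break
    return row

B = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 0, 1]])
print(rank_by_elementary_transformations(B), np.linalg.matrix_rank(B))   # 2 2
```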

Using the method of elementary transformations, find the rank of the matrix

Solution. Let us denote the i-th row of the matrix A by the symbol α i. At the first stage, we perform elementary transformations

At the second stage, we will perform the transformations

As a result, we get

A system of vectors of the same order is called linearly dependent if the zero vector can be obtained from these vectors by means of a suitable linear combination (it is not allowed that all coefficients of the linear combination are equal to zero, since that would be trivial). Otherwise, the vectors are called linearly independent. For example, the following three vectors:

are linearly dependent, as is easy to check. In the case of linear dependence, some vector can always be expressed as a linear combination of the remaining vectors (in this example each of the three can be, as the appropriate calculations show). This implies the following definition: a vector is linearly independent of other vectors if it cannot be represented as a linear combination of them.

Consider a system of vectors without specifying in advance whether it is linearly dependent or linearly independent. For every system consisting of column vectors it is possible to identify the maximum possible number of linearly independent vectors. This number, denoted by r, is the rank of the given system of vectors. Since every matrix can be viewed as a system of column vectors, the rank of a matrix is defined as the maximum number of linearly independent column vectors it contains. Row vectors can also be used to determine the rank of a matrix; both approaches give the same result for the same matrix, and the rank cannot exceed the smaller of m and n. The rank of a square matrix of order n ranges from 0 to n. If all the vectors are zero, the rank of such a matrix is zero; if all the vectors are linearly independent of each other, the rank of the matrix is n. If a matrix is formed from the vectors of the example above, the rank of this matrix is 2: since any one of the vectors can be obtained from the other two by a linear combination, the rank is less than 3.

But one can verify that any two of these vectors are linearly independent; hence the rank is 2.

A square matrix is called degenerate (singular) if its column vectors or row vectors are linearly dependent. The determinant of such a matrix is equal to zero and its inverse matrix does not exist, as noted above; these statements are equivalent to one another. Consequently, a square matrix is called non-degenerate, or non-singular, if its column vectors and row vectors are linearly independent. The determinant of such a matrix is not equal to zero and its inverse matrix exists (compare with p. 43).

The rank of a matrix has an obvious geometric interpretation. If the rank of the matrix equals r, then the r-dimensional space is said to be spanned by its vectors. If the rank is r < n, the vectors lie in an r-dimensional subspace that contains all of them. So the rank of the matrix corresponds to the minimum required dimension of the space "in which all the vectors are contained"; an r-dimensional subspace of an n-dimensional space is called an r-dimensional hyperplane. The rank of the matrix corresponds to the smallest dimension of the hyperplane in which all the vectors still lie.

Consider an arbitrary, not necessarily square, m × n matrix A.

The rank of the matrix.

The concept of the rank of a matrix is connected with the concept of linear dependence (independence) of the rows (columns) of the matrix. Let us consider this concept for rows; for columns everything is analogous.

Let us denote the rows of the matrix A:

e1 = (a11, a12, …, a1n); e2 = (a21, a22, …, a2n); …; em = (am1, am2, …, amn)

ek = es if akj = asj, j = 1, 2, …, n.

Arithmetic operations on the rows of a matrix (addition, multiplication by a number) are introduced as operations performed element by element: λek = (λak1, λak2, …, λakn);

ek + es = [(ak1 + as1), (ak2 + as2), …, (akn + asn)].

A row e is called a linear combination of the rows e1, e2, …, ek if it is equal to the sum of the products of these rows by arbitrary real numbers:

e = λ1e1 + λ2e2 + … + λkek

The rows e1, e2, …, em are called linearly dependent if there exist real numbers λ1, λ2, …, λm, not all equal to zero, such that the linear combination of these rows is equal to the zero row: λ1e1 + λ2e2 + … + λmem = 0, where 0 = (0, 0, …, 0). (1)

If the linear combination is equal to zero if and only if all the coefficients λi are equal to zero (λ1 = λ2 = … = λm = 0), then the rows e1, e2, …, em are called linearly independent.

Theorem 1. For the rows e1, e2, …, em to be linearly dependent, it is necessary and sufficient that one of these rows be a linear combination of the rest.

Proof. Necessity. Let the rows e1, e2, …, em be linearly dependent. For definiteness, let λm ≠ 0 in (1); then

em = (-λ1/λm)e1 + (-λ2/λm)e2 + … + (-λm-1/λm)em-1.

Thus the row em is a linear combination of the remaining rows. Q.E.D.

Sufficiency. Let one of the rows, for example em, be a linear combination of the rest. Then there exist numbers λ1, λ2, …, λm-1 such that em = λ1e1 + λ2e2 + … + λm-1em-1, which can be rewritten as

λ1e1 + λ2e2 + … + λm-1em-1 + (-1)em = 0,

where at least one of the coefficients, namely (-1), is not equal to zero. Hence the rows are linearly dependent. Q.E.D.

Definition. A minor of order k of a matrix A of size m × n is a determinant of order k whose elements lie at the intersection of any k rows and any k columns of the matrix A (k ≤ min(m, n)).

Example. For a matrix of order 3, the first-order minors are its individual elements, the second-order minors are the determinants of its 2 × 2 submatrices, and the third-order minor is the determinant of the matrix itself.

A matrix of the 3rd order has 9 minors of the 1st order, 9 minors of the 2nd order and 1 minor of the 3rd order (the determinant of the matrix).

Definition. The rank of the matrix A is the highest order of the nonzero minors of this matrix. Notation: rg A or r(A).

Matrix rank properties.

1) The rank of an m × n matrix A does not exceed the smaller of its dimensions, that is, r(A) ≤ min(m, n).

2) r(A) = 0 when all elements of the matrix are equal to 0, i.e. A = 0.

3) For a square matrix A of order n, r(A) = n when A is nondegenerate.



(The rank of a diagonal matrix is equal to the number of its nonzero diagonal elements.)

4) If the rank of a matrix is r, then the matrix has at least one minor of order r that is not equal to zero, and all minors of higher orders are equal to zero.

For matrix ranks the following relations hold:

1) r(A + B) ≤ r(A) + r(B);

2) r(A + B) ≥ |r(A) - r(B)|;

3) r(AB) ≤ min(r(A), r(B));

4) r(AᵀA) = r(A);

5) r(AB) = r(A) if B is a square nondegenerate matrix;

6) r(AB) ≥ r(A) + r(B) - n, where n is the number of columns of the matrix A (equal to the number of rows of the matrix B).
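These relations can be spot-checked numerically; the sketch below (with arbitrary random integer matrices, assuming numpy) verifies them on one example and is, of course, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 5)).astype(float)
B = rng.integers(-3, 4, size=(5, 3)).astype(float)
C = rng.integers(-3, 4, size=(4, 5)).astype(float)
r = np.linalg.matrix_rank

# relations from the list above, checked on random integer matrices
assert r(A + C) <= r(A) + r(C)
assert r(A + C) >= abs(r(A) - r(C))
assert r(A @ B) <= min(r(A), r(B))
assert r(A.T @ A) == r(A)
assert r(A @ B) >= r(A) + r(B) - 5      # n = 5 columns of A (= rows of B)
print("all rank relations hold on this example")
```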

Definition. A nonzero minor of order r(A) is called a basic minor. (A matrix A may have several basic minors.) The rows and columns at whose intersection a basic minor is located are called, respectively, basic rows and basic columns.

Theorem 2 (on the basic minor). Basic rows (columns) are linearly independent. Any row (any column) of the matrix A is a linear combination of the basic rows (columns).

Proof (for rows). If the basic rows were linearly dependent, then by Theorem 1 one of these rows would be a linear combination of the other basic rows; then, without changing the value of the basic minor, one could subtract this linear combination from that row and obtain a zero row, which contradicts the fact that the basic minor is nonzero. Thus, the basic rows are linearly independent.

Let us prove that any row of the matrix A is a linear combination of the basic rows. Since under permutations of rows (columns) a determinant preserves the property of being equal to zero (or not), we may assume without loss of generality that the basic minor is in the upper left corner of the matrix

A, i.e. it occupies the first r rows and the first r columns. Let 1 ≤ j ≤ n, 1 ≤ i ≤ m. Let us show that the determinant of order (r + 1), obtained by bordering the basic minor with the elements of the i-th row and the j-th column, is equal to zero.

If j ≤ r or i ≤ r, then this determinant is equal to zero, since it will have two identical columns or two identical rows.

If j > r and i > r, then this determinant is a minor of order (r + 1) of the matrix A. Since the rank of the matrix is r, any minor of higher order is equal to 0.

Expanding it according to the elements of the last (added) column, we get

a1jA1j + a2jA2j + … + arjArj + aijAij = 0, where the last algebraic complement Aij coincides with the basic minor Mr, and therefore Aij = Mr ≠ 0.

Dividing the last equality by Aij, we can express the element aij as a linear combination: aij = λ1a1j + λ2a2j + … + λrarj, where the coefficients λs = -Asj/Aij do not depend on j.

Fixing the value i (i > r), we obtain that for any j (j = 1, 2, …, n) the elements of the i-th row ei are linearly expressed through the elements of the rows e1, e2, …, er, i.e. the i-th row is a linear combination of the basic rows: ei = λ1e1 + λ2e2 + … + λrer. Q.E.D.

Theorem 3 (a necessary and sufficient condition for a determinant to vanish). For a determinant D of order n to be equal to zero, it is necessary and sufficient that its rows (columns) be linearly dependent.

Proof (p. 40). Necessity. If the determinant D of order n is equal to zero, then the basic minor of its matrix has order r < n; hence at least one row is not a basic row and, by the basic minor theorem, is a linear combination of the basic rows.

Thus, one row is a linear combination of the others. Then, by Theorem 1, the rows of the determinant are linearly dependent.

Sufficiency. If the rows of D are linearly dependent, then by Theorem 1 one row Ai is a linear combination of the remaining rows. Subtracting this linear combination from the row Ai without changing the value of D, we obtain a zero row. Therefore, by the properties of determinants, D = 0. Q.E.D.

Theorem 4. Elementary transformations do not change the rank of the matrix.

Proof. As was shown when considering the properties of determinants, under elementary transformations of square matrices their determinants either do not change, or are multiplied by a nonzero number, or change sign. Consequently, the highest order of the nonzero minors of the original matrix is preserved, i.e. the rank of the matrix does not change. Q.E.D.

If r(A) = r(B), then A and B are called equivalent: A ~ B.

Theorem 5. With the help of elementary transformations, a matrix can be reduced to stepwise form. A matrix of size k × n is called stepwise if aii ≠ 0, i = 1, 2, …, r (r ≤ k), and all elements below these leading elements, as well as the rows after the r-th, are zero.

The condition r ≤ k can always be achieved by transposition.

Theorem 6. The rank of a stepwise matrix is equal to the number of its nonzero rows.

That is, the rank of the stepwise matrix is r, since it has a nonzero minor of order r (for example, the minor formed by the first r rows and the first r columns).

Note that the rows and columns of a matrix can be viewed as arithmetic vectors of dimensions n and m, respectively. Thus, an m × n matrix can be interpreted as a collection of m n-dimensional row vectors or of n m-dimensional column vectors. By analogy with geometric vectors, we introduce the concepts of linear dependence and linear independence of the rows and columns of a matrix.

4.8.1. Definition. A row e is called a linear combination of the rows e1, e2, …, ek with coefficients λ1, λ2, …, λk if the equality

e = λ1e1 + λ2e2 + … + λkek

is true for all elements of this row, i.e. element by element.

4.8.2. Definition.

The rows e1, e2, …, em are called linearly dependent if there is a nontrivial linear combination of them equal to the zero row, i.e. there are numbers λ1, λ2, …, λm, not all equal to zero, such that

λ1e1 + λ2e2 + … + λmem = 0.

4.8.3. Definition.

The rows e1, e2, …, em are called linearly independent if only their trivial linear combination is equal to the zero row, i.e.

λ1e1 + λ2e2 + … + λmem = 0 only when λ1 = λ2 = … = λm = 0.

4.8.4. Theorem. (Criterion for linear dependence of matrix rows)

For the rows to be linearly dependent, it is necessary and sufficient that at least one of them be a linear combination of the others.

Proof:

Necessity. Let the rows e1, e2, …, em be linearly dependent; then there is a nontrivial linear combination of them equal to the zero row:

λ1e1 + λ2e2 + … + λmem = 0.

Without loss of generality, we will assume that the first coefficient λ1 of the linear combination is nonzero (otherwise the rows can be renumbered). Dividing this relation by λ1, we get

e1 = (-λ2/λ1)e2 + … + (-λm/λ1)em,

that is, the first row is a linear combination of the rest.

Sufficiency. Let one of the rows, for example e1, be a linear combination of the others:

e1 = λ2e2 + … + λmem,

then

(-1)e1 + λ2e2 + … + λmem = 0,

that is, there is a nontrivial linear combination of the rows equal to the zero row (the coefficient of e1 is -1 ≠ 0), which means that the rows are linearly dependent, as required.

Comment.

Similar definitions and statements can be formulated for the columns of a matrix.

§4.9. The rank of the matrix.

4.9.1. Definition. A minor of order k of a matrix of size m × n is the determinant of order k with elements located at the intersection of some k of its rows and k of its columns.

4.9.2. Definition. A nonzero minor of order r of a matrix of size m × n is called a basic minor if all minors of order r + 1 of the matrix are equal to zero.

Comment. A matrix can have several basic minors; obviously, they will all be of the same order. A case is also possible when a matrix of size m × n has a nonzero minor of order min(m, n), while minors of the next order do not exist.

4.9.3. Definition. The rows (columns) forming the base minor are called basic rows (columns).

4.9.4. Definition. The rank of a matrix is the order of its basic minor. The rank of a matrix A is denoted rang A or r(A).

Comment.

Note that, since the rows and columns of a determinant have equal rights, the rank of a matrix does not change when it is transposed.

4.9.5. Theorem. (Invariance of the rank of the matrix under elementary transformations)

The rank of the matrix does not change under its elementary transformations.

No proof.

4.9.6. Theorem. (About the basic minor).

Base rows (columns) are linearly independent. Any row (column) of a matrix can be represented as a linear combination of its basic rows (columns).

Proof:

Let us carry out the proof for strings. The proof of the statement for columns can be carried out by analogy.

Let the rank of a matrix of size m × n be equal to r, and let M be a basic minor. Without loss of generality, assume that the basic minor is located in the upper left corner (otherwise the matrix can be brought to this form using elementary transformations).

Let us first prove the linear independence of the basic rows, arguing by contradiction. Suppose the basic rows are linearly dependent. Then, by Theorem 4.8.4, one of them can be represented as a linear combination of the remaining basic rows. Therefore, if we subtract this linear combination from that row, we get a zero row, which means that the minor M is equal to zero, contradicting the definition of a basic minor. Thus we have obtained a contradiction, and the linear independence of the basic rows is proved.

Let us now prove that any row of the matrix can be represented as a linear combination of the basic rows. If the number of the row in question is from 1 to r, then it can obviously be represented as a linear combination with coefficient 1 for this row and zero coefficients for the remaining rows. Let us now show that if the row number k is from r + 1 to m, it can be represented as a linear combination of the basic rows. Consider the minor M′ of the matrix obtained from the basic minor M by adding the k-th row and an arbitrary j-th column:

Let us show that this minor M′ is equal to zero for any row number k from r + 1 to m and for any column number j from 1 to n.

Indeed, if the column number j is from 1 to r, then we have a determinant with two identical columns, which is obviously equal to zero. If the column number j is from r + 1 to n and the row number k is from r + 1 to m, then M′ is a minor of the original matrix of higher order than the basic minor, which means it is equal to zero by the definition of a basic minor. Thus it is proved that the minor M′ is zero for any row number k from r + 1 to m and for any column number j from 1 to n. Expanding it along the last column, we get:

a1jA1 + a2jA2 + … + arjAr + akjAk = 0.

Here A1, A2, …, Ar, Ak are the corresponding algebraic complements. Notice that Ak ≠ 0, since it coincides with the basic minor M. Hence the elements of the row k can be represented as a linear combination of the corresponding elements of the basic rows with coefficients λi = -Ai/Ak that do not depend on the column number j:

akj = λ1a1j + λ2a2j + … + λrarj, j = 1, 2, …, n.

Thus, we have proved that an arbitrary row of the matrix can be represented as a linear combination of its basic rows. The theorem is proved.

Lecture 13

4.9.7. Theorem. (On the rank of a nondegenerate square matrix)

For a square matrix to be non-degenerate, it is necessary and sufficient that the rank of the matrix is ​​equal to the size of this matrix.

Proof:

Necessity. Let a square matrix of size n be nondegenerate; then det A ≠ 0, and therefore the determinant of the matrix is a basic minor, i.e. r(A) = n.

Sufficiency. Let r(A) = n; then the order of the basic minor is equal to the size of the matrix, so the basic minor is the determinant of the matrix A, i.e. det A ≠ 0 by the definition of a basic minor.

Corollary.

For a square matrix to be nondegenerate, it is necessary and sufficient that its rows be linearly independent.

Proof:

Necessity. Since the square matrix is nondegenerate, its rank is equal to the size of the matrix, r(A) = n, that is, the determinant of the matrix is a basic minor. Consequently, by Theorem 4.9.6 on the basic minor, the rows of the matrix are linearly independent.

Sufficiency. Since all the rows of the matrix are linearly independent, its rank is not less than the size of the matrix, so r(A) = n; hence, by the previous Theorem 4.9.7, the matrix is nondegenerate.

4.9.8. The method of bordering minors for finding the rank of a matrix.

Note that this method has already been partially implicitly described in the proof of the basic minor theorem.

4.9.8.1. Definition. A minor M′ is called a bordering minor with respect to a minor M if it is obtained from the minor M by adding one new row and one new column of the original matrix.

4.9.8.2. The procedure for finding the rank of a matrix by the bordering minors method.

    Find any nonzero minor of the matrix and take it as the current minor.

    Calculate all the minors bordering it.

    If they are all equal to zero, then the current minor is basic, and the rank of the matrix is equal to the order of the current minor.

    If at least one nonzero minor is found among the bordering minors, it is taken as the current one and the procedure continues.

Let us find, using the method of bordering minors, the rank of the matrix


It is easy to indicate a current nonzero minor of the second order, M2.

We calculate the minors bordering it:




Since all the bordering minors of the third order are equal to zero, the minor M2 is basic, that is, the rank of the matrix equals 2.
Comment. From the considered example, it can be seen that the method is quite laborious. Therefore, in practice, the method of elementary transformations is much more often used, which will be discussed below.

4.9.9. Finding the rank of a matrix by the method of elementary transformations.

Based on Theorem 4.9.5, it can be argued that the rank of the matrix does not change under elementary transformations (that is, the ranks of the equivalent matrices are equal). Therefore, the rank of the matrix is ​​equal to the rank of the stepped matrix obtained from the original one by elementary transformations. The rank of a stepped matrix is ​​obviously equal to the number of its nonzero rows.

We define the rank of the matrix

by the method of elementary transformations.

Let us reduce the matrix to stepwise form:

The number of nonzero rows of the resulting stepwise matrix is three; therefore, the rank of the matrix is 3.

4.9.10. The rank of a system of vectors in a linear space.

Consider a system of vectors of some linear space. If it is linearly dependent, then a linearly independent subsystem can be distinguished in it.

4.9.10.1. Definition. The rank of a system of vectors of a linear space is the maximum number of linearly independent vectors of this system. The rank of the system of vectors a1, a2, …, ak is denoted r(a1, a2, …, ak).

Comment. If a system of vectors is linearly independent, then its rank is equal to the number of vectors in the system.

Let us formulate a theorem showing the connection between the concepts of the rank of a system of vectors in a linear space and the rank of a matrix.

4.9.10.2. Theorem. (On the rank of a system of vectors in a linear space)

The rank of a system of vectors in a linear space is equal to the rank of a matrix whose columns or rows are the coordinates of vectors in some basis of the linear space.

No proof.

Corollary.

For a system of vectors in a linear space to be linearly independent, it is necessary and sufficient that the rank of a matrix, columns or rows of which are the coordinates of vectors in a certain basis, is equal to the number of vectors of the system.

The proof is obvious.

4.9.10.3. Theorem (on the dimension of the linear span).

The dimension of the linear span of the vectors a1, a2, …, ak of a linear space is equal to the rank of this system of vectors:

dim span(a1, a2, …, ak) = r(a1, a2, …, ak).

No proof.
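As a closing illustration (not from the original lecture), the rank of a system of vectors, and hence the dimension of their linear span, can be computed as the rank of the matrix of their coordinates; the vectors below are hypothetical, with a3 deliberately chosen as a linear combination of a1 and a2.

```python
import numpy as np

# Hypothetical vectors of a linear space, given by their coordinates in some basis
a1 = np.array([1.0, 0.0, 2.0, 1.0])
a2 = np.array([0.0, 1.0, 1.0, 0.0])
a3 = a1 + 2 * a2           # deliberately a linear combination of a1 and a2

# The rank of the vector system equals the rank of the matrix whose rows are
# the coordinates, and it equals the dimension of the linear span.
M = np.vstack([a1, a2, a3])
r = np.linalg.matrix_rank(M)
print("rank of the system / dim of the span =", r)    # 2
print("linearly independent:", r == M.shape[0])       # False
```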