Solving a linear matrix equation. The inverse matrix

Suppose we are given a system of linear equations in which the number of unknowns equals the number of equations:

We will assume that the main matrix A is non-singular. Then, by Theorem 3.1, there exists an inverse matrix A -1. Multiplying the matrix equation AX = B on the left by the matrix A -1, and using Definition 3.2 as well as statement 8) of Theorem 1.1, we obtain the formula on which the matrix method for solving systems of linear equations is based:

X = A -1 B.

Comment. Note that, unlike the Gauss method, the matrix method for solving systems of linear equations has limited application: it can solve only those systems of linear equations in which, first, the number of unknowns equals the number of equations and, second, the main matrix is non-singular.
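As a numerical illustration of the formula X = A -1 B, here is a minimal sketch in Python with NumPy. The 3×3 system is an arbitrary example invented for this sketch (it is not the example solved in the text):

```python
import numpy as np

# Illustrative system (invented for this sketch):
#   2x + y - z = 1,   x + 3y + 2z = 13,   x - y + z = 2
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, -1.0, 1.0]])
B = np.array([1.0, 13.0, 2.0])

# The matrix method applies only to a square, non-singular main matrix.
assert A.shape[0] == A.shape[1]
assert abs(np.linalg.det(A)) > 1e-12

X = np.linalg.inv(A) @ B   # X = A^{-1} B
```

Substituting X back gives A·X = B, which is how a solution can be checked; in production code np.linalg.solve(A, B) is preferable to forming the inverse explicitly.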

Example. Solve a system of linear equations using the matrix method.

A system of three linear equations in three unknowns is given, where

The main matrix of the system of equations is non-singular, since its determinant is non-zero:

We construct the inverse matrix A -1 using one of the methods described in Section 3.

Using the formula of the matrix method for solving systems of linear equations, we obtain

5.3. Cramer's method

This method, like the matrix method, is applicable only to systems of linear equations in which the number of unknowns coincides with the number of equations. Cramer's method is based on the theorem of the same name:

Theorem 5.2. A system of linear equations in which the number of unknowns equals the number of equations,

whose main matrix is non-singular, has a unique solution, which can be obtained using the formulas

x i = Δ i / Δ, i = 1, …, n,

where Δ is the determinant of the main matrix and Δ i is the determinant of the matrix obtained from the main matrix of the system by replacing its i-th column with the column of free terms.
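The formulas of Theorem 5.2 translate directly into a short routine. A sketch in Python with NumPy; the function name cramer is ours, and the determinants are computed with np.linalg.det:

```python
import numpy as np

def cramer(A, b):
    """Cramer's method: x_i = det(A_i) / det(A), where A_i is the main
    matrix with its i-th column replaced by the column of free terms."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("the main matrix is singular: Cramer's method does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                  # replace the i-th column by b
        x[i] = np.linalg.det(Ai) / d  # x_i = Δ_i / Δ
    return x
```

For the invented system x + y = 3, x − y = 1 this returns (2, 1).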

Example. Let us find the solution of the system of linear equations considered in the previous example using Cramer's method. The main matrix of the system of equations is non-singular, since
Let's calculate the determinants



Using the formulas presented in Theorem 5.2, we calculate the values of the unknowns:

6. Study of systems of linear equations.

Basic solution

To study a system of linear equations means to determine whether the system is consistent or inconsistent and, if it is consistent, whether it is determinate (has a unique solution) or indeterminate (has infinitely many solutions).

The consistency condition for a system of linear equations is given by the following theorem.

Theorem 6.1 (Kronecker–Capelli).

A system of linear equations is consistent if and only if the rank of the main matrix of the system is equal to the rank of its extended matrix.

For a consistent system of linear equations, the question of whether it is determinate or indeterminate is settled by the following theorems.

Theorem 6.2. If the rank of the main matrix of a consistent system is equal to the number of unknowns, then the system is determinate.

Theorem 6.3. If the rank of the main matrix of a consistent system is less than the number of unknowns, then the system is indeterminate.

Thus, the formulated theorems yield a method for studying systems of linear algebraic equations. Let n be the number of unknowns.

Then:


Definition 6.1. The basic solution of an indefinite system of linear equations is a solution in which all free unknowns are equal to zero.
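The study procedure given by Theorems 6.1–6.3 amounts to two rank computations. A sketch in Python with NumPy; the function name study_system is ours:

```python
import numpy as np

def study_system(A, b):
    """Kronecker-Capelli study: the system A x = b is consistent iff
    rank(A) == rank of the extended matrix [A | b]; a consistent system
    is determinate iff that rank equals the number of unknowns."""
    A = np.asarray(A, dtype=float)
    r_main = np.linalg.matrix_rank(A)
    r_ext = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r_main != r_ext:
        return "inconsistent"      # Theorem 6.1
    if r_main == A.shape[1]:
        return "determinate"       # Theorem 6.2
    return "indeterminate"         # Theorem 6.3
```

For an indeterminate system, a basic solution is then obtained by setting the free unknowns to zero and solving for the basic ones.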

Example. Study the system of linear equations; if the system is indeterminate, find its basic solution.

Let us calculate the ranks of the main and extended matrices of this system of equations; to do so, we reduce the extended matrix of the system (and with it the main matrix) to row echelon form:

To the second, third and fourth rows of the matrix we add the first row multiplied by suitable factors; we obtain the matrix

To the third row of this matrix we add the second row multiplied by a suitable factor, and to the fourth row the second row multiplied by a suitable factor. As a result, we obtain the matrix

Removing the zero third and fourth rows, we obtain the row echelon matrix

Thus,

Consequently, this system of linear equations is consistent, and since the rank is less than the number of unknowns, the system is indeterminate. The echelon matrix obtained as a result of the elementary transformations corresponds to the system of equations

The unknowns corresponding to the leading entries of the echelon matrix are the basic ones, and the remaining unknowns are free. Assigning zero values to the free unknowns, we obtain a basic solution of this system of linear equations.

A system of m linear equations with n unknowns is a system of the form

where a ij and b i (i=1,…,m; j=1,…,n) are known numbers and x 1 ,…,x n are the unknowns. In the notation of the coefficients a ij, the first index i denotes the number of the equation and the second index j the number of the unknown that the coefficient multiplies.

We write the coefficients of the unknowns in the form of a matrix, which we call the matrix of the system.

The numbers b 1 ,…,b m on the right-hand sides of the equations are called the free terms.

A collection of n numbers c 1 ,…,c n is called a solution of the system if every equation of the system becomes an equality after the numbers c 1 ,…,c n are substituted for the corresponding unknowns x 1 ,…,x n.

Our task is to find the solutions of the system. Three situations may arise: the system has a unique solution, infinitely many solutions, or no solutions.

A system of linear equations that has at least one solution is called consistent. Otherwise, i.e. if the system has no solutions, it is called inconsistent.

Let's consider ways to find solutions to the system.


MATRIX METHOD FOR SOLVING SYSTEMS OF LINEAR EQUATIONS

Matrices make it possible to briefly write down a system of linear equations. Let a system of 3 equations with three unknowns be given:

Consider the matrix of the system and the column matrices of the unknowns and of the free terms:

Let us find the product

i.e., as a result of the multiplication we obtain the left-hand sides of the equations of the system. Then, using the definition of matrix equality, this system can be written in the form

or, more briefly, AX = B.

Here the matrices A and B are known, while the matrix X is unknown; it must be found, since its elements are the solution of this system. This equation is called a matrix equation.

Let the determinant of the matrix A be nonzero: |A| ≠ 0. Then the matrix equation is solved as follows. Multiply both sides of the equation on the left by the matrix A -1, the inverse of A. Since A -1 A = E and EX = X, we obtain the solution of the matrix equation in the form X = A -1 B.

Note that, since the inverse matrix exists only for square matrices, the matrix method can solve only those systems in which the number of equations coincides with the number of unknowns. The matrix form of the system can, however, be written even when the number of equations differs from the number of unknowns; then the matrix A is not square, and the solution of the system cannot be found in the form X = A -1 B.
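The restriction to square matrices can be seen directly in code: a sketch assuming NumPy, with an invented consistent system of three equations in two unknowns. Inverting the non-square matrix fails, although the matrix form AX = B itself is still meaningful:

```python
import numpy as np

# Invented overdetermined system: x + y = 3, x - y = 1, 2x = 4.
A = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [2.0, 0.0]])
B = np.array([3.0, 1.0, 4.0])

try:
    np.linalg.inv(A)          # only square matrices are invertible
    invertible = True
except np.linalg.LinAlgError:
    invertible = False

# The matrix form A X = B still makes sense; a least-squares solution
# exists, but it is not obtained as X = A^{-1} B.
X, *_ = np.linalg.lstsq(A, B, rcond=None)
```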

Examples. Solve systems of equations.

CRAMER'S RULE

Consider a system of 3 linear equations with three unknowns:

The third-order determinant corresponding to the matrix of the system, i.e. composed of the coefficients of the unknowns, is called the determinant of the system.

Let us form three more determinants as follows: in the determinant Δ we successively replace columns 1, 2 and 3 with the column of free terms

Then we can prove the following result.

Theorem (Cramer's rule). If the determinant of the system Δ ≠ 0, then the system under consideration has one and only one solution, and

Proof. Consider the system of 3 equations with three unknowns. Multiply the 1st equation of the system by the algebraic complement A 11 of the element a 11, the 2nd equation by A 21, and the 3rd by A 31:

Let us add these equations:

Consider each of the brackets and the right-hand side of this equation. By the theorem on the expansion of a determinant along the elements of the 1st column, the first bracket equals Δ. Similarly, it can be shown that the second and the third brackets equal zero. Finally, it is easy to notice that the right-hand side equals Δ 1.

Thus, we obtain the equality Δ·x 1 = Δ 1. Hence, x 1 = Δ 1 /Δ.

The equalities x 2 = Δ 2 /Δ and x 3 = Δ 3 /Δ are derived similarly, from which the statement of the theorem follows.

Thus, if the determinant of the system Δ ≠ 0, the system has a unique solution, and conversely. If the determinant of the system equals zero, the system either has infinitely many solutions or has no solutions at all, i.e. is inconsistent.

Examples. Solve the systems of equations.


GAUSS METHOD

The previously discussed methods can be used to solve only those systems in which the number of equations coincides with the number of unknowns and the determinant of the system is nonzero. The Gauss method is more universal and is suitable for systems with any number of equations. It consists in the successive elimination of unknowns from the equations of the system.

Consider again a system of three equations with three unknowns:


We leave the first equation unchanged and eliminate from the 2nd and 3rd equations the terms containing x 1. To do this, divide the second equation by a 21, multiply it by –a 11 and add the first equation to it. Similarly, divide the third equation by a 31, multiply it by –a 11 and add the first equation to it. As a result, the original system takes the form:

Now we eliminate from the last equation the term containing x 2. To do this, we divide the third equation by the coefficient of x 2, multiply it by the appropriate factor and add the second equation to it. We then obtain the system of equations:

From the last equation it is now easy to find x 3, then from the 2nd equation x 2 and, finally, from the 1st x 1.

When using the Gaussian method, the equations can be swapped if necessary.

Often, instead of rewriting the system of equations at each step, one limits oneself to writing out the extended matrix of the system:

and then bring it to a triangular or diagonal form using elementary transformations.

The elementary transformations of a matrix are the following:

  1. interchanging rows (or columns);
  2. multiplying a row by a nonzero number;
  3. adding to one row another row (possibly multiplied by a number).
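The forward and backward passes described above can be collected into one routine. A sketch in Python with NumPy; the row swap of transformation 1 is used as partial pivoting so that a zero pivot does not stop the method:

```python
import numpy as np

def gauss_solve(A, b):
    """Gauss method: forward elimination of the extended matrix [A | b]
    to triangular form, then back substitution."""
    M = np.column_stack([np.asarray(A, float), np.asarray(b, float)])
    n = len(b)
    for k in range(n):                               # forward pass
        pivot = k + np.argmax(np.abs(M[k:, k]))      # best row for column k
        if abs(M[pivot, k]) < 1e-12:
            raise ValueError("singular (or rank-deficient) main matrix")
        M[[k, pivot]] = M[[pivot, k]]                # elementary transformation 1
        for i in range(k + 1, n):
            M[i] -= (M[i, k] / M[k, k]) * M[k]       # zero out below the pivot
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):                   # backward pass
        x[k] = (M[k, -1] - M[k, k + 1:n] @ x[k + 1:]) / M[k, k]
    return x
```

For the invented system 2x + y − z = 1, x + 3y + 2z = 13, x − y + z = 2 this returns (1, 2, 3).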

Examples: Solve systems of equations using the Gauss method.


Thus, the system has an infinite number of solutions.

Solving systems of linear algebraic equations using the matrix method (using the inverse matrix).

    Let a system of linear algebraic equations be given in matrix form A·X = B, where the matrix A has dimension n × n and its determinant is nonzero.

    Since the determinant of A is nonzero, the matrix A is invertible, that is, there exists an inverse matrix A -1. Multiplying both sides of the equality on the left by A -1, we obtain a formula for finding the column matrix of unknown variables: X = A -1 B. This is how the solution of a system of linear algebraic equations is obtained by the matrix method.

    Example. Solve the system of linear algebraic equations by the matrix method.

    Let's rewrite the system of equations in matrix form:

    Since the determinant of the main matrix is nonzero, the SLAE can be solved by the matrix method. Using the inverse matrix, the solution of this system can be found as X = A -1 B.

    Let us construct the inverse matrix from the matrix of algebraic complements of the elements of A (if necessary, see the article on methods for finding the inverse matrix):

    It remains to calculate the column matrix of unknown variables by multiplying the inverse matrix by the column matrix of free terms (if necessary, see the article on operations on matrices):

    or, in other notation, x 1 = 4, x 2 = 0, x 3 = -1 .

    The main difficulty in solving systems of linear algebraic equations by the matrix method is the labor of finding the inverse matrix, especially for square matrices of order higher than the third.

    For a more detailed description of the theory and additional examples, see the article matrix method for solving systems of linear equations.


    Solving systems of linear equations using the Gauss method.

    Suppose we need to find a solution of a system of n linear equations in n unknown variables whose main matrix has a nonzero determinant.

    The essence of the Gauss method is the successive elimination of unknown variables: first x 1 is eliminated from all equations of the system starting with the second, then x 2 is eliminated from all equations starting with the third, and so on, until only the unknown variable x n remains in the last equation. This process of transforming the equations of the system so as to eliminate the unknown variables successively is called the forward pass of the Gauss method. After the forward pass is completed, x n is found from the last equation; using this value, x n-1 is computed from the next-to-last equation, and so on; finally, x 1 is found from the first equation. The process of computing the unknown variables while moving from the last equation of the system to the first is called the backward pass of the Gauss method.

    Let us briefly describe the algorithm for eliminating unknown variables.

    We will assume that a 11 ≠ 0, since we can always achieve this by rearranging the equations of the system. We eliminate the unknown variable x 1 from all equations of the system starting with the second: to the second equation we add the first multiplied by –a 21 /a 11, to the third equation we add the first multiplied by –a 31 /a 11, and so on; to the n-th equation we add the first multiplied by –a n1 /a 11. After these transformations the system takes a form in which x 1 appears only in the first equation.

    We would arrive at the same result if we expressed x 1 through the other unknown variables in the first equation of the system and substituted the resulting expression into all the other equations. Thus the variable x 1 is excluded from all equations starting with the second.

    Next, we proceed in a similar way, but only with the part of the resulting system that is marked in the figure.

    To do this, to the third equation of the system we add the second multiplied by an appropriate factor, to the fourth equation we add the second multiplied by an appropriate factor, and so on; to the n-th equation we add the second multiplied by an appropriate factor. After these transformations, the variable x 2 is excluded from all equations starting with the third.

    Next we proceed to eliminating the unknown x 3 , in this case we act similarly with the part of the system marked in the figure

    We continue the forward pass of the Gauss method in this way until the system takes the form

    From this moment we begin the backward pass of the Gauss method: we calculate x n from the last equation; using the obtained value of x n, we find x n-1 from the next-to-last equation, and so on; finally, we find x 1 from the first equation.
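The backward pass on its own can be sketched as follows (Python with NumPy; U stands for the upper-triangular main matrix and c for the right-hand side produced by the forward pass, both invented names):

```python
import numpy as np

def back_substitute(U, c):
    """Backward pass: U is upper triangular with nonzero diagonal,
    c is the transformed right-hand side; solve U x = c from the bottom up."""
    U = np.asarray(U, dtype=float)
    c = np.asarray(c, dtype=float)
    n = len(c)
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        # x_k = (c_k - already-known terms) / u_kk
        x[k] = (c[k] - U[k, k + 1:] @ x[k + 1:]) / U[k, k]
    return x
```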

    Example. Solve the system of linear equations by the Gauss method.

    We eliminate the unknown variable x 1 from the second and third equations of the system. To do this, to both sides of the second and third equations we add the corresponding sides of the first equation multiplied by the appropriate factors:

    Now let us eliminate x 2 from the third equation by adding to its left- and right-hand sides the left- and right-hand sides of the second equation multiplied by the appropriate factor:

    This completes the forward pass of the Gauss method; we begin the backward pass.

    From the last equation of the resulting system of equations we find x 3 :

    From the second equation we get .

    From the first equation we find the remaining unknown variable, thereby completing the backward pass of the Gauss method.

    x 1 = 4, x 2 = 0, x 3 = -1 .

    For more detailed information and additional examples, see the section on solving elementary systems of linear algebraic equations using the Gauss method.



    Matrix method for solving systems of linear equations

    Consider the following system of linear equations:

    By the definition of the inverse matrix we have A −1 A=E, where E is the identity matrix. Therefore (4) can be written as follows:

    Thus, to solve the system of linear equations (1) (or (2)), it suffices to multiply the inverse matrix A −1 by the right-hand-side vector b.

    Examples of solving a system of linear equations using the matrix method

    Example 1. Solve the following system of linear equations using the matrix method:

    Let us find the inverse of the matrix A using the Jordan–Gauss method: to the right of the matrix A we write the identity matrix:

    Let us eliminate the elements of the 1st column of the matrix below the main diagonal. To do this, we add to rows 2 and 3 row 1 multiplied by -1/3 and -1/3, respectively:

    Let us eliminate the elements of the 2nd column of the matrix below the main diagonal. To do this, we add to row 3 row 2 multiplied by -24/51:

    Let us eliminate the elements of the 2nd column of the matrix above the main diagonal. To do this, we add to row 1 row 2 multiplied by -3/17:

    We separate the right-hand side of the matrix; the resulting matrix is the inverse of A:

    The matrix form of the system of linear equations is Ax=b, where

    Let's calculate all algebraic complements of the matrix A:


    where A ij is the algebraic complement of the element of the matrix A located at the intersection of the i-th row and the j-th column, and Δ is the determinant of the matrix A.

    Using the inverse matrix formula, we get:

    Example. Study the system for consistency and solve it: by the Gauss method; using Cramer's formulas; by matrix calculus.

    Solution. By the Kronecker–Capelli theorem, a system is consistent if and only if the rank of the matrix of the system is equal to the rank of its extended matrix, i.e. r(A)=r(A 1), where

    The extended matrix of the system looks like:

    Multiply the first row by ( –3 ) and the second by ( 2 ); then add the elements of the first row to the corresponding elements of the second row, and subtract the third row from the second. In the resulting matrix we leave the first row unchanged.

    ( 6 ), and swap the second and third rows:

    Multiply the second row by ( –11 ) and add it to the corresponding elements of the third row.

    Divide the elements of the third row by ( 10 ).

    Let's find the determinant of the matrix A.

    Hence, r(A)=3 . The rank of the extended matrix r(A 1) is also equal to 3 , i.e.

    r(A)=r(A 1)=3 ⇒ the system is consistent.

    1) When examining the system for consistency, the extended matrix was transformed using the Gaussian method.

    The Gauss method is as follows:

    1. Reduce the matrix to triangular form, i.e. with zeros below the main diagonal (forward pass).

    2. From the last equation find x 3 and substitute it into the second to find x 2; knowing x 3 and x 2, substitute them into the first equation and find x 1 (backward pass).

    Let us write the Gaussian-transformed extended matrix

    in the form of a system of three equations:

    From the third equation: x 3 =1 .

    From the second equation: x 2 = x 3 ⇒ x 2 =1 .

    From the first equation: 2x 1 =4+x 2 +x 3 ⇒ 2x 1 =4+1+1 ⇒ 2x 1 =6 ⇒ x 1 =3 .

    2) Let us solve the system using Cramer's formulas: if the determinant Δ of the system of equations is nonzero, then the system has a unique solution, which is found using the formulas

    Let us calculate the determinant of the system Δ:

    Since the determinant of the system is nonzero, according to Cramer's rule the system has a unique solution. Let us calculate the determinants Δ 1 , Δ 2 , Δ 3 . They are obtained from the determinant Δ of the system by replacing the corresponding column with the column of free coefficients.

    We find the unknowns using the formulas:

    Answer: x 1 =3, x 2 =1, x 3 =1 .

    3) Let's solve the system using matrix calculus, i.e. using the inverse matrix.

    A×X=B ⇒ X=A -1 × B, where A -1 is the inverse matrix of A,

    B is the column of free terms,

    X is the column matrix of unknowns.

    The inverse matrix is calculated using the formula:

    where D is the determinant of the matrix A and A ij are the algebraic complements of the elements a ij of the matrix A. D= 60 (from the previous paragraph). The determinant is nonzero; therefore, the matrix A is invertible, and its inverse can be found using formula (*). Let us find the algebraic complements of all the elements of the matrix A using the formula:



    A ij =(-1 ) i+j M ij .
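The cofactor formula above yields the inverse matrix through the adjugate, A -1 = (1/D)·Cᵀ, where C is the matrix of algebraic complements. A sketch in Python with NumPy (for orders higher than the third this is far slower than elimination):

```python
import numpy as np

def inverse_via_cofactors(A):
    """A^{-1} = (1/det A) * C^T, where C[i, j] = (-1)^(i+j) * M_ij
    and M_ij is the minor obtained by deleting row i and column j."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("the matrix is singular and has no inverse")
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / d   # transpose of the cofactor matrix over the determinant
```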

    Check: since the found values x 1, x 2, x 3 turn each equation into an identity, they were found correctly.

    Example 6. Solve the system using the Gauss method and find two of its basic solutions.