4.07: LU Decomposition for Solving Simultaneous Linear Equations
After successful completion of this lesson, you should be able to: 1) solve a set of simultaneous linear equations using the LU decomposition method 2) decompose a nonsingular square matrix into the LU form.
I have heard about LU decomposition being used as a method to solve a set of simultaneous linear equations. What is it?
We already studied two numerical methods of finding the solution to simultaneous linear equations – Naive Gauss elimination and Gaussian elimination with partial pivoting. Then, why do we need to learn yet another method? To appreciate why LU decomposition could be a better choice than the Gauss elimination techniques in some cases, let us first discuss what LU decomposition is about. For a nonsingular square matrix \(\lbrack A \rbrack\) on which one can successfully conduct the forward elimination part of the Naive Gauss elimination method, one can write it as \[\lbrack A \rbrack = \lbrack L \rbrack\lbrack U \rbrack \nonumber\] where \[\lbrack L \rbrack = \text{lower triangular matrix,} \nonumber\] \[\lbrack U \rbrack = \text{upper triangular matrix.} \nonumber\] Then if one is solving a set of equations \[\lbrack A \rbrack\lbrack X \rbrack = \lbrack C \rbrack, \nonumber\] then \[\lbrack L \rbrack\lbrack U \rbrack\lbrack X \rbrack = \lbrack C \rbrack\ \text{as } \left( \lbrack A\rbrack = \lbrack L \rbrack\lbrack U \rbrack \right) \nonumber\] Multiplying both sides by \(\lbrack L \rbrack^{-1}\), \[\lbrack L \rbrack^{-1}\lbrack L \rbrack\lbrack U \rbrack\lbrack X \rbrack = \lbrack L \rbrack^{-1}\lbrack C \rbrack \nonumber\] \[\lbrack I \rbrack \lbrack U \rbrack \lbrack X \rbrack = \lbrack L \rbrack^{-1}\lbrack C \rbrack\ \text{as } \left( \lbrack L \rbrack^{-1}\lbrack L \rbrack = \lbrack I\rbrack \right) \nonumber\] \[\lbrack U \rbrack \lbrack X \rbrack = \lbrack L \rbrack^{-1}\lbrack C \rbrack\ \text{as } \left( \lbrack I \rbrack\ \lbrack U \rbrack = \lbrack U\rbrack \right) \nonumber\] Let \[\lbrack L \rbrack^{-1}\lbrack C \rbrack = \lbrack Z \rbrack \nonumber\]
then \[\lbrack L \rbrack\lbrack Z \rbrack = \lbrack C \rbrack\;\;\;\;\;\;\;\;\;\;\;\;(\PageIndex{1.1}) \nonumber\] and \[\lbrack U \rbrack\lbrack X \rbrack = \lbrack Z \rbrack\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{1.2}) \nonumber\] So we can solve Equation \((\PageIndex{1.1})\) first for \(\lbrack Z\rbrack\) by using forward substitution and then use Equation \((\PageIndex{1.2})\) to calculate the solution vector \(\lbrack X \rbrack\) by back substitution.
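As a concrete sketch of these two triangular solves, here is a minimal Python implementation (the helper names are illustrative, not from this lesson):

```python
def forward_substitution(L, C):
    """Solve [L][Z] = [C], where L is lower triangular."""
    n = len(C)
    Z = [0.0] * n
    for i in range(n):
        # subtract the already-known terms, then divide by the diagonal entry
        Z[i] = (C[i] - sum(L[i][j] * Z[j] for j in range(i))) / L[i][i]
    return Z

def back_substitution(U, Z):
    """Solve [U][X] = [Z], where U is upper triangular."""
    n = len(Z)
    X = [0.0] * n
    for i in range(n - 1, -1, -1):
        X[i] = (Z[i] - sum(U[i][j] * X[j] for j in range(i + 1, n))) / U[i][i]
    return X
```

Note that each solve touches every nonzero entry of the triangular matrix once, which is why substitution is so much cheaper than elimination.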
So how do I decompose a nonsingular matrix [A], that is, how do I find [A] = [L][U]?
If the forward elimination part of the Naive Gauss elimination method can be applied on a nonsingular matrix \(\lbrack A \rbrack\), then \(\lbrack A \rbrack\) can be decomposed into LU form as \[\begin{split} \lbrack A\rbrack &= \begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \\ \end{bmatrix}\\ &= \begin{bmatrix} 1 & 0 & \ldots & 0 \\ \ell_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \ell_{n1} & \ell_{n2} & \cdots & 1 \\ \end{bmatrix}\ \ \begin{bmatrix} u_{11} & u_{12} & \ldots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \\ \end{bmatrix} \end{split} \nonumber\] The elements of the \(\lbrack U \rbrack\) matrix are exactly the same as the coefficient matrix one obtains at the end of the forward elimination part of the Naive Gauss elimination method. The lower triangular matrix \(\lbrack L \rbrack\) has \(1\) in each of its diagonal entries. The nonzero off-diagonal elements of \(\lbrack L \rbrack\) are the multipliers that made the corresponding entries zero in the upper triangular matrix \(\lbrack U \rbrack\) during the forward elimination.
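The decomposition described above can be sketched in Python by running forward elimination and recording each multiplier in \(\lbrack L \rbrack\) (a minimal illustration assuming no zero pivot is encountered; `lu_decompose` is a hypothetical name):

```python
def lu_decompose(A):
    # Doolittle LU decomposition via Naive Gauss forward elimination.
    # Assumes no zero pivot arises (no pivoting is performed).
    n = len(A)
    U = [row[:] for row in A]  # becomes the upper triangular matrix
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):           # elimination step k
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]    # multiplier that zeroes U[i][k]
            L[i][k] = m              # store the multiplier in [L]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U
```

Running it on the matrix used in the examples of the next lesson reproduces the multipliers 2.56, 5.76, and 3.5.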
Audiovisual Lecture
Title: LU Decomposition Method - Theory Summary: This video shows the basis of the LU decomposition method of solving simultaneous linear equations.
Lesson 2: Application of LU Decomposition Method
Learning Objectives
After successful completion of this lesson, you should be able to: 1) decompose a nonsingular matrix into the LU form. 2) solve a set of simultaneous linear equations using the LU decomposition method
Applications
You now know how the LU decomposition method is used to solve simultaneous linear equations. We also learned how to decompose the coefficient matrix and then use the forward substitution and back substitution steps to solve a set of simultaneous linear equations. In this lesson, we apply what we learned in the previous lesson.
Example \(\PageIndex{2.1}\)
Find the LU decomposition of the matrix \[\lbrack A \rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix} \nonumber\]
Solution
\[\begin{split} \lbrack A \rbrack &= \lbrack L \rbrack \lbrack U \rbrack\\ &= \begin{bmatrix} 1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \end{split} \nonumber\] The \(\lbrack U \rbrack\) matrix is the same as found at the end of the forward elimination part of the Naive Gauss elimination method, that is \[\lbrack U \rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \nonumber\] To find \(\ell_{21}\) and \(\ell_{31}\), find the multipliers that were used to make the \(a_{21}\) and \(a_{31}\) elements zero in the first step of the forward elimination part of the Naive Gauss elimination method. They were \[\begin{split} \ell_{21} &= \frac{64}{25}\\ &= 2.56 \end{split} \nonumber\] \[\begin{split} \ell_{31} &= \frac{144}{25}\\ &= 5.76 \end{split} \nonumber\] To find \(\ell_{32}\), what multiplier was used to make the \(a_{32}\) element zero? Remember that the \(a_{32}\) element was made zero in the second step of the forward elimination part. The \(\lbrack A \rbrack\) matrix at the beginning of the second step of the forward elimination part was \[\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & - 16.8 & - 4.76 \\ \end{bmatrix} \nonumber\] So \[\begin{split} \ell_{32} &= \frac{- 16.8}{- 4.8}\\ &= 3.5 \end{split} \nonumber\] Hence \[\lbrack L \rbrack = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix} \nonumber\] Confirm \(\lbrack L \rbrack \lbrack U \rbrack = \lbrack A \rbrack\). \[\begin{split} \lbrack L \rbrack\lbrack U \rbrack &= \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\\ &= \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix} \end{split} \nonumber\]
Example \(\PageIndex{2.2}\)
Use the LU decomposition method to solve the following simultaneous linear equations. \[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix} \nonumber\]
Solution
Recall that \[\lbrack A \rbrack\lbrack X \rbrack = \lbrack C \rbrack \nonumber\] and if \[\lbrack A \rbrack = \lbrack L \rbrack\lbrack U \rbrack \nonumber\] then first solving \[\lbrack L \rbrack\lbrack Z \rbrack = \lbrack C \rbrack \nonumber\] and then \[\lbrack U \rbrack\lbrack X \rbrack = \lbrack Z \rbrack \nonumber\] gives the solution vector \(\lbrack X \rbrack\). Now in Example \(\PageIndex{2.1}\), we showed \[\begin{split} \lbrack A \rbrack &= \lbrack L \rbrack \lbrack U \rbrack\\ &=\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \end{split} \nonumber\] First, solve \[\lbrack L \rbrack\ \lbrack Z \rbrack = \lbrack C \rbrack \nonumber\] \[\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix} \nonumber\] to give \[\begin{split} &z_{1} = 106.8\\ &2.56z_{1} + z_{2} = 177.2\\ &5.76z_{1} + 3.5z_{2} + z_{3} = 279.2\end{split} \nonumber\] Forward substitution starting from the first equation gives \[z_{1} = 106.8 \nonumber\] \[\begin{split} z_{2} &= 177.2 - 2.56z_{1}\\ &= 177.2 - 2.56 \times 106.8\\ &= - 96.208 \end{split} \nonumber\] \[\begin{split} z_{3} &= 279.2 - 5.76z_{1} - 3.5z_{2}\\ &= 279.2 - 5.76 \times 106.8 - 3.5 \times \left( - 96.208 \right)\\ &= 0.76 \end{split} \nonumber\] Hence \[\begin{split} \lbrack Z \rbrack &= \begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix}\\ &= \begin{bmatrix} 106.8 \\ - 96.208 \\ 0.76 \\ \end{bmatrix} \end{split} \nonumber\] This matrix is the same as the right-hand side obtained at the end of the forward elimination part of the Naive Gauss elimination method. Is this a coincidence?
Now solve \[\lbrack U \rbrack \lbrack X \rbrack = \lbrack Z \rbrack \nonumber\] \[\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ - 96.208 \\ 0.76 \\ \end{bmatrix} \nonumber\] \[\begin{split} 25a_{1} + 5a_{2} + a_{3} &= 106.8\\ - 4.8a_{2} - 1.56a_{3} &= - 96.208\\ 0.7a_{3} &= 0.76 \end{split} \nonumber\] From the third equation \[0.7a_{3} = 0.76 \nonumber\] \[\begin{split} a_{3} &= \frac{0.76}{0.7}\\ &= 1.0857\end{split} \nonumber\] Substituting the value of \(a_{3}\) in the second equation, \[- 4.8a_{2} - 1.56a_{3} = - 96.208 \nonumber\] \[\begin{split} a_{2} &= \frac{- 96.208 + 1.56a_{3}}{- 4.8}\\ &= \frac{- 96.208 + 1.56 \times 1.0857}{- 4.8}\\ &= 19.691 \end{split} \nonumber\] Substituting the values of \(a_{2}\) and \(a_{3}\) in the first equation, \[25a_{1} + 5a_{2} + a_{3} = 106.8 \nonumber\] \[\begin{split} a_{1} &= \frac{106.8 - 5a_{2} - a_{3}}{25}\\ &= \frac{106.8 - 5 \times 19.691 - 1.0857}{25}\\ &= 0.29048 \end{split} \nonumber\] Hence the solution vector is \[\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 0.29048 \\ 19.691 \\ 1.0857 \\ \end{bmatrix} \nonumber\]
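The two substitution passes of this example can be replayed numerically. This is a quick Python check using the \(\lbrack L \rbrack\), \(\lbrack U \rbrack\), and \(\lbrack C \rbrack\) values from the example:

```python
# Replay Example 2.2: forward substitution [L][Z]=[C],
# then back substitution [U][X]=[Z].
L = [[1, 0, 0], [2.56, 1, 0], [5.76, 3.5, 1]]
U = [[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]]
C = [106.8, 177.2, 279.2]

Z = [0.0] * 3
for i in range(3):
    # diagonal of [L] is 1, so no division is needed
    Z[i] = C[i] - sum(L[i][j] * Z[j] for j in range(i))

X = [0.0] * 3
for i in range(2, -1, -1):
    X[i] = (Z[i] - sum(U[i][j] * X[j] for j in range(i + 1, 3))) / U[i][i]
```

The computed `Z` matches \([106.8,\ -96.208,\ 0.76]\) and `X` matches the solution vector above to the displayed precision.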
Audiovisual Lecture
Title: LU Decomposition Example Summary: Learn the LU decomposition method of solving simultaneous linear equations via an example.
Lesson 3: Finding Inverse of a Matrix Using LU Decomposition Method
Learning Objectives
After successful completion of this lesson, you should be able to: 1) find the inverse of a matrix using the LU decomposition method.
How do I find the inverse of a square matrix using LU decomposition?
A matrix \(\lbrack B \rbrack\) is the inverse of \(\lbrack A \rbrack\) if \[\lbrack A \rbrack\lbrack B \rbrack = \lbrack I \rbrack = \lbrack B \rbrack\lbrack A \rbrack \nonumber\] How can we use LU decomposition to find the inverse of the matrix? Assume the first column of \(\lbrack B \rbrack\) (the inverse of \(\lbrack A \rbrack\)) is \[\lbrack b_{11}\ b_{21}\ \ldots\ \ldots\ b_{n1}\rbrack^{T} \nonumber\] Then from the above definition of an inverse and the definition of matrix multiplication \[\lbrack A \rbrack\begin{bmatrix} b_{11} \\ b_{21} \\ \vdots \\ b_{n1} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ \end{bmatrix} \nonumber\] Similarly, the second column of \(\lbrack B \rbrack\) is given by \[\lbrack A \rbrack\begin{bmatrix} b_{12} \\ b_{22} \\ \vdots \\ b_{n2} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \\ \end{bmatrix} \nonumber\] Similarly, all columns of \(\lbrack B \rbrack\) can be found by solving \(n\) different sets of equations with the right-hand sides being the \(n\) columns of the identity matrix. In the previous lessons, we learned how to solve simultaneous linear equations using the LU decomposition method.
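A minimal Python sketch of this column-by-column idea, hardcoding the \(\lbrack L \rbrack\) and \(\lbrack U \rbrack\) factors found in the earlier examples (the decomposition is done once, and only the right-hand side changes):

```python
# Build [A]^(-1) column by column: solve [A][b_k] = e_k for each
# column e_k of the identity matrix, reusing one [L][U] factorization.
L = [[1, 0, 0], [2.56, 1, 0], [5.76, 3.5, 1]]
U = [[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]]
n = 3
inv_cols = []
for k in range(n):
    e = [1.0 if i == k else 0.0 for i in range(n)]
    # forward substitution [L][Z] = e (diagonal of [L] is 1)
    Z = [0.0] * n
    for i in range(n):
        Z[i] = e[i] - sum(L[i][j] * Z[j] for j in range(i))
    # back substitution [U][b] = [Z]
    b = [0.0] * n
    for i in range(n - 1, -1, -1):
        b[i] = (Z[i] - sum(U[i][j] * b[j] for j in range(i + 1, n))) / U[i][i]
    inv_cols.append(b)  # k-th column of the inverse
```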
Example \(\PageIndex{3.1}\)
Use LU decomposition to find the inverse of \[\lbrack A \rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix} \nonumber\]
Solution
Knowing from the previous lesson that \[\begin{split} \lbrack A \rbrack &= \lbrack L \rbrack\lbrack U \rbrack\\ &= \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \end{split} \nonumber\] we can solve for the first column of \(\lbrack B\rbrack = \lbrack A \rbrack^{-1}\) by solving \[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} \nonumber\] First, solve \[\lbrack L \rbrack\lbrack Z \rbrack = \lbrack C \rbrack, \nonumber\] that is, \[\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} \nonumber\] to give \[\begin{split} &z_{1} = 1\\ &2.56z_{1} + z_{2} = 0\\ &5.76z_{1} + 3.5z_{2} + z_{3} = 0 \end{split} \nonumber\] Forward substitution starting from the first equation gives \[z_{1} = 1 \nonumber\] \[\begin{split} z_{2} &= 0 - 2.56z_{1}\\ &= 0 - 2.56\left( 1 \right)\\ &= - 2.56 \end{split} \nonumber\] \[\begin{split} z_{3} &= 0 - 5.76z_{1} - 3.5z_{2}\\ &= 0 - 5.76\left( 1 \right) - 3.5\left( - 2.56 \right)\\ &= 3.2 \end{split} \nonumber\] Hence \[\begin{split} \lbrack Z \rbrack &= \begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix}\\ &= \begin{bmatrix} 1 \\ - 2.56 \\ 3.2 \\ \end{bmatrix} \end{split} \nonumber\] Now solve \[\lbrack U \rbrack\lbrack X \rbrack = \lbrack Z \rbrack \nonumber\] that is \[\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ - 2.56 \\ 3.2 \\ \end{bmatrix} \nonumber\] \[\begin{split} 25b_{11} + 5b_{21} + b_{31} &= 1\\ - 4.8b_{21} - 1.56b_{31} &= - 2.56\\ 0.7b_{31} &= 3.2 \end{split} \nonumber\] Backward substitution starting from the third equation gives \[\begin{split} b_{31} &= \frac{3.2}{0.7}\\ &= 4.571 \end{split} \nonumber\] \[\begin{split} b_{21} &= \frac{- 2.56 + 1.56b_{31}}{- 4.8}\\ &= \frac{- 2.56 + 1.56 \times 4.571}{- 4.8}\\ &= - 0.9524 \end{split} \nonumber\] \[\begin{split} b_{11} &= \frac{1 - 5b_{21} - b_{31}}{25}\\ &= \frac{1 - 5 \times \left( - 0.9524 \right) - 4.571}{25}\\ &= 0.04762 \end{split} \nonumber\] Hence the first column of the inverse of \(\lbrack A \rbrack\) is \[\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 0.04762 \\ - 0.9524 \\ 4.571 \\ \end{bmatrix} \nonumber\] Similarly, solving \[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \end{bmatrix}\ \text{gives}\ \begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \\ \end{bmatrix} = \begin{bmatrix} - 0.08333 \\ 1.417 \\ - 5.000 \\ \end{bmatrix} \nonumber\] and solving \[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ \end{bmatrix}\ \text{gives}\ \begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \\ \end{bmatrix} = \begin{bmatrix} 0.03571 \\ - 0.4643 \\ 1.429 \\ \end{bmatrix} \nonumber\] Hence \[\lbrack A \rbrack^{-1} = \begin{bmatrix} 0.04762 & - 0.08333 & 0.03571 \\ - 0.9524 & 1.417 & - 0.4643 \\ 4.571 & - 5.000 & 1.429 \\ \end{bmatrix} \nonumber\] Can you confirm the following for the above example? \[\lbrack A \rbrack\ \lbrack A \rbrack^{-1} = \lbrack I \rbrack = \lbrack A \rbrack^{-1}\lbrack A \rbrack \nonumber\] Did you also note that the decomposition of \(\lbrack A\rbrack\) into \(\lbrack L\rbrack\lbrack U\rbrack\) needed to be done only once?
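As a quick numerical check of this example, multiplying \(\lbrack A \rbrack\) by the rounded inverse entries computed above should reproduce the identity matrix to within rounding error:

```python
A = [[25, 5, 1], [64, 8, 1], [144, 12, 1]]
A_inv = [[0.04762, -0.08333, 0.03571],
         [-0.9524, 1.417, -0.4643],
         [4.571, -5.000, 1.429]]
# form the product [A][A]^(-1) and compare it with the identity matrix
prod = [[sum(A[i][k] * A_inv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
```

Because the inverse entries are rounded to about five significant digits, the product agrees with the identity only to roughly three decimal places.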
Audiovisual Lecture
Title: LU Decomposition Method: Finding Inverse of a Matrix - Example Summary: This video shows an example of how the LU decomposition method can be used to find the inverse of a matrix.
Lesson 4: Computational Efficiency
Learning Objectives
After successful completion of this lesson, you should be able to: 1) justify why using the LU decomposition method is more efficient than Gaussian elimination in some cases.
LU decomposition looks more complicated than Gaussian elimination. Do we use LU decomposition because it is computationally more efficient than Gaussian elimination to solve a set of n equations given by \([A][X]=[C]\) ?
Note: The formulas for the computational time in this lesson assume it takes 4 clock cycles each for an add, subtract, or multiply operation, and 16 clock cycles for a divide operation, as is the case for a typical AMD®-K7 chip. These clock cycle times taken by an arithmetic operation and the number of such operations allow us to find the computational time for each of the processes. For a square matrix \(\lbrack A\rbrack\) of \(n \times n\) size, the computational time \(CT|_{DE}\) to decompose the \(\lbrack A\rbrack\) matrix into \(\lbrack L\rbrack\lbrack U\rbrack\) form is given by \[CT|_{DE} = T\left( \frac{8n^{3}}{3} + 4n^{2} - \frac{20n}{3} \right),\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.1}) \nonumber\] where \[T = \text{clock cycle time.} \nonumber\] The computational time \(CT|_{FS}\) for forward substitution: \(\lbrack L \rbrack\lbrack Z \rbrack = \lbrack C \rbrack\) is given by \[CT|_{FS} = T\left( 4n^{2} - 4n \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.2}) \nonumber\] The computational time \(CT|_{BS}\) for back substitution: \(\lbrack U \rbrack\lbrack X \rbrack = \lbrack Z \rbrack\) is given by \[CT|_{BS} = T\left( 4n^{2} + 12n \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.3}) \nonumber\] So, the total computational time to solve a set of simultaneous linear equations by LU decomposition is \[\begin{split} CT|_{LU} &= CT|_{DE} + CT|_{FS} + CT|_{BS}\\ &= T\left( \frac{8n^{3}}{3} + 4n^{2} - \frac{20n}{3} \right) + T\left( 4n^{2} - 4n \right) + T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{8n^{3}}{3} + 12n^{2} + \frac{4n}{3} \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.4}) \end{split} \nonumber\] Now let us look at the computational time taken by Naive Gaussian elimination.
The computational time \(CT|_{FE}\) for the forward elimination part is \[CT|_{FE} = T\left( \frac{8n^{3}}{3} + 8n^{2} - \frac{32n}{3} \right),\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.5}) \nonumber\] and the computational time \(CT|_{BS}\) for the back substitution part is \[CT|_{BS} = T\left( 4n^{2} + 12n \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.6}) \nonumber\] So, the total computational time \(CT|_{GE}\) to solve a set of simultaneous linear equations by Gaussian elimination is \[\begin{split} CT|_{GE} &= CT|_{FE} + CT|_{BS}\\ &= T\left( \frac{8n^{3}}{3} + 8n^{2} - \frac{32n}{3} \right) + T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{8n^{3}}{3} + 12n^{2} + \frac{4n}{3} \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.7}) \end{split} \nonumber\] The computational times for Gaussian elimination and LU decomposition are identical.
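The algebra behind these totals can be double-checked by evaluating each operation-count polynomial with exact rational arithmetic (a sketch with \(T = 1\); the function names are illustrative):

```python
from fractions import Fraction

def ct_de(n):
    # decomposition time / T: 8n^3/3 + 4n^2 - 20n/3
    return Fraction(8, 3) * n**3 + 4 * n**2 - Fraction(20, 3) * n

def ct_fs(n):
    # forward substitution time / T: 4n^2 - 4n
    return 4 * n**2 - 4 * n

def ct_bs(n):
    # back substitution time / T: 4n^2 + 12n
    return 4 * n**2 + 12 * n

def ct_fe(n):
    # forward elimination time / T: 8n^3/3 + 8n^2 - 32n/3
    return Fraction(8, 3) * n**3 + 8 * n**2 - Fraction(32, 3) * n

def ct_total(n):
    # common total / T: 8n^3/3 + 12n^2 + 4n/3
    return Fraction(8, 3) * n**3 + 12 * n**2 + Fraction(4, 3) * n
```

For every \(n\), both sums collapse to the same polynomial, confirming that one LU-decomposition solve costs the same as one Gaussian-elimination solve.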
This has confused me further! Why should I learn the LU decomposition method when it takes the same computational time as Gaussian elimination, especially when the two methods follow such closely related procedures? Please convince me that LU decomposition has its place in solving linear equations!
It is now time to convince you that the LU decomposition method has its place in the solution of simultaneous linear equations. Let us examine an example where the LU decomposition method is computationally more efficient than Gaussian elimination. Remember, in trying to find the inverse of the matrix \(\lbrack A\rbrack\) in a previous lesson, the problem reduces to solving \(n\) sets of equations with the \(n\) columns of the identity matrix as the RHS vectors. For the calculation of each column of the inverse of the \(\lbrack A\rbrack\) matrix, the coefficient matrix \(\lbrack A\rbrack\) in the set of equations \(\lbrack A \rbrack\lbrack X \rbrack = \lbrack C \rbrack\) does not change. So, if we use the LU decomposition method, the decomposition \(\lbrack A \rbrack = \lbrack L \rbrack\lbrack U \rbrack\) needs to be done only once, the forward substitution (Equation \((\PageIndex{4.2})\)) \(n\) times, and the back substitution (Equation \((\PageIndex{4.3})\)) \(n\) times. Therefore, the total computational time \(CT|_{\text{inverse LU}}\) required to find the inverse of a matrix using LU decomposition is \[\begin{split} CT|_{\text{inverse LU}} &= 1 \times CT|_{DE} + n \times CT|_{FS} + n \times CT|_{BS}\\ &= 1 \times T\left( \frac{8n^{3}}{3} + 4n^{2} - \frac{20n}{3} \right) + n \times T\left( 4n^{2} - 4n \right) + n \times T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{32n^{3}}{3} + 12n^{2} - \frac{20n}{3} \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.8}) \end{split} \nonumber\] In comparison, if the Naive Gaussian elimination method were used to find the inverse of a matrix, the forward elimination part, as well as the back substitution part, would have to be done \(n\) times.
The total computational time \(CT|_{\text{inverse GE}}\) required to find the inverse of a matrix by using Naive Gaussian elimination then is \[\begin{split} CT|_{\text{inverse GE}} &= n \times CT|_{FE} + n \times CT|_{BS}\\ &= n \times T\left( \frac{8n^{3}}{3} + 8n^{2} - \frac{32n}{3} \right) + n \times T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{8n^{4}}{3} + 12n^{3} + \frac{4n^{2}}{3} \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.9}) \end{split} \nonumber\] Clearly, for large \(n\), \(CT|_{\text{inverse GE}} \gg CT|_{\text{inverse LU}}\), as \(CT|_{\text{inverse GE}}\) has a dominating term of \(n^{4}\) while \(CT|_{\text{inverse LU}}\) has a dominating term of \(n^{3}\). For large values of \(n\), the Gaussian elimination method would take approximately \(n/4\) times more computational time (prove it) than the LU decomposition method. Typical values of the ratio of the computational times for different values of \(n\) are given in Table \(\PageIndex{4.1}\).
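The ratio of the two inverse-computation costs can be reproduced by direct evaluation (a sketch with \(T = 1\); the function names are illustrative):

```python
from fractions import Fraction

def ct_inverse_lu(n):
    # inverse via one LU decomposition, time / T: 32n^3/3 + 12n^2 - 20n/3
    return Fraction(32, 3) * n**3 + 12 * n**2 - Fraction(20, 3) * n

def ct_inverse_ge(n):
    # inverse via n repeated Gaussian eliminations, time / T:
    # 8n^4/3 + 12n^3 + 4n^2/3
    return Fraction(8, 3) * n**4 + 12 * n**3 + Fraction(4, 3) * n**2

ratios = {n: float(ct_inverse_ge(n) / ct_inverse_lu(n))
          for n in (10, 100, 1000, 10000)}
```

The computed ratios approach \(n/4\) as \(n\) grows, consistent with the dominating \(n^{4}\) versus \(n^{3}\) terms.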
Table \(\PageIndex{4.1}\). Comparing computational times of finding the inverse of a matrix using the LU decomposition and Naive Gaussian elimination methods.

| \(n\) | \(10\) | \(100\) | \(1000\) | \(10000\) |
| \(\dfrac{CT|_{\text{inverse GE}}}{CT|_{\text{inverse LU}}}\) | \(3.28\) | \(25.83\) | \(250.8\) | \(2501\) |
Are you convinced now that LU decomposition has its place in solving systems of equations when we have to solve for multiple right-hand side vectors while the coefficient matrix stays unchanged?
Audiovisual Lecture
Title: Why Use LU Decomposition? Summary: This video discusses why the LU decomposition method is used for solving simultaneous linear equations.
Appendix
How much computational time does it take to conduct back substitution in the LU decomposition method?

Solution

At the beginning of back substitution, the set of equations is of the form \[\lbrack U \rbrack\lbrack X \rbrack = \lbrack B \rbrack \nonumber\] where \[\lbrack U \rbrack = \text{upper triangular matrix},\ n \times n, \nonumber\] \[\lbrack X \rbrack = \text{unknown vector},\ n \times 1,\ \text{and} \nonumber\] \[\lbrack B \rbrack = \text{right-hand side vector},\ n \times 1. \nonumber\] The algorithm for finding the unknown vector is \[x_{n} = \frac{c_{n}}{u_{nn}} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{A.1}) \nonumber\] \[x_{i} = \frac{c_{i} - \displaystyle\sum_{j = i + 1}^{n}u_{ij}x_{j}}{u_{ii}},\ i = n-1,\ n-2,\ \ldots,\ 2,\ 1\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{A.2}) \nonumber\]

Hint 1: For an arithmetic progression series, the summation \[S = a_{1} + a_{2} + \ldots + a_{m - 1} + a_{m} \nonumber\] is \[S = \frac{m}{2}(a_{1} + a_{m}) \nonumber\] or \[S = \frac{m}{2}(2a_{1} + (m-1)d) \nonumber\] where \[d = a_{i + 1} - a_{i},\ i = 1,2,\ldots, m - 1 \nonumber\]

Hint 2: When you add \(n\) terms together, there are \(n-1\) additions.

Now let us see how many arithmetic operations are needed to complete the algorithm.

Number of Divisions: See Equation \((\PageIndex{A.1})\). We have one operation of division for the calculation of \(x_{n}\). See Equation \((\PageIndex{A.2})\). For each \(x_{i}\), \(i = n - 1, n - 2, \ldots, 2, 1\), the result is divided once by \(u_{ii}\). So, the total number of divisions is \(1+(n-1) = n\).

Number of Subtractions: See Equation \((\PageIndex{A.2})\). For each \(x_{i}\), \(i = n - 1, n - 2, \ldots, 2, 1\), we have 1 subtraction in the numerator. So, the total number of subtractions is \(n-1\).

Number of Multiplications: See Equation \((\PageIndex{A.2})\). For a particular \(i\), there are \(\left( n - i \right)\) multiplications — just count the \(u_{ij}x_{j}\) terms within the summation in the numerator. Since \(i\) takes the values of \(i = n - 1, n - 2, \ldots, 2, 1\), you can see that when \(i=n-1\), there is \((n-(n-1))=1\) multiplication, when \(i=n-2\), there are \((n-(n-2))=2\) multiplications, \(\vdots\) when \(i=1\), there are \((n-1)\) multiplications. So, the total number of multiplications is \[\begin{split} &1 + 2 + \ldots + (n - 1)\\ &= \frac{n - 1}{2}(1 + (n - 1))\\ &= \frac{n(n - 1)}{2} \end{split} \nonumber\]

Number of Additions: For a particular \(i\), there are \(\left( n - i - 1 \right)\) additions — just see the number of \(u_{ij}x_{j}\) terms within the summation in the numerator of Equation \((\PageIndex{A.2})\). Since \(i\) takes the values of \(i = n - 1, n - 2, \ldots, 2, 1\), you can see that when \(i=n-1\), there are \((n-(n-1)-1)=0\) additions, when \(i=n-2\), there is \((n-(n-2)-1)=1\) addition, \(\vdots\) when \(i=1\), there are \((n-1-1)=n-2\) additions. So, the total number of additions is \[\begin{split} &0 + 1 + 2 + \ldots + (n - 2)\\ &= \frac{n - 1}{2}(0 + (n - 2))\\ &= \frac{(n - 1)(n - 2)}{2} \end{split} \nonumber\]

Computational Time: Assuming it takes 4 clock cycles for each addition, subtraction, and multiplication, and 16 clock cycles for each division, then if \(C\) is the clock cycle time,

computational time spent on divisions \(= n \times (16C) = 16Cn\)

computational time spent on subtractions \(= \left( n - 1 \right) \times (4C) = 4C(n - 1)\)

computational time spent on multiplications \(\displaystyle = \frac{n(n - 1)}{2} \times (4C) = 2Cn(n - 1)\)

computational time spent on additions \(\displaystyle= \frac{(n - 1)(n - 2)}{2} \times (4C) = 2C(n - 1)(n - 2)\)

The total computational time spent on back substitution then is \[\begin{split} &=\text{time on divisions} + \text{time on subtractions} \\ &\ \ \ \ + \text{time on multiplications} + \text{time on additions}\\ &= (16Cn) + (4Cn - 4C) + (2Cn^{2} - 2Cn) + (2Cn^{2} - 6Cn + 4C)\\ &= 4Cn^{2} + 12Cn\\ &= C(4n^{2} + 12n) \end{split} \nonumber\]

Questions Related to the Appendix

1. How much computational time does it take to conduct forward substitution? 2. How much computational time does it take to conduct forward elimination? 3. How much computational time does it take to conduct back substitution? 4.
How much computational time does it take to conduct Naive Gaussian elimination? 5. How much computational time does it take to conduct LU decomposition? 6. How much computational time does it take to find the inverse of a square matrix using LU decomposition? 7. How much computational time does it take to find the inverse of a square matrix using Gaussian elimination?
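The operation counts derived in the appendix can be checked by tallying the operations the back-substitution algorithm performs (a Python sketch; the function name is illustrative):

```python
def back_substitution_op_counts(n):
    # Tally arithmetic operations in back substitution:
    # x_n = c_n/u_nn; x_i = (c_i - sum_{j=i+1}^{n} u_ij x_j)/u_ii
    divisions = 1        # for x_n
    subtractions = 0
    multiplications = 0
    additions = 0
    for i in range(n - 1, 0, -1):   # i = n-1, ..., 2, 1 (1-based index)
        multiplications += n - i    # u_ij * x_j products in the sum
        additions += n - i - 1      # adding those products together
        subtractions += 1           # c_i minus the sum
        divisions += 1              # divide by u_ii
    return divisions, subtractions, multiplications, additions
```

Weighting the counts by 16 clock cycles per division and 4 per add, subtract, or multiply reproduces the closed form \(C(4n^{2} + 12n)\).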
Multiple Choice Test
(1). The \(\lbrack L \rbrack\lbrack U \rbrack\) decomposition method is computationally more efficient than Naive Gauss elimination for solving
(A). a single set of simultaneous linear equations.
(B). multiple sets of simultaneous linear equations with different coefficient matrices and the same right-hand side vectors.
(C). multiple sets of simultaneous linear equations with the same coefficient matrix and different right-hand side vectors.
(D). less than ten simultaneous linear equations.

(2). The lower triangular matrix \(\lbrack L \rbrack\) in the \(\lbrack L \rbrack\lbrack U \rbrack\) decomposition of the matrix given below \[\begin{bmatrix} 25 & 5 & 4 \\ 10 & 8 & 16 \\ 8 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \nonumber\] is
(A) \(\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.7333 & 1 \\ \end{bmatrix}\)
(B) \(\begin{bmatrix} 25 & 5 & 4 \\ 0 & 6 & 14.400 \\ 0 & 0 & - 4.2400 \\ \end{bmatrix}\)
(C) \(\begin{bmatrix} 1 & 0 & 0 \\ 10 & 1 & 0 \\ 8 & 12 & 0 \\ \end{bmatrix}\)
(D) \(\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.5000 & 1 \\ \end{bmatrix}\)

(3). The upper triangular matrix \(\lbrack U \rbrack\) in the \(\lbrack L \rbrack\lbrack U \rbrack\) decomposition of the matrix given below \[\begin{bmatrix} 25 & 5 & 4 \\ 0 & 8 & 16 \\ 0 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \nonumber\] is
(A) \(\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.7333 & 1 \\ \end{bmatrix}\)
(B) \(\begin{bmatrix} 25 & 5 & 4 \\ 0 & 6 & 14.400 \\ 0 & 0 & - 4.2400 \\ \end{bmatrix}\)
(C) \(\begin{bmatrix} 25 & 5 & 4 \\ 0 & 8 & 16 \\ 0 & 0 & - 2 \\ \end{bmatrix}\)
(D) \(\begin{bmatrix} 1 & 0.2000 & 0.16000 \\ 0 & 1 & 2.4000 \\ 0 & 0 & - 4.240 \\ \end{bmatrix}\)

(4).
For a given \(2000 \times 2000\) matrix \(\lbrack A \rbrack\), assume that it takes about \(15\) seconds to find the inverse of \(\lbrack A \rbrack\) by the use of the \(\lbrack L \rbrack\lbrack U \rbrack\) decomposition method, that is, finding the \(\lbrack L \rbrack\lbrack U \rbrack\) once, and then doing forward substitution and back substitution \(2000\) times using the \(2000\) columns of the identity matrix as the right-hand side vector. The approximate time in seconds that it will take to find the inverse if found by repeated use of the Naive Gauss elimination method, that is, doing forward elimination and back substitution \(2000\) times by using the \(2000\) columns of the identity matrix as the right-hand side vector, is most nearly
(A) \(300\)
(B) \(1500\)
(C) \(7500\)
(D) \(30000\)

(5). The algorithm for solving a set of \(n\) equations \(\lbrack A \rbrack\lbrack X \rbrack = \lbrack C \rbrack\), where \(\lbrack A \rbrack = \lbrack L \rbrack\lbrack U \rbrack\), involves solving \(\lbrack L \rbrack\lbrack Z \rbrack = \lbrack C \rbrack\) by forward substitution. The algorithm to solve \(\lbrack L \rbrack\lbrack Z \rbrack = \lbrack C \rbrack\) is given by

(A) \(z_{1} = c_{1}/l_{11}\)
for i from 2 to n do
  sum = 0
  for j from 1 to i do
    sum = sum + l_ij * z_j
  end do
  z_i = (c_i - sum)/l_i
end do

(B) \(z_{1} = c_{1}/l_{11}\)
for i from 2 to n do
  sum = 0
  for j from 1 to (i - 1) do
    sum = sum + l_ij * z_j
  end do
  z_i = (c_i - sum)/l_ij
end do

(C) \(z_{1} = c_{1}/l_{11}\)
for i from 2 to n do
  for j from 1 to (i - 1) do
    sum = sum + l_ij * z_j
  end do
  z_i = (c_i - sum)/l_ii
end do

(D) for i from 2 to n do
  sum = 0
  for j from 1 to (i - 1) do
    sum = sum + l_ij * z_j
  end do
  z_i = (c_i - sum)/l_ii
end do

(6). To solve boundary value problems, a numerical method based on the finite difference method is used.
This results in simultaneous linear equations with tridiagonal coefficient matrices. These are solved using a specialized \(\lbrack L \rbrack\lbrack U \rbrack\) decomposition method. Choose the set of equations that approximately solves the boundary value problem \[\frac{d^{2}y}{dx^{2}} = 6x - 0.5x^{2},\ \ y\left( 0 \right) = 0,\ \ y\left( 12 \right) = 0,\ \ 0 \leq x \leq 12 \nonumber\] The second derivative in the above equation is approximated by the second-order accurate central divided difference approximation as learned in the numerical differentiation lessons (Chapter 02.02). A step size of \(h = 4\) is used, and hence the value of \(y\) can be found approximately at 4 equidistantly placed nodes between \(x = 0\) and \(x = 12\).

(A) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.0625 & 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & 0.125 & 0.0625 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}\)

(B) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.0625 & - 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & - 0.125 & 0.0625 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}\)

(C) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0.0625 & - 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & - 0.125 & 0.0625 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}\)

(D) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0.0625 & 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & 0.125 & 0.0625 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 16.0 \\ 16.0 \\ \end{bmatrix}\)

For the complete solution, go to http://nm.mathforcollege.com/mcquizzes/04sle/quiz_04sle_ludecomposition_solution.pdf
Problem Set
This page titled 4.07: LU Decomposition for Solving Simultaneous Linear Equations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Autar Kaw via source content that was edited to the style and standards of the LibreTexts platform.