Software tutorial/Matrix operations
Revision as of 12:48, 1 October 2010
Matrix operations
In this section we will show how to calculate the LU decomposition, matrix inverse, norms and condition number of a matrix in MATLAB and Python.
Let \(A = \left(\begin{array}{ccc} 4 & 0 & 1 \\ 0 & 6 & 2 \\ 3.2 & 7 & 4 \end{array} \right)\) and let \(b = \left(\begin{array}{c} 9.1 \\ 2 \\ 0 \end{array} \right) \)
MATLAB:

```matlab
A = [4, 0, 1; 0, 6, 2; 3.2, 7, 4];

% LU decomposition of A
[L, U, P] = lu(A)

L =
    1.0000         0         0
    0.8000    1.0000         0
         0    0.8571    1.0000

U =
    4.0000         0    1.0000
         0    7.0000    3.2000
         0         0   -0.7429

P =
     1     0     0
     0     0     1
     0     1     0
```
The P matrix shows that rows 2 and 3 were interchanged during the Gaussian elimination pivoting steps.
Python:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4, 0, 1], [0, 6, 2], [3.2, 7, 4]])
LU, P = lu_factor(A)

>>> LU
array([[ 4.        ,  0.        ,  1.        ],
       [ 0.8       ,  7.        ,  3.2       ],
       [ 0.        ,  0.85714286, -0.74285714]])
>>> P
array([0, 2, 2])
```
The L and U matrices are overlaid in the output. Notice that the results are the same as MATLAB's, except we do not need to store the diagonal of ones from the L matrix.

The P vector is also a bit cryptic: it should be read from left to right, in order. The way to read it is that row \(i\) of matrix \(A\) was interchanged with row P[i] in \(A\), where i is the index into vector P:

* row 0 in \(A\) was interchanged with row 0: i.e. no pivoting for the first row
* row 1 in \(A\) was interchanged with row 2: i.e. rows 2 and 3 in \(A\) were swapped
* row 2 in \(A\) was interchanged with row 2: i.e. no further pivoting.
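To confirm this reading of the pivot vector, we can unpack the overlaid LU output back into separate L and U factors, apply the row swaps that P describes to \(A\), and check that the products match. This is a minimal sketch using the same \(A\) as above:

```python
import numpy as np
from scipy.linalg import lu_factor

A = np.array([[4, 0, 1], [0, 6, 2], [3.2, 7, 4]])
LU, piv = lu_factor(A)

# Unpack the overlaid factors: L has an implicit unit diagonal
L = np.tril(LU, k=-1) + np.eye(3)
U = np.triu(LU)

# Apply the pivots to A, reading piv from left to right:
# row i is interchanged with row piv[i]
PA = A.copy()
for i, p in enumerate(piv):
    PA[[i, p]] = PA[[p, i]]

print(np.allclose(PA, L @ U))  # True
```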
Once we have \(L\), \(U\) and \(P\), we can solve the system of equations for a given vector \(b\). It takes a bit more work to do this in MATLAB than in Python. Recall from class that \(A = LU\); with a permutation or pivot matrix it becomes \(PA = LU\), or alternatively \(A = P^{-1}LU\):
\[ \begin{align} Ax &= b \\ P^{-1}LUx &= b \\ \text{Let}\qquad Ux &= y \\ \text{Then solve}\qquad \left(P^{-1}L\right)y &= b \qquad \text{for}\,\, y \\ \text{Finally, solve}\qquad Ux &= y \qquad \text{for}\,\, x \end{align} \]
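These two triangular solves can also be written out explicitly in Python with scipy.linalg.lu, which returns the factors in matrix form (as \(A = PLU\), so its P plays the role of \(P^{-1}\) in the notation above), and solve_triangular. This is a sketch of the same steps, rather than the lu_solve shortcut:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[4, 0, 1], [0, 6, 2], [3.2, 7, 4]])
b = np.array([9.1, 2, 0])

# scipy.linalg.lu returns the matrix form A = P @ L @ U
P, L, U = lu(A)

# Solve (P^{-1}L)y = b, i.e. L y = P^T b
# (for a permutation matrix, inv(P) = P.T)
y = solve_triangular(L, P.T @ b, lower=True)

# Then solve U x = y for x
x = solve_triangular(U, y)
print(x)  # approximately [5.048, 4.031, -11.092]
```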
MATLAB:

```matlab
b = [9.1, 2, 0]';
[L, U, P] = lu(A);

% Solve (P^{-1}L)y = b for y, then Ux = y for x.
% Note: for a permutation matrix, inv(P) equals P', which is cheaper.
y = mldivide(inv(P)*L, b);
x = mldivide(U, y);

>> x
x =
    5.0481
    4.0308
  -11.0923

% Verify the solution:
>> A*x - b
ans =
   1.0e-14 *
    0.1776
         0
         0
```
The small values of the solution error indicate that our LU solution is accurate. You should compare the LU solution with the solution from mldivide(A, b).
Python:

```python
LU, P = lu_factor(A)
b = np.array([9.1, 2, 0])
x = lu_solve((LU, P), b)

>>> x
array([  5.04807692,   4.03076923, -11.09230769])

# Verify the solution:
>>> np.dot(A, x) - b
array([  1.77635684e-15,   0.00000000e+00,   0.00000000e+00])
```
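The section intro also mentions norms and the condition number. As a sketch of how those tie in, NumPy can compute the norm of the residual (small relative to \(b\) when the solution is accurate) and the condition number of \(A\) (np.linalg.cond uses the 2-norm by default; a modest value means the residual is a trustworthy measure of accuracy):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4, 0, 1], [0, 6, 2], [3.2, 7, 4]])
b = np.array([9.1, 2, 0])

# Solve via the LU factorization, as above
x = lu_solve(lu_factor(A), b)

# Norm of the residual vector A x - b
residual = np.linalg.norm(A @ x - b)

# 2-norm condition number of A
kappa = np.linalg.cond(A)
print(residual, kappa)
```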