Software tutorial/Matrix operations
LU decomposition
Let \(A = \begin{bmatrix} 4 & 0 & 1 \\ 0 & 6 & 2 \\ 3.2 & 7 & 4 \end{bmatrix}\).
MATLAB:
A = [4, 0, 1; 0, 6, 2; 3.2, 7, 4];
% LU decomposition of A
[L,U,P] = lu(A)
L =
1.0000 0 0
0.8000 1.0000 0
0 0.8571 1.0000
U =
4.0000 0 1.0000
0 7.0000 3.2000
0 0 -0.7429
P =
1 0 0
0 0 1
0 1 0
The P matrix shows that rows 2 and 3 were interchanged during the pivoting steps of Gaussian elimination.
Python:
import numpy as np
from numpy.linalg import *
from scipy.linalg import *
A = np.array([[4, 0, 1],[0, 6, 2],[3.2, 7, 4]])
LU, P = lu_factor(A)
>>> LU
array([[ 4. , 0. , 1. ],
[ 0.8 , 7. , 3.2 ],
[ 0. , 0.85714286, -0.74285714]])
>>> P
array([0, 2, 2])
The L and U matrices are overlaid in the output. Notice that the results are the same as MATLAB's, except we do not need to store the diagonal set of ones from the L matrix. The P vector is also a bit cryptic; it should be read from left to right, in order: entry i says that row i of the matrix was interchanged with row P[i] during the factorization. Here P[1] = 2, so rows 1 and 2 (counting from 0, as Python does) were interchanged, which matches MATLAB's permutation matrix.
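If you prefer the three explicit matrices that MATLAB returns, the scipy.linalg.lu function provides them directly. A minimal sketch (the allclose check at the end is our own addition); note that SciPy returns the permutation so that \(A = PLU\), i.e. this P is the transpose of the MATLAB P above, which satisfies \(PA = LU\).
import numpy as np
from scipy.linalg import lu
A = np.array([[4, 0, 1],[0, 6, 2],[3.2, 7, 4]])
# lu returns the permutation, lower- and upper-triangular factors explicitly,
# so that A = P L U
P, L, U = lu(A)
print(np.allclose(A, np.dot(P, np.dot(L, U))))   # True: the factors reproduce A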
Once we have the LU factors of \(A\), we can reuse them to solve \(Ax = b\) for any right-hand side \(b\) (a short sketch after the table shows this reuse).
MATLAB:
b = [9.1, 2, 0]';
[L,U,P] = lu(A);
y = mldivide(inv(P)*L, b);   % first solve inv(P)*L*y = b for y
x = mldivide(U, y);          % then solve U*x = y for x
>> x
x =
5.0481
4.0308
-11.0923
% Verify the solution:
>> A*x - b
ans =
1.0e-14 *
0.1776
0
0
The small values of the solution error indicate that our LU solution is accurate. You should compare the LU solution with the solution from mldivide(A, b).
Python:
LU, P = lu_factor(A)
b = np.array([9.1, 2, 0])
x = lu_solve((LU, P), b)   # solve A x = b using the stored LU factors
>>> x
array([ 5.04807692, 4.03076923, -11.09230769])
# Verify the solution:
>>> np.dot(A, x) - b
array([ 1.77635684e-15, 0.00000000e+00, 0.00000000e+00])
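The payoff from factorizing once is that the same factors can be reused to solve for several right-hand sides. A minimal sketch of this idea (the second b vector below is made up purely for illustration):
import numpy as np
from scipy.linalg import lu_factor, lu_solve
A = np.array([[4, 0, 1],[0, 6, 2],[3.2, 7, 4]])
lu_piv = lu_factor(A)              # factorize A only once
b1 = np.array([9.1, 2, 0])
b2 = np.array([1.0, 0.0, 0.0])     # a second, made-up right-hand side
x1 = lu_solve(lu_piv, b1)          # reuse the factors for each solve
x2 = lu_solve(lu_piv, b2)
print(np.allclose(np.dot(A, x1), b1))   # True
print(np.allclose(np.dot(A, x2), b2))   # True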
Matrix determinant
One way to test whether a matrix is invertible is to calculate its determinant: if the determinant is non-zero, then the matrix is invertible.
Let \(A\) be the same matrix as above:
MATLAB:
A = [4, 0, 1; 0, 6, 2; 3.2, 7, 4];
>> det(A)
ans =
20.8000
Python:
import numpy as np
from numpy.linalg import *
from scipy.linalg import *
A = np.array([[4, 0, 1],[0, 6, 2],[3.2, 7, 4]])
>>> det(A)
20.800000000000011
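As a quick check of the non-zero test, a singular matrix should return a determinant of essentially zero. A minimal sketch, using a made-up matrix B whose third row is twice its first row:
import numpy as np
from scipy.linalg import det
B = np.array([[4, 0, 1],[0, 6, 2],[8, 0, 2]])   # third row = 2 x first row
print(det(B))   # essentially zero, so B is not invertible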
Matrix inverse
The matrix inverse is defined so that \(A A^{-1} = A^{-1} A = I\), where \(I\) is the identity matrix.
MATLAB:
A = [4, 0, 1; 0, 6, 2; 3.2, 7, 4];
>> inv(A)
ans =
0.4808 0.3365 -0.2885
0.3077 0.6154 -0.3846
-0.9231 -1.3462 1.1538
>> A * inv(A)
ans =
1.0000 0 0
-0.0000 1.0000 0.0000
0 0 1.0000
Python:
import numpy as np
from numpy.linalg import *
from scipy.linalg import *
A = np.array([[4, 0, 1],[0, 6, 2],[3.2, 7, 4]])
>>> inv(A)
array([[ 0.48076923, 0.33653846, -0.28846154],
[ 0.30769231, 0.61538462, -0.38461538],
[-0.92307692, -1.34615385, 1.15384615]])
>>> np.dot(A, inv(A))
array([[ 1.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ -2.22044605e-16, 1.00000000e+00, 4.44089210e-16],
[ 0.00000000e+00, 0.00000000e+00, 1.00000000e+00]])
Notice the small, essentially zero, off-diagonal entries: within round-off error, the product is the identity matrix.
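A compact way to verify the defining property \(A A^{-1} = A^{-1} A = I\) numerically is to compare both products against the identity matrix; a minimal sketch:
import numpy as np
from scipy.linalg import inv
A = np.array([[4, 0, 1],[0, 6, 2],[3.2, 7, 4]])
# Both products should equal the 3x3 identity matrix, up to round-off error
print(np.allclose(np.dot(A, inv(A)), np.eye(3)))   # True
print(np.allclose(np.dot(inv(A), A), np.eye(3)))   # True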
Matrix and vector norms
MATLAB:
A = [4, 0, 1; 0, 6, 2; 3.2, 7, 4];
format long
% The default norm in MATLAB = max(svd(A))
>> norm(A)
ans =
10.627892098461123
>> max(svd(A))
ans =
10.627892098461123
% Other norms available:
>> norm(A, 'fro') % Frobenius norm: square root of
% the sum of squares of all entries in A
ans =
11.499565209172042
% Other norms
% norm(A, 1) : column sum norm: 13.0
% norm(A, inf) : row sum norm: 14.199999999999999
Python:
import numpy as np
from numpy.linalg import *
from scipy.linalg import *
A = np.array([[4, 0, 1],[0, 6, 2],[3.2, 7, 4]])
# The default norm in Python = Frobenius norm
>>> norm(A)
11.499565209172042
>>> norm(A, 'fro')
11.499565209172042
# Other norms available:
>>> norm(A, 2)  # 2-norm: maximum singular value
10.627892098461123
# Other norms
# norm(A, 1) : column sum norm: 13.0
# norm(A, np.inf) : row sum norm: 14.199999999999999
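To connect the Frobenius norm back to its definition (the square root of the sum of squares of all entries), it can be computed by hand and compared with the built-in value; a minimal sketch (the variable name is our own):
import numpy as np
from scipy.linalg import norm
A = np.array([[4, 0, 1],[0, 6, 2],[3.2, 7, 4]])
# Frobenius norm from its definition: square root of the sum of squared entries
fro_by_hand = np.sqrt(np.sum(A**2))
print(np.allclose(fro_by_hand, norm(A, 'fro')))   # True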
Matrix conditioning
The condition number was defined as \(\kappa(A) = \|A\|\,\|A^{-1}\|\); using the 2-norm, this is the ratio of the largest to the smallest singular value of \(A\).
MATLAB:
A = [4, 0, 1; 0, 6, 2; 3.2, 7, 4];
format long
>> cond(A)
ans =
23.724794742061690
% Condition numbers, calculated using other norms are also available
>> help cond
COND Condition number with respect to inversion.
COND(X) returns the 2-norm condition number (the ratio of the
largest singular value of X to the smallest). Large condition
numbers indicate a nearly singular matrix.
COND(X,P) returns the condition number of X in P-norm:
NORM(X,P) * NORM(INV(X),P).
where P = 1, 2, inf, or 'fro'.
Python:
import numpy as np
from numpy.linalg import *
from scipy.linalg import *
A = np.array([[4, 0, 1],[0, 6, 2],[3.2, 7, 4]])
>>> cond(A)
23.72479474206169
# Condition numbers, calculated using other norms are also available
>>> help(cond)
cond(x, p=None)
Compute the condition number of a matrix.
This function is capable of returning the condition number using
one of seven different norms, depending on the value of `p` (see
Parameters below).
Parameters
----------
x : array_like, shape (M, N)
The matrix whose condition number is sought.
p : {None, 1, -1, 2, -2, inf, -inf, 'fro'}, optional
Order of the norm:
===== ============================
p norm for matrices
===== ============================
None 2-norm, computed directly using the ``SVD``
'fro' Frobenius norm
inf max(sum(abs(x), axis=1))
-inf min(sum(abs(x), axis=1))
1 max(sum(abs(x), axis=0))
-1 min(sum(abs(x), axis=0))
2 2-norm (largest sing. value)
-2 smallest singular value
===== ============================
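The 2-norm condition number can also be assembled from its definition, either as \(\|A\|_2\,\|A^{-1}\|_2\) or as the ratio of the largest to the smallest singular value. A minimal sketch comparing both against the built-in function (the kappa variable names are our own):
import numpy as np
from numpy.linalg import cond
from scipy.linalg import inv, norm, svd
A = np.array([[4, 0, 1],[0, 6, 2],[3.2, 7, 4]])
# Definition 1: product of the 2-norms of A and its inverse
kappa_1 = norm(A, 2) * norm(inv(A), 2)
# Definition 2: ratio of the largest to the smallest singular value
s = svd(A, compute_uv=False)
kappa_2 = s.max() / s.min()
print(np.allclose([kappa_1, kappa_2], cond(A)))   # True: all three agree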