Determinant
Exercise 1: Simple matrix
Compute the determinant of the matrix A, where

A = [[1, 3, 2],
     [4, 1, 3],
     [2, 5, 2]]
Solution:

```python
import numpy as np

# Defining the matrix
A = np.array([[1, 3, 2],
              [4, 1, 3],
              [2, 5, 2]])

det = np.linalg.det(A)
print(det)
# Output:
# 16.999999999999993  (i.e. det(A) = 17)
```
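np.linalg.det computes the determinant in floating-point arithmetic, which is why an all-integer matrix can print as 16.999…; since the determinant of an integer matrix is always an integer, rounding recovers the exact value. A small sketch:

```python
import numpy as np

A = np.array([[1, 3, 2],
              [4, 1, 3],
              [2, 5, 2]])

det = np.linalg.det(A)
# For an integer matrix the determinant is an integer,
# so round away the floating-point error:
print(round(det))  # 17
```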
Exercise 2: Complex numbers
This example is taken from an example exam in TMA4110 - Calculus 3; check out the exam and the solution for hand calculations (only in Norwegian). This is task 4.

Compute the determinant of the matrix A, where

A = [[ 3,  1-i,   i,    4   ],
     [ 3,  1,     1-2i, 4+7i],
     [ 6i, 2+2i, -2,    3i  ],
     [-3, -1+i,   1,    3-4i]]
Solution:

```python
import numpy as np

# Defining the matrix.
# Notice how complex numbers are written in Python by appending the letter "j":
# Python uses "j" for the imaginary unit instead of "i", following the engineering
# convention where "i" denotes electric current as a function of time
# (especially in electrical engineering).
A = np.array([[3, 1-1j, 1j, 4],
              [3, 1, 1-2j, 4+7j],
              [6j, 2+2j, -2, 3j],
              [-3, -1+1j, 1, 3-4j]], dtype=complex)

det = np.linalg.det(A)
print(det)
# Output:
# (-15-15j)  (i.e. det(A) = -15 - 15i)
```
Eigenvalues and eigenvectors
Exercise 1
Find the eigenvalues and corresponding eigenvectors of the matrix A, where

A = [[1, 2],
     [2, 1]]
Solution:

```python
import numpy as np

A = np.array([[1, 2],
              [2, 1]])

w, v = np.linalg.eig(A)  # "w" are the eigenvalues, "v" the eigenvectors
print(w)
print(v)
# Output:
# [ 3. -1.]
# [[ 0.70710678 -0.70710678]
#  [ 0.70710678  0.70710678]]
```
We can interpret this as λ₁ = 3 with eigenvector v₁ = (0.7071, 0.7071)ᵀ, a multiple of (1, 1)ᵀ, and λ₂ = -1 with eigenvector v₂ = (-0.7071, 0.7071)ᵀ, a multiple of (-1, 1)ᵀ. The eigenvectors are the columns of v, normalized to unit length.
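As a quick sanity check (not part of the original exercise), we can verify each eigenpair by confirming that A v equals λ v:

```python
import numpy as np

A = np.array([[1, 2],
              [2, 1]])
w, v = np.linalg.eig(A)

# Column i of v is the eigenvector belonging to eigenvalue w[i],
# so A @ v[:, i] must equal w[i] * v[:, i] for every i.
for i in range(len(w)):
    assert np.allclose(A @ v[:, i], w[i] * v[:, i])
print("all eigenpairs check out")
```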
Exercise 2
Find the eigenvalues and corresponding eigenvectors of the matrix A, where

A = [[0, 1],
     [0, 0]]
Solution:

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])

w, v = np.linalg.eig(A)  # "w" are the eigenvalues, "v" the eigenvectors
print(w)
print(v)
# Output:
# [0. 0.]
# [[ 1.00000000e+000 -1.00000000e+000]
#  [ 0.00000000e+000  2.00416836e-292]]  (...e-292 is effectively 0)
```
We can interpret this as λ = 0 with eigenvector v = (1, 0)ᵀ: one eigenvalue and, effectively, one eigenvector, since the second returned vector, (-1, 0)ᵀ, is a scalar multiple of the first and thus linearly dependent on it. If this is the case, why does np.linalg.eig return the same eigenvalue twice with two different eigenvectors?

Simplified, np.linalg.eig of an n × n matrix is guaranteed to always return n eigenvalues and n eigenvectors. The function documentation says: "If the eigenvalues are all different, then theoretically the eigenvectors are linearly independent." In this example we know that two eigenvectors are going to be produced; if they are actually the same vector, this is indicated by making the eigenvalues identical.

Note: This does not imply that all eigenvectors with the same eigenvalue are linearly dependent! To clarify this further, take a look at Exercise 3.
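A quick numerical way to detect this dependence (a sketch, not part of the original text) is the rank of the returned eigenvector matrix: rank 2 would mean the columns are independent, while rank 1 means they are multiples of each other:

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
w, v = np.linalg.eig(A)

# The columns of v are the returned eigenvectors; their rank tells us
# how many of them are linearly independent.
print(np.linalg.matrix_rank(v))  # 1
```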
Exercise 3
Find the eigenvalues and corresponding eigenvectors of the matrix A, where

A = [[ 4, 2,  3],
     [-1, 1, -3],
     [ 2, 4,  9]]
Solution:

```python
import numpy as np

A = np.array([[4, 2, 3],
              [-1, 1, -3],
              [2, 4, 9]])

w, v = np.linalg.eig(A)  # "w" are the eigenvalues, "v" the eigenvectors
print(w)
print(v)
# Output:
# [8. 3. 3.]
# [[-0.40824829  0.0153225   0.9635814 ]
#  [ 0.40824829 -0.83430241 -0.14040935]
#  [-0.81649658  0.55109411 -0.22758756]]
```
As in Exercise 2, the same eigenvalue is produced several times, though this time the eigenvectors are not the same. We can interpret the output as λ₁ = 8 with eigenvector v₁ (a multiple of (1, -1, 2)ᵀ), and λ₂ = λ₃ = 3 with eigenvectors v₂ and v₃, which are linearly independent and together span a plane: the eigenspace of λ = 3. The same plane can be spanned by, e.g., the vectors (2, -1, 0)ᵀ and (3, 0, -1)ᵀ for easier interpretation; both solutions are correct.
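To back up this claim numerically, here is a small sketch: it checks that the two "nicer" vectors are indeed eigenvectors for λ = 3, and that stacking them together with the eigenvectors returned by eig still gives a rank-2 set, i.e. all four vectors lie in the same plane:

```python
import numpy as np

A = np.array([[4, 2, 3],
              [-1, 1, -3],
              [2, 4, 9]])
w, v = np.linalg.eig(A)

# The two "nicer" vectors are eigenvectors for lambda = 3 as well:
for u in (np.array([2.0, -1.0, 0.0]), np.array([3.0, 0.0, -1.0])):
    assert np.allclose(A @ u, 3 * u)

# They span the same plane as the eigenvectors eig returned for lambda = 3:
# stacking all four vectors still gives a rank-2 set.
eig3 = v[:, np.isclose(w, 3)]  # columns belonging to lambda = 3
M = np.column_stack([eig3, [2, -1, 0], [3, 0, -1]])
print(np.linalg.matrix_rank(M))  # 2
```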
Inverse
Exercise 1
Find the inverse of the matrix A, where

A = [[2, 2, 0],
     [0, 0, 1],
     [4, 2, 0]]
Solution:

```python
import numpy as np

A = np.array([[2, 2, 0],
              [0, 0, 1],
              [4, 2, 0]])

inv_A = np.linalg.inv(A)
print(inv_A)
# Output:
# [[-0.5  0.   0.5]
#  [ 1.   0.  -0.5]
#  [ 0.   1.   0. ]]
```
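A quick way to verify the result (a sanity check, not part of the original exercise): multiplying A by its inverse, in either order, must give the identity matrix up to round-off:

```python
import numpy as np

A = np.array([[2, 2, 0],
              [0, 0, 1],
              [4, 2, 0]])
inv_A = np.linalg.inv(A)

# A matrix times its inverse is the identity (up to floating-point error):
print(np.allclose(A @ inv_A, np.eye(3)))  # True
print(np.allclose(inv_A @ A, np.eye(3)))  # True
```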
Least squares solution to a linear system
Exercise 1
This example is taken from an example exam in TMA4110 - Calculus 3; check out the exam and the solution for hand calculations (only in Norwegian). This is task 5.

Use the least squares method to find an approximation to the linear system

x₁ + 2x₂ = 1
3x₁ + 4x₂ = 2
5x₁ + 6x₂ = 3
Solution:

First, let's write the equations as matrices on the form Ax = b, where A is the coefficient matrix, x the vector of unknowns and b the right-hand side. To solve the problem in NumPy we will use the function numpy.linalg.lstsq (or, with SciPy, scipy.linalg.lstsq), which is currently not described in Matrices and linear algebra in Python. We recommend reading the function documentation before proceeding.
```python
import numpy as np

# Defining the matrices
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
b = np.array([[1], [2], [3]])

x, residuals, rank, s = np.linalg.lstsq(A, b, rcond=None)
# "rcond=None" sets the new default for rcond, see function documentation
print(x)  # "x" is the solution
# Output:
# [[-5.97106181e-17]
#  [ 5.00000000e-01]]
```
We can interpret this as the least squares solution being x₁ = 0 (the first entry, ~6e-17, is zero up to round-off) and x₂ = 1/2.
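As a cross-check (a sketch going beyond the original solution), the same least squares solution can be obtained from the normal equations AᵀA x = Aᵀb, which have a unique solution here because AᵀA is invertible:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
b = np.array([1, 2, 3])

# Solve the normal equations A^T A x = A^T b directly
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)  # approximately [0.  0.5]
```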
Solving a system of linear equations
Exercise 1
Solve the following system of linear equations:

2x - 4y + 9z = -38
4x - 3y + 8z = -26
-2x + 4y - 2z = 17
Solution:

```python
import numpy as np

A = np.array([[2, -4, 9],    # Coefficient matrix (left-hand side)
              [4, -3, 8],
              [-2, 4, -2]])
b = np.array([-38, -26, 17])  # Right-hand side

solution = np.linalg.solve(A, b)
print(solution)
# Output:
# [ 2.5  4.  -3. ]
```
This is interpreted as x = 2.5, y = 4 and z = -3.
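We can verify the solution (a quick sanity check) by substituting it back into the system, i.e. confirming that A times the solution reproduces b:

```python
import numpy as np

A = np.array([[2, -4, 9],
              [4, -3, 8],
              [-2, 4, -2]])
b = np.array([-38, -26, 17])
solution = np.linalg.solve(A, b)

# Substituting the solution back into the system reproduces b:
print(np.allclose(A @ solution, b))  # True
```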
Exercise 2: Singular matrix
Solve the following system of linear equations:

x + 3y + 6z = 4
2x + 8y + 16z = 8
2x + 6y + 12z = 8
Solution:

```python
import numpy as np

A = np.array([[1, 3, 6],     # Coefficient matrix (left-hand side)
              [2, 8, 16],
              [2, 6, 12]])
b = np.array([4, 8, 8])  # Right-hand side

solution = np.linalg.solve(A, b)
print(solution)
# Output:
# numpy.linalg.LinAlgError: Singular matrix
```
As we can see, np.linalg.solve raises a LinAlgError telling us that the matrix (in this case A) is singular. A singular matrix is one that is not invertible. This means that the system of equations we are trying to solve does not have a unique solution (it has either none or infinitely many); np.linalg.solve can't handle this. By using np.linalg.lstsq (least squares) instead, we will at least get one solution.

Solution using the least squares method:
```python
import numpy as np

A = np.array([[1, 3, 6],     # Coefficient matrix (left-hand side)
              [2, 8, 16],
              [2, 6, 12]])
b = np.array([4, 8, 8])  # Right-hand side

x, residuals, rank, s = np.linalg.lstsq(A, b, rcond=None)
# "rcond=None" sets the new default for rcond, see function documentation
print(x)  # "x" is the solution
# Output:
# [4. 0. 0.]
```
This is interpreted as x = 4, y = 0 and z = 0.

Note: Keep in mind that this is only one of infinitely many solutions!
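For completeness, here is a sketch of how the full solution set could be described numerically (this goes beyond the original text): take one particular solution and add any multiple of a null-space vector of A, obtained here from the SVD, with an assumed tolerance of 1e-10 for deciding which singular values count as zero:

```python
import numpy as np

A = np.array([[1, 3, 6],
              [2, 8, 16],
              [2, 6, 12]])
b = np.array([4, 8, 8])

# One particular solution, e.g. from least squares:
x_p, *_ = np.linalg.lstsq(A, b, rcond=None)

# Null-space basis from the SVD: the rows of Vt whose singular values
# are (numerically) zero span the null space of A.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]

# Every x_p + t * n, with n in the null space, is also a solution:
for t in (-2.0, 0.0, 5.0):
    x = x_p + t * null_basis[0]
    assert np.allclose(A @ x, b)
print("general solution verified")
```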