numpy.linalg.lstsq(a, b, rcond='warn')
Return the least-squares solution to a linear matrix equation.
Solves the equation a x = b by computing a vector x that minimizes the squared Euclidean 2-norm ||b - a x||^2_2. The equation may be under-, well-, or over-determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the "exact" solution of the equation.
Parameters:
    a : (M, N) array_like
        "Coefficient" matrix.
    b : {(M,), (M, K)} array_like
        Ordinate or "dependent variable" values. If b is two-dimensional, the least-squares solution is calculated for each of the K columns of b.
    rcond : float, optional
        Cut-off ratio for small singular values of a. For the purposes of rank determination, singular values are treated as zero if they are smaller than rcond times the largest singular value of a. If not set, a FutureWarning is given: the previous default of -1 uses machine precision as rcond, while the new default uses machine precision times max(M, N). Pass rcond=None to silence the warning and use the new default.
Returns:
    x : {(N,), (N, K)} ndarray
        Least-squares solution. If b is two-dimensional, the solutions are in the K columns of x.
    residuals : {(1,), (K,), (0,)} ndarray
        Sums of squared residuals: squared Euclidean 2-norm for each column in b - a @ x. If the rank of a is < N or M <= N, this is an empty array. If b is 1-dimensional, this is a (1,) shape array.
    rank : int
        Rank of matrix a.
    s : (min(M, N),) ndarray
        Singular values of a.
Raises:
    LinAlgError
        If computation does not converge.
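As a quick check of the full-rank claim above, here is a minimal sketch (the matrix and right-hand side are made up for illustration): when a is square and of full rank, lstsq agrees with numpy.linalg.solve up to round-off.

```python
import numpy as np

# Square, full-rank system: 3x + y = 9, x + 2y = 8.
a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x_lstsq, residuals, rank, s = np.linalg.lstsq(a, b, rcond=None)
x_solve = np.linalg.solve(a, b)

# The least-squares solution matches the exact solution (x=2, y=3),
# and the reported rank confirms a is full rank.
print(np.allclose(x_lstsq, x_solve))  # True
print(rank)                           # 2
```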
If b is a matrix, then all array results are returned as matrices.
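A two-dimensional b solves one least-squares problem per column, as described in the parameter table. A small sketch with made-up data:

```python
import numpy as np

# One design matrix, two right-hand sides stacked as columns of B.
A = np.array([[0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0],
              [3.0, 1.0]])
B = np.array([[-1.0, 1.0],
              [0.2, 0.9],
              [0.9, 1.1],
              [2.1, 0.8]])

X, residuals, rank, s = np.linalg.lstsq(A, B, rcond=None)
print(X.shape)          # (2, 2): one 2-vector solution per column of B
print(residuals.shape)  # (2,): one sum of squared residuals per column
```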
Fit a line, y = mx + c, through some noisy data-points:
>>> import numpy as np
>>> x = np.array([0, 1, 2, 3])
>>> y = np.array([-1, 0.2, 0.9, 2.1])
By examining the coefficients, we see that the line should have a gradient of roughly 1 and cut the y-axis at, more or less, -1.
We can rewrite the line equation as y = Ap, where A = [[x 1]] and p = [[m], [c]]. Now use lstsq to solve for p:
>>> A = np.vstack([x, np.ones(len(x))]).T
>>> A
array([[ 0.,  1.],
       [ 1.,  1.],
       [ 2.,  1.],
       [ 3.,  1.]])
>>> m, c = np.linalg.lstsq(A, y, rcond=None)[0]
>>> m, c
(1.0, -0.95)  # may vary
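The example above keeps only the first element of the return tuple. For completeness, a sketch of inspecting the full (x, residuals, rank, s) result for the same fit; when A has full column rank, residuals equals the sum of squared errors of the fitted line.

```python
import numpy as np

x = np.array([0, 1, 2, 3])
y = np.array([-1, 0.2, 0.9, 2.1])
A = np.vstack([x, np.ones(len(x))]).T

p, residuals, rank, s = np.linalg.lstsq(A, y, rcond=None)
m, c = p

# residuals is the squared 2-norm of y - A @ p (shape (1,) for 1-D y),
# so it should match computing the sum of squared errors directly.
print(np.allclose(residuals, ((y - A @ p) ** 2).sum()))  # True
print(rank)  # 2: A has full column rank
```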
Plot the data along with the fitted line:
>>> import matplotlib.pyplot as plt
>>> _ = plt.plot(x, y, 'o', label='Original data', markersize=10)
>>> _ = plt.plot(x, m*x + c, 'r', label='Fitted line')
>>> _ = plt.legend()
>>> plt.show()
© 2005–2019 NumPy Developers
Licensed under the 3-clause BSD License.
https://docs.scipy.org/doc/numpy-1.17.0/reference/generated/numpy.linalg.lstsq.html