SciPy -- Library of scientific algorithms for Python
Based on the lecture at http://github.com/jrjohansson/scientific-python-lectures.
The SciPy framework builds on top of the low-level NumPy framework for multidimensional arrays, and provides a large number of higher-level scientific algorithms. Some of the topics that SciPy covers are:
- Special functions (scipy.special)
- Integration (scipy.integrate)
- Optimization (scipy.optimize)
- Interpolation (scipy.interpolate)
- Fourier Transforms (scipy.fftpack)
- Signal Processing (scipy.signal)
- Linear Algebra (scipy.linalg) [overlaps with numpy.linalg]
- Sparse Eigenvalue Problems (scipy.sparse)
- Statistics (scipy.stats)
- Multi-dimensional image processing (scipy.ndimage)
- File IO (scipy.io)
Each of these submodules provides a number of functions and classes that can be used to solve problems in their respective topics.
In this lecture we will look at how to use some of these subpackages.
To access the SciPy package in a Python program, we start by importing everything from the scipy
module.
WARNING: In newer versions of SciPy, many functions that simply duplicate NumPy functionality are deprecated in the top-level scipy namespace. They are still available in scipy, but a deprecation warning is displayed when they are used. A work-around is to first import everything from scipy and after that from numpy, so that the numpy functions overwrite the scipy functions of the same name.
from scipy import *
from numpy import *
If we only need to use part of the SciPy framework we can selectively include only those modules we are interested in. For example, to include the linear algebra package under the name la
, we can do:
import scipy.linalg as la
Special functions
A large number of mathematical special functions are important for many computational physics problems. SciPy provides implementations of a very extensive set of special functions. For details, see the list of functions in the reference documentation at http://docs.scipy.org/doc/scipy/reference/special.html#module-scipy.special.
To demonstrate the typical usage of special functions we will look in more detail at the Bessel functions:
from scipy import special
#help(special)
Bessel functions are special functions that appear when solving equations (like the wave equation) in cylindrical or spherical geometries.
Physical examples abound: vibrating circular membranes, heat conduction in a cylinder, electromagnetic waves in waveguides, and quantum particles in cylindrical potentials.
Example: Bessel functions appear as solutions of the problem of circular Chladni plates.
#
# The scipy.special module includes a large number of Bessel functions.
# Here we will use the functions jn and yn, which are the Bessel functions
# of the first and second kind and real-valued order. We also include the
# functions jn_zeros and yn_zeros, which give the zeros of the functions jn
# and yn.
#
from scipy.special import jn, yn, jn_zeros, yn_zeros
n = 0 # order
x = 0.0 # x-point
# Bessel function of first kind
print("J_%d(%f) = %f" % (n, x, jn(n, x)))
x = 1e-5
# Bessel function of second kind
print("Y_%d(%f) = %f" % (n, x, yn(n, x)))
J_0(0.000000) = 1.000000 Y_0(0.000010) = -7.403160
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import Image
x = linspace(0, 10, 100)
for n in range(4):
    plt.plot(x, jn(n, x), label=('$J_%d(x)$' % n))
plt.legend(loc='best')
plt.show()
The Bessel functions of the second kind, $Y_n(x)$, are the second solution of the same Bessel differential equation: $$x^2 y_n''(x)+x y_n'(x)+(x^2-n^2) y_n(x)=0$$
Bessel functions of the first kind $J_n(x)$ are regular at the origin, while $Y_n(x)$ are the solutions that are irregular at the origin.
x = linspace(1, 10, 100)
for n in range(4):
    plt.plot(x, yn(n, x), label=('$Y_%d(x)$' % n))
plt.legend(loc='best')
plt.show()
Spherical harmonics
What are spherical harmonics used for?
Angular Momentum in Quantum Mechanics: Spherical harmonics arise naturally as eigenfunctions of the angular part of the Laplacian operator in spherical coordinates. They describe the angular dependence of wavefunctions in central potentials (like the hydrogen atom).
Electron Orbitals: The familiar s, p, d, f orbitals of electrons in atoms are labeled according to their angular momentum quantum numbers, with their angular part given by spherical harmonics.
Multipole Expansion of Potentials in Electrodynamics: In classical electromagnetism and gravitation, spherical harmonics are used to express the angular dependence of potentials due to charge or mass distributions. For example, the electric or gravitational field around a nonspherical object can be expanded in terms of monopole, dipole, quadrupole, and higher moments, each involving spherical harmonics.
Radiation Patterns in Electrodynamics: The angular distribution of radiation from antennas or other sources is often expressed using spherical harmonics.
import numpy as np
theta=linspace(0,pi,4)
phi = linspace(0,2*pi,3)
phi,theta = np.meshgrid(phi,theta)
Ylm = special.sph_harm(0,2,phi,theta)
print(f"{'i':>3} {'j':>3} {'phi':>10} {'theta':>10} {'|Ylm|':>10}")
for i in range(theta.shape[0]):
    for j in range(theta.shape[1]):
        print(f"{i:3d} {j:3d} {phi[i,j]:10.5f} {theta[i,j]:10.5f} {abs(Ylm[i,j]):10.5f}")
  i   j        phi      theta      |Ylm|
  0   0    0.00000    0.00000    0.63078
  0   1    3.14159    0.00000    0.63078
  0   2    6.28319    0.00000    0.63078
  1   0    0.00000    1.04720    0.07885
  1   1    3.14159    1.04720    0.07885
  1   2    6.28319    1.04720    0.07885
  2   0    0.00000    2.09440    0.07885
  2   1    3.14159    2.09440    0.07885
  2   2    6.28319    2.09440    0.07885
  3   0    0.00000    3.14159    0.63078
  3   1    3.14159    3.14159    0.63078
  3   2    6.28319    3.14159    0.63078
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Define the range of spherical coordinates
r = np.linspace(0.5, 1, 10) # radial distance
theta = np.linspace(0,pi,30) # polar angle
phi = np.linspace(0,2*pi,60) # azimuthal angle
# Create a meshgrid for theta and phi
phi, theta = np.meshgrid(phi, theta)
# Use a single radius value to create a spherical surface
r = np.ones_like(theta) * 1
# Convert to Cartesian coordinates
x = r * sin(theta) * cos(phi)
y = r * sin(theta) * sin(phi)
z = r * cos(theta)
# Plot
fig = plt.figure(figsize=plt.figaspect(1.))
ax = fig.add_subplot(projection='3d')
# Plot the spherical surface
ax.plot_surface(x, y, z, alpha=0.8, edgecolor='none')
# Add labels and title
ax.set_title("3D Spherical Coordinates")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
# Show the plot
plt.show()
help(ax.plot_surface)
Help on method plot_surface in module mpl_toolkits.mplot3d.axes3d: plot_surface(X, Y, Z, *, norm=None, vmin=None, vmax=None, lightsource=None, **kwargs) -- Create a surface plot from 2D arrays X, Y, Z; supports rcount/ccount and rstride/cstride downsampling and the color/cmap/facecolors/norm/vmin/vmax/shade/lightsource keyword arguments (full docstring omitted).
from scipy import special
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
theta = linspace(0, pi, 100) # polar angle
phi = linspace(0, 2*pi, 100) # azimuthal angle
phi, theta = np.meshgrid(phi,theta) # numpy routine
l,m=2,1
Ylm = special.sph_harm(m,l,phi,theta)
# Convert to Cartesian coordinates for plotting
r = abs(Ylm)
x = r * sin(theta) * cos(phi)
y = r * sin(theta) * sin(phi)
z = r * cos(theta)
fig = plt.figure(figsize=plt.figaspect(1.))
ax = fig.add_subplot(projection='3d')
# Use plot_surface to create a 3D plot
# You can color the surface based on the value of the magnitude
surface = ax.plot_surface(x, y, z, rstride=1, cstride=1,
facecolors=plt.cm.viridis(r/np.max(r)),
antialiased=True, linewidth=0)
# Add color bar and labels
mappable = plt.cm.ScalarMappable(cmap='viridis')
mappable.set_array(r)
plt.colorbar(mappable, ax=ax, shrink=0.6)
ax.set_title(f"$Y_{{{l}{m}}}$")
# Adjust the view angle for better visualization
ax.view_init(elev=30, azim=45)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()
Real spherical harmonics are linear combinations of spherical harmonics and are often used in the theory of crystals and crystal potentials. If the crystal potential has the symmetry of a cube, the eigenstates will not be spherical harmonics but real spherical harmonics.
$$p_z=Y_{1,0}^R = Y_{1,0}$$
$$p_y = Y_{1,-1}^R = \sqrt{2} \, \text{Im}(Y_{1,-1}) = \sqrt{2} \, \frac{Y_{1,-1} - Y_{1,1}}{2i}$$
$$p_x = Y_{1,1}^R = \sqrt{2} \, \text{Re}(Y_{1,1}) = \sqrt{2} \, \frac{Y_{1,1} + Y_{1,-1}}{2}$$
$$d_{z^2}=Y_{2,0}^R = Y_{2,0}$$
$$d_{yz}=Y_{2,-1}^R = \sqrt{2} \, \text{Im}(Y_{2,-1}) = \sqrt{2} \, \frac{Y_{2,-1} - Y_{2,1}}{2i}$$
$$d_{xz}=Y_{2,1}^R = \sqrt{2} \, \text{Re}(Y_{2,1}) = \sqrt{2} \, \frac{Y_{2,1} + Y_{2,-1}}{2}$$
$$d_{xy}=Y_{2,-2}^R = \sqrt{2} \, \text{Im}(Y_{2,-2}) = \sqrt{2} \, \frac{Y_{2,-2} - Y_{2,2}}{2i}$$
$$d_{x^2-y^2}=Y_{2,2}^R = \sqrt{2} \, \text{Re}(Y_{2,2}) = \sqrt{2} \, \frac{Y_{2,2} + Y_{2,-2}}{2}$$
theta = linspace(0, pi, 100) # polar angle
phi = linspace(0, 2*pi, 100) # azimuthal angle
phi, theta = np.meshgrid(phi,theta) # numpy routine
l,m=2,-2
Ylm = special.sph_harm(m,l,phi,theta)
# conversion to real spherical harmonics
if m < 0:
    Ylm = sqrt(2) * (-1)**m * Ylm.imag
elif m > 0:
    Ylm = sqrt(2) * (-1)**m * Ylm.real
else: # m==0
    Ylm = Ylm.real # should be real anyway
# Convert to Cartesian coordinates for plotting
r = np.abs(Ylm)
x = r * sin(theta) * cos(phi)
y = r * sin(theta) * sin(phi)
z = r * cos(theta)
fig = plt.figure(figsize=plt.figaspect(1.))
ax = fig.add_subplot(projection='3d')
ax.set_box_aspect([1, 1, 1]) # if you want to stretch aspect ratio
# Set up colormap based on the real spherical harmonic values
cmap = plt.get_cmap('PRGn')
norm = plt.Normalize(vmin=-r.max(), vmax=r.max())
colors = cmap(norm(Ylm))
# Use plot_surface to create a 3D plot
# Color the plotted surface according to the sign of Y.
#cmap = plt.cm.ScalarMappable(cmap=plt.get_cmap('PRGn'))
#cmap.set_clim(-0.5, 0.5)
surface = ax.plot_surface(x, y, z, rstride=1, cstride=1,
facecolors=colors,
antialiased=True, linewidth=0)
# Add a color bar to show the relationship between color and value
mappable = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
mappable.set_array(Ylm)
plt.colorbar(mappable, ax=ax, shrink=0.6)
ax.set_title(f"$Y_{{{l}{m}}}$")
# Adjust the view angle for better visualization
ax.view_init(elev=20, azim=45)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()
Numerical evaluation of an integral of the type
$\displaystyle \int_a^b f(x) dx$
is called numerical quadrature, or simply quadrature. SciPy provides a series of functions for different kinds of quadrature, for example quad, dblquad and tplquad for single, double and triple integrals, respectively.
from scipy.integrate import quad, dblquad, tplquad
The quad
function takes a large number of optional arguments, which can be used to fine-tune the behaviour of the function (try help(quad)
for details).
The basic usage is as follows:
x_lower, x_upper = 0.0, 1.0
val, abserr = quad( lambda x: x**2, x_lower, x_upper)
print('Value=', val, 'error=', abserr)
Value= 0.3333333333333333 error= 3.700743415417188e-15
# help(quad)
print( quad( lambda x: sin(x)/x, 0, 1000))
(1.5702669821983255, 0.24409510202674356)
/var/folders/j8/d9m3r0zx7j37l3ktfl_n1xw00000gn/T/ipykernel_6651/1866967918.py:1: IntegrationWarning: The maximum number of subdivisions (50) has been achieved. If increasing the limit yields no improvement it is advised to analyze the integrand in order to determine the difficulties. If the position of a local difficulty can be determined (singularity, discontinuity) one will probably gain from splitting up the interval and calling the integrator on the subranges. Perhaps a special-purpose integrator should be used. print( quad( lambda x: sin(x)/x, 0, 1000))
L=10*pi
x=linspace(1e-12,L,100)
plt.plot(x,sin(x)/x)
plt.grid()
plt.show()
If we need to pass extra arguments to the integrand function we can use the args keyword argument. Let's say we want to evaluate
$f(x) = \displaystyle \int_0^x \frac{j_n(t)+j_m(t)}{t} dt$
# help(quad)
f2 = lambda t,n,m: (jn(n,t)+jn(m,t))/t
def f3(t,n,m):
    return (jn(n,t)+jn(m,t))/t
# First specific case for n=1, m=2, x=1
# integrating (j_1(t)+j_2(t))/t
quad(f2, 0, 1, args=(1,2))
(0.5396292385998932, 5.991088054545437e-15)
Now simpler: $f_n(x) = n \displaystyle \int_0^x \frac{j_n(t)}{t} dt$
xs = linspace(1e-10,30,300) # mesh for x-variable
for ns in range(1,4): # n in 1,2,3
    fs=[ns * quad(lambda t,n: jn(n, t)/t, 0, x, args=(ns,))[0] for x in xs]
    plt.plot(xs,fs, label='n='+str(ns))
plt.legend(loc='best')
<matplotlib.legend.Legend at 0x150cbec10>
# here we can get away without args since the lambda captures ns from the enclosing scope
for ns in range(1,4):
    fs=[ns * quad(lambda t: jn(ns, t)/t, 0, x)[0] for x in xs]
    plt.plot(xs,fs, label='n='+str(ns))
plt.legend(loc='best')
plt.show()
Higher-dimensional integration works in the same way:
Note that we pass lambda functions for the limits for the y integration, since these in general can be functions of x.
dblquad
:
\begin{equation}
\int_{x_a}^{x_b} dx \int_{y_a(x)}^{y_b(x)} dy f(x,y)
\end{equation}
tplquad
:
\begin{equation}
\int_{x_a}^{x_b} dx \int_{y_a(x)}^{y_b(x)} dy \int_{z_a(x,y)}^{z_b(x,y)} dz f(x,y,z)
\end{equation}
def integrand(x, y, z):
    return exp(-x**2-y**2-z**2)
x_a,x_b = 0,10
y_a,y_b = 0,10
z_a,z_b = 0,10
val1, abserr = dblquad(integrand, x_a, x_b, lambda x:y_a, lambda x:y_b, args=(0,))
val2, abserr = tplquad(integrand, x_a, x_b, lambda x:y_a, lambda x:y_b, lambda x,y:z_a, lambda x,y:z_b)
print(val1, val2, abserr)
0.7853981633974476 0.6960409996039545 1.4675161390126584e-08
Ordinary differential equations (ODEs)
SciPy provides two different ways to solve ODEs: an API based on the function odeint, and an object-oriented API based on the class ode. Usually odeint is easier to get started with, but the ode class offers a finer level of control.
Here we will use the odeint function. For more information about the class ode, try help(ode). It does pretty much the same thing as odeint, but in an object-oriented fashion.
To use odeint
, first import it from the scipy.integrate
module
from scipy import *
from numpy import *
from scipy.integrate import odeint, ode
A system of ODEs should be formulated in standard form, which is:
$y' = f(y, t)$
where we are searching for the function $y(t)$ and where
$y = [y_1(t), y_2(t), ..., y_n(t)]$
and $f$ is some function that gives the derivatives of the function $y_i(t)$. To solve an ODE we need to know the function $f$ and its initial condition, $y(0)$.
Note that higher-order ODEs can always be written in this form by introducing new variables for the intermediate derivatives.
For example, for a second-order differential equation $$y''(t)=f(y,y',t)$$ we can choose $$Y(t) = [ y(t), y'(t)]$$ and then $$\frac{dY(t)}{dt}=[y'(t),y''(t)]$$ which can be expressed as $$[\frac{dY_0(t)}{dt},\frac{dY_1(t)}{dt}]=[Y_1(t),f(Y_0(t),Y_1(t),t)].$$
Once we have defined the Python function f
and array y_0
(that is $f$ and $y(0)$ in the mathematical formulation), we can use the odeint
function as:
y_t = odeint(f, y_0, t)
where t is an array with the time coordinates for which to solve the ODE problem. y_t is an array with one row for each point in time in t, where each column corresponds to a solution y_i(t) at that point in time.
We will see how we can implement f
and y_0
in Python code in the examples below.
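Before turning to the physical examples, here is a minimal sketch of the whole workflow for the simple test equation $y''=-y$ (the equation and variable names below are chosen only for illustration; its exact solution with $y(0)=1$, $y'(0)=0$ is $\cos t$):
from numpy import array, linspace, cos
from scipy.integrate import odeint
def dY(Y, t):
    # Y = [y, y']; return [Y_1, f(Y_0, Y_1, t)] for y'' = -y
    return array([Y[1], -Y[0]])
t = linspace(0, 10, 100)
Y = odeint(dY, [1.0, 0.0], t)          # initial condition y(0)=1, y'(0)=0
print(abs(Y[:, 0] - cos(t)).max())     # small number: numerical vs exact solution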
Example: double pendulum
Let's consider a physical example: The double compound pendulum, described in some detail here: http://en.wikipedia.org/wiki/Double_pendulum
from IPython.display import Image
Image(url='http://upload.wikimedia.org/wikipedia/commons/c/c9/Double-compound-pendulum-dimensioned.svg')
The equations of motion of the pendulum are given on the wiki page:
${\dot \theta_1} = \frac{6}{m\ell^2} \frac{ 2 p_{\theta_1} - 3 \cos(\theta_1-\theta_2) p_{\theta_2}}{16 - 9 \cos^2(\theta_1-\theta_2)}$
${\dot \theta_2} = \frac{6}{m\ell^2} \frac{ 8 p_{\theta_2} - 3 \cos(\theta_1-\theta_2) p_{\theta_1}}{16 - 9 \cos^2(\theta_1-\theta_2)}.$
${\dot p_{\theta_1}} = -\frac{1}{2} m \ell^2 \left [ {\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + 3 \frac{g}{\ell} \sin \theta_1 \right ]$
${\dot p_{\theta_2}} = -\frac{1}{2} m \ell^2 \left [ -{\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + \frac{g}{\ell} \sin \theta_2 \right]$
To make the Python code simpler to follow, let's introduce new variable names and the vector notation: $x = [\theta_1, \theta_2, p_{\theta_1}, p_{\theta_2}]$
${\dot x_1} = \frac{6}{m\ell^2} \frac{ 2 x_3 - 3 \cos(x_1-x_2) x_4}{16 - 9 \cos^2(x_1-x_2)}$
${\dot x_2} = \frac{6}{m\ell^2} \frac{ 8 x_4 - 3 \cos(x_1-x_2) x_3}{16 - 9 \cos^2(x_1-x_2)}$
${\dot x_3} = -\frac{1}{2} m \ell^2 \left [ {\dot x_1} {\dot x_2} \sin (x_1-x_2) + 3 \frac{g}{\ell} \sin x_1 \right ]$
${\dot x_4} = -\frac{1}{2} m \ell^2 \left [ -{\dot x_1} {\dot x_2} \sin (x_1-x_2) + \frac{g}{\ell} \sin x_2 \right]$
# help(odeint)
def dx(x,t):
    """
    The right-hand side of the pendulum ODE
    x=[x1,x2,x3,x4]
    """
    g, L, m = 9.82, 1., 1.
    x1,x2,x3,x4 = x # x is an array
    c1 = 1/(m*L**2)
    dx1 = 6.*c1 * (2*x3-3*cos(x1-x2)*x4)/(16.-9.*cos(x1-x2)**2)
    dx2 = 6.*c1 * (8*x4-3*cos(x1-x2)*x3)/(16.-9.*cos(x1-x2)**2)
    dx3 = -0.5/c1 * (dx1*dx2 * sin(x1-x2) + 3*g/L * sin(x1))
    dx4 = -0.5/c1 * (-dx1*dx2 * sin(x1-x2)+ g/L * sin(x2))
    return array([dx1,dx2,dx3,dx4])
# choose an initial state
x0 = [pi/4, pi/2, 0, 0]
# time coordinates to solve the ODE for: from 0 to 10 seconds
t = linspace(0, 10, 250)
# solve the ODE problem
x = odeint(dx, x0, t)
%matplotlib inline
import matplotlib.pyplot as plt
# plot the angles as a function of time
fig, axes = plt.subplots(1,2,figsize=(12,4))
axes[0].plot(t, x[:, 0], label="theta1")
axes[0].plot(t, x[:, 1], label="theta2")
axes[0].legend(loc='best')
axes[0].set_xlabel('time')
L = 0.5
x1 = L * sin(x[:, 0])
y1 = -L * cos(x[:, 0])
x2 = x1 + L * sin(x[:, 1])
y2 = y1 - L * cos(x[:, 1])
axes[1].plot(x1, y1, label="pendulum1")
axes[1].plot(x2, y2, label="pendulum2")
axes[1].set_ylim([-1, 0])
axes[1].set_xlim([-1, 1])
axes[1].legend(loc='best')
plt.show()
examples for subplot creation: https://towardsdatascience.com/the-many-ways-to-call-axes-in-matplotlib-2667a7b06e06
matplotlib examples: https://matplotlib.org/3.1.1/gallery/index.html
See the animation in pendulum.py.
(Getting it to work within Jupyter seems a bit challenging at the moment.)
Example: Damped harmonic oscillator
ODE problems are important in computational physics, so we will look at one more example: the damped harmonic oscillator. This problem is well described on the wiki page: http://en.wikipedia.org/wiki/Damping
The equation of motion for the damped oscillator is:
$\displaystyle \frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 2\zeta\omega_0\frac{\mathrm{d}x}{\mathrm{d}t} + \omega^2_0 x = 0$
where $x$ is the position of the oscillator, $\omega_0$ is the frequency, and $\zeta$ is the damping ratio. To write this second-order ODE on standard form we introduce $p = \frac{\mathrm{d}x}{\mathrm{d}t}$:
$\displaystyle \frac{\mathrm{d}p}{\mathrm{d}t} = - 2\zeta\omega_0 p - \omega^2_0 x$
$\displaystyle \frac{\mathrm{d}x}{\mathrm{d}t} = p$
In the implementation of this example we will add extra arguments to the RHS function for the ODE, rather than using global variables as we did in the previous example. As a consequence of the extra arguments to the RHS, we need to pass a keyword argument args to the odeint function:
def dy(y, t, zeta, w0):
    """
    The right-hand side of the damped oscillator ODE
    """
    x, p = y #y[0], y[1]
    dx = p
    dp = -2 * zeta * w0 * p - w0**2 * x
    return array([dx, dp])
from numpy import *
# initial state:
y0 = [1.0, 0.0]
# time coordinates to solve the ODE for
t = linspace(0, 10, 1000)
w0 = 2*pi*1.0
from scipy.integrate import odeint
# solve the ODE problem for three different values of the damping ratio
y1 = odeint(dy, y0, t, args=(0.0, w0)) # undamped
y2 = odeint(dy, y0, t, args=(0.2, w0)) # under damped
y3 = odeint(dy, y0, t, args=(1.0, w0)) # critical damping
y4 = odeint(dy, y0, t, args=(5.0, w0)) # over damped
import pylab as plt
plt.plot(t, y1[:,0], label="undamped", linewidth=0.5)
plt.plot(t, y2[:,0], label="under damped")
plt.plot(t, y3[:,0], label=r"critical damping")
plt.plot(t, y4[:,0], label="over damped")
plt.legend()
plt.show()
Fourier transform
Fourier transforms are one of the universal tools in computational physics; they appear over and over again in different contexts: solving partial differential equations, quantum mechanics, etc.
SciPy provides functions for accessing the classic FFTPACK library from NetLib, which is an efficient and well tested FFT library written in FORTRAN. The SciPy API has a few additional convenience functions, but overall the API is closely related to the original FORTRAN library.
Mathematically, the Fourier transform is an operation that produces the frequency distribution of a function: \begin{eqnarray} F(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{i\omega t} f(t) dt \end{eqnarray}
One way to picture this operation is to think of a sound wave: The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.
The important property is that its inverse exists and gives back the original function:
\begin{eqnarray} f(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-i\omega t} F(\omega) d\omega \end{eqnarray}The proof is relatively straightforward. We insert the expression for $F(\omega)$ into the second equation, and get \begin{eqnarray} f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty}d\omega e^{-i\omega t} \int_{-\infty}^{\infty} dt' e^{i\omega t'} f(t')=\frac{1}{2\pi} \int_{-\infty}^{\infty}dt' f(t')\int_{-\infty}^{\infty}d\omega e^{i\omega (t'-t)} \end{eqnarray}
Next we will prove that \begin{eqnarray} \frac{1}{2\pi}\int_{-\infty}^{\infty}d\omega e^{i\omega (t'-t)} =\delta(t-t') \end{eqnarray} which is the Dirac $\delta$ function, and has the property that \begin{equation} \int_{-\infty}^{\infty}dt' f(t')\delta(t-t')=f(t) \end{equation}
The properties of the Dirac $\delta(\tau)$ function are that for a small $\varepsilon$: \begin{eqnarray} && \delta(\tau)=0 ; |\tau|>\varepsilon \\ && \int_{-\varepsilon}^{\varepsilon}\delta(\tau)d\tau = 1 \end{eqnarray}
We need to prove that \begin{eqnarray} \frac{1}{2\pi}\int_{-\infty}^{\infty}d\omega e^{i\omega \tau} =\delta(\tau) \end{eqnarray}
If $\tau$ is nonzero, the value vanishes as long as the limits at $\infty$ are properly arranged. The second property of the $\delta$ function still needs to be proven: \begin{eqnarray} &&\frac{1}{2\pi}\int_{-\varepsilon}^{\varepsilon} d\tau \int_{-\infty}^{\infty}d\omega e^{i\omega \tau} =^? 1\\ &&\frac{1}{2\pi} \int_{-\infty}^{\infty}d\omega \int_{-\varepsilon}^{\varepsilon} d\tau e^{i\omega \tau}= \frac{1}{2\pi} \int_{-\infty}^{\infty}d\omega \frac{e^{i\omega\varepsilon}-e^{-i\omega\varepsilon}}{i\omega}= \frac{1}{2\pi} \int_{-\infty}^{\infty}d\omega \frac{2 i \sin(\omega\varepsilon)}{i\omega}= \frac{1}{\pi} \int_{-\infty}^{\infty}d x \frac{\sin(x)}{x}=1 \end{eqnarray}
from math import *
from scipy import integrate
r=integrate.quad(lambda x: sin(x)/x, 1e-6,200.)
r[0]/pi
0.4992312856179283
from scipy import special
# help(special.sici)
from scipy import special
special.sici(100000.)[0]/pi
0.5000031810631101
Note that Fourier transform of a real function is in general a complex function because
\begin{eqnarray} \textrm{Im}(F(\omega)) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \sin(\omega t) f(t)dt \end{eqnarray}Only if $f(t)$ is an even function of $t$ does the imaginary part of $F$ vanish, so that $F(\omega)$ is real.
If $f(t)$ is a real function, then $F(\omega)$ has to satisfy the constraint:
\begin{eqnarray} F^*(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-i\omega t} f(t) dt = F(-\omega) \end{eqnarray}which means that the real part $Re F(\omega)= Re F(-\omega)$ is even, and $Im F(\omega)=-Im F(-\omega)$ is odd.
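As a quick numerical illustration of this constraint, we can check the conjugate symmetry of the FFT of an arbitrary real signal (using numpy.fft directly; the signal below is made up only for this check):
import numpy as np
t = np.linspace(0, 10, 64, endpoint=False)
f = np.exp(-t) * np.sin(2*np.pi*t)                # an arbitrary real signal
F = np.fft.fft(f)
# for a real signal F(-w) = F*(w); in numpy's ordering F[N-n] corresponds to -w_n
print(np.allclose(F[1:][::-1], np.conj(F[1:])))   # True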
Periodic functions with period T
Note also that periodic functions have discrete frequencies only, i.e., $F(\omega)$ is nonzero only for a discrete set of frequencies.
The best way to see this is to require that the function $f(t)$ is periodic, $f(t+T)=f(t)$, where $T$ is the period. Then we know that \begin{eqnarray} f(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty d\omega F(\omega) e^{-i\omega t}=f(t+T)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty d\omega F(\omega) e^{-i\omega t-i\omega T} \end{eqnarray}
and for this to be true, we must have $\omega T = 2\pi n$, or, $\omega_n = 2\pi n/T$. This means that the frequency is discrete and only the values $\omega_n=2\pi n/T$ are allowed, where $n$ is an integer. If $T$ is large, there are still a lot of allowed frequencies in every small interval.
For the discrete case, we usually choose a different normalization. Instead of $F(\omega_n)$ we can use the more appropriate normalization $\widetilde{F}(\omega_n)$ such that
\begin{eqnarray} f(t) = \frac{1}{\sqrt{T}}\sum_n \widetilde{F}(\omega_n)e^{-i\omega_n t}\\ \widetilde{F}(\omega_n) = \frac{1}{\sqrt{T}}\int_{t_0}^{t_0+T} f(t) e^{i\omega_n t} dt \end{eqnarray}We can prove that this is correct normalization, because $$\frac{1}{T}\int_{t_0}^{t_0+T}dt e^{i 2\pi(n-n')\frac{t}{T}} = \delta_{n-n'}$$
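We can check this orthogonality relation numerically with quad (the period $T=3$ and the indices below are arbitrary choices made for illustration):
from numpy import cos, sin, pi
from scipy.integrate import quad
T = 3.0
def overlap(n, m):
    # (1/T) * integral over one period of exp(i*2*pi*(n-m)*t/T), real and imaginary parts
    re = quad(lambda t: cos(2*pi*(n-m)*t/T), 0, T)[0]
    im = quad(lambda t: sin(2*pi*(n-m)*t/T), 0, T)[0]
    return (re + 1j*im)/T
print(abs(overlap(3, 3)), abs(overlap(3, 5)))   # ~1 and ~0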
Discretized functions for practical FFT
Finally, in practice we discretize the function \begin{eqnarray} f(t) = \frac{1}{N}\sum_{j=0}^{N-1} \delta(t-t_j) f(t_j) \end{eqnarray}
If we rename $f(t_j)\equiv f_j$ and $F(\omega_n)\equiv F_n$ we arrive at the discrete Fourier transform.
For the variable $\frac{t}{T}$, which we choose to be distributed in $[0,1]$ ($t_0$ is set to 0), we can choose the enumeration $j/N$, where $j\in [0,N-1]$, i.e., $t = T j/N$ with $N$ a large number. Then $\omega_n t_j= 2\pi n\; t_j/T = 2\pi n j/N$. The discrete Fourier transform thus takes the form \begin{eqnarray} && F_n = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1} f_j e^{i 2\pi n j/N}\\ && f_j = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} F_n e^{-i 2\pi n j/N} \end{eqnarray}
The unusual feature of the FFT algorithm is that it can be done in $O(N \log(N))$ time, rather than the $O(N^2)$ time naively estimated from the above equations.
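To see what the FFT actually computes, here is a sketch of the naive $O(N^2)$ transform for comparison. Note that numpy/scipy use the convention $F_n=\sum_j f_j e^{-2\pi i n j/N}$ without the $1/\sqrt{N}$ factor, so we compare against that convention; the test signal is arbitrary:
import numpy as np
def dft_naive(f):
    # naive O(N^2) transform: F_n = sum_j f_j exp(-2*pi*i*n*j/N)  (numpy's convention)
    N = len(f)
    n = np.arange(N)
    return np.exp(-2j*np.pi*np.outer(n, n)/N) @ f
f = np.random.rand(256)
print(np.allclose(dft_naive(f), np.fft.fft(f)))   # True, but fft is O(N log N)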
In QM, position ($x$) and momentum ($p$) are connected by a Fourier transform. Similarly, time $t$ and energy $E=\hbar\omega$ are connected by a Fourier transform.
Fourier transform is also used in solving partial differential equations.
Some examples displayed below:
To use the fftpack
module in a python program, include it using:
from scipy.fftpack import *
from numpy.fft import fftfreq
To demonstrate how to do a fast Fourier transform with SciPy, let's look at the FFT of the solution to the damped oscillator from the previous section:
# calculate the fast fourier transform
# y2 is the solution to the under-damped oscillator from the previous section
F = fft(y2[:,0])
# calculate the frequencies for the components in F
w = fftfreq(len(t), t[1]-t[0])
T = len(t)*(t[1]-t[0])
print('Period T={:f} distance domega= 1/T={:f} last point=(N/2-1)/T={:f} first point=-N/(2T)={:f}'.format(
T, 1/T, (len(t)/2-1)*1/T, -len(t)/(2*T) ))
Period T=10.010010 distance domega= 1/T=0.099900 last point=(N/2-1)/T=49.850100 first point=-N/(2T)=-49.950000
# w_n=n/(len(t)*(t[1]-t[0])) == n/T
# note that analytically we expect w_n = 2*pi * n/T
print(arange(10)/(len(t)*(t[1]-t[0])))
w
[0. 0.0999 0.1998 0.2997 0.3996 0.4995 0.5994 0.6993 0.7992 0.8991]
array([ 0.    ,  0.0999,  0.1998, ...,  49.7502,  49.8501, -49.95  , -49.8501, ..., -0.1998, -0.0999])
plt.plot(w, abs(F))
plt.show()
#plt.xlim([-5,5])
The frequencies are sorted in an inconvenient way. First comes $\omega_n=0$, followed by all positive frequencies, and then by all negative frequencies. The frequency thus jumps from $\frac{(N-2)}{2 T}$ to $-\frac{N}{2 T}$. The reason for this ordering is the efficiency of the FFT routine.
To make sense of this output, we sort the frequencies in ascending order:
indx=sorted(range(len(w)),key=lambda i: w[i])
The frequencies are also missing a factor $2\pi$ compared with our definitions. The discrete frequencies should be $2\pi n/T$, but the program gives us $n/T$, with $n$ ranging from $-N/2$ to $N/2-1$ (for even $N$).
indx
[500, 501, 502, ..., 997, 998, 999, 0, 1, 2, ..., 497, 498, 499]
ws = 2*pi*w[indx]
Fs = F[indx]
plt.plot(ws, abs(Fs));
plt.plot(ws, Fs.real);
plt.plot(ws, Fs.imag);
plt.xlim([-30,30])
plt.show()
Properties of Fourier transform of a real signal: \begin{eqnarray} && F(\omega) = \int e^{i\omega t} x(t) dt\\ && F^*(\omega) = F(-\omega)\\ && Re(F(\omega)) = Re(F(-\omega))\\ && Im(F(\omega)) = -Im(F(-\omega)) \end{eqnarray}
The spectrum is peaked at $2\pi$, because our frequency $\omega_0=2\pi$. The damping adds a lot of additional frequencies to the spectrum. The spectrum has correct properties.
Can we use the inverse Fourier transform to get the original function back?
The inverse Fourier transform is obtained with ifft.
ft = ifft(Fs)
ts = fftfreq(len(ws), ws[1]-ws[0])*2*pi
plt.plot(ts,ft.real)
plt.xlim([-5,5])
plt.plot(t,y2[:,0])
plt.show()
It seems similar, but something is wrong... Every second point is correct, but every other point has exactly the opposite sign compared to what we expect. Why?
plt.plot(ts,ft.real)
plt.xlim([0,0.2])
plt.plot(t,y2[:,0])
plt.show()
The issue is that if the variable $t$ starts at 0 and extends to $T$, then the frequency should also start at $\omega=0$ and end at $\omega_{max}$. Alternatively, we could shift all functions so that $t\in[-T/2,T/2]$ and $\omega\in[-\omega_{max}/2,\omega_{max}/2]$.
What we expect from FFT is \begin{eqnarray} f(t_i) = \frac{1}{\sqrt{N}} \sum_{n=-N/2}^{n=N/2-1} e^{-i\omega_n t_i} F_n \end{eqnarray} where $\omega_n=2\pi n/T$. But what FFT does is \begin{eqnarray} f(t_i)=\frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} e^{-i\omega_n t_i} F_n \end{eqnarray}
To correct for that, we can multiply the result with the Nyquist frequency phase $e^{-i\omega_{-N/2} t_i}$:
\begin{eqnarray} f(t_i) = \frac{1}{\sqrt{N}} \sum_{n=-N/2}^{n=N/2-1} e^{-i\omega_n t_i} F_n = e^{-i\omega_{-N/2} t_i} \frac{1}{\sqrt{N}} \sum_{n=-N/2}^{N/2-1} e^{-i(\omega_n-\omega_{-N/2}) t_i} F_n= e^{-i\omega_{-N/2} t_i}\frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} e^{-i\omega_n t_i} F_n \end{eqnarray}
from numpy import *
plt.plot(ts,(ft * exp(-1j*ts*ws[0])).real)
plt.plot(t,y2[:,0])
plt.show()
Now it works. Our plot extends from $[-5,5]$ instead of $[0,10]$, so the values $[5,10]$ are found in $[-5,0]$.
Since this issue with the shift of the function is very common, FFT implements a function called fftshift
, which shifts the spectra so that we can start summation with $\omega_n=0$. Check it out below:
Fs2 = fftshift(Fs)
f2=ifft(Fs2)
plt.plot(ws,Fs2,label='shifted' )
plt.plot(ws,Fs,label='original')
plt.legend(loc='best')
plt.show()
plt.plot(t,y2[:,0],label='original')
plt.plot(t,f2,label='transformed')
plt.show()
Now the result is exactly what is expected, even the interval is between $[0,10]$.
Optimization
Optimization (finding minima or maxima of a function) is a large field in mathematics, and optimization of complicated functions or in many variables can be rather involved. Here we will only look at a few very simple cases. For a more detailed introduction to optimization with SciPy see: http://scipy-lectures.github.com/advanced/mathematical_optimization/index.html
To use the optimization module in scipy first include the optimize
module:
from scipy import optimize
Finding a minimum
Let's first look at how to find the minimum of a simple function of a single variable:
def f(x):
    return 4*x**3 + (x-2)**2 + x**4
x = linspace(-5, 3, 100)
plt.plot(x, f(x))
plt.show()
We can use the minimize function (or the older fmin_bfgs) to find the minimum of a function:
optimize.minimize(f,-0.)
message: Optimization terminated successfully. success: True status: 0 fun: 2.804987644868778 x: [ 4.696e-01] nit: 4 jac: [-1.073e-06] hess_inv: [[ 6.288e-02]] nfev: 14 njev: 7
res=optimize.minimize(f,-2)
print(res.x)
[-2.67298151]
x_min = optimize.fmin_bfgs(f, -2)
x_min
Optimization terminated successfully. Current function value: -3.506641 Iterations: 5 Function evaluations: 16 Gradient evaluations: 8
array([-2.67298151])
optimize.minimize(f,0.5,method='L-BFGS-B')
message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL success: True status: 0 fun: 2.8049876448711224 x: [ 4.696e-01] nit: 3 jac: [ 8.837e-06] nfev: 10 njev: 5 hess_inv: <1x1 LbfgsInvHessProduct with dtype=float64>
optimize.fmin_bfgs(f, 0.5)
Optimization terminated successfully. Current function value: 2.804988 Iterations: 3 Function evaluations: 10 Gradient evaluations: 5
array([0.46961745])
We can also use the brent or fminbound functions. They have slightly different syntax and use different algorithms.
optimize.brent(f, brack=(1,2))
0.4696174340948085
optimize.minimize_scalar(f, bracket=(1,2), method='brent')
message: Optimization terminated successfully; The returned value satisfies the termination criteria (using xtol = 1.48e-08 ) success: True fun: 2.804987644868733 x: 0.4696174340948085 nit: 12 nfev: 15
optimize.fminbound(f, -4, 2)
-2.6729822917513886
# supposedly global optimization seems to fail
optimize.basinhopping(f,2)
message: ['requested number of basinhopping iterations completed successfully'] success: True fun: 2.8049876448687328 x: [ 4.696e-01] nit: 100 minimization_failures: 0 nfev: 1380 njev: 690 lowest_optimization_result: message: Optimization terminated successfully. success: True status: 0 fun: 2.8049876448687328 x: [ 4.696e-01] nit: 5 jac: [ 1.788e-07] hess_inv: [[ 6.287e-02]] nfev: 14 njev: 7
Finding the zeros of a function
To find the root of a function of the form $f(x) = 0$ we can use the fsolve function.
It is based on Powell's hybrid method as implemented in MINPACK’s library (hybrd): https://www.extremeoptimization.com/Documentation/Mathematics/Solving-Equations/Solving-Systems-of-Non-Linear-Equations.aspx
Powell's dogleg method, also called Powell's hybrid method, attempts to minimize the sum of the squares of the function values. It does this using a combination of Newton's method and the steepest descent method. This is a so-called trust region method: every step moves the current point to within a finite region, which makes the method more stable than Newton's method.
On the other hand, the fact that the method is, in essence, a specialized minimizer means that the algorithm can get stuck in a local minimum that does not correspond to a solution of the system of equations.
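Since the hybrid method is really designed for systems of nonlinear equations, here is a small sketch on a made-up $2\times 2$ system (the equations and names are chosen only for illustration):
from numpy import array, exp
from scipy.optimize import fsolve
def equations(v):
    x, y = v
    # illustrative system: x + y^2 = 4 and exp(x) + x*y = 3
    return array([x + y**2 - 4, exp(x) + x*y - 3])
sol = fsolve(equations, [1.0, 1.0])   # initial guess
print(sol, equations(sol))            # residuals should be ~0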
Image('https://upload.wikimedia.org/wikipedia/commons/e/e0/NewtonIteration_Ani.gif')
<IPython.core.display.Image object>
Image('https://upload.wikimedia.org/wikipedia/commons/thumb/f/ff/Gradient_descent.svg/1920px-Gradient_descent.svg.png')
To use fsolve, we need an initial guess:
from numpy import * # to have tan(array) defined
omega_c = 3.0
def f(omega):
    # a transcendental equation: resonance frequencies of a low-Q SQUID terminated microwave resonator
    return tan(2*pi*omega) - omega_c/omega
x = linspace(1e-6, 3, 1000)
y = f(x)
plt.plot(x,y,'o-')
plt.ylim(-5,5)
plt.show()
x = linspace(1e-6, 3, 1000)
y = f(x)
mask = where(abs(y) > 50)
x[mask] = y[mask] = nan # get rid of vertical lines where the function flips sign
plt.plot(x, y)
plt.plot([0, 3], [0, 0], 'k')
plt.ylim(-5,5)
plt.show()
optimize.fsolve(f, 0.1)
array([0.23743014])
optimize.fsolve(f, 0.6)
array([0.71286972])
optimize.fsolve(f, 1.1)
array([1.18990285])
A change of sign can occur at a zero or at a pole. To use a bracketing technique, we need to carefully bracket the zeros (and not the poles).
f(0.01),f(0.5)
(-299.93708533274634, -6.0)
f(0.001),f(0.25),f(0.3),f(0.73)
(-2999.993716732008, 1.6331239353195358e+16, -13.077683537175254, 3.806226047209912)
toms748(f, a, b, args=(), k=1, xtol=2e-12, rtol=8.881784197001252e-16, maxiter=100, full_output=False, disp=True)
Finds a zero of the function f on the interval [a, b], where f(a) and f(b) must have opposite signs.
It uses a mixture of inverse cubic interpolation and "Newton-quadratic" steps. [APS1995].
[optimize.toms748(f,0.01,0.25),optimize.toms748(f, 0.3, 0.73)]
[0.2374301406360339, 0.7128697158579146]
optimize.brentq(f, 0.25, 0.3)
0.25000000000145517
optimize.toms748(f,-0.012,0.01)
optimize.fsolve(f, 1e-16)
array([0.23743014])
Interpolation
Interpolation is simple and convenient in scipy: the interp1d function, when given arrays describing X and Y data, returns an object that behaves like a function, which can be called for an arbitrary value of x (in the range covered by X) and returns the corresponding interpolated y value:
from scipy.interpolate import *
from numpy import *
def f(x):
    return sin(x)
n = linspace(0, 9, 10)
x = linspace(0, 9, 300)
y_meas = f(n) + 0.05 * random.randn(len(n)) # simulate measurement with noise
y_real = f(x)
linear_interpolation = interp1d(n, y_meas, kind='linear')
y_interp1 = linear_interpolation(x)
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10,4))
ax.plot(n, y_meas, 'bs', label='noisy data')
ax.plot(x, y_real, 'k', lw=2, label='true function')
ax.plot(x, y_interp1, 'r', label='linear interp')
plt.legend(loc='best')
plt.show()
cubic_interpolation = interp1d(n, y_meas, kind='cubic')
y_interp2 = cubic_interpolation(x)
cubic_smooth = UnivariateSpline(n, y_meas,s=0)
y_interp3 = cubic_smooth(x)
fig, ax = plt.subplots(figsize=(10,4))
ax.plot(n, y_meas, 'bs', label='noisy data')
ax.plot(x, y_real, 'k', lw=2, label='true function')
ax.plot(x, y_interp1, 'r', label='linear interp')
ax.plot(x, y_interp2, 'g', label='cubic interp')
ax.plot(x, y_interp3, 'b', label='cubic spline')
ax.legend(loc=3)
plt.show()
cubic_interpolation = interp1d(n, f(n), kind='cubic')
diff = cubic_interpolation(x)-f(x)
cubic_smooth = UnivariateSpline(n, f(n), s=0., k=4)
diff2 = cubic_smooth(x)-f(x)
plt.plot(x, diff, label='cubic')
plt.plot(x, diff2, label='cubic-spline')
plt.legend(loc='best')
plt.show()
help(UnivariateSpline)
Help on class UnivariateSpline in module scipy.interpolate._fitpack2 (abridged):

UnivariateSpline(x, y, w=None, bbox=[None, None], k=3, s=None, ext=0, check_finite=False)
fits a 1-D smoothing spline of degree k to the data points (x, y). The key parameters are:
- x, y : 1-D arrays of the data; x must be increasing (strictly increasing if s=0).
- w : optional positive weights for the fit (default: all ones).
- k : degree of the smoothing spline, 1 <= k <= 5; k=3 (the default) gives a cubic spline.
- s : positive smoothing factor; the number of knots is increased until sum((w[i]*(y[i]-spl(x[i])))**2) <= s. Larger s means more smoothing. With s=0 the spline interpolates all data points (equivalent to InterpolatedUnivariateSpline); with s=None (the default) s is set to len(w). If 1/w[i] estimates the standard deviation of y[i], s = len(w) is a reasonable choice.
- ext : extrapolation mode outside the knot interval: 0/'extrapolate', 1/'zeros', 2/'raise', 3/'const'.

The resulting object is callable, spl(x, nu=0), evaluating the spline or its nu-th derivative, and provides methods such as derivative(n), antiderivative(n), derivatives(x), integral(a, b), roots() (cubic splines only), get_knots(), get_coeffs(), get_residual(), and set_smoothing_factor(s). Note that NaNs in the input make the fit useless; a workaround is to set the corresponding weights to zero and replace the NaNs by any finite value.
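As a quick illustration of this interface (a minimal sketch with data of my own choosing, not from the lecture), an interpolating spline through samples of x**2 reproduces the quadratic exactly, so integral and derivatives return the analytic values:

import numpy as np
from scipy.interpolate import UnivariateSpline

xq = np.linspace(0, 3, 11)
spl = UnivariateSpline(xq, xq**2, s=0)   # s=0: interpolate all data points

print(spl.integral(0, 3))     # 9.0, the exact value of the integral of x^2 from 0 to 3
print(spl.derivatives(1.5))   # [2.25, 3.0, 2.0, 0.0]: value, 1st, 2nd, 3rd derivative at x=1.5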
# Second derivative of the cubic smoothing spline constructed above
df = cubic_smooth.derivative(n=2)
plt.plot(x, df(x))
plt.show()
# scipy.misc.derivative is deprecated (see the warning below); it approximates
# derivatives by central finite differences with step dx.
from scipy.misc import derivative as der
#cubic_interpolation.derivative()
# Second derivative of the interpolating function at the interior points, step dx=0.01
dcubic_interpolation = [der(cubic_interpolation, t, 0.01, n=2) for t in x[1:-1]]
plt.plot(x[1:-1], dcubic_interpolation)
plt.show()
/var/folders/j8/d9m3r0zx7j37l3ktfl_n1xw00000gn/T/ipykernel_6651/476914872.py:4: DeprecationWarning: scipy.misc.derivative is deprecated in SciPy v1.10.0; and will be completely removed in SciPy v1.12.0. You may consider using findiff: https://github.com/maroba/findiff or numdifftools: https://github.com/pbrod/numdifftools dcubic_interpolation=[der(cubic_interpolation, t, 0.01,n=2) for t in x[1:-1]]
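Since scipy.misc.derivative is slated for removal, a simple replacement is to code the central-difference formula directly, $f''(t) \approx \frac{f(t+h) - 2f(t) + f(t-h)}{h^2}$. The helper below is a minimal sketch of my own (not a SciPy function), reusing the cubic_interpolation and x defined above:

def second_derivative(f, t, h=0.01):
    # Central finite-difference approximation of the second derivative f''(t)
    return (f(t + h) - 2.0*f(t) + f(t - h)) / h**2

d2 = [second_derivative(cubic_interpolation, t) for t in x[1:-1]]
plt.plot(x[1:-1], d2)
plt.show()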
interpolate.make_interp_spline allows specifying boundary conditions at the two ends through its bc_type parameter (see the sketch after this list):

bc_type : 2-tuple or None
Default is None, which means choosing the boundary conditions automatically. Otherwise it must be a length-two tuple, where the first element sets the boundary conditions at x[0] and the second element sets the boundary conditions at x[-1]. Each of these must be an iterable of pairs (order, value) giving the values of derivatives of the specified orders at the corresponding edge of the interpolation interval. Alternatively, the following string aliases are recognized:
- "clamped": the first derivatives at the ends are zero. This is equivalent to bc_type=([(1, 0.0)], [(1, 0.0)]).
- "natural": the second derivatives at the ends are zero. This is equivalent to bc_type=([(2, 0.0)], [(2, 0.0)]).
- "not-a-knot" (default): the first and second segments are the same polynomial, which is equivalent to bc_type=None. For a cubic spline this makes the third derivative continuous at the first and last interior knots, x[1] and x[-2].
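Here is a minimal sketch (with a small test signal of my own choosing) comparing the default not-a-knot conditions with natural boundary conditions:

import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate

xb = np.linspace(0, 10, 11)
yb = np.sin(xb)
xs = np.linspace(0, 10, 500)

spl_notaknot = interpolate.make_interp_spline(xb, yb, k=3)                    # default boundary conditions
spl_natural  = interpolate.make_interp_spline(xb, yb, k=3, bc_type="natural") # zero second derivatives at the ends

plt.plot(xb, yb, 'o', label='data')
plt.plot(xs, spl_notaknot(xs), label='not-a-knot')
plt.plot(xs, spl_natural(xs), label='natural')
plt.legend(loc='best')
plt.show()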
A two-dimensional analog, interp2d, also exists with a similar syntax. Note, however, that interp2d is deprecated in recent SciPy releases; RegularGridInterpolator (or RectBivariateSpline for spline fits on a regular grid) is the recommended replacement.
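A minimal sketch of the recommended replacement on a regular grid (the grid and test function below are my own choices, not from the lecture):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

xg = np.linspace(0, 4, 5)
yg = np.linspace(0, 4, 5)
# Values on the grid; the array shape must be (len(xg), len(yg))
Z = np.add.outer(xg**2, yg**2)

f2d = RegularGridInterpolator((xg, yg), Z, method="linear")

# Evaluate at a few scattered points, given as (x, y) pairs
pts = np.array([[1.5, 2.5], [3.1, 0.7]])
print(f2d(pts))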
UnivariateSpline is the object-oriented analog for spline interpolation and offers a bit more functionality, e.g. the positive smoothing factor s described above.
from scipy import interpolate

# Interpolating spline (s=0) through sin(x), plus its first and second derivatives
fx = interpolate.UnivariateSpline(x, sin(x), s=0)
fx1 = fx.derivative(1)
fx2 = fx.derivative(2)

# The spline closely follows sin(x); its first derivative approximates cos(x)
plt.plot(x, fx(x))
plt.plot(x, sin(x))
plt.plot(x, fx1(x))
plt.show()

# First and second derivatives together: approximately cos(x) and -sin(x)
plt.plot(x, fx1(x))
plt.plot(x, fx2(x))
plt.show()
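To see the effect of the smoothing factor s on noisy data, here is a minimal sketch (synthetic data of my own choosing, following the pattern in the class docstring above):

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
xn = np.linspace(-3, 3, 50)
yn = np.exp(-xn**2) + 0.1 * rng.standard_normal(50)
xs = np.linspace(-3, 3, 500)

spl0 = UnivariateSpline(xn, yn, s=0)    # interpolates every (noisy) point
spl1 = UnivariateSpline(xn, yn)         # default s = len(w): smooths out the noise

plt.plot(xn, yn, 'ro', ms=5)
plt.plot(xs, spl0(xs), 'g', lw=1, label='s=0')
plt.plot(xs, spl1(xs), 'b', lw=3, label='default s')
plt.legend(loc='best')
plt.show()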
Further reading¶
- http://www.scipy.org - The official web page for the SciPy project.
- http://docs.scipy.org/doc/scipy/reference/tutorial/index.html - A tutorial on how to get started using SciPy.
- https://github.com/scipy/scipy/ - The SciPy source code.