A Taxonomy of Numerical Methods v1

Posted by JoshReuben on Geeks with Blogs, Sun, 09 Dec 2012
Numerical Analysis – When, What, (but not how)

Once you understand the Math & know C++, Numerical Methods are basically blocks of iterative & conditional math code. I found the real trick was seeing the forest for the trees – knowing which method to use for which situation. It's pretty easy to get lost in the details – so I've tried to organize these methods in a way that I can quickly look them up.

I’ve included links to detailed explanations and to C++ code examples.

I’ve tried to classify Numerical methods in the following broad categories:

  • Solving Systems of Linear Equations
  • Solving Non-Linear Equations Iteratively
  • Interpolation
  • Curve Fitting
  • Optimization
  • Numerical Differentiation & Integration
  • Solving ODEs
  • Boundary Problems
  • Solving EigenValue problems

Enjoy – I did!

Solving Systems of Linear Equations

Overview

Solve sets of algebraic equations with n unknowns

The set is commonly in matrix form

Gauss-Jordan Elimination

http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination

C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx

Produces the solution of the equations & the inverse of the coefficient matrix

Efficient, stable

2 steps:

· Forward Elimination – matrix decomposition: reduce set to triangular form (0s below the diagonal) or row echelon form. If degenerate, then there is no solution

· Backward Elimination – write the original matrix as the product of its inverse matrix & its reduced row-echelon matrix → reduce set to row canonical form & use back-substitution to find the solution to the set

Elementary ops for matrix decomposition:

· Row multiplication

· Row switching

· Add multiples of rows to other rows

Use pivoting to ensure rows are ordered for achieving triangular form
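
To make the mechanics concrete, here's a minimal Gauss-Jordan sketch of my own (not the linked snippet) with partial pivoting on an augmented matrix [A|b]; the function and variable names are illustrative assumptions:

```cpp
#include <vector>
#include <cmath>
#include <cstdio>
#include <utility>

// Minimal Gauss-Jordan sketch with partial pivoting: reduces the augmented
// matrix [A|b] to reduced row-echelon form so the last column becomes x.
// Names (gaussJordan, aug) are illustrative, not from the original post.
bool gaussJordan(std::vector<std::vector<double>>& aug) {
    const int n = aug.size();
    for (int col = 0; col < n; ++col) {
        // Pivoting: pick the row with the largest magnitude in this column.
        int pivot = col;
        for (int r = col + 1; r < n; ++r)
            if (std::fabs(aug[r][col]) > std::fabs(aug[pivot][col])) pivot = r;
        if (std::fabs(aug[pivot][col]) < 1e-12) return false;   // degenerate set
        std::swap(aug[col], aug[pivot]);                        // row switching
        // Row multiplication: scale the pivot row so the diagonal entry is 1.
        double d = aug[col][col];
        for (double& v : aug[col]) v /= d;
        // Add multiples of the pivot row to zero out the column everywhere else.
        for (int r = 0; r < n; ++r) {
            if (r == col) continue;
            double f = aug[r][col];
            for (int c = 0; c <= n; ++c) aug[r][c] -= f * aug[col][c];
        }
    }
    return true;
}

int main() {
    // 2x + y = 5,  x + 3y = 10  ->  x = 1, y = 3
    std::vector<std::vector<double>> aug = {{2, 1, 5}, {1, 3, 10}};
    if (gaussJordan(aug))
        std::printf("x = %g, y = %g\n", aug[0][2], aug[1][2]);
}
```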

LU Decomposition

http://en.wikipedia.org/wiki/LU_decomposition

C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html

Represent the matrix as a product of lower & upper triangular matrices

A modified version of GJ Elimination

Advantage – the triangular matrices can easily be solved via forward & backward substitution

Techniques:

· Doolittle Method – sets the L matrix diagonal to unity

· Crout Method - sets the U matrix diagonal to unity

Note: since the unity diagonal need not be stored, the L & U factors can be stored compactly in the same matrix
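
A minimal Doolittle-style sketch of my own (illustrative, no pivoting, not the linked code) showing how L gets the unity diagonal while U holds the remaining coefficients:

```cpp
#include <vector>
#include <cstdio>

// Minimal Doolittle LU sketch (no pivoting): L has a unity diagonal, A = L*U.
// Names (doolittle, Mat) are illustrative assumptions.
using Mat = std::vector<std::vector<double>>;

void doolittle(const Mat& A, Mat& L, Mat& U) {
    const int n = A.size();
    L.assign(n, std::vector<double>(n, 0.0));
    U.assign(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i) {
        L[i][i] = 1.0;                                  // unity diagonal
        for (int j = i; j < n; ++j) {                   // row i of U
            double s = 0;
            for (int k = 0; k < i; ++k) s += L[i][k] * U[k][j];
            U[i][j] = A[i][j] - s;
        }
        for (int j = i + 1; j < n; ++j) {               // column i of L
            double s = 0;
            for (int k = 0; k < i; ++k) s += L[j][k] * U[k][i];
            L[j][i] = (A[j][i] - s) / U[i][i];
        }
    }
}

int main() {
    Mat A = {{4, 3}, {6, 3}}, L, U;
    doolittle(A, L, U);
    // Expect L = [[1,0],[1.5,1]], U = [[4,3],[0,-1.5]]
    std::printf("L21=%g U22=%g\n", L[1][0], U[1][1]);
}
```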

Gauss-Seidel Iteration

http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method

C++: http://www.nr.com/forum/showthread.php?t=722

Rearrange each equation in the linear set to isolate its diagonal unknown & then iterate, substituting the most recently updated values into the sums on the right-hand side (as the formulas contain sums, it is implemented iteratively).

An optimization of Gauss-Jacobi: roughly 1.5 times faster, requiring about a quarter of the iterations to achieve the same tolerance
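
A minimal Gauss-Seidel sketch of my own (illustrative, not the linked thread; assumes a diagonally dominant matrix so the iteration converges):

```cpp
#include <vector>
#include <cmath>
#include <cstdio>
#include <algorithm>

// Minimal Gauss-Seidel sketch: isolate the diagonal unknown in each row and
// iterate, reusing values updated earlier in the same sweep.
// Names (gaussSeidel, tol, maxIter) are illustrative assumptions.
std::vector<double> gaussSeidel(const std::vector<std::vector<double>>& A,
                                const std::vector<double>& b,
                                double tol = 1e-9, int maxIter = 1000) {
    const int n = b.size();
    std::vector<double> x(n, 0.0);
    for (int it = 0; it < maxIter; ++it) {
        double maxDelta = 0.0;
        for (int i = 0; i < n; ++i) {
            double s = b[i];
            for (int j = 0; j < n; ++j)
                if (j != i) s -= A[i][j] * x[j];   // uses already-updated x[j]
            double xi = s / A[i][i];
            maxDelta = std::max(maxDelta, std::fabs(xi - x[i]));
            x[i] = xi;
        }
        if (maxDelta < tol) break;                  // tolerance condition met
    }
    return x;
}

int main() {
    // 4x + y = 9,  x + 3y = 10  ->  x = 17/11, y = 31/11
    auto x = gaussSeidel({{4, 1}, {1, 3}}, {9, 10});
    std::printf("x = %g, y = %g\n", x[0], x[1]);
}
```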

Solving Non-Linear Equations Iteratively

find roots of polynomials – there may be 0, 1 or up to n real solutions for an nth-order polynomial

use iterative techniques

Iterative methods

· used when there are no known analytical techniques

· Requires the set of functions to be continuous & differentiable

· Requires an initial seed value – choice is critical to convergence → conduct multiple runs with different starting points & then select best result

· Systematic - iterate until diminishing returns, tolerance or max iteration conditions are met

· bracketing techniques will always yield convergent solutions, non-bracketing methods may fail to converge

Incremental method

if a nonlinear function has opposite signs at the 2 ends of a small interval [x1, x2], then there is likely to be a solution in that interval – solutions are detected by evaluating the function over interval steps, looking for a change in sign, and adjusting the step size dynamically.

Limitations – can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, limited to functions that cross the x-axis, gives false positives for singularities

Fixed point method

http://en.wikipedia.org/wiki/Fixed-point_iteration

C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false

Algebraically rearrange the equation to isolate a variable (x = g(x)), then iterate from the seed value

Bisection method

http://en.wikipedia.org/wiki/Bisection_method

C++: http://numericalcomputing.wordpress.com/category/algorithms/

Bracketed - Select an initial interval, keep bisecting it at its midpoint into sub-intervals and then apply the incremental method on smaller & smaller intervals – zoom in

Adv: unaffected by function gradient → reliable

Disadv: slow convergence
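
A minimal bisection sketch of my own (illustrative, not the linked code; assumes the root is bracketed so f(a) & f(b) have opposite signs):

```cpp
#include <cmath>
#include <cstdio>
#include <functional>

// Minimal bisection sketch: the root must be bracketed, i.e. f(a) and f(b)
// have opposite signs. Names (bisect, tol) are illustrative assumptions.
double bisect(const std::function<double(double)>& f,
              double a, double b, double tol = 1e-10) {
    double fa = f(a);
    while ((b - a) > tol) {
        double m = 0.5 * (a + b);       // bisect at the midpoint
        double fm = f(m);
        if (fa * fm <= 0.0) b = m;      // root lies in the left half
        else { a = m; fa = fm; }        // root lies in the right half
    }
    return 0.5 * (a + b);
}

int main() {
    // Root of x^2 - 2 on [0, 2] is sqrt(2).
    std::printf("%.10f\n", bisect([](double x) { return x * x - 2.0; }, 0.0, 2.0));
}
```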

False Position Method

http://en.wikipedia.org/wiki/False_position_method

C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/

Bracketed - Select an initial interval, & use the relative value of the function at the interval end points to select the next sub-intervals (estimate how far between the end points the solution might be & subdivide based on this)

Newton-Raphson method

http://en.wikipedia.org/wiki/Newton's_method

C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3

Also known as Newton's method

Convenient, efficient

Not bracketed – only a single initial guess is required to start iteration – requires an analytical expression for the first derivative of the function as input.

Evaluates the function & its derivative at each step.

Can be extended to the Newton MultiRoot method for solving multiple roots

Can be easily applied to an n-coupled set of non-linear equations – conduct a Taylor Series expansion of each function, dropping the higher-order terms, rewrite as a Jacobian matrix of partial derivatives & convert to a set of simultaneous linear equations!
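
For the single-variable case, a minimal Newton-Raphson sketch of my own (illustrative, not the linked class code; the derivative is supplied analytically as a lambda):

```cpp
#include <cmath>
#include <cstdio>
#include <functional>

// Minimal Newton-Raphson sketch: needs the analytic derivative as input and a
// single starting guess. Names (newtonRaphson, x0) are illustrative assumptions.
double newtonRaphson(const std::function<double(double)>& f,
                     const std::function<double(double)>& df,
                     double x0, double tol = 1e-12, int maxIter = 50) {
    double x = x0;
    for (int i = 0; i < maxIter; ++i) {
        double step = f(x) / df(x);     // evaluate function & derivative each step
        x -= step;
        if (std::fabs(step) < tol) break;
    }
    return x;
}

int main() {
    // sqrt(2) as the root of f(x) = x^2 - 2, starting from x0 = 1.
    double r = newtonRaphson([](double x) { return x * x - 2.0; },
                             [](double x) { return 2.0 * x; }, 1.0);
    std::printf("%.12f\n", r);
}
```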

Secant Method

http://en.wikipedia.org/wiki/Secant_method

C++: http://forum.vcoderz.com/showthread.php?p=205230

Unlike N-R, estimates the first derivative from 2 initial points (does not require the root to be bracketed) instead of requiring it as input

Since the derivative is approximated, it may converge more slowly, but it is fast in practice as it does not have to evaluate the analytical derivative at each step.

Similar implementation to the False Position method
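
A minimal secant sketch of my own (illustrative, not the linked forum code; the two starting points need not bracket the root):

```cpp
#include <cmath>
#include <cstdio>
#include <functional>

// Minimal secant-method sketch: approximate the derivative from the last two
// iterates instead of requiring it analytically. Names (secant, x0, x1) are
// illustrative assumptions.
double secant(const std::function<double(double)>& f,
              double x0, double x1, double tol = 1e-12, int maxIter = 100) {
    double f0 = f(x0), f1 = f(x1);
    for (int i = 0; i < maxIter && std::fabs(f1) > tol; ++i) {
        double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);   // root of the secant line
        x0 = x1; f0 = f1;
        x1 = x2; f1 = f(x2);
    }
    return x1;
}

int main() {
    // Root of x^3 - x - 2 near 1.52
    std::printf("%.7f\n", secant([](double x) { return x * x * x - x - 2; }, 1.0, 2.0));
}
```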

Birge-Vieta Method

http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html

C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false

combines Horner's method of polynomial evaluation (transforming into lesser degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up

Interpolation

Overview

Construct new data points that fit as closely as possible within the range of a discrete set of known points (obtained via sampling or experimentation)

Use Taylor Series Expansion of a function f(x) around a specific value for x

Linear Interpolation

http://en.wikipedia.org/wiki/Linear_interpolation

C++: http://www.hamaluik.com/?p=289

Straight line between 2 points → concatenate interpolants between each pair of data points
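
A minimal table-lookup sketch of my own (illustrative, not the linked post; assumes the sample points are sorted by x and clamps outside the range):

```cpp
#include <vector>
#include <cstdio>

// Minimal linear-interpolation sketch over a table of (x, y) samples sorted by
// x: find the bracketing pair and blend along the straight line between them.
// Names (lerpTable, Point) are illustrative assumptions.
struct Point { double x, y; };

double lerpTable(const std::vector<Point>& pts, double x) {
    if (x <= pts.front().x) return pts.front().y;          // clamp outside range
    if (x >= pts.back().x)  return pts.back().y;
    for (size_t i = 1; i < pts.size(); ++i) {
        if (x <= pts[i].x) {
            double t = (x - pts[i - 1].x) / (pts[i].x - pts[i - 1].x);
            return pts[i - 1].y + t * (pts[i].y - pts[i - 1].y);
        }
    }
    return pts.back().y;   // unreachable for sorted input
}

int main() {
    std::vector<Point> pts = {{0, 0}, {1, 2}, {2, 3}};
    std::printf("%g\n", lerpTable(pts, 1.5));   // halfway between 2 and 3 -> 2.5
}
```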

Bilinear Interpolation

http://en.wikipedia.org/wiki/Bilinear_interpolation

C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/

Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in 1 direction, then in another.

Used in image processing – e.g. texture mapping filter. Uses 4 vertices to interpolate a value within a unit cell.

Lagrange Interpolation

http://en.wikipedia.org/wiki/Lagrange_polynomial

C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php

For polynomials

Requires recomputation of all terms for each distinct x value – can only be applied for a small number of nodes

Numerically unstable
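
A minimal Lagrange sketch of my own (illustrative, not the linked code) that shows why every term is recomputed for each queried x:

```cpp
#include <vector>
#include <cstdio>

// Minimal Lagrange-interpolation sketch: every basis polynomial is rebuilt for
// each queried x, which is why the cost grows quickly with the node count.
// Names (lagrange, xs, ys) are illustrative assumptions.
double lagrange(const std::vector<double>& xs, const std::vector<double>& ys, double x) {
    double sum = 0.0;
    for (size_t i = 0; i < xs.size(); ++i) {
        double basis = 1.0;                       // L_i(x)
        for (size_t j = 0; j < xs.size(); ++j)
            if (j != i) basis *= (x - xs[j]) / (xs[i] - xs[j]);
        sum += ys[i] * basis;
    }
    return sum;
}

int main() {
    // Nodes sampled from y = x^2; the quadratic interpolant reproduces it exactly.
    std::printf("%g\n", lagrange({0, 1, 2}, {0, 1, 4}, 1.5));   // 2.25
}
```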

Barycentric Interpolation

http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715

C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/

Rearrange the terms in the equation of the Lagrange interpolation by defining weight functions that are independent of the interpolated value of x

Newton Divided Difference Interpolation

http://en.wikipedia.org/wiki/Newton_polynomial

C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html

Hermite Divided Differences:

Interpolation polynomial approximation for a given set of data points in Newton form – divided differences are used to calculate the polynomial coefficients.

For a given set of 3 data points, fit a quadratic interpolant through the data

Bracketed functions allow Newton divided differences to be calculated recursively

Difference table

Cubic Spline Interpolation

http://en.wikipedia.org/wiki/Spline_interpolation

C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html

Spline is a piecewise polynomial

Provides smoothness – for interpolations with significantly varying data

Use weighted coefficients to bend the function so that it is smooth & its 1st & 2nd derivatives are continuous through the edge points of each interval

Curve Fitting

A generalization of interpolation whereby the given data points may contain noise → the curve does not necessarily pass through all the points

Least Squares Fit

http://en.wikipedia.org/wiki/Least_squares

C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c

Residual – difference between observed value & expected value

Model function is often chosen as a linear combination of the specified functions

Determines:

A) The model instance in which the sum of squared residuals has the least value

B) param values for which model best fits data

Straight Line Fit

Linear correlation between independent variable and dependent variable

Linear Regression

http://en.wikipedia.org/wiki/Linear_regression

C++: http://www.oocities.org/david_swaim/cpp/linregc.htm

Special case of statistically exact extrapolation

Leverage least squares

Given a basis function, the sum of the squared residuals is determined and the corresponding gradient equations are expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition)

Can be weighted - Drop the assumption that all errors have the same significance → confidence of accuracy is different for each data point. Fit the function closer to points with higher weights

Polynomial Fit - use a polynomial basis function
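
For the straight-line case the normal equations collapse to closed-form sums; a minimal illustrative sketch of my own (not the linked code; fitLine and the sample data are assumptions):

```cpp
#include <vector>
#include <cstdio>

// Minimal least-squares straight-line fit sketch: the normal equations for
// y = a + b*x collapse to closed-form sums for this one-basis-function case.
void fitLine(const std::vector<double>& x, const std::vector<double>& y,
             double& a, double& b) {
    const double n = x.size();
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < x.size(); ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx);   // slope
    a = (sy - b * sx) / n;                           // intercept
}

int main() {
    // Noisy samples around y = 2x + 1.
    std::vector<double> x = {0, 1, 2, 3, 4}, y = {1.1, 2.9, 5.2, 6.9, 9.1};
    double a, b;
    fitLine(x, y, a, b);
    std::printf("y = %.3f + %.3f x\n", a, b);
}
```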

Moving Average

http://en.wikipedia.org/wiki/Moving_average

C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm

Used for smoothing (cancel fluctuations to highlight longer-term trends & cycles), time series data analysis, signal processing filters

Replace each data point with average of neighbors.

Can be simple (SMA), weighted (WMA), exponential (EMA). Lags behind latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially according to distance from point.

Parameters: smoothing factor, period, weight basis
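
A minimal simple-moving-average (SMA) sketch of my own (illustrative, not the CodeProject article; the period and sample data are assumptions):

```cpp
#include <vector>
#include <cstdio>

// Minimal simple-moving-average (SMA) sketch over a fixed period: each output
// is the mean of the current point and the (period-1) points before it.
// Names (sma, period) are illustrative assumptions.
std::vector<double> sma(const std::vector<double>& data, size_t period) {
    std::vector<double> out;
    double windowSum = 0.0;
    for (size_t i = 0; i < data.size(); ++i) {
        windowSum += data[i];
        if (i >= period) windowSum -= data[i - period];   // slide the window
        if (i + 1 >= period) out.push_back(windowSum / period);
    }
    return out;
}

int main() {
    for (double v : sma({1, 2, 3, 4, 5, 6}, 3))
        std::printf("%g ", v);   // 2 3 4 5
    std::printf("\n");
}
```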

Optimization

Overview

Given a function with multiple variables, find the minimum (or the maximum by minimizing –f(x))

Iterative approach

Efficient, but not necessarily reliable

Conditions: noisy data, constraints, non-linear models

Detection via sign of first derivative - Derivative of saddle points will be 0

Local minima

Bisection method

Similar to the method for finding a root of a non-linear equation

Start with an interval that contains a minimum

Golden Search method

http://en.wikipedia.org/wiki/Golden_section_search

C++: http://www.codecogs.com/code/maths/optimization/golden.php

Bisect intervals according to golden ratio 0.618..

Achieves the interval reduction with a single new function evaluation per iteration instead of 2
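
A minimal golden-section sketch of my own (illustrative, not the linked code; assumes a single minimum is bracketed by [a, b]):

```cpp
#include <cstdio>
#include <functional>

// Minimal golden-section search sketch for a single minimum bracketed by
// [a, b]: only one new function evaluation is needed per iteration because
// one interior point is reused. Names (goldenSearch) are illustrative.
double goldenSearch(const std::function<double(double)>& f,
                    double a, double b, double tol = 1e-8) {
    const double r = 0.6180339887498949;       // golden ratio conjugate
    double x1 = b - r * (b - a), x2 = a + r * (b - a);
    double f1 = f(x1), f2 = f(x2);
    while ((b - a) > tol) {
        if (f1 < f2) {                          // minimum lies in [a, x2]
            b = x2; x2 = x1; f2 = f1;
            x1 = b - r * (b - a); f1 = f(x1);
        } else {                                // minimum lies in [x1, b]
            a = x1; x1 = x2; f1 = f2;
            x2 = a + r * (b - a); f2 = f(x2);
        }
    }
    return 0.5 * (a + b);
}

int main() {
    // Minimum of (x - 2)^2 on [0, 5] is at x = 2.
    std::printf("%.6f\n", goldenSearch([](double x) { return (x - 2) * (x - 2); }, 0, 5));
}
```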

Newton-Raphson Method

Brent method

http://en.wikipedia.org/wiki/Brent's_method

C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp

Based on quadratic or parabolic interpolation – if the function is smooth & parabolic near the minimum, then a parabola fitted through any 3 points should approximate the minimum – fails when the 3 points are collinear, in which case the denominator is 0

Simplex Method

http://en.wikipedia.org/wiki/Simplex_algorithm

C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm

Find the global minimum of a multi-variable function

Direct search – no derivatives required

At each step it maintains a non-degenerate simplex – a convex hull of n+1 vertices.

Obtains the minimum for a function with n variables by evaluating the function at the n+1 vertices, iteratively replacing the point with the worst result by a better point, shrinking the multidimensional simplex around the best point.

Point replacement involves expanding & contracting the simplex near the worst value point to determine a better replacement point

Oscillation can be avoided by choosing the 2nd worst result

Restart if it gets stuck

Parameters: contraction & expansion factors

Simulated Annealing

http://en.wikipedia.org/wiki/Simulated_annealing

C++: http://code.google.com/p/cppsimulatedannealing/

Analogy to heating & cooling metal to strengthen its structure

Stochastic method – apply a random permutation search for the global minimum – avoids entrapment in local minima by occasionally accepting uphill (hill-climbing) moves

Heating schedule - Annealing schedule params: temperature, iterations at each temp, temperature delta

Cooling schedule – can be linear, step-wise or exponential

Differential Evolution

http://en.wikipedia.org/wiki/Differential_evolution

C++: http://www.amichel.com/de/doc/html/

More advanced stochastic methods analogous to biological processes: Genetic algorithms, evolution strategies

Parallel direct search method against multiple discrete or continuous variables

Initial population of variable vectors chosen randomly – if weighted difference vector of 2 vectors yields a lower objective function value then it replaces the comparison vector

Many params: #parents, #variables, step size, crossover constant etc

Convergence is slow – many more function evaluations than simulated annealing

Numerical Differentiation

Overview

2 approaches to finite difference methods:

· A) approximate function via polynomial interpolation then differentiate

· B) Taylor series approximation – additionally provides error estimate

Finite Difference methods

http://en.wikipedia.org/wiki/Finite_difference_method

C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf

Find differences between high order derivative values - Approximate differential equations by finite differences at evenly spaced data points

Based on forward & backward Taylor series expansion of f(x) about x plus or minus multiples of delta h.

Forward / backward difference - the sum of the series contains the even derivatives and the difference of the series contains the odd derivatives – coupled equations that can be solved.

Provide an approximation of the derivative within a O(h^2) accuracy

There is also central difference & extended central difference which has a O(h^4) accuracy
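
A minimal sketch of my own of the forward, central & extended central difference formulas for the first derivative (illustrative; the test function and step size h are assumptions):

```cpp
#include <cmath>
#include <cstdio>
#include <functional>

// Minimal finite-difference sketch: forward, central, and extended (5-point)
// central differences for the first derivative.
double forwardDiff(const std::function<double(double)>& f, double x, double h) {
    return (f(x + h) - f(x)) / h;
}
double centralDiff(const std::function<double(double)>& f, double x, double h) {
    return (f(x + h) - f(x - h)) / (2 * h);
}
double extendedCentralDiff(const std::function<double(double)>& f, double x, double h) {
    // 5-point stencil
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h);
}

int main() {
    auto f = [](double x) { return std::sin(x); };   // exact derivative: cos(1)
    std::printf("forward  %.8f\n", forwardDiff(f, 1.0, 1e-3));
    std::printf("central  %.8f\n", centralDiff(f, 1.0, 1e-3));
    std::printf("extended %.8f\n", extendedCentralDiff(f, 1.0, 1e-3));
}
```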

Richardson Extrapolation

http://en.wikipedia.org/wiki/Richardson_extrapolation

C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html

A sequence acceleration method applied to finite differences

Fast convergence, high accuracy O(h^4)

Derivatives via Interpolation

Cannot apply Finite Difference method to discrete data points at uneven intervals – so need to approximate the derivative of f(x) using the derivative of the interpolant via 3 point Lagrange Interpolation

Note: the higher the order of the derivative, the lower the approximation precision

Numerical Integration

Estimate finite & infinite integrals of functions

More accurate procedure than numerical differentiation

Use when it is not possible to obtain an integral of a function analytically or when the function is not given, only the data points are

Newton Cotes Methods

http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas

C++: http://www.siafoo.net/snippet/324

For equally spaced data points

Computationally easy – based on local interpolation of n rectangular strip areas that is piecewise fitted to a polynomial to get the sum total area

Evaluate the integrand at n+1 evenly spaced points – approximate definite integral by Sum

Weights are derived from Lagrange Basis polynomials

Leverage the Trapezoidal Rule for 2-point formulas, the Simpson 1/3 Rule for 3-point formulas, the Simpson 3/8 Rule for 4-point formulas, and Boole's Rule for 5-point formulas. Higher orders obtain more accurate results

The Trapezoidal Rule uses simple areas; Simpson's Rule replaces the integrand f(x) with a quadratic polynomial p(x) that uses the same values as f(x) at its end points, but adds a midpoint
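
Minimal composite trapezoidal & Simpson 1/3 sketches of my own (illustrative, not the linked snippet; n strips, n even for Simpson):

```cpp
#include <cmath>
#include <cstdio>
#include <functional>

// Minimal Newton-Cotes sketches: composite trapezoidal and Simpson's 1/3 rules
// over n evenly spaced strips (n must be even for Simpson).
double trapezoid(const std::function<double(double)>& f, double a, double b, int n) {
    double h = (b - a) / n, sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < n; ++i) sum += f(a + i * h);
    return sum * h;
}
double simpson(const std::function<double(double)>& f, double a, double b, int n) {
    double h = (b - a) / n, sum = f(a) + f(b);
    for (int i = 1; i < n; ++i) sum += f(a + i * h) * (i % 2 ? 4.0 : 2.0);
    return sum * h / 3.0;
}

int main() {
    const double pi = std::acos(-1.0);
    auto f = [](double x) { return std::sin(x); };   // integral over [0, pi] is 2
    std::printf("trapezoid %.8f\n", trapezoid(f, 0.0, pi, 100));
    std::printf("simpson   %.8f\n", simpson(f, 0.0, pi, 100));
}
```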

Romberg Integration

http://en.wikipedia.org/wiki/Romberg's_method

C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q=

Combines trapezoidal rule with Richardson Extrapolation

Evaluates the integrand at equally spaced points

The integrand must have continuous derivatives

Each R(n,m) extrapolation uses a higher order integrand polynomial replacement rule (zeroth starts with trapezoidal) → a lower triangular matrix set of equation coefficients where the bottom right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small.

Gaussian Quadrature

http://en.wikipedia.org/wiki/Gaussian_quadrature

C++: http://www.alglib.net/integration/gaussianquadratures.php

Data points are chosen to yield best possible accuracy – requires fewer evaluations

Ability to handle singularities, functions that are difficult to evaluate

The integrand can include a weighting function determined by a set of orthogonal polynomials.

Points & weights are selected so that the integrand yields the exact integral if f(x) is a polynomial of degree <= 2n+1

Techniques (basically different weighting functions):

· Gauss-Legendre Integration w(x)=1

· Gauss-Laguerre Integration w(x)=e^-x

· Gauss-Hermite Integration w(x)=e^-x^2

· Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2)

Solving ODEs

Use when high order differential equations cannot be solved analytically

Evaluated under boundary conditions

RK for systems – a high order differential equation can always be transformed into a coupled first order system of equations

Euler method

http://en.wikipedia.org/wiki/Euler_method

C++: http://rosettacode.org/wiki/Euler_method

First order Runge–Kutta method.

Simple recursive method – given an initial value, calculate derivative deltas.

Unstable & not very accurate (O(h) error) – not used in practice

A first-order method - the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size

In evolving solution between data points xn & xn+1, only evaluates derivatives at beginning of interval xn → asymmetric at boundaries
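
A minimal Euler sketch of my own (illustrative, not the Rosetta Code version; the test ODE y' = y is an assumption):

```cpp
#include <cstdio>
#include <functional>

// Minimal Euler-method sketch for y' = f(x, y), y(x0) = y0: step the solution
// forward using the derivative evaluated only at the start of each interval.
double euler(const std::function<double(double, double)>& f,
             double x0, double y0, double xEnd, int steps) {
    double h = (xEnd - x0) / steps, x = x0, y = y0;
    for (int i = 0; i < steps; ++i) {
        y += h * f(x, y);    // derivative at the beginning of the interval
        x += h;
    }
    return y;
}

int main() {
    // y' = y, y(0) = 1  ->  exact y(1) = e ~ 2.71828; Euler underestimates it.
    std::printf("%.5f\n", euler([](double, double y) { return y; }, 0, 1, 1, 1000));
}
```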

Higher order Runge Kutta

http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods

C++: http://www.dreamincode.net/code/snippet1441.htm

2nd & 4th order RK - Introduces parameterized midpoints for more symmetric solutions → accuracy at higher computational cost

Adaptive RK – RK-Fehlberg – estimate the truncation error at each integration step & automatically adjust the step size to keep the error within prescribed limits. At each step 2 approximations are compared – if they disagree to a specified accuracy, the step size is reduced
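
A minimal classical 4th-order RK sketch of my own (illustrative, fixed step size, no adaptive error control):

```cpp
#include <cstdio>
#include <functional>

// Minimal classical 4th-order Runge-Kutta sketch for y' = f(x, y): four slope
// evaluations per step, including parameterized midpoints.
double rk4(const std::function<double(double, double)>& f,
           double x0, double y0, double xEnd, int steps) {
    double h = (xEnd - x0) / steps, x = x0, y = y0;
    for (int i = 0; i < steps; ++i) {
        double k1 = f(x, y);
        double k2 = f(x + h / 2, y + h / 2 * k1);
        double k3 = f(x + h / 2, y + h / 2 * k2);
        double k4 = f(x + h, y + h * k3);
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4);
        x += h;
    }
    return y;
}

int main() {
    // y' = y, y(0) = 1: RK4 recovers e to ~10 digits with only 100 steps.
    std::printf("%.10f\n", rk4([](double, double y) { return y; }, 0, 1, 1, 100));
}
```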

Boundary Value Problems

Where the boundary conditions of the differential equations are located at 2 different values of the independent variable x → more difficult, because one cannot just start at the point of the initial value – there may not be enough starting conditions available at the end points to produce a unique solution

An nth-order equation will require n boundary conditions – need to determine the missing n-1 conditions which cause the given conditions at the other boundary to be satisfied

Shooting Method

http://en.wikipedia.org/wiki/Shooting_method

C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html

Iteratively guess the missing values for one end & integrate, then inspect the discrepancy with the boundary values of the other end to adjust the estimate

Given the starting boundary values u1 & u2 which bracket the root u, solve for u via the false position method (solving the differential equation as an initial value problem via 4th-order RK at each trial), then use u to solve the differential equations.

Finite Difference Method

For linear & non-linear systems

Higher order derivatives require more computational steps – some combinations for boundary conditions may not work though

Improve the accuracy by increasing the number of mesh points

Solving EigenValue Problems

For an eigenvector, multiplication by the matrix is equivalent to multiplication by its eigenvalue (Av = λv) → the matrix problem can be converted into a polynomial eigenvalue problem (the characteristic polynomial)

For a given set of equations in matrix form, determine the solution eigenvalues & eigenvectors

Similar Matrices - have same eigenvalues. Use orthogonal similarity transforms to reduce a matrix to diagonal form from which eigenvalue(s) & eigenvectors can be computed iteratively

Jacobi method

http://en.wikipedia.org/wiki/Jacobi_method

C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html

Robust but Computationally intense – use for small matrices < 10x10

Power Iteration

http://en.wikipedia.org/wiki/Power_iteration

For any given real symmetric matrix, generate the largest single eigenvalue & its eigenvectors

Simplest method – does not compute matrix decomposition → suitable for large, sparse matrices
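
A minimal power-iteration sketch of my own (illustrative; assumes a real symmetric matrix with a dominant eigenvalue):

```cpp
#include <vector>
#include <cmath>
#include <cstdio>

// Minimal power-iteration sketch: repeatedly multiply a vector by the matrix
// and normalize; it converges to the dominant eigenvalue & eigenvector of a
// real symmetric matrix. Names (powerIteration) are illustrative assumptions.
double powerIteration(const std::vector<std::vector<double>>& A,
                      std::vector<double>& v, int iters = 1000) {
    double lambda = 0.0;
    for (int it = 0; it < iters; ++it) {
        std::vector<double> w(v.size(), 0.0);
        for (size_t i = 0; i < A.size(); ++i)
            for (size_t j = 0; j < A.size(); ++j)
                w[i] += A[i][j] * v[j];                 // w = A * v
        double norm = 0.0;
        for (double x : w) norm += x * x;
        norm = std::sqrt(norm);
        for (size_t i = 0; i < w.size(); ++i) v[i] = w[i] / norm;
        lambda = norm;   // since v stays normalized, ||A v|| estimates |lambda1|
    }
    return lambda;
}

int main() {
    std::vector<double> v = {1.0, 0.0};
    // Symmetric matrix with eigenvalues 3 and 1.
    double lambda = powerIteration({{2, 1}, {1, 2}}, v);
    std::printf("largest eigenvalue ~ %.6f\n", lambda);
}
```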

Inverse Iteration

Variation of power iteration method – generates the smallest eigenvalue from the inverse matrix

Rayleigh Method

http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis

Variation of power iteration method

Rayleigh Quotient Method

Variation of inverse iteration method

Matrix Tri-diagonalization Method

Use the Householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix via N-2 orthogonal transforms

 

 

What's Next

Outside of Numerical Methods there are lots of different types of algorithms that I’ve learned over the decades:

Sooner or later, I’ll cover the above topics as well.
