Numerical Analysis

Jonas Tölle1

Lecture notes for MS-C1650 Numerical Analysis, Aalto University

Last updated: 30.5.2024

Largely based on the lecture transcript by Harri Hakula, 2021.

Intended learning outcomes. After the course, the student will be able to…

Floating-point numbers

The set of real numbers $\mathbb{R}$ is infinite in two ways: it is unbounded and continuous. In most practical computing, the second kind of infiniteness is more consequential than the first kind, so we turn our attention there first.

Instead of $\mathbb{R}$, we shall introduce the set of floating-point numbers (floats) $\mathbb{F}$. They come with different bases, precisions, exponent ranges, and other features. The basic representation is
$$x=\pm (d_0. d_1 d_2 \ldots d_p)_k\cdot k^e.$$
Here $k\in\mathbb{N}\setminus\{1\}$ is called the base or radix, $p\in\mathbb{N}_0:=\mathbb{N}\cup\{0\}$ is called the precision, the $d_i\in\{0,\ldots,k-1\}$, $i\in\{0,\ldots,p\}$, are the digits, and the number
$$(d_0. d_1 d_2 \ldots d_p)_k:=\sum_{i=0}^p d_i k^{-i}$$
is called the mantissa or significand. The exponent $e\in\mathbb{Z}$ is bounded, $m\le e\le M$, where $m,M\in\mathbb{Z}$.
If $k=10$, we can read the usual decimal representation from the mantissa:
$$(1.01)_{10}=1\cdot 10^0+0\cdot 10^{-1}+1\cdot 10^{-2}=1.01.$$
If $k=2$, we have binary floats. In the binary case, we observe that we can always choose $d_0=1$ (adjusting the exponent $e$ accordingly), so this leading bit need not be stored; we refer to this as normalization. The mantissa is then always contained in the interval $[1,2)$.

Example. (Toy floating-point system). Binary floats of the type
$(1.b_1b_2)_2$ with exponents $e=-1,0,1$.
Hence $(1.00)_2=1$, $(1.01)_2=\frac{5}{4}$, $(1.10)_2=\frac{3}{2}$, and $(1.11)_2=\frac{7}{4}$. By multiplying with the factors $2^{-1}=\frac{1}{2}$, $2^0=1$, $2^1=2$, we get the whole set:

| $2^e$ | $(1.00)_2\cdot 2^e$ | $(1.01)_2\cdot 2^e$ | $(1.10)_2\cdot 2^e$ | $(1.11)_2\cdot 2^e$ |
|---|---|---|---|---|
| $1$ | $1$ | $\frac{5}{4}$ | $\frac{3}{2}$ | $\frac{7}{4}$ |
| $2$ | $2$ | $\frac{5}{2}$ | $3$ | $\frac{7}{2}$ |
| $\frac{1}{2}$ | $\frac{1}{2}$ | $\frac{5}{8}$ | $\frac{3}{4}$ | $\frac{7}{8}$ |

Important quantity: $(1.01)_2-1=\frac{1}{4}$, the so-called machine epsilon (here $p=2$).

Define the machine epsilon by $\varepsilon:=2^{-p}=(1.00\ldots 01)_2-1$.
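
A minimal sketch in Python (assuming a standard CPython with IEEE double precision floats, precision $p=52$; `math.nextafter` requires Python 3.9+) illustrating that this $\varepsilon$ is exactly the gap between $1$ and the next larger float:

```python
import math
import sys

# Machine epsilon as defined above: eps = 2**-p with p = 52 for doubles.
eps = 2.0**-52

print(eps == sys.float_info.epsilon)          # True: Python floats are IEEE doubles
print(math.nextafter(1.0, 2.0) - 1.0 == eps)  # True: (1.00...01)_2 - 1
print(1.0 + eps / 2 == 1.0)                   # True: eps/2 is lost when added to 1
```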

Rounding

For the rounding function $\text{round}:\mathbb{R}\to\mathbb{F}$, we have 5 alternative definitions:

It holds that $\text{round}(x)=x(1+\delta)$, where $|\delta|<\frac{\varepsilon}{2}$ and $\varepsilon$ denotes the machine epsilon. Note that usually $\delta$ depends on $x$.
There is a way to define the standard arithmetic operations on $\mathbb{F}$ such that
$$a\oplus b=\text{round}(a+b)=(a+b)(1+\delta_1),$$
$$a\ominus b=\text{round}(a-b)=(a-b)(1+\delta_2),$$
$$a\otimes b=\text{round}(ab)=ab(1+\delta_3),$$
$$a\oslash b=\text{round}\left(\frac{a}{b}\right)=\frac{a}{b}(1+\delta_4),\quad b\not= 0.$$
Here, generally $\delta_i\not=\delta_j$ for $i\not=j$.
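
The following sketch (my own illustration, not from the lecture) checks this model empirically for the addition of Python doubles, using exact `fractions.Fraction` arithmetic as a reference:

```python
import random
from fractions import Fraction

eps = 2.0**-52  # machine epsilon for IEEE doubles

# Check a (+) b = (a + b)(1 + delta) with |delta| <= eps/2 on random inputs.
# Fraction arithmetic is exact, so it provides the true sum for comparison.
for _ in range(10_000):
    a, b = random.uniform(-1e6, 1e6), random.uniform(-1e6, 1e6)
    exact = Fraction(a) + Fraction(b)            # exact real sum of the two floats
    if exact == 0:
        continue
    delta = (Fraction(a + b) - exact) / exact    # a + b is the rounded float sum
    assert abs(delta) <= Fraction(eps) / 2
```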

IEEE 754 “Double precision”

$k=2$, 64 bits: 1 bit for the sign, 11 bits for the exponent field, and 52 bits for the mantissa, where:

| Exponent field | Number | Type of number |
|---|---|---|
| $00\ldots 00=0$ | $\pm (0.b_1 b_2\ldots b_{52})_2 \cdot 2^{-1022}$ | $0$ or subnormal |
| $00\ldots 01=1$ | $\pm (1.b_1 b_2\ldots b_{52})_2 \cdot 2^{-1022}$ | |
| $00\ldots 10=2$ | $\pm (1.b_1 b_2\ldots b_{52})_2 \cdot 2^{-1021}$ | |
| $\ldots$ | $\ldots$ | |
| $01\ldots 11=1023$ | $\pm (1.b_1 b_2\ldots b_{52})_2 \cdot 2^{0}$ | |
| $\ldots$ | $\ldots$ | |
| $11\ldots 10=2046$ | $\pm (1.b_1 b_2\ldots b_{52})_2 \cdot 2^{1023}$ | |
| $11\ldots 11=2047$ | $\pm \infty$ if $b_1=b_2=\ldots=b_{52}=0$, otherwise NaN | exception |

Thus, there are two zeros, two infinities, and NaN, which denotes “not a number”. The smallest positive normalized number is
$$(1.0)_2\cdot 2^{-1022}\approx 2.2\cdot 10^{-308}.$$
The largest positive number is
$$(1.1\ldots 1)_2\cdot 2^{1023}\approx 1.8\cdot 10^{308}.$$
The machine epsilon is
$$2^{-52}\approx 2.22\cdot 10^{-16}.$$

Here’s an easy-to-follow video explaining floating point numbers (and a specific version of Newton’s algorithm).
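
These limits can be inspected directly from Python (a small sketch, assuming CPython, whose `float` is an IEEE 754 double):

```python
import sys

info = sys.float_info
print(info.min)       # smallest positive normalized double, 2**-1022 ~ 2.2e-308
print(info.max)       # largest finite double, ~ 1.8e308
print(info.epsilon)   # machine epsilon, 2**-52 ~ 2.22e-16

print(info.min / 2**52)             # smallest subnormal, 2**-1074: still nonzero
print(info.max * 2)                 # overflows to inf
print(float("inf") - float("inf"))  # nan, "not a number"
print(0.0 == -0.0)                  # True, although the two zeros differ bitwise
```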

Condition number and stability

Conditioning of problems

Assume that $f:\mathbb{R}\to\mathbb{R}$ is the “solution map” of the problem, and that the input numbers $x$, $\hat{x}$ are close in value, e.g. $\hat{x}=\text{round}(x)$. Set $y:=f(x)$, $\hat{y}:=f(\hat{x})$.

Definition. The absolute condition number $C(x)$ is defined by the relation
$$|y-\hat{y}|\approx C(x)|x-\hat{x}|.$$
The relative condition number $K(x)$ is defined by the relation
$$\left|\frac{y-\hat{y}}{y}\right|\approx K(x)\left|\frac{x-\hat{x}}{x}\right|.$$

By this normalization, we guarantee that
$$\text{(relative error in the output)}\approx K(x)\times \text{(relative error in the input)}.$$
Now,
$$y-\hat{y}=f(x)-f(\hat{x})=\underbrace{\frac{f(x)-f(\hat{x})}{x-\hat{x}}}_{\to f'(x)\;\text{as}\;\hat{x}\to x}(x-\hat{x}).$$
Thus, $C(x)=|f'(x)|$.
Furthermore,
$$\frac{y-\hat{y}}{y}=\frac{f(x)-f(\hat{x})}{f(x)}=\underbrace{\frac{f(x)-f(\hat{x})}{x-\hat{x}}}_{\to f'(x)\;\text{as}\;\hat{x}\to x}\cdot\frac{x-\hat{x}}{x}\cdot\frac{x}{f(x)}.$$
Thus, $K(x)=\left|\frac{x f'(x)}{f(x)}\right|$.

Example. $f(x)=2x$, $f'(x)=2$. Thus, $C(x)=2$, $K(x)=\left|\frac{2x}{2x}\right|=1$. This is a well-conditioned problem.

Example. $g(x)=\sqrt{x}$, $g'(x)=\frac{1}{2\sqrt{x}}$. Thus, $C(x)$ becomes unbounded for $x>0$ close to zero, e.g. $x\approx 10^{-8}$ yields $C(x)\approx 10^4$. On the other hand, $K(x)=\left|\frac{x}{2\sqrt{x}\sqrt{x}}\right|=\frac{1}{2}$.
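
These formulas can be checked numerically. A small sketch (illustrative only; the helper `cond_numbers` and the step size are my own choices) estimates $C(x)$ and $K(x)$ for $g(x)=\sqrt{x}$ directly from the defining relations:

```python
import math

def cond_numbers(f, x, h):
    """Estimate absolute/relative condition numbers of f at x from the
    definitions, using the perturbed input x_hat = x + h."""
    x_hat = x + h
    y, y_hat = f(x), f(x_hat)
    C = abs(y - y_hat) / abs(x - x_hat)
    K = abs((y - y_hat) / y) / abs((x - x_hat) / x)
    return C, K

for x in (1e-2, 1e-4, 1e-8):
    C, K = cond_numbers(math.sqrt, x, h=x * 1e-3)   # small relative perturbation
    print(f"x = {x:.0e}:  C(x) ~ {C:.3e},  K(x) ~ {K:.3f}")
# C(x) grows like 1/(2*sqrt(x)), while K(x) stays near 1/2.
```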

Stability of algorithms

Definition. An algorithm or numerical process is called stable if small changes in the input produce small changes in the output. It is called unstable if large changes in the output are produced by small changes in the input.

An algorithm is stable, if every step is well-conditioned (i.e. has a uniformly bounded condition number). It is unstable if any step is ill-conditioned (i.e. the condition number may become arbitrarily large).

Forward error analysis (FEA) is asking:
“How far are we from the true solution?”

Backward error analysis (BEA) is asking:
“Given the answer, what was the problem?”

Example.
Set:
$$\text{fl}(x+y):=\text{round}(x)\oplus\text{round}(y)=(x(1+\delta_1)+y(1+\delta_2))(1+\delta_3),$$
where $|\delta_i|<\frac{\varepsilon}{2}$, $i=1,2,3$.
FEA:
$$\text{fl}(x+y)=x+y+x(\delta_1+\delta_3+\delta_1\delta_3)+y(\delta_2+\delta_3+\delta_2\delta_3).$$
The absolute error is
$$|\text{fl}(x+y)-(x+y)|\le(|x|+|y|)\left(\varepsilon+\frac{\varepsilon^2}{4}\right).$$
The relative error is
$$\left|\frac{\text{fl}(x+y)-(x+y)}{x+y}\right|\le\frac{(|x|+|y|)\left(\varepsilon+\frac{\varepsilon^2}{4}\right)}{|x+y|}.$$
BEA:
$$\text{fl}(x+y)=x(1+\delta_1)(1+\delta_3)+y(1+\delta_2)(1+\delta_3).$$
Thus the relative error in each term is at most $\varepsilon+\frac{\varepsilon^2}{4}$.
Hence the sum of two floating point numbers is backwards stable.

Well-conditioned problems may have unstable algorithms. For stability, each step has to be well conditioned. Some ill-conditioned steps produce an unstable algorithm. Ill-conditioned problems cannot be reliably solved with a stable algorithm.

Example. Consider evaluating $f(x)=\sqrt{1+x}-1$ for $x$ close to zero. The relative condition number is
$$K(x)=\left|\frac{xf'(x)}{f(x)}\right|=\frac{x}{2\sqrt{1+x}(\sqrt{1+x}-1)}$$
$$=\frac{x(\sqrt{1+x}+1)}{2\sqrt{1+x}(\sqrt{1+x}-1)(\sqrt{1+x}+1)}=\frac{\sqrt{1+x}+1}{2\sqrt{1+x}},$$
and $K(0)=1$.
Consider the following 3 steps:

  1. $t_1:=1+x$, well-conditioned, $x$ close to $0$.
  2. $t_2:=\sqrt{t_1}$, relatively well-conditioned, also absolutely well-conditioned, because $t_1$ is close to $1$.
  3. $t_3:=t_2-1$, ill-conditioned, relative condition number of this step: $K_3(t_2)=\left|\frac{t_2}{t_2-1}\right|$, which becomes unbounded for $t_2$ close to $1$!
    On the other hand, the problem is well-conditioned. Solve it by writing
    $$f(x)=\sqrt{1+x}-1=\frac{(\sqrt{1+x}+1)(\sqrt{1+x}-1)}{\sqrt{1+x}+1}=\frac{x}{\sqrt{1+x}+1},$$
    which can be evaluated directly close to zero (see the sketch after this list).
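
A short Python sketch (my own illustration) showing the cancellation in the naive formula and the rewritten, stable form:

```python
import math

def f_naive(x):
    return math.sqrt(1.0 + x) - 1.0        # cancellation: t_2 - 1 with t_2 close to 1

def f_stable(x):
    return x / (math.sqrt(1.0 + x) + 1.0)  # algebraically identical, no cancellation

for x in (1e-8, 1e-12, 1e-15):
    print(x, f_naive(x), f_stable(x))
# For tiny x the true value is about x/2; the naive form loses most significant
# digits (and returns exactly 0 once 1 + x rounds to 1), the stable form does not.
```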

Numerical differentiation

Recall Taylor’s theorem: for a twice differentiable function $f:\mathbb{R}\to\mathbb{R}$,
$$f(x)=f(x_0)+f'(x_0)(x-x_0)+\frac{1}{2}(x-x_0)^2 f''(\xi),$$
for any $x_0,x\in\mathbb{R}$, where $\xi\in [x_0,x]$.

What is that $\xi$ (= “xi”)? Under certain assumptions, elementary functions have series expansions. If the series is truncated, we obtain the Taylor polynomial. The remainder has an explicit expression, but, due to the application of an intermediate value theorem, the exact location of the point $\xi$ is not known, i.e. it is “generic”.

By setting $x:=z+h$, $x_0:=z$, we obtain the useful equivalent formula
$$f(z+h)=f(z)+f'(z)h+\frac{1}{2} f''(\xi)h^2,$$
for every $z,h\in\mathbb{R}$, $\xi\in [z,z+h]$.

Rate of convergence ($Q$-convergence)

Let $(x_k)$ be an infinite sequence of real numbers. Let $s_k:=\sup_{l\ge k}x_l$, $k\in\mathbb{N}$, be the supremum (i.e., the least upper bound) of the tail (that is, of the terms with large indices $l\ge k$) of $(x_k)$. Define the $\limsup$ (limes superior) as
$$\limsup_{k\to\infty}x_k:=\lim_{k\to\infty}s_k\in[-\infty,+\infty].$$
Unlike a limit, it always exists, but it can be $\pm\infty$. If $(x_k)$ is bounded, the $\limsup$ is the largest limit of a converging subsequence. If $\lim_{k\to\infty} x_k\in (-\infty,\infty)$ exists, then $\lim_{k\to\infty}x_k=\limsup_{k\to\infty}x_k$. The converse is not true.

Examples. (1) $x_k:=(-1)^k$; then $s_k=1$, and $\limsup_{k\to\infty}x_k=1$.
(2) $x_k=\sin(k)$; then $s_k=1$, and $\limsup_{k\to\infty}x_k=1$.

Assume that $\lim_{k\to\infty}x_k=x$ and that there exists some large index $M\in\mathbb{N}$ such that $x_k\not=x$ for all $k\ge M$. Then we define the following quantity for $p\ge 0$:
$$C(p):=\limsup_{k\to\infty}\frac{|x_{k+1}-x|}{|x_k-x|^p}.$$
We observe that $C(p^{*})<\infty$ for some $p^{*}> 0$ implies $C(p)=0$ for every $0\le p<p^{*}$. If $C(p^{*})>0$ for some $p^{*}> 0$, then $C(p)=\infty$ for any $p>p^{*}$.

Proof. By the submultiplicative property of $\limsup$,
$$C(p)=\limsup_{k\to\infty}\frac{|x_{k+1}-x|}{|x_k-x|^p}=\limsup_{k\to\infty}\left[\frac{|x_{k+1}-x|}{|x_k-x|^{p^{*}}}|x_k-x|^{p^{*}-p}\right]$$
$$\le\left(\limsup_{k\to\infty}\frac{|x_{k+1}-x|}{|x_k-x|^{p^{*}}}\right)\cdot\left(\limsup_{k\to\infty}|x_k-x|^{p^{*}-p}\right)=C(p^{*})\cdot \begin{cases}0&\text{if}\;\;p<p^{*},\\\infty&\text{if}\;\;p>p^{*}.\end{cases}$$
$\Box$

Thus, there exists a (possibly infinite) $p^{*}$ such that
$$C(p)=\begin{cases}0&\text{if}\;\;0\le p<p^{*},\\ C(p^{*})&\text{if}\;\;p=p^{*},\\\infty &\text{if}\;\;p>p^{*}.\end{cases}$$
The number $p^{*}$ is called the order of convergence of the sequence $(x_k)$ and determines the rate of convergence as follows:

When working with convergence estimates, it is often useful to use the following approximation:
$$|x_{k+1}-x|\approx C|x_k-x|^{p^{*}}$$
for some constant $C>0$, not necessarily $C(p^{*})$.
Here, it is useful to look at the logarithmic behavior:
$$\log(|x_{k+1}-x|)\approx\log\left(C|x_k-x|^{p^{*}}\right)=\log(C)+\log\left(|x_k-x|^{p^{*}}\right)=\log(C)+p^{*}\log(|x_k-x|).$$
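
Dividing this relation for two successive steps eliminates the unknown constant $\log(C)$, which gives the practical estimate $p^{*}\approx \log(|x_{k+1}-x|/|x_k-x|)\,/\,\log(|x_k-x|/|x_{k-1}-x|)$. A small sketch (the function name and test data are my own):

```python
import math

def estimate_order(errors):
    """Estimate the order of convergence p* from a list of errors |x_k - x|,
    using log e_{k+1} ~ log C + p* log e_k for successive triples."""
    return [math.log(e2 / e1) / math.log(e1 / e0)
            for e0, e1, e2 in zip(errors, errors[1:], errors[2:])]

# Errors of an (artificial) quadratically convergent iteration: e_{k+1} = e_k**2.
errs = [1e-1]
for _ in range(4):
    errs.append(errs[-1] ** 2)
print(estimate_order(errs))   # values close to 2
```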

The term rate of convergence is often used interchangeably with order of convergence. However, some caution is necessary, as different authors use different terminology here. Usually, the order of convergence refers to the same thing, namely, the exponent $p^{*}$ in the denominator of the limit defining it. Most confusingly, some authors call the order of convergence the “rate of convergence”. The English Wikipedia article calls it the order of convergence, whereas in other references the rate of convergence is the constant in the definition, which, together with the order of convergence, determines the speed of convergence. So, please always check the context, as the intended meaning should be clear from it. If there is no definition, try to figure out what is meant in each text. As a rule of thumb: “order of convergence” is unambiguous terminology in numerical analysis, whereas “rate of convergence” can mean at least two different things. I will use both words for the same thing, but will try to make clear what I mean from case to case. In any case, to be sure, use “order of convergence”. My PhD advisor usually said that in mathematics “it’s all hollow words” (meaning that one should check the definitions).

Landau’s little-$o$ and big-$O$ notation

Copied from Wikibooks under a Creative Commons BY-SA 4.0 license.

The Landau notation is an amazing tool applicable in all of real analysis. The reason it is so convenient and widely used is that it underlines a key principle of real analysis, namely “estimation”. Loosely speaking, the Landau notation introduces two operators which can be called the “order of magnitude” operators, which essentially compare the magnitude of two given functions.

The “little-$o$”

The “little-$o$” provides a function that is of lower order of magnitude than a given function, that is, the function $o(g(x))$ is of a lower order than the function $g(x)$. Formally,

Definition.
Let $A\subseteq\mathbb{R}$ and let $c\in\mathbb{R}$.
Let $f,g:A\to\mathbb{R}$.

If $\lim_{x\to c}\frac{f(x)}{g(x)}=0$, then we say that

“As $x\to c$, $f(x)=o(g(x))$.”

Examples.

The “big-$O$”

The “big-$O$” provides a function that is at most of the same order of magnitude as a given function, that is, the function $O(g(x))$ is at most of the same order as the function $g(x)$. Formally,

Definition.
Let $A\subseteq\mathbb{R}$ and let $c\in\mathbb{R}$.

Let $f,g:A\to\mathbb{R}$.

If there exists $M>0$ such that $\lim_{x\to c}\left| \frac{f(x)}{g(x)}\right| <M$, then we say that

“As $x\to c$, $f(x)=O(g(x))$.”

Examples.

Applications

We will now consider a few examples which demonstrate the power of this notation.

Differentiability

Let $f: \mathcal{U} \subseteq \mathbb{R} \to\mathbb{R}$ and $x_0 \in \mathcal{U}$.

Then $f$ is differentiable at $x_0$ if and only if

there exists a $\lambda \in\mathbb{R}$ such that, as $x\to x_0$, $f(x) = f(x_0) + \lambda(x-x_0)+o\left( |x-x_0|\right)$.

Mean Value Theorem

Let $f:[a,x]\to\mathbb{R}$ be differentiable on $[a,x]$. Then,

as $x\to a$, $f(x)=f(a)+O(x-a)$.

Taylor’s Theorem

Let $f:[a,x]\to\mathbb{R}$ be $n$-times differentiable on $[a,x]$. Then,

as $x\to a$, $f(x)=f(a)+\tfrac{(x-a)f'(a)}{1!}+\tfrac{(x-a)^2f''(a)}{2!}+\ldots+\tfrac{(x-a)^{n-1}f^{(n-1)}(a)}{(n-1)!}+O\left( (x-a)^n\right)$.

Finding roots of functions and fixed points

Let $f:\mathbb{R}\to\mathbb{R}$ be continuous. We are interested in methods for finding zeros, that is, roots of $f$; in other words, points $x\in\mathbb{R}$ such that $f(x)=0$.

Definition. If $f:\mathbb{R}^n\to\mathbb{R}$, $n\in\mathbb{N}$, is $k$-times continuously differentiable, then we write $f\in C^k(\mathbb{R}^n)$ or just $f\in C^k$. For $k=0$, $f\in C^0$ or $f\in C(\mathbb{R}^n)$ or $f\in C([a,b])$ just means that $f$ is assumed to be continuous.

Bisection method

The intermediate value theorem for continuous functions implies that a point $x_1<x<x_2$ with $f(x)=0$ exists if $f(x_1) f(x_2)<0$, i.e., if there is a sign change. The bisection method is based on halving the interval in such a way that the sign condition is preserved. Note that, in principle, we first have to find an interval $[x_1,x_2]$ with a sign change.

Let us analyze the convergence rate. Let $[a,b]$ be an interval. After $k$ steps the interval of analysis has length $\frac{b-a}{2^k}$, which converges to zero as $k\to\infty$. Let us look at a neighborhood of radius $\delta>0$, so that
$$\frac{b-a}{2^k}\le 2\delta\quad\Leftrightarrow\quad 2^{k+1}\ge \frac{b-a}{\delta}\quad\Leftrightarrow\quad k\ge \log_2\left(\frac{b-a}{\delta}\right)-1.$$
Every step reduces the error by a factor of $\frac{1}{2}$. The convergence rate is thus linear.
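
A direct implementation sketch of the bisection method in Python (names and tolerances are illustrative):

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Bisection: assumes f is continuous on [a, b] and f(a), f(b) have opposite
    signs. Halves the interval while preserving the sign change."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:      # sign change in [a, m]
            b, fb = m, fm
        else:                # sign change in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# Example: the root of x**2 - 2 on [1, 2] is sqrt(2).
print(bisect(lambda x: x * x - 2.0, 1.0, 2.0))
```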

Newton’s method

Assume that $f\in C^1$. For an initial value $x_0$, consider the iteration
$$x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)},\quad k=0,1,\ldots.$$

Heuristics. If $f(x_\ast)=0$ and $f\in C^2$, then by Taylor’s expansion,
$$0=f(x_\ast)=f(x_0)+(x_\ast-x_0)f'(x_0)+\frac{(x_\ast-x_0)^2}{2}f''(\xi)$$
for some $\xi\in [x_0,x_\ast]$, and upon neglecting the 2nd order term,
$$x_\ast \approx x_0-\frac{f(x_0)}{f'(x_0)}.$$

Theorem. If $f\in C^2$, the initial value $x_0$ is sufficiently good (i.e. close to the root $x_\ast$), and $f'(x_\ast)\not=0$, then Newton’s method converges quadratically.

Proof. By Taylor’s expansion, it follows that
$$x_\ast=x_k-\frac{f(x_k)}{f'(x_k)}-\frac{(x_\ast-x_k)^2}{2}\frac{f''(\xi_k)}{f'(x_k)}$$
for some $\xi_k\in [x_\ast,x_k]$. Take $x_{k+1}$ from the method and subtract:
$$x_{k+1}-x_\ast=(x_\ast-x_k)^2\underbrace{\frac{f''(\xi_k)}{2f'(x_k)}}_{|\cdot|\le D}.$$
In other words,
$$|x_{k+1}-x_\ast|\le D |x_k-x_\ast|^2,$$
and thus $x_k\to x_\ast$ as $k\to\infty$, provided $x_0$ is sufficiently close to $x_\ast$.
Hence the method is quadratic. Note that $f'(x_k)$ does not vanish, by continuity, if $x_k$ is close to $x_\ast$. $\Box$

What happens if $f'(x_\ast)=0$?
$$x_{k+1}-x_\ast=(x_\ast-x_k)^2\frac{f''(\xi_k)}{2\underbrace{f'(x_k)}_{\to0}}$$
as $k\to\infty$.
By Taylor’s expansion ($\eta$ = “eta”):
$$f'(x_k)=\underbrace{f'(x_\ast)}_{=0}+(x_k-x_\ast)f''(\eta_k)=(x_k-x_\ast)f''(\eta_k)$$
for some $\eta_k\in [x_\ast,x_k]$, and hence
$$x_{k+1}-x_\ast=\frac{f''(\xi_k)}{2f''(\eta_k)}(x_k-x_\ast).$$
The method has degenerated to a linear method!

Example. $f(x)=x^2$, $f'(x)=2x$. Newton:
$$x_{k+1}=x_k-\frac{x^2_k}{2x_k}=\frac{1}{2}x_k.$$
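
A small Python sketch (illustrative, not part of the lecture) of Newton’s method, showing the quadratic error decay for a simple root and the merely linear decay for the double root of $f(x)=x^2$:

```python
def newton(f, df, x0, n_steps):
    """Newton's method x_{k+1} = x_k - f(x_k)/f'(x_k); returns all iterates."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

# Simple root (f'(x*) != 0): the error is roughly squared in each step.
for x in newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 2.0, 6):
    print(abs(x - 2.0**0.5))

# Double root of f(x) = x**2 (f'(x*) = 0): the error is only halved in each step.
print(newton(lambda x: x * x, lambda x: 2.0 * x, 1.0, 6))
```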

Secant method

Sometimes it can be difficult or computationally expensive to compute the derivative $f'(x_k)$. Newton’s method can be adapted by approximating the derivative by a difference quotient. The secant method is the following two-step recursive algorithm:
$$x_{k+1}=x_k-\frac{f(x_k)(x_k-x_{k-1})}{f(x_k)-f(x_{k-1})},\quad k=1,2,\ldots$$
with two distinct starting points $x_0\not=x_1$. The convergence rate is $\frac{1+\sqrt{5}}{2}\approx 1.62$, the golden ratio.
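
A corresponding sketch of the secant iteration (again illustrative):

```python
def secant(f, x0, x1, n_steps=10):
    """Secant method: Newton's method with f' replaced by a difference quotient.
    Needs two distinct starting points; returns the last iterate."""
    for _ in range(n_steps):
        fx0, fx1 = f(x0), f(x1)
        if fx1 == fx0:       # difference quotient breaks down; stop early
            break
        x0, x1 = x1, x1 - fx1 * (x1 - x0) / (fx1 - fx0)
    return x1

print(secant(lambda x: x * x - 2.0, 1.0, 2.0))   # ~ sqrt(2)
```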

Fixed point iteration

Definition. A point $x\in\mathbb{R}$ is called a fixed point of $\varphi:\mathbb{R}\to\mathbb{R}$ if $\varphi(x)=x$.

We could, for instance, use Newton’s method to find fixed points by setting $f(x):=\varphi(x)-x$.

Banach’s Fixed Point Theorem. Suppose that $\varphi$ is a contraction, that is, there exists a constant $L<1$ such that
$$|\varphi(x)-\varphi(y)|\le L|x-y|$$
for all $x,y\in\mathbb{R}$. Then there exists a unique fixed point $x_\ast\in\mathbb{R}$ of $\varphi$, i.e., $\varphi(x_\ast)=x_\ast$, and the fixed point iteration $\varphi_n:=\underbrace{\varphi\circ\ldots\circ\varphi}_{n\text{-times}}$ satisfies $\lim_{n\to\infty}\varphi_n(x_0)=x_{\ast}$ for any starting point $x_0\in\mathbb{R}$. The convergence rate is at least linear.

Proof. We prove that the sequence $\{\varphi_k(x_0)\}_{k=0}^{\infty}=\{x_k\}_{k=0}^{\infty}$ is a Cauchy sequence. Let $k>j$. Then, by the triangle inequality,
$$|x_k-x_j|\le\underbrace{|x_k-x_{k-1}|+|x_{k-1}-x_{k-2}|+\ldots+|x_{j+1}-x_j|}_{(k-j)\text{ summands}}.$$
Furthermore,
$$|x_m-x_{m-1}|=|\varphi(x_{m-1})-\varphi(x_{m-2})|\le L|x_{m-1}-x_{m-2}|\le L^{m-1}|x_1-x_0|.$$
Hence, by the geometric series,
$$|x_k-x_j|\le L^j\frac{1-L^{k-j}}{1-L}|x_1-x_0|.$$
If $k>N$, $j>N$, then
$$|x_k-x_j|\le L^N\frac{1}{1-L}|x_1-x_0|\to 0$$
as $N\to\infty$, which proves that $\{x_k\}$ is a Cauchy sequence. The linear convergence rate also follows from this estimate.
The existence of a fixed point follows from the continuity of $\varphi$ (as contractions are uniformly continuous, in fact, even Lipschitz continuous) as follows:
$$x_\ast=\lim_{k\to\infty}x_k=\lim_{k\to\infty}x_{k+1}=\lim_{k\to\infty}\varphi(x_k)=\varphi(\lim_{k\to\infty}x_k)=\varphi(x_\ast).$$
$\Box$
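
A minimal fixed point iteration in Python (the stopping rule based on successive iterates is a common practical choice, not part of the theorem):

```python
import math

def fixed_point(phi, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = phi(x_k); stop when successive iterates differ by
    less than tol. phi is assumed to be a contraction near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# phi(x) = cos(x) is a contraction near its fixed point x* ~ 0.739
# (|phi'(x*)| = |sin(x*)| ~ 0.67 < 1), so the iteration converges linearly.
print(fixed_point(math.cos, 1.0))
```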

Theorem. Assume that $\varphi\in C^p$ for $p\ge 1$. Furthermore, assume that $\varphi$ has a fixed point $x_\ast$ and that
$\varphi'(x_\ast)=\varphi''(x_\ast)=\ldots=\varphi^{(p-1)}(x_\ast)=0$ if $p\ge 2$, and
$|\varphi'(x_\ast)|<1$ if $p=1$. Then the fixed point sequence $\{\varphi_k(x_0)\}$ converges to $x_\ast$ at least with rate $p$, provided that the starting point $x_0$ is sufficiently close to $x_\ast$. If, in addition, $\varphi^{(p)}(x_\ast)\not=0$, then the rate of convergence is precisely $p$.

Proof. First note that, by Banach’s fixed point theorem, the sequence indeed converges to $x_\ast$ for suitable starting points $x_0$. By the Taylor expansion,
$$x_{k+1}-x_\ast=\varphi(x_k)-\varphi(x_\ast)=\sum_{l=1}^{p-1}\frac{\varphi^{(l)}(x_\ast)}{l!}(x_k-x_\ast)^l+\frac{\varphi^{(p)}(\xi_k)}{p!}(x_k-x_\ast)^p$$
for some $\xi_k$ between $x_\ast$ and $x_k$. The sum is left empty in the case $p=1$. Since $\varphi^{(l)}(x_\ast)=0$ for $1\le l\le p-1$, we get that
$$|x_{k+1}-x_\ast|=\frac{|\varphi^{(p)}(\xi_k)|}{p!} |x_k-x_\ast|^p.$$
By continuity, there exists $C>0$ (with $C<1$ for $p=1$) such that
$$\frac{|\varphi^{(p)}(\xi_k)|}{p!}\le C$$
for $\xi_k$ sufficiently close to $x_\ast$ (that is, for sufficiently large $k$). Thus,
$$|x_{k+1}-x_\ast|\le C|x_k-x_\ast|^p$$
for large $k$, and thus the rate of convergence is at least $p$. Note that for $p=1$, this also proves convergence, since
$$|x_{k+1}-x_\ast|<\underbrace{|\varphi'(\xi_k)|}_{<1}|x_k-x_\ast|.$$
If $\varphi^{(p)}(x_\ast)\not=0$, then by continuity there exists $K>0$ such that
$$\frac{|\varphi^{(p)}(\xi_k)|}{p!}\ge K$$
for $\xi_k$ sufficiently close to $x_\ast$. Thus
$$|x_{k+1}-x_\ast|\ge K|x_k-x_\ast|^p,$$
which implies that the rate of convergence cannot be higher than $p$. Thus the rate of convergence is precisely $p$. $\Box$

Note. From the above proof, we expect that close to the fixed point $x_\ast$,
$$|x_{k+1}-x_\ast|\approx \frac{|\varphi^{(p)}(x_\ast)|}{p!} |x_k-x_\ast|^p,$$
when
$$\varphi'(x_\ast)=\varphi''(x_\ast)=\ldots=\varphi^{(p-1)}(x_\ast)=0,$$
but $\varphi^{(p)}(x_\ast)\not=0$.

Polynomial interpolation

Idea. Approximate a function $f:\mathbb{R}\to\mathbb{R}$ over $[a,b]$ by a polynomial $p$ such that at distinct data points $(x_i,y_i)$, $i=0,1,\ldots, n$, the approximation is exact, that is,
$$f(x_i)=y_i=p(x_i),\quad\text{for all}\quad i=0,1,\ldots, n.$$
We may call $x_i$ a node and $y_i$ a value.
We need at least $2$ data points. We usually just assume that $x_i\not=x_j$ for $i\not=j$.

Note. Interpolation polynomials are not per se unique; for instance, the data $\{(-1,1),(1,1)\}$ can be interpolated by
$p(x)=1$, $q(x)=x^2$, or $r(x)=x^4-x^2+1$. However, we will see later that $p$ is the unique interpolation polynomial with $\deg p\le 1=n$.

Example. $(1,2)$, $(2,3)$, $(3,6)$, as data set $\{(x_i,y_i)\;\colon\;i=0,1,2\}$ on the interval $[1,3]$.
We are looking for a polynomial $p_2(x)=\sum_{j=0}^2 c_j x^j$, which is chosen to be of 2nd order because we have $3$ data points and $3$ unknown coefficients.
We can formulate the problem in matrix form:
$$\begin{pmatrix}1 & x_0 & x_0^2\\ 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\end{pmatrix}\cdot \begin{pmatrix}c_0\\c_1\\c_2\end{pmatrix}=\begin{pmatrix}y_0\\y_1\\y_2\end{pmatrix},$$
where the coefficient matrix is a so-called Vandermonde matrix, which has determinant
$$\det\begin{pmatrix}1 & x_0 & x_0^2\\ 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\end{pmatrix}=\prod_{i<j}(x_j-x_i)\not=0,$$
and is thus invertible. Here,
$$\begin{pmatrix}1 & 1 & 1\\ 1 & 2 & 4\\ 1 &3 & 9\end{pmatrix}\cdot \begin{pmatrix}c_0\\c_1\\c_2\end{pmatrix}=\begin{pmatrix}2\\3\\6\end{pmatrix}.$$
As a result, $c_0=3$, $c_1=-2$, and $c_2=1$, and thus
$$p_2(x)=x^2-2x+3.$$
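
The same example, solved via the Vandermonde system with NumPy (a sketch; it assumes `numpy` is available):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])      # nodes x_i
y = np.array([2.0, 3.0, 6.0])      # values y_i

# Vandermonde matrix with columns 1, x, x**2 (the natural basis).
V = np.vander(x, increasing=True)
c = np.linalg.solve(V, y)          # coefficients c_0, c_1, c_2
print(c)                           # [ 3. -2.  1.]  ->  p_2(x) = x**2 - 2x + 3
```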

The computational complexity of solving the linear system is $O(n^3)$. We used the natural basis for the polynomials.
What would be the ideal basis?
Definition. (Lagrange basis polynomials) Suppose that $x_i\not=x_j$ if $i\not=j$. We call
$$\varphi_i(x):=\prod_{\substack{j=0\\ j\not=i}}^n\frac{x-x_j}{x_i-x_j}$$
the $i$th Lagrange basis polynomial.
The Lagrange interpolation polynomial is given by
$$p(x):=\sum_{i=0}^n y_i \varphi_i(x).$$

Clearly,
$$\varphi_i(x_j)=\delta_{i,j}:=\begin{cases}1\text{\;\;if\;\;}i=j,\\0\text{\;\;if\;\;}i\not=j.\end{cases}$$

Example. $(1,2)$, $(2,3)$, $(3,6)$:
$$\varphi_0(x)=\frac{(x-2)(x-3)}{(1-2)(1-3)},$$
$$\varphi_1(x)=\frac{(x-1)(x-3)}{(2-1)(2-3)},$$
$$\varphi_2(x)=\frac{(x-1)(x-2)}{(3-1)(3-2)},$$
$$p_2(x)=2\varphi_0(x)+3\varphi_1(x)+6\varphi_2(x)=x^2-2x+3.$$

Evaluating the Lagrange interpolation polynomial has computational complexity $O(n^2)$.
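
A direct $O(n^2)$ evaluation sketch of the Lagrange form (the function name is my own; same data as above):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate p(x) = sum_i y_i * phi_i(x), where phi_i is the i-th Lagrange
    basis polynomial for the nodes xs."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        phi = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                phi *= (x - xj) / (xi - xj)
        total += yi * phi
    return total

# Reproduces p_2(x) = x**2 - 2x + 3 on the data (1,2), (2,3), (3,6).
print([lagrange_eval([1.0, 2.0, 3.0], [2.0, 3.0, 6.0], t) for t in (1.0, 2.0, 2.5, 3.0)])
```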

Newton’s interpolation

Idea. Extend the natural basis:
$$1,\quad x-x_0,\quad (x-x_0)(x-x_1),\quad\ldots,\quad \prod_{j=0}^{n-1}(x-x_j).$$

Definition. Define Newton’s interpolation polynomials by
$$p_n(x)=a_0+a_1(x-x_0)+\ldots+a_n\prod_{j=0}^{n-1} (x-x_j)$$
in such a way that $p_n(x_i)=y_i$ for all $i=0,1,\ldots,n$.