Notes - Numerical Analysis HT24, One-step methods


Flashcards

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

Can you:

  • Give an exact expression for $\pmb y(x _ {n+1})$ in terms of an integral
  • Explain how this is approximated in many first-order methods
  • Use this exact expression to give a bound on the approximation error for a one-step method which uses $s \in [x _ n, x _ {n+1}]$ as the approximation

?


\[\pmb y(x_ {n + 1}) = \pmb y(x_ n) + \int^{x_ {n+1}\\,}_ {x_ n} \pmb f(x, \pmb y(x)) \text d x\]

By the mean value theorem for integrals, $\exists \xi \in [x _ n, x _ {n+1}]$ such that

\[\int^{x_ {n+1}\\,}_ {x_n} \pmb f(x, \pmb y(x)) \text d x = h\pmb f(\xi, \pmb y(\xi))\]

where $h$ is the step size. Different methods correspond to different choices for $\xi$. Then for some particular choice $s$ as an approximation,

\[\begin{aligned} \left |\int^{x_ {n+1}\\,}_ {x_n} \pmb f(x, \pmb y(x)) \text d x - h\pmb f(s, \pmb y(s)) \right| &= \Big| h\pmb f(\xi, \pmb y(\xi)) - h\pmb f(s, \pmb y(s))\Big| \\\\ &\le h\max_ {r \in [x_n, x_{n+1}]}\Big|\pmb f(r, \pmb y(r)) - \pmb f(s, \pmb y(s))\Big| \end{aligned}\]

(the resulting approximations are called “rectangle rules”)
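As a quick numerical illustration (my own example, not from the notes, taking $\pmb f(x, \pmb y(x)) = \cos x$ so that $y(x) = \sin x$), the three rectangle rules can be compared over a single step:

```python
import math

# Integrand along the exact solution; here f(x, y(x)) = cos(x), so the
# exact increment is y(x_{n+1}) - y(x_n) = sin(x_{n+1}) - sin(x_n).
def g(x):
    return math.cos(x)

x_n, h = 0.3, 0.1

exact = math.sin(x_n + h) - math.sin(x_n)   # exact integral over the step
left = h * g(x_n)            # s = x_n       (explicit Euler's choice)
right = h * g(x_n + h)       # s = x_{n+1}   (implicit Euler's choice)
mid = h * g(x_n + h / 2)     # s = x_n + h/2 (midpoint rule's choice)

for name, approx in [("left", left), ("right", right), ("midpoint", mid)]:
    print(f"{name:9s} error = {abs(approx - exact):.2e}")
```

The midpoint choice is noticeably more accurate here: its per-step error is $O(h^3)$ versus $O(h^2)$ for the endpoint choices.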

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

Many first-order methods for approximating the solution are based on the observation that

\[\pmb y(x_{n + 1}) = \pmb y(x_n) + \int^{x_{n+1}\\,}_{x_n} \pmb f(x, \pmb y(x)) \text d x\]

and by the mean value theorem for integrals, $\exists \xi \in [x _ n, x _ {n+1}]$ such that

\[\int^{x_{n+1}\\,}_{x_n} \pmb f(x, \pmb y(x)) \text d x = h\pmb f(\xi, \pmb y(\xi))\]

We can’t pick $\xi$ exactly, since otherwise we would have an exact solution, so methods differ in their choice of $s \in [x _ n, x _ {n+1}]$ used as an approximation. Under this framework, describe:

  • Explicit Euler method
  • Implicit Euler method
  • Implicit midpoint rule

and describe why the implicit methods are called implicit.


  • Explicit Euler method: $s = x _ n$
  • Implicit Euler method: $s = x _ {n+1}$
  • Implicit midpoint rule: $s = \frac{x _ {n+1} + x _ n}{2} = x _ n + \frac h 2$

The implicit methods are called implicit since the right-hand side involves $\pmb y _ {n+1}$, the very quantity we are trying to compute (for implicit Euler, $\pmb y(s) = \pmb y(x _ {n+1})$; for the midpoint rule, the average $\frac{\pmb y _ n + \pmb y _ {n+1}}{2}$). The resulting equations can be solved with a root-finding algorithm.
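A minimal sketch of one step of each method on a scalar IVP (my own code, not from the notes; the implicit equations are resolved by naive fixed-point iteration, which converges for sufficiently small $h$, rather than a proper root finder):

```python
def explicit_euler_step(f, x_n, y_n, h):
    # s = x_n: evaluate f at the known point.
    return y_n + h * f(x_n, y_n)

def implicit_euler_step(f, x_n, y_n, h, iters=50):
    # s = x_{n+1}: y_{n+1} appears on both sides, so solve
    # y = y_n + h f(x_{n+1}, y) by fixed-point iteration.
    y = y_n
    for _ in range(iters):
        y = y_n + h * f(x_n + h, y)
    return y

def implicit_midpoint_step(f, x_n, y_n, h, iters=50):
    # s = x_n + h/2: f is evaluated at the average of y_n and y_{n+1}.
    y = y_n
    for _ in range(iters):
        y = y_n + h * f(x_n + h / 2, (y_n + y) / 2)
    return y

# Example: y' = -y, y(0) = 1, one step of size h = 0.1 (exact: exp(-0.1)).
f = lambda x, y: -y
h = 0.1
print(explicit_euler_step(f, 0.0, 1.0, h))    # 0.9
print(implicit_euler_step(f, 0.0, 1.0, h))    # ~ 1/1.1
print(implicit_midpoint_step(f, 0.0, 1.0, h)) # ~ (1 - h/2)/(1 + h/2)
```

For this linear example the fixed-point iteration contracts with factor $h$, so 50 iterations vastly over-resolve the implicit equations; in practice one would use Newton's method.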

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

Can you formally define a “one-step method”?


A function

\[\pmb \Psi(s, \pmb y, h, \pmb f)\]

which computes an approximation to $\pmb y(s + h)$, where $\pmb y$ is the solution of the IVP

\[\pmb y'(x) = \pmb f(x, \pmb y), \quad \pmb y(s) = \pmb y\]

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

and the one-step method given by $\pmb \Psi(s, \pmb y, h, \pmb f)$. What does it mean for the one-step method $\pmb \Psi$ to be consistent?


\[\pmb \Psi(s, \pmb y, 0, \pmb f) = \pmb y\]

(i.e. setting the step size to zero should have no effect), and

\[\frac{\text d}{\text d h} \pmb \Psi(s, \pmb y, h, \pmb f) \Big|_{h = 0} = \pmb f(s, \pmb y)\]

This condition imposes that the derivative is computed sensibly. Consider the explicit Euler method,

\[\pmb y_{n+1} = \pmb y_n + h\pmb f(x_n, \pmb y_n)\]

here differentiating with respect to $h$ at $h = 0$ gives $\pmb f(x_n, \pmb y_n)$, so the required property holds.
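Both conditions can be sanity-checked numerically. The sketch below (my own) takes explicit Euler, $\pmb \Psi(s, \pmb y, h, \pmb f) = \pmb y + h \pmb f(s, \pmb y)$, evaluates it at $h = 0$, and estimates its $h$-derivative at $0$ with a forward difference:

```python
def Psi(s, y, h, f):
    # Explicit Euler viewed as a one-step method.
    return y + h * f(s, y)

f = lambda s, y: s + y * y     # an arbitrary smooth right-hand side (my choice)
s, y = 0.7, 1.3

# Condition 1: zero step size has no effect.
assert Psi(s, y, 0.0, f) == y

# Condition 2: d/dh Psi at h = 0 equals f(s, y), estimated by a
# forward difference with a small step.
eps = 1e-6
deriv = (Psi(s, y, eps, f) - Psi(s, y, 0.0, f)) / eps
print(abs(deriv - f(s, y)))    # ~ 0 up to floating-point rounding
```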

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

Suppose we are using the one-step method $\pmb \Psi(s, \pmb y, h, \pmb f)$. How is the consistency error defined, and what does it intuitively measure?


\[\begin{aligned} \pmb \tau(s, \pmb y, h, \pmb f) &:= \frac{\pmb y(s + h) - \pmb y(s)}{h} - \frac{\pmb \Psi(s, \pmb y, h, \pmb f) - \pmb y(s)}{h} \\\\ &= \frac{\pmb y(s + h) - \pmb \Psi(s, \pmb y, h, \pmb f)}{h} \end{aligned}\]

It intuitively measures “how bad is our approximation, normalised by $h$”.
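For example (my own check, not from the notes): for explicit Euler on $y' = y$, $y(0) = 1$, the consistency error at $s = 0$ can be computed from the exact solution $e^x$, and it halves when $h$ halves, i.e. first-order consistency:

```python
import math

def tau(s, y, h, f, y_exact):
    # Consistency error for explicit Euler: (y(s + h) - Psi) / h.
    Psi = y + h * f(s, y)
    return (y_exact(s + h) - Psi) / h

f = lambda x, y: y          # y' = y with y(0) = 1, so y(x) = exp(x)
y_exact = math.exp

errs = [abs(tau(0.0, 1.0, h, f, y_exact)) for h in (0.1, 0.05, 0.025)]
print(errs)  # each roughly half the previous: consistency error O(h)
```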

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

What is the global error of some method for approximating $\pmb y(x _ n)$?


\[\pmb e_n := \pmb y(x_n) - \pmb y_n\]

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

We use a numerical method with step size $h$ to obtain an approximation $\pmb y _ n \approx \pmb y(X)$, and define the global error $\pmb e := \pmb y(X) - \pmb y _ n$. What does it mean for such a method to have order of accuracy $p$?


\[||\pmb e|| \le Ch^p\]

for some $C > 0$ independent of $h$.

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

The consistency error $\pmb \tau$ of a one-step method $\pmb \Psi(s, \pmb y, h, \pmb f)$ is given by

\[\pmb \tau(s, \pmb y, h, \pmb f) = \frac{\pmb y(s + h) - \pmb \Psi(s, \pmb y, h, \pmb f)}{h}\]

What does it mean for the method to have consistency order $p$ (at point $s$)?


\[||\pmb \tau(s, \pmb y, h, \pmb f)|| \le Ch^p\]

for some $C > 0$ independent of $h$, i.e. $\pmb \tau(s, \pmb y, h, \pmb f) = O(h^p)$.

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

The consistency error $\pmb \tau$ of a one-step method $\pmb \Psi(s, \pmb y, h, \pmb f)$ is given by

\[\pmb \tau(s, \pmb y, h, \pmb f) = \frac{\pmb y(s + h) - \pmb \Psi(s, \pmb y, h, \pmb f)}{h}\]

Can you state a result which relates the consistency of $\pmb \Psi(s, \pmb y, h, \pmb f)$ to this function?


$\pmb \Psi$ is consistent iff $ \vert \vert \pmb \tau(s, \pmb y, h, \pmb f) \vert \vert \to 0$ as $h \to 0$ locally uniformly in the cylinder from Picard’s theorem.

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

Suppose we are using the one-step method $\pmb \Psi(s, \pmb y, h, \pmb f)$. Can you state a result that characterises the consistency of $\pmb \Psi$ in terms of increment functions?


$\pmb \Psi$ is consistent iff there is a continuous increment function $\pmb \psi$ satisfying $\pmb \psi(s, \pmb y, 0, \pmb f) = \pmb f(s, \pmb y)$ such that

\[\pmb \Psi(s, \pmb y, h, \pmb f) = \pmb y + h\pmb \psi(s, \pmb y, h, \pmb f)\]

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

Suppose we are using the one-step method $\pmb \Psi(s, \pmb y, h, \pmb f)$. Assuming:

  • The corresponding increment function $\pmb \psi$ is Lipschitz continuous with constant $L$
  • Mild assumptions on the IVP

Can you state a theorem that places a bound on the global error of the approximation, and explain why despite looking scary this is actually a good thing?


\[e_N \le \frac{\exp(L(x_N - x_0)) - 1}{L} \cdot \max_n ||\pmb \tau(x_n, \pmb y(x_n), h, \pmb f)||\]

where $\pmb \tau$ is the consistency error given by

\[\pmb \tau(s, \pmb y, h, \pmb f) = \frac{\pmb y(s + h) - \pmb \Psi(s, \pmb y, h, \pmb f)}{h}\]

The bound looks exponential, but the exponential factor does not depend on $h$; only the consistency error term does. So by making $h$ small enough, we are guaranteed convergence.
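A quick convergence check (my own, not from the notes) for explicit Euler on $y' = -y$, $y(0) = 1$ over $[0, 1]$: halving $h$ roughly halves the global error at $X = 1$, consistent with a first-order method:

```python
import math

def euler_solve(f, x0, y0, X, N):
    # N steps of explicit Euler from x0 to X.
    h = (X - x0) / N
    x, y = x0, y0
    for _ in range(N):
        y = y + h * f(x, y)
        x += h
    return y

f = lambda x, y: -y
errors = [abs(euler_solve(f, 0.0, 1.0, 1.0, N) - math.exp(-1.0))
          for N in (10, 20, 40)]
print(errors)  # each roughly half the previous: global error O(h)
```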

Suppose we have the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$

Suppose we are using the one-step method $\pmb \Psi(s, \pmb y, h, \pmb f)$. Assuming:

  • The corresponding increment function $\pmb \psi$ is Lipschitz continuous with constant $L$
  • Mild assumptions on the IVP

We have a theorem that places a bound on the global error of the approximation which says:

\[e_N \le \frac{\exp(L(x_N - x_0)) - 1}{L} \cdot \max_n ||\pmb \tau(x_n, \pmb y(x_n), h, \pmb f)||\]

where $\pmb \tau$ is the consistency error given by

\[\pmb \tau(s, \pmb y, h, \pmb f) = \frac{\pmb y(s + h) - \pmb \Psi(s, \pmb y, h, \pmb f)}{h}\]

Quickly prove this.


Since $\pmb \psi$ is assumed to be Lipschitz continuous, dropping the $h$ and $\pmb f$ arguments for notational convenience, we have:

\[||\pmb \psi(x, \pmb y) - \pmb \psi(x, \pmb z)|| \le L||\pmb y - \pmb z||\]

We first derive a bound for the error $e _ {n+1}$ given the error at the previous step $e _ n$, and then apply this recursively all the way from $e _ N$ back to $e _ 0$ to get a bound on the global error.

\[\begin{aligned} e_{n+1} &= ||\pmb y(x_{n + 1}) - \pmb y_{n+1}|| \\\\ &= ||\pmb y(x_{n+1}) - \pmb \Psi(x_n, \pmb y_n)|| \\\\ &= ||\pmb y(x_{n+1}) - \pmb \Psi(x_n, \pmb y(x_n)) + \pmb \Psi(x_n, \pmb y(x_n)) - \pmb \Psi(x_n, \pmb y_n)|| &&(1\star)\\\\ &\le ||\pmb y(x_{n+1}) - \pmb \Psi(x_n, \pmb y(x_n)) || + || \pmb \Psi(x_n, \pmb y(x_n)) - \pmb \Psi(x_n, \pmb y_n)|| \\\\ &= h||\pmb \tau(x_n, \pmb y(x_n))|| + ||\pmb y(x_n) + h \pmb \psi(x_n, \pmb y(x_n)) - \pmb y_n - h\pmb\psi(x_n, \pmb y_n)|| \\\\ &\le h||\pmb \tau(x_n, \pmb y(x_n))|| + ||\pmb y(x_n) - \pmb y_n|| + h||\pmb \psi(x_n, \pmb y(x_n)) - \pmb\psi(x_n, \pmb y_n)|| \\\\ &= h||\pmb \tau(x_n, \pmb y(x_n))|| + e_n + h||\pmb \psi(x_n, \pmb y(x_n)) - \pmb\psi(x_n, \pmb y_n)|| \\\\ &\le h||\pmb \tau(x_n, \pmb y(x_n))|| + e_n + hL||\pmb y(x_n) - \pmb y_n|| \\\\ &= h||\pmb \tau(x_n, \pmb y(x_n))|| + e_n + h Le_n \\\\ &= h||\pmb \tau(x_n, \pmb y(x_n))|| + (1 + h L)e_n \end{aligned}\]

For $(1\star)$, we add and subtract $\pmb \Psi(x _ n, \pmb y(x _ n))$; here $\pmb y _ n$ is the approximation, and $\pmb y(x _ n)$ is the exact value at $x _ n$.

Iterating recursively and using the fact that $e _ 0 = 0$ we obtain

\[\begin{aligned} e_{n+1} &\le (1 + hL)^{n+1}e_0 + h\sum^n_{k=0} (1 + hL)^k \max_{m = 0, \cdots, n} ||\pmb \tau(x_m, \pmb y(x_m))|| \\\\ &= 0 + \frac{h((1 + hL)^{n+1} - 1)}{(1 + hL) - 1} \cdot \max_{m = 0, \cdots, n} ||\pmb \tau(x_m, \pmb y(x_m))|| \\\\ &= \frac{(1 + hL)^{n+1} - 1}{L} \cdot \max_{m = 0, \cdots, n} ||\pmb \tau(x_m, \pmb y(x_m))|| \end{aligned}\]

Then, since $(1 + x)^n \le \exp(nx)$, we have

\[\begin{aligned} e_{n+1} &\le \frac{\exp(NLh) - 1}{L} \cdot \max_{m = 0, \cdots, n} ||\pmb \tau(x_m, \pmb y(x_m))||\\\\ &= \frac{\exp(L(x_N - x_0)) - 1}{L} \cdot \max_{m = 0, \cdots, n} ||\pmb \tau(x_m, \pmb y(x_m))|| \end{aligned}\]

so letting $n + 1 = N$ gives the required result.
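The bound can also be checked numerically. In the sketch below (my own, not from the notes), explicit Euler is applied to $y' = -y$, $y(0) = 1$ on $[0, 1]$; for explicit Euler $\pmb \psi = \pmb f$, whose Lipschitz constant in $y$ is $L = 1$ here:

```python
import math

# Verify the global-error bound for explicit Euler on y' = -y, y(0) = 1
# over [0, 1] (my choice of test problem).
f = lambda x, y: -y
y_exact = lambda x: math.exp(-x)
L, x0, X, N = 1.0, 0.0, 1.0, 100
h = (X - x0) / N

y, taus = 1.0, []
for n in range(N):
    x = x0 + n * h
    # Consistency error at this grid point, computed from the exact solution.
    taus.append(abs((y_exact(x + h) - (y_exact(x) + h * f(x, y_exact(x)))) / h))
    y = y + h * f(x, y)            # explicit Euler step

e_N = abs(y_exact(X) - y)
bound = (math.exp(L * (X - x0)) - 1) / L * max(taus)
print(e_N, bound)                  # the observed error sits below the bound
```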

Suppose we are solving the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$
  • $h$ is a step size
  • $x _ n = x _ 0 + hn$, $\pmb y _ n$ is the corresponding approximation

What is the expression for $\pmb y _ {n+1}$ using the explicit Euler method, and what is the corresponding increment function?


\[\pmb y_{n + 1} = \pmb y_n + h \pmb f(x_n, \pmb y_n)\] \[\pmb \psi(s, \pmb y, h, \pmb f) = \pmb f(s, \pmb y)\]

Suppose we are solving the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$
  • $h$ is a step size
  • $x _ n = x _ 0 + hn$, $\pmb y _ n$ is the corresponding approximation

What is the expression for $\pmb y _ {n+1}$ using the implicit Euler method, and what is the corresponding increment function?


\[\pmb y_{n + 1} = \pmb y_n + h \pmb f(x_{n+1}, \pmb y_{n+1})\] \[\pmb \psi(s, \pmb y, h, \pmb f) = \pmb f(s + h, \pmb y_{n+1})\]

where $\pmb y _ {n+1}$ is itself defined implicitly by the first equation.
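For the linear test equation $f(x, y) = \lambda y$ (an illustrative special case, my own addition), the implicit equation can be solved in closed form, which also shows why implicit Euler copes with stiff problems where explicit Euler blows up:

```python
# For f(x, y) = lam * y the implicit equation y_{n+1} = y_n + h*lam*y_{n+1}
# solves in closed form: y_{n+1} = y_n / (1 - h*lam).
def implicit_euler_linear(lam, y0, h, N):
    y = y0
    for _ in range(N):
        y = y / (1 - h * lam)
    return y

# Stiff example: lam = -50, h = 0.1. Explicit Euler multiplies by
# 1 + h*lam = -4 each step and explodes; implicit Euler multiplies by
# 1 / (1 - h*lam) = 1/6 and decays, matching the exact exp(lam * x).
y_imp = implicit_euler_linear(-50.0, 1.0, 0.1, 10)
y_exp = 1.0
for _ in range(10):
    y_exp *= 1 + 0.1 * (-50.0)
print(y_imp, y_exp)   # tiny positive number vs. about a million
```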

Suppose we are solving the IVP:

  • $I = [x _ 0, X] \subset \mathbb R$ is an interval
  • $D \subset \mathbb R^d$ is an open subset
  • $\pmb y'(x) = \pmb f(x, \pmb y)$
  • $\pmb y (x _ 0) = \pmb y _ 0$
  • $h$ is a step size
  • $x _ n = x _ 0 + hn$, $\pmb y _ n$ is the corresponding approximation

What is the expression for $\pmb y _ {n+1}$ using the implicit midpoint method, and what is the corresponding increment function?


\[\pmb y_{n + 1} = \pmb y_n + h \pmb f\left(x_n + \frac h2, \frac{\pmb y_n + \pmb y_{n + 1} }{2}\right)\] \[\pmb \psi(s, \pmb y, h, \pmb f) = \pmb f\left(s + \frac h2, \frac{\pmb y + \pmb y_{n + 1} }{2}\right)\]
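For $f(x, y) = -y$ the midpoint equation can again be solved in closed form (my own check, not from the notes); the one-step error against $e^{-h}$ shrinks roughly eightfold when $h$ halves, i.e. the method has consistency order $2$:

```python
import math

# For f(x, y) = -y the midpoint equation
#   y_{n+1} = y_n + h * (-(y_n + y_{n+1}) / 2)
# solves in closed form: y_{n+1} = y_n * (1 - h/2) / (1 + h/2).
def midpoint_step_linear(y_n, h):
    return y_n * (1 - h / 2) / (1 + h / 2)

# One-step errors against the exact solution exp(-h): halving h divides
# the error by roughly 8, i.e. O(h^3) local error, consistency order 2.
errs = [abs(midpoint_step_linear(1.0, h) - math.exp(-h)) for h in (0.2, 0.1, 0.05)]
print(errs)
```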


