Notes - Numerical Analysis HT24, Quadrature and approximate integration


Flashcards

In quadrature, we want to approximate integrals of the form

\[\int^b_aw(x) f(x) \text dx\]

for some weight function $w(x) > 0$. We do this by trying to find points $x _ j$ and corresponding weights $w _ j$ so that

\[\int^b_a w(x) f(x) \text d x \approx \sum^n_{j = 0} w_j f(x_j)\]

In this context, describe Newton-Cotes quadrature, a rule based on the Lagrange interpolating polynomial that is exact for any choice of points $x _ 0, x _ 1, \cdots, x _ n$ whenever $f$ is a polynomial of degree at most $n$.


Since $f$ is a polynomial of degree at most $n$ and we have $n + 1$ points, we can write it exactly as

\[f(x) = \sum^n_{j = 0} f(x_j) L_{n, j}(x)\]

where $L _ {n, j}$ denotes the corresponding Lagrange basis polynomial (the “helper” polynomial which is $1$ at $x _ j$ and $0$ at every other node). Hence

\[\begin{aligned} \int^b_a w(x) f(x) \text d x &= \int^b_a w(x) \sum^n_{j = 0} f(x_j) L_{n, j} (x) \text d x \\\\ &= \sum^n_{j = 0} f(x_j) \left(\int^b_a w(x) L_{n, j}(x) \text d x\right) \\\\ &= \sum^n_{j = 0}f(x_j)w_j \end{aligned}\]

where we define at the end

\[w_j := \int^b_a w(x) L_{n, j}(x) \text dx\]
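As a quick illustration (not from the course notes), here is a minimal Python sketch that computes these weights symbolically for the unweighted case $w(x) = 1$; the choice of nodes $-1, 0, 1$ is just an example, and recovers the Simpson weights $1/3, 4/3, 1/3$ on $[-1, 1]$.

```python
import sympy as sp

def interpolatory_weights(nodes, a=-1, b=1):
    """Compute w_j = int_a^b L_{n,j}(x) dx for the given nodes (weight w(x) = 1)."""
    x = sp.Symbol('x')
    weights = []
    for j, xj in enumerate(nodes):
        # Lagrange basis polynomial L_{n,j}: equals 1 at x_j and 0 at the other nodes
        L = sp.Integer(1)
        for k, xk in enumerate(nodes):
            if k != j:
                L *= (x - xk) / (xj - xk)
        weights.append(sp.integrate(sp.expand(L), (x, a, b)))
    return weights

# Equally spaced nodes -1, 0, 1 give the Simpson weights [1/3, 4/3, 1/3]
print(interpolatory_weights([-1, 0, 1]))
```

For a general weight function, the same idea applies with the weighted integrand $w(x) L_{n,j}(x)$ in place of the unweighted one.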

The idea behind Gauss quadrature is to approximate integrals of the form

\[\int^b_aw(x) f(x) \text dx\]

for some weight function $w(x) > 0$. We do this by trying to find points $x _ j$ and corresponding weights $w _ j$ so that

\[\int^b_a w(x) f(x) \text d x \approx \sum^n_{j = 0} w_j f(x_j)\]

For polynomials $p$ of degree at most $n$, we have the exact rule that

\[\int^b_a w(x) p(x) \text d x = \sum^n_{j = 0} w_j p(x_j)\]

where

\[w_j := \int^b_a w(x) L_{n, j}(x) \text dx\]

for any points $x _ 0, \cdots, x _ n$. How can we pick these points more cleverly so that the equality holds not just for polynomials of degree at most $n$, but for all polynomials in $\Pi _ {2n + 1}$? Quickly prove that this works.


Take

\[x_0 < x_1 < \cdots < x_n\]

to be the $n+1$ roots of the $n+1$st degree orthogonal polynomial $\phi _ {n+1}$ in the interval with respect to the inner product

\[\langle g, h \rangle = \int^b_a w(x) g(x) h(x) \text d x\]

(conveniently, all of these roots are guaranteed to lie in the interval). Then if $p \in \Pi _ {2n + 1}$, we can write

\[p(x) = q(x) \phi_{n+1}(x) + r(x)\]

with $q, r \in \Pi _ n$. Then

\[\begin{aligned} \int^b_a w(x) p(x) \text d x &= \int^b_a w(x) q(x)\phi_{n+1}(x) + w(x) r(x) \text d x \\\\ &= \int^b_a w(x) q(x) \phi_{n+1}(x) \text d x + \int^b_a w(x)r(x) \text d x \\\\ &= 0 + \sum^n_{j = 0} w_j r(x_j) &&(1\star, 2\star)\\\\ &= \sum^n_{j = 0}w_j (q(x_j) \phi_{n+1}(x_j) + r(x_j)) &&(3\star) \\\\ &= \sum^n_{j = 0} w_j p(x_j) \end{aligned}\]

Notes:

  • $(1\star)$: The fact the LHS goes to zero follows from the fact that $\phi _ {n+1}(x)$ is orthogonal to all polynomials of degree less than $n+1$ (consider expanding $q$ in the orthonormal basis, and then using linearity to expand out the integral).
  • $(2\star)$: Since $r(x)$ is a polynomial of degree at most $n$, the rule is exact for it for the same reason Newton-Cotes quadrature is exact for such polynomials: the weights $w _ j$ integrate the Lagrange interpolating polynomial through the $n+1$ nodes, and since $\deg r \le n$, that interpolant is $r$ itself.
  • $(3\star)$: This just adds $0$: here we use the fact that the $x _ j$ are the roots of $\phi _ {n+1}$, so $\phi _ {n+1}(x _ j) = 0$ for each $j$.
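As a quick numerical sanity check (not from the notes), the sketch below uses NumPy's Gauss-Legendre rule (the case $w(x) = 1$ on $[-1, 1]$) and verifies that a randomly chosen polynomial of degree $2n + 1$ is integrated exactly; the particular degree and random seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                            # nodes x_0, ..., x_n, i.e. n + 1 points
x, w = np.polynomial.legendre.leggauss(n + 1)    # roots of the (n+1)st Legendre polynomial and weights

# Random polynomial of degree 2n + 1, the highest degree the rule integrates exactly
p = np.polynomial.Polynomial(rng.standard_normal(2 * n + 2))

exact = p.integ()(1.0) - p.integ()(-1.0)         # exact integral over [-1, 1]
quad = w @ p(x)                                  # Gauss quadrature sum

print(abs(exact - quad))                         # agrees to machine precision
```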

Can you state a theorem that describes the convergence of Gauss quadrature?


Suppose:

  • $f$ analytic on $[-1, 1]$.
  • $I = \int^1 _ {-1} f(x) \text dx$
  • $I _ n$ is the Gauss quadrature approximation of $I$ with $n$ nodes

Then:

\[|I - I_n| \le C\exp(-cn)\]

for some $C, c > 0$.

Suppose:

  • $f$ analytic on $[-1, 1]$.
  • $I = \int^1 _ {-1} f(x) \text dx$
  • We are approximating $I$ by $I _ n$, using Gauss-Legendre quadrature with nodes $x _ 0, \cdots, x _ n$. This means:
    • Using nodes $x _ j$ as the roots of the $n+1$st Legendre polynomial
    • Using weights $w _ j$ equal to $\int^1 _ {-1} L _ {n, j}(x) \text dx$

Given the additional results that:

  • $f$ analytic on $[-1, 1]$ implies we can write $f(x) = \sum^\infty _ {i = 0} \alpha _ i T _ i(x)$ for some coefficients $\alpha _ i$, where $T _ i$ are the Chebyshev polynomials, which are orthogonal with respect to the inner product with weight $\hat w(x) = 1/\sqrt{1 - x^2}$
  • These coefficients have the property that $ \vert \alpha _ i \vert \le k \exp(-di)$ for some $d, k > 0$
  • $ \vert T _ i(x) \vert \le 1$ for all $x$ on $[-1, 1]$
  • All weights $w _ j$ are nonnegative

Quickly prove that:

\[|I - I_n| \le C\exp(-cn)\]

for some $C, c > 0$.

(confusingly, we expand $f$ using Chebyshev polynomials even though this result is about Gauss-Legendre quadrature rather than Gauss-Chebyshev quadrature. Expanding $f$ using Chebyshev polynomials is useful because it gives us useful inequalities for each term in the expansion $f(x) = \sum^\infty _ {i = 0} \alpha _ i T _ i(x)$.)


\[\begin{aligned} |I_n - I| &= \left| \sum^n_{j = 0} w_j \left( \sum^\infty_{i = 0} \alpha_i T_i(x_j) \right) - \int^1_{-1} \left(\sum^\infty_{i = 0} \alpha_i T_i(x) \right)\text dx \right| \\\\ &= \left| \sum^n_{j = 0}w_j \left( \sum^\infty_{i = 2n + 2} \alpha_i T_i(x_j) \right) - \int^1_{-1} \sum^\infty_{i = 2n + 2} \alpha_i T_i(x) \text dx \right| &&(1\star) \\\\ &\le \sum^n_{j = 0} w_j \left( \sum^\infty_{i = 2n+2} |\alpha_i| \right) + 2 \sum^\infty_{i = 2n+2} |\alpha_i| && (2,3,4,5\star) \\\\ &= \sum^\infty_{i = 2n+2} |\alpha_i| \left( \left(\sum^n_{j = 0}w_j\right) + 2 \right) \\\\ &= \sum^\infty_{i = 2n +2} |\alpha_i|(2 + 2) &&(6\star) \\\\ &\le 4 \sum^\infty_{i = 2n+2} k\exp(-di) &&(7\star) \\\\ &= 4 \cdot k\exp(-d(2n+2)) \cdot \frac{1}{1-\exp(-d)} &&(8\star) \\\\ &= C\exp(-cn) \end{aligned}\]

Notes:

  • $(1\star)$: Gauss quadrature is exact for polynomials of degree up to $2n+1$, so we can discard all terms before $i = 2n + 2$.
  • $(2,3,4,5\star)$:
    • $(2\star)$: We use the triangle inequality to split it up into a sum with the absolute values inside
    • $(3\star)$: $ \vert w _ j \vert = w _ j$ by the assumption that $w _ j \ge 0$ that we are allowed to use
    • $(4\star)$: We use the fact that $ \vert T _ i(x) \vert \le 1$ for all $x \in [-1, 1]$ twice, first in the left-hand sum and then in the right-hand sum
    • $(5\star)$: Using the pessimistic bound $ \vert T _ i(x) \vert \le 1$ for all $x$, the integral over $[-1, 1]$ contributes at most the constant $2$.
  • $(6\star)$: The sum $\sum^n _ {j = 0} w _ j$ will be $2$, since this is equivalent to integrating the constant function over $[-1, 1]$, and this integral will be exact.
  • $(7\star)$: This is applying the inequality that $ \vert \alpha _ i \vert \le k\exp(-di)$.
  • $(8\star)$: This is using the geometric series formula.
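To see this exponential decay in practice, here is a small illustration (not part of the notes); the test function $f(x) = 1/(2 - x)$, which is analytic on $[-1, 1]$ and has exact integral $\ln 3$, is an arbitrary choice.

```python
import numpy as np

f = lambda x: 1.0 / (2.0 - x)            # analytic on [-1, 1]
I = np.log(3.0)                          # exact value of int_{-1}^{1} 1/(2 - x) dx

for m in range(2, 17, 2):
    x, w = np.polynomial.legendre.leggauss(m)    # m-point Gauss-Legendre rule
    print(m, abs(I - w @ f(x)))                  # error decays roughly like exp(-c m)
```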

Suppose you are using Newton-Cotes quadrature over the interval $[-1, 1]$. How are the points at which you evaluate $f$ chosen?


They are equally spaced in $[-1, 1]$.

Suppose:

  • $f : [a, b] \to \mathbb R$, with $f \in L _ w^2(a, b)$
  • $f^{(2n+2)}$ is continuous on $(a, b)$
  • $I = \int^b _ a w(x) f(x) \text dx$
  • $I _ n = \sum^n _ {i = 0} w _ i f(x _ i)$ is the Gauss quadrature approximation of $I$ in $L^2 _ w(a, b)$

Can you state a theorem giving the error in Gauss quadrature?


There exists $\xi \in (a, b)$ such that

\[I - I_n = \frac{f^{(2n+2)} (\xi)}{(2n+2)!} \int^b_a w(x) \prod^n_{i = 0} (x - x_i)^2 \text dx\]

Suppose:

  • $f : [a, b] \to \mathbb R$, with $f \in L _ w^2(a, b)$
  • $f^{(2n+2)}$ is continuous on $(a, b)$
  • $I = \int^b _ a w(x) f(x) \text dx$
  • $I _ n = \sum^n _ {i = 0} w _ i f(x _ i)$ is the Gauss quadrature approximation of $I$ in $L^2 _ w(a, b)$

Quickly prove that then there exists $\xi \in (a, b)$ such that

\[I - I_n = \frac{f^{(2n+2)} (\xi)}{(2n+2)!} \int^b_a w(x) \prod^n_{i = 0} (x - x_i)^2 \text dx\]

Overall idea: approximate $f$ by the Hermite interpolating polynomial $p _ {2n+1}$. Since $p _ {2n+1}$ is integrated exactly by the quadrature rule, and we have an expression for the error $f - p _ {2n+1}$, we can combine these to get an error estimate for the quadrature.

Let $p _ {2n+1}$ be the Hermite interpolating polynomial for $f$ at the nodes being used (i.e. the roots of the $(n+1)$st orthogonal polynomial $\phi _ {n+1}$ in $L^2 _ w(a, b)$), so that $p _ {2n+1}(x _ i) = f(x _ i)$ and $p _ {2n+1}'(x _ i) = f'(x _ i)$ for each $x _ i$.

Then the error in the Hermite interpolation is

\[f(x) - p_{2n+1}(x) = e(x) = \frac{f^{(2n+2)}(\xi_x)}{(2n+2)!}\prod^n_{i = 0}(x-x_i)^2\]

for some $\xi _ x \in (a, b)$. Since $p _ {2n+1} \in \Pi _ {2n+1}$, the Gauss quadrature is exact for $p _ {2n+1}$, so

\[\begin{aligned} \int^b_a w(x) p_{2n+1} (x) \text dx &= \sum^n_{i = 0} w_i p_{2n+1}(x_i) \\\\ &= \sum^n_{i = 0} w_i f(x_i) && (\text{by defn. of }p_{2n+1}) \end{aligned}\]

Then:

\[\begin{aligned} I - I_n &= \int^b_a w(x) f(x) \text dx - \sum^n_{i = 0} w_i f(x_i) \\\\ &= \int^b_a w(x) (f(x) - p_{2n+1}(x)) \text dx \\\\ &= \int^b_a w(x) \left(\frac{f^{(2n+2)}(\xi_x)}{(2n+2)!}\prod^n_{i = 0}(x-x_i)^2\right) \text dx \\\\ &= \int^b_a \left( w(x) \prod^n_{i = 0}(x-x_i)^2\right) \frac{f^{(2n+2)}(\xi_x)}{(2n+2)!} \text dx \\\\ &= \frac{f^{(2n+2)}(\xi)}{(2n+2)!} \int^b_a \left( w(x) \prod^n_{i = 0}(x-x_i)^2\right) \text dx \end{aligned}\]

for some $\xi \in (a, b)$. The last equality is justified by the mean value theorem for integrals, which says:

If $f : [a, b] \to \mathbb R$ is continuous and $g$ is an integrable function that does not change sign on $[a, b]$, then $\exists \xi \in [a, b]$ such that

\[\int^b _ a f(x) g(x)\text dx = f(\xi) \int^b _ a g(x) \text dx\]

and the fact that $\left( w(x) \prod^n _ {i = 0}(x-x _ i)^2\right)$ is nonnegative on $[a, b]$.
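One way to sanity-check the formula (not from the notes): if $f$ is itself a polynomial of degree $2n + 2$, then $f^{(2n+2)}$ is constant and the $\xi$-dependence disappears, so both sides can be computed exactly. The sketch below does this for the arbitrary choices $n = 1$, $w(x) = 1$ on $[-1, 1]$, and $f(x) = x^4$, where the right-hand side works out to $\frac{24}{4!}\int^1 _ {-1}(x^2 - \tfrac13)^2 \text dx = \tfrac{8}{45}$.

```python
import numpy as np

# Two-point Gauss-Legendre rule (n = 1): nodes +-1/sqrt(3), weights 1
x, w = np.polynomial.legendre.leggauss(2)

f = lambda t: t**4                  # f^{(4)} = 24 is constant, so xi plays no role
I = 2.0 / 5.0                       # exact integral of x^4 over [-1, 1]
I_n = w @ f(x)                      # quadrature value, equals 2/9

print(I - I_n, 8.0 / 45.0)          # both sides of the error formula: ~0.17777...
```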

Suppose you have the quadrature rule:

\[\int^1_{-1} f(x) \text dx \simeq \sum_{i = 0}^n w_i f(x_i)\]

but you want to actually integrate $f$ over $[a, b]$. How can you do this, and what’s the resulting formula in general?


The map $x \mapsto (2x - a - b)/(b-a)$ maps $[a, b]$ to $[-1, 1]$, so performing the change of variables gives

\[\begin{aligned} \int^b_a f(x) \text dx &= \int^1_{-1} f\left( \frac{(b-a)t + b + a}{2} \right) \frac{b-a}{2} \text dt \\\\ &\simeq \frac{b-a}{2} \sum^n_{j = 0} w_j f\left(\frac{b-a}{2}t_j + \frac{b+a}{2}\right) \end{aligned}\]

where $t _ j$ are the nodes in $[-1, 1]$.
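A minimal sketch of this rescaling (not from the notes), again using NumPy's Gauss-Legendre nodes and weights on $[-1, 1]$; the test integral $\int^2 _ 0 e^x \text dx = e^2 - 1$ is an arbitrary choice.

```python
import numpy as np

def gauss_on_interval(f, a, b, m):
    """Apply an m-point Gauss-Legendre rule for [-1, 1] to int_a^b f(x) dx."""
    t, w = np.polynomial.legendre.leggauss(m)
    x = 0.5 * (b - a) * t + 0.5 * (b + a)    # map nodes t_j from [-1, 1] to [a, b]
    return 0.5 * (b - a) * (w @ f(x))        # scale by the Jacobian (b - a)/2

print(gauss_on_interval(np.exp, 0.0, 2.0, 5), np.exp(2.0) - 1.0)
```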

What’s the formula for Simpson’s rule for approximating

\[\int^b_{a} f(x) \text dx\]

and what is the idea behind its derivation?


\[\int^b_{a} f(x) \text dx \simeq \frac{b-a}{6} \left[ f(a) + 4f\left(\frac{a + b}{2}\right) + f(b) \right]\]

Derivation: On the interval $[-1, 1]$, approximate $f$ by the degree-$2$ interpolating polynomial through the points $-1, 0, 1$ and integrate that exactly. A change of variables then handles integration over an arbitrary interval.
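A one-line Python version of the rule (an illustrative sketch, not from the notes); since Simpson's rule is exact for cubics, the example checks it against $\int^1 _ 0 x^3 \text dx = 1/4$.

```python
def simpson(f, a, b):
    """Simpson's rule on a single interval [a, b]."""
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

print(simpson(lambda x: x**3, 0.0, 1.0))   # 0.25, exact since the rule handles cubics
```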

What’s the formula for the trapezium rule for approximating

\[\int^b_{a} f(x) \text dx\]

and what is the idea behind its derivation?


\[\int^b_{a} f(x) \text dx \simeq \frac{b-a}{2} \left[ f(a) + f(b) \right]\]

Derivation: The “easy way” is to note that you are approximating the area under the graph by a trapezium whose parallel sides have heights $f(a)$ and $f(b)$. The more sophisticated way: on the interval $[-1, 1]$, approximate $f$ by the linear interpolant through the points $-1$ and $1$, integrate this exactly, and then apply a change of variables.
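And the analogous sketch for the trapezium rule (not from the notes); it is exact for linear functions, so the example checks it against $\int^2 _ 0 (3x + 1) \text dx = 8$.

```python
def trapezium(f, a, b):
    """Trapezium rule on a single interval [a, b]."""
    return (b - a) / 2.0 * (f(a) + f(b))

print(trapezium(lambda x: 3.0 * x + 1.0, 0.0, 2.0))   # 8.0, exact for linear f
```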



