Numerical Analysis HT24, Misc
Flashcards
matrix-multiplication-flop-count
What is the exact flop count for the simple method of multiplying an $(m \times n)$ matrix by an $(n \times \ell)$ matrix, and as a special case, what is it for multiplying an $m \times n$ matrix by a vector?
\(m(2n-1)\ell\). For a matrix-vector product $Av$ where $A$ is $m \times n$ and $v$ is $n \times 1$ (the case $\ell = 1$), this reduces to $m(2n-1)$.
Quick justification:
- $m\ell$ elements in the final matrix
- Each one requires $n$ multiplications followed by $n-1$ additions (one fewer addition than multiplications, since summing $n$ products needs only $n-1$ additions, rather than adding each of the $n$ products to a running total that starts at $0$)
- So $m\ell (2n-1)$ total
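The counting argument above can be checked directly: a minimal sketch (not from the notes) that performs the naive triple-loop multiplication while tallying every multiplication and addition as it happens.

```python
def matmul_count(A, B):
    """Naive matrix multiply that also counts flops.

    A is m x n, B is n x l, both as lists of lists.
    Returns (C, flops) where flops should equal m * l * (2n - 1).
    """
    m, n = len(A), len(A[0])
    l = len(B[0])
    C = [[0] * l for _ in range(m)]
    flops = 0
    for i in range(m):
        for j in range(l):
            s = A[i][0] * B[0][j]  # first product: 1 multiplication, no addition yet
            flops += 1
            for k in range(1, n):
                s += A[i][k] * B[k][j]  # 1 multiplication + 1 addition
                flops += 2
            C[i][j] = s
    return C, flops
```

For example, a $2 \times 3$ times $3 \times 2$ product reports $2 \cdot 2 \cdot (2 \cdot 3 - 1) = 20$ flops, matching the formula.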
Bite-sized
Leading-term flop counts: matrix-vector $Ax$ ($A \in \mathbb R^{m \times n}$) costs $2mn$ flops; matrix-matrix $AB$ ($A$ $m \times n$, $B$ $n \times k$) costs $2mnk$ flops. For square $n \times n$ matrices that’s $2n^3$.
Source: Lecture 1 of NLA MT25, §1.4 “Computational complexity of matrix algorithms” of the lecture notes.
@bite~
bite-leading-term-convention
Why does NLA “only track the leading term” of a flop count?
Lower-order terms are dominated as $n \to \infty$, while the leading constant does matter (e.g. $\tfrac{2}{3} n^3$ for LU vs $\tfrac{4}{3} n^3$ for Householder QR: a factor-of-2 difference is consequential), so the leading term is tracked exactly, constant included. Sub-leading corrections ($O(n^2)$ or $O(n \log n)$ terms) are simply absorbed into big-Oh.
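To see how quickly the leading term takes over, here is a small sketch (an illustration, not from the notes) that counts the exact flops of LU factorization without pivoting by walking its loop structure, then compares against the leading term $\tfrac{2}{3} n^3$.

```python
def lu_flops(n):
    """Exact flop count of LU factorization (no pivoting) on an n x n matrix,
    obtained by replaying the loop structure of the elimination."""
    flops = 0
    for k in range(n - 1):          # eliminate below pivot k
        for i in range(k + 1, n):
            flops += 1              # division to form the multiplier l_ik
            for j in range(k + 1, n):
                flops += 2          # one multiply + one subtract per updated entry
    return flops

# Ratio of the exact count to the leading term 2/3 * n^3:
# it approaches 1 as n grows, which is why only the leading term is kept.
for n in (10, 100, 1000):
    print(n, lu_flops(n) / ((2 / 3) * n**3))
```

Already at $n = 100$ the exact count is within about 1% of $\tfrac{2}{3} n^3$; the $O(n^2)$ correction is negligible.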
Source: Lecture 1 of NLA MT25, §1.4 of the lecture notes.
@bite~