August 03, 2020
August 8th, 2020
The complex step method for numerical differentiation
Some years ago I came across this idea while browsing a now-forgotten computing magazine, and thought I would pass it on. The reference is a 1967 paper by Lyness and Moler, available at epubs. For further details and implementations in various languages see Martins et al. (2003); Moler's blog sets the scene.

Start with an analytic function f, a point x0 on the real axis, and a parameter h > 0. Expanding f in a Taylor series off the x-axis gives f(x0 + ih) = f(x0) + ih f'(x0) - (h^2/2) f''(x0) + ..., and taking the imaginary part of both sides yields

    f'(x0) = Im(f(x0 + ih))/h + O(h^2),

an approximation accurate to order h^2 that is not subject to subtractive cancellation errors! An example is given in the R code below, for which f(x) = e^x/(sin^3(x) + cos^3(x)), -pi/4 <= x <= pi/2, and x0 = pi/4. The results are compared with the central finite-difference formula

    f'(x0) ~ (f(x0 + h) - f(x0 - h))/(2h).

As advertised, the complex step method reaches the full double-precision value once h = 10^-8, while the finite-difference method never does; it also fails dramatically as h decreases to 0. Happy exploring!
```r
moler <- function() {
  Fd <- matrix(0, nrow = 16, ncol = 2)
  f   <- function(x) exp(x) / (sin(x)^3 + cos(x)^3)
  # Complex step: Im(f(x0 + ih))/h
  fcs <- function(x, h) Im(f(x + 1i * h)) / h
  # Central finite difference: (f(x0 + h) - f(x0 - h))/(2h)
  fdf <- function(x, h) (f(x + h) - f(x - h)) / (2 * h)
  x0 <- pi / 4
  h  <- 10^(-seq(1, 16))   # h = 10^-1, ..., 10^-16
  Fd[, 1] <- fcs(x0, h)
  Fd[, 2] <- fdf(x0, h)
  return(Fd)
}
```
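For readers who prefer Python, the same comparison can be sketched with the standard-library `cmath` module, which evaluates exp, sin, and cos at complex arguments. The function names `complex_step` and `central_difference` are my own, not part of the original code:

```python
import cmath


def complex_step(f, x0, h=1e-8):
    """Approximate f'(x0) via the complex step method: Im(f(x0 + ih))/h."""
    return f(x0 + 1j * h).imag / h


def central_difference(f, x0, h):
    """Classical central finite difference, subject to cancellation as h shrinks."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)


def f(x):
    """Moler's test function; uses cmath so it accepts complex arguments."""
    return cmath.exp(x) / (cmath.sin(x) ** 3 + cmath.cos(x) ** 3)
```

Evaluating `complex_step(f, math.pi/4)` reproduces the double-precision derivative from the table below, while `central_difference` stalls several digits short at every h.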
h                   complex step        finite difference
0.100000000000000   3.14427604063456    3.06151186656812
0.010000000000000   3.10218007541127    3.10135293765588
0.001000000000000   3.10177052953585    3.10176225815861
0.000100000000000   3.10176643519294    3.10176635248460
0.000010000000000   3.10176639424962    3.10176639335413
0.000001000000000   3.10176639384019    3.10176639350956
0.000000100000000   3.10176639383609    3.10176639706228
0.000000010000000   3.10176639383605    3.10176644369164
0.000000001000000   3.10176639383605    3.10176595519351
0.000000000100000   3.10176639383605    3.10176329065825
0.000000000010000   3.10176639383605    3.10182990403973
0.000000000001000   3.10176639383605    3.10196313080269
0.000000000000100   3.10176639383605    3.09752223870419
0.000000000000010   3.10176639383605    3.06421554796543
0.000000000000001   3.10176639383605    3.10862446895044
0.000000000000000   3.10176639383605    8.88178419700125

Simon Tavaré
Herbert and Florence Irving Director, Herbert and Florence Irving Institute for Cancer Dynamics
Meet Our Associate Members

In this monthly newsletter, we will regularly feature the many talented members of our Institute.

Ivan Corwin is a professor of mathematics and a member of the Program for Mathematical Genomics, the Data Science Institute, and the Columbia Quantum Initiative at Columbia University. Ivan also serves on the scientific advisory board of the NSF-funded Institute for Computational and Experimental Research in Mathematics (ICERM). He studies aspects of probability and mathematical physics, including random interface growth, interacting particle systems, random matrix theory, and stochastic partial differential equations. Ivan leads the Probability and Society group, part of the Columbia Science Initiative, and is also interested in applications of probability in the biological sciences, for example random matrix theory for the analysis of single-cell RNA-seq data.

Andrew Yates is a Professor of Pathology & Cell Biology and a member of the Data Science Institute and the Program for Mathematical Genomics at Columbia University. Andy is a quantitative immunologist whose research integrates theoretical and computational tools with diverse experimental approaches to study the dynamics of lymphocytes, the white blood cells that are key components of vertebrate immune systems. Andy is helping the IICD develop its research profile in immunology.