Implementing a derivative in C/C++

How is the derivative of a function f(x) typically calculated programmatically to ensure maximum accuracy?

I am implementing the Newton-Raphson method, and it requires taking the derivative of a function.

13.10.2009 11:40:35
Show us what you've done so far.
Lazarus 13.10.2009 11:42:23
do you want me to get fired :)?
vehomzzz 13.10.2009 11:48:36
Newton's method accuracy doesn't depend (only) on the accuracy of the derivative, a gross approximation will do too (in the extreme case you get the secant method), but it will be a little bit slower to get the same precision (more iterations needed).
fortran 13.10.2009 12:44:39
8 ANSWERS
SOLUTION

I agree with @erikkallen that (f(x + h) - f(x - h)) / (2 * h) is the usual approach for numerically approximating derivatives. However, getting the right step size h is a little subtle.

The approximation error in (f(x + h) - f(x - h)) / (2 * h) decreases as h gets smaller, which suggests you should take h as small as possible. But as h gets smaller, the error from floating-point subtraction increases, since the numerator requires subtracting nearly equal numbers. If h is too small, you can lose a lot of precision in the subtraction. So in practice you have to pick a not-too-small value of h that minimizes the combination of approximation error and numerical error.

As a rule of thumb, you can try h = SQRT(DBL_EPSILON), where DBL_EPSILON is the smallest double-precision number e such that 1 + e != 1 in machine precision. DBL_EPSILON is about 10^-15, so you could use h = 10^-7 or 10^-8.

For more information, see these notes on choosing the step size for differential equations.
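A minimal C++ sketch of this rule of thumb (folding in two refinements from the comments below: an EPSILON^(1/3)-sized step for the second-order central rule, and dividing by the interval (x+h)-(x-h) as actually rounded):

    #include <algorithm>  // std::max
    #include <cfloat>     // DBL_EPSILON
    #include <cmath>      // std::cbrt, std::fabs

    template <typename F>
    double derivative(F f, double x) {
        // Step size ~ EPSILON^(1/3), scaled so it stays sensible for large |x|.
        double h = std::cbrt(DBL_EPSILON) * std::max(std::fabs(x), 1.0);
        double xph = x + h;  // upper evaluation point, after rounding
        double xmh = x - h;  // lower evaluation point, after rounding
        return (f(xph) - f(xmh)) / (xph - xmh);
    }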

60
20.06.2018 19:40:01
I think your rule of thumb assumes you use a first-order rule to approximate the derivative. However, the central difference rule you mention is second order, and the corresponding rule of thumb is h = EPSILON^(1/3) which is approximately 10^(-5) when using double precision.
Jitse Niesen 13.10.2009 13:05:05
I think the accuracy can be improved a little by dividing by (x+h)-(x-h) instead of 2h. It's mathematically equivalent but not numerically.
sellibitze 13.10.2009 13:10:25
Did you instead mean: "DBL_EPSILON is the smallest double precision number e such that 1 + e != 1 in machine precision"?
yves Baumes 13.10.2009 15:58:32
Choosing h depends on the derivative f'''(x).
Alexey Malistov 13.10.2009 16:10:23
+1 great answer. I would surely have overlooked that very small values of h magnify floating point errors. Thanks, I learned something today.
MAK 13.10.2009 19:20:33
fprime(x) = (f(x+dx) - f(x-dx)) / (2*dx)

for some small dx.
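A literal rendering of that formula in C (choosing dx well is the subtle part; see the accepted answer):

    // Central difference; a good dx is discussed in the accepted answer.
    double fprime(double (*f)(double), double x, double dx) {
        return (f(x + dx) - f(x - dx)) / (2.0 * dx);
    }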

5
13.10.2009 11:48:38
Numerical Recipes has some comments on that: books.google.co.uk/…
user151019 13.10.2009 12:31:34

Newton-Raphson assumes that you have two functions: f(x) and its derivative f'(x). If you do not have the derivative available as a function and have to estimate it from the original function, then you should use another root-finding algorithm.

The Wikipedia article on root finding gives several suggestions, as would any numerical analysis text.
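One such derivative-free alternative is the secant method: the Newton update with f'(x) replaced by the slope through the two most recent iterates. A minimal sketch (x0 and x1 are two starting guesses; the tolerance and iteration cap are illustrative assumptions):

    #include <cmath>

    double secant(double (*f)(double), double x0, double x1,
                  double tol = 1e-12, int max_iter = 100) {
        double f0 = f(x0);
        double f1 = f(x1);
        for (int i = 0; i < max_iter && std::fabs(f1) > tol; ++i) {
            double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);  // secant step
            x0 = x1; f0 = f1;                             // shift the pair
            x1 = x2; f1 = f(x1);
        }
        return x1;
    }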

10
13.10.2009 12:01:50

What do you know about f(x)? If all you have is a black box f, the only thing you can do is numerically approximate the derivative. But the accuracy is usually not that great.

You can do much better if you can touch the code that computes f. Try "automatic differentiation". There are some nice libraries for that. With a bit of library magic, you can easily convert your function into something that automatically computes the derivative. For a simple C++ example, see the source code in this discussion in German.
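To illustrate the idea (this is not the code from the linked discussion, just a minimal forward-mode sketch using dual numbers): every value carries its derivative alongside it, and overloaded operators propagate both through the computation.

    #include <iostream>

    struct Dual {
        double val;  // function value
        double der;  // derivative with respect to x
    };

    Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.der + b.der}; }
    Dual operator*(Dual a, Dual b) {
        return {a.val * b.val, a.der * b.val + a.val * b.der};  // product rule
    }

    int main() {
        Dual x{3.0, 1.0};   // seed: dx/dx = 1
        Dual y = x * x + x; // f(x) = x^2 + x, so f'(x) = 2x + 1
        std::cout << y.val << " " << y.der << "\n";  // prints 12 7
        return 0;
    }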

5
13.10.2009 12:58:16

In addition to John D. Cook's answer above, it is important to take into account not only the floating-point precision but also the robustness of the function f(x). For instance, in finance it is common for f(x) to actually be a Monte Carlo simulation, so the value of f(x) has some noise. Using a very small step size can in these cases severely degrade the accuracy of the derivative.

3
13.10.2009 13:27:31

There are two common difference formulas:

f'(x) ≈ (f(x+h) - f(x)) / h

f'(x) ≈ (f(x+h) - f(x-h)) / (2h)

1) First case:

f'(x) = (f(x+h) - f(x)) / h - (h/2) * f''(ξ), for some ξ in (x, x+h)

e — relative rounding error, about 2^{-16} for double and 2^{-7} for float.

We can calculate the total error (approximation error plus rounding error, taking |f(x)| to be about 1):

E(h) = (h/2) * |f''(x)| + 2e/h

Suppose that you are using double floating-point operations. Minimizing E(h) gives the optimal value of h as 2*sqrt(e/f''(x)). You do not know f''(x), but you have to estimate this value. For example, if f''(x) is about 1, then the optimal value of h is 2^{-7}, but if f''(x) is about 10^6, then the optimal value of h is 2^{-17}!

2) Second case:

f'(x) = (f(x+h) - f(x-h)) / (2h) - (h^2/6) * f'''(ξ), for some ξ in (x-h, x+h)

Note that the second approximation error tends to 0 faster than the first one. But if f'''(x) is very large, then the first option is preferable. The total error here is

E(h) = (h^2/6) * |f'''(x)| + e/h, minimized at h = cbrt(3e/f'''(x))

Note that in the first case h is proportional to e^{1/2}, but in the second case h is proportional to e^{1/3}. For double floating-point operations, e^{1/3} is 2^{-5} or 2^{-6}. (I assume that f'''(x) is about 1.)

Which way is better? It is unknown if you do not know f''(x) and f'''(x), or cannot estimate these values. It is believed that the second option is preferable. But if you know that f'''(x) is very large, use the first one.

What is the optimal value of h? Suppose that f''(x) and f'''(x) are about 1, and that we use double floating-point operations. Then in the first case h is about 2^{-7}, and in the second case h is about 2^{-5}. Correct these values if you know f''(x) or f'''(x).
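A quick numerical check of this analysis (a sketch, not from the answer): it uses f = exp at x = 1, so f''(x) and f'''(x) are both about 2.7, and takes the machine epsilon DBL_EPSILON for e (see the comments below on its actual size).

    #include <cfloat>
    #include <cmath>
    #include <cstdio>

    int main() {
        double x = 1.0;
        double exact = std::exp(1.0);                 // f'(1) = e
        double h1 = 2.0 * std::sqrt(DBL_EPSILON);     // optimal h, first case
        double h2 = std::cbrt(3.0 * DBL_EPSILON);     // optimal h, second case
        double fwd = (std::exp(x + h1) - std::exp(x)) / h1;
        double ctr = (std::exp(x + h2) - std::exp(x - h2)) / (2.0 * h2);
        std::printf("forward difference error: %.3e\n", std::fabs(fwd - exact));
        std::printf("central difference error: %.3e\n", std::fabs(ctr - exact));
        return 0;
    }

On a typical IEEE double, the central rule should come out a few orders of magnitude more accurate here, matching the e^{1/2} vs e^{1/3} scaling above.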

9
20.06.2015 18:04:30
Epsilon should be 2^-53 for double and 2^-24 for float (which is about 10^-16 and 10^-7, respectively).
Stephen Canon 13.10.2009 15:50:45
epsilon is the relative rounding error (not absolute). It is always about 10^{-16} for double and 10^-7 for float.
Alexey Malistov 13.10.2009 16:00:47
Yes, I know. In your answer, you say "epsilon -- relative rounding error, about 2^{-16} for double and 2^{-7} for float," which is plainly incorrect. The relative (forward) rounding error is also not always on that scale, but rather the backwards error. The forward error can be much, much larger when cancellation occurs, as is likely to happen here.
Stephen Canon 14.10.2009 00:44:04
kibibu 3.02.2015 23:26:53
The evaluation error is more correctly a multiple of abs(f(x))*eps, where the multiplicity relates to the number of floating point operations in the evaluation of f(x). Thus h~cbrt(abs(f(x)/f'''(x))*eps) for the central difference.
Lutz Lehmann 20.06.2018 19:51:02

You definitely want to take into account John Cook's suggestion for picking h, but you typically don't want to use a centered difference to approximate the derivative. The main reason is that it costs an extra function evaluation. If you use a forward difference instead, i.e.,

f'(x) = (f(x+h) - f(x))/h

then you'll get the value of f(x) for free, because you already need to compute it for Newton's method. This isn't such a big deal when you have a scalar equation, but if x is a vector, then f'(x) is a matrix (the Jacobian), and you'll need n extra function evaluations to approximate it with the centered-difference approach.
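A sketch of Newton's method organized this way (the step-size rule follows the accepted answer's sqrt(epsilon) rule of thumb for the first-order forward difference; the name and stopping tolerance are assumptions):

    #include <cmath>

    double newton(double (*f)(double), double x, int max_iter = 50) {
        for (int i = 0; i < max_iter; ++i) {
            double fx = f(x);                       // needed by Newton anyway
            double h = 1e-8 * (std::fabs(x) > 1.0 ? std::fabs(x) : 1.0);
            double dfx = (f(x + h) - fx) / h;       // one extra evaluation
            double step = fx / dfx;
            x -= step;
            if (std::fabs(step) <= 1e-12 * (1.0 + std::fabs(x)))
                break;                              // converged
        }
        return x;
    }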

5
14.10.2009 01:10:15

Typically, signal noise impacts the derivative quality more than anything else. If you do have noise in your f(x), Savitzky-Golay is an excellent smoothing algorithm that is often used to compute good derivatives. In a nutshell, SG fits a polynomial locally to your data, and this polynomial can then be used to compute the derivative.
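As a flavor of what the filter looks like (a sketch for the common 5-point window with a quadratic local fit on uniformly spaced samples; a general implementation derives the weights for any window size and polynomial order):

    // Savitzky-Golay first-derivative filter, 5-point window, quadratic fit:
    // the least-squares slope at the window center reduces to the fixed
    // weights (-2, -1, 0, 1, 2) / 10, divided by the sample spacing dx.
    double sg_derivative5(const double y[5], double dx) {
        return (-2.0 * y[0] - y[1] + y[3] + 2.0 * y[4]) / (10.0 * dx);
    }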

Paul

1
6.11.2009 16:41:37