In mathematics, the Runge–Kutta–Fehlberg method (or Fehlberg method) is an algorithm in numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is based on the large class of Runge–Kutta methods.
a (so that h = b − a {\displaystyle h=b-a}), this theory facilitates our controllable integration of the ODE from point a {\displaystyle a} to b {\displaystyle b}, using an optimal number of steps given a local error tolerance. A drawback is that the step size may become prohibitively small, especially when using the low-order Euler method. Similar methods can be developed for higher-order methods, such as
a lower-order RK method ψ ~ {\displaystyle {\tilde {\psi }}}. The error can then be written simply as the difference of the two solutions, where err n {\displaystyle {\textrm {err}}_{n}} is the unnormalized error. To normalize it, we compare it against a user-defined tolerance, which consists of an absolute tolerance and a relative tolerance: Then we compare
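This normalization step can be sketched as follows; the per-component scale atol + rtol·|y| and the RMS norm follow the convention in Hairer's textbook, and all names here are illustrative rather than part of the method's definition:

```python
import math

def normalized_error(err, y_old, y_new, atol=1e-6, rtol=1e-3):
    # Per-component scale combining the absolute and relative tolerance.
    sc = [atol + rtol * max(abs(a), abs(b)) for a, b in zip(y_old, y_new)]
    # RMS norm of the scaled error; the step is accepted when the
    # normalized error E_n <= 1.
    return math.sqrt(sum((e / s) ** 2 for e, s in zip(err, sc)) / len(err))
```

With this scaling, a single number E_n summarizes the per-component errors, and comparing it against 1 decides acceptance regardless of the magnitude of the solution components.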
a satellite about the Earth as a standard Kepler orbit, a fixed time-stepping method such as the Euler method may be sufficient. However, things are more difficult if one wishes to model the motion of a spacecraft taking into account both the Earth and the Moon, as in the three-body problem. There, scenarios emerge where one can take large time steps when the spacecraft is far from the Earth and Moon, but if
a sense that it enlarges the step if the estimated local error is smaller than the tolerance and shrinks the step otherwise. The description given above is a simplified version of the procedure used in stepsize control for explicit RK solvers; a more detailed treatment can be found in Hairer's textbook. The ODE solver in many programming languages uses this procedure as the default strategy for adaptive stepsize control, adding other engineering parameters to make it more robust.
a solution to a system of n differential equations of the form: d y i d x = f i ( x , y 1 , y 2 , … , y n ) , i = 1 , 2 , … , n {\displaystyle {\frac {dy_{i}}{dx}}=f_{i}(x,y_{1},y_{2},\ldots ,y_{n}),\quad i=1,2,\ldots ,n} to iteratively solve for y i ( x + h ) , i = 1 , 2 , … , n {\displaystyle y_{i}(x+h),i=1,2,\ldots ,n} where h
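Advancing all components of such a system at once can be sketched as below; the classical 4th-order Runge–Kutta step is used here only as a familiar stand-in for one step of the method, and the harmonic-oscillator right-hand side is purely illustrative:

```python
import math

def rk4_step(f, x, y, h):
    # One classical 4th-order Runge-Kutta step for a system: f maps
    # (x, y) to the list of derivatives f_i, and every component
    # y_i(x) is advanced to y_i(x + h) at once.
    add = lambda u, v, c: [ui + c * vi for ui, vi in zip(u, v)]
    k1 = f(x, y)
    k2 = f(x + h / 2, add(y, k1, h / 2))
    k3 = f(x + h / 2, add(y, k2, h / 2))
    k4 = f(x + h, add(y, k3, h))
    return [yi + h * (a + 2 * b + 2 * c + d) / 6
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Illustrative system: y1' = y2, y2' = -y1 (a harmonic oscillator,
# whose exact solution from (1, 0) is (cos x, -sin x)).
oscillator = lambda x, y: [y[1], -y[0]]
```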
is allowed, we could let h evolve accordingly: the 0.9 {\displaystyle 0.9} is a safety factor to ensure success on the next try, and the minimum and maximum bounds prevent extreme changes from the previous stepsize. This should, in principle, give an error of about 0.9 × tol {\displaystyle 0.9\times {\text{tol}}} in the next try. If | τ n + 1 ( 1 ) | < tol {\displaystyle |\tau _{n+1}^{(1)}|<{\text{tol}}}, we consider
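A sketch of such a stepsize update is below. The clamp values fac_min and fac_max are illustrative choices (not prescribed by the text), and the exponent 1/2 matches the O(h²) local error estimate of the Euler-based scheme discussed here:

```python
def next_stepsize(h, err, tol, safety=0.9, fac_min=0.3, fac_max=2.0):
    # Grow the step when err < tol, shrink it otherwise; the safety
    # factor aims the next error at about 0.9 * tol.  fac_min and
    # fac_max bound the change relative to the previous stepsize.
    fac = safety * (tol / err) ** 0.5
    return h * min(fac_max, max(fac_min, fac))
```

For example, when the estimated error exactly equals the tolerance, the step shrinks by the safety factor alone; when the error is wildly too large or too small, the clamps keep the new step within a bounded factor of the old one.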
is an adaptive stepsize to be determined algorithmically: The solution is the weighted average of six increments, where each increment is the product of the size of the interval, h {\textstyle h}, and an estimated slope specified by the function f on the right-hand side of the differential equation. {\displaystyle {\begin{aligned}k_{1}&=h\cdot f(x+A(1)\cdot h,y)\\k_{2}&=h\cdot f(x+A(2)\cdot h,y+B(2,1)\cdot k_{1})\\k_{3}&=h\cdot f(x+A(3)\cdot h,y+B(3,1)\cdot k_{1}+B(3,2)\cdot k_{2})\\k_{4}&=h\cdot f(x+A(4)\cdot h,y+B(4,1)\cdot k_{1}+B(4,2)\cdot k_{2}+B(4,3)\cdot k_{3})\\k_{5}&=h\cdot f(x+A(5)\cdot h,y+B(5,1)\cdot k_{1}+B(5,2)\cdot k_{2}+B(5,3)\cdot k_{3}+B(5,4)\cdot k_{4})\\k_{6}&=h\cdot f(x+A(6)\cdot h,y+B(6,1)\cdot k_{1}+B(6,2)\cdot k_{2}+B(6,3)\cdot k_{3}+B(6,4)\cdot k_{4}+B(6,5)\cdot k_{5})\end{aligned}}} Then
The novelty of Fehlberg's method is that it is an embedded method from the Runge–Kutta family, meaning that identical function evaluations are used in conjunction with each other to create methods of varying order and similar error constants. The method presented in Fehlberg's 1969 paper has been dubbed
is proportional to y ( 3 ) ( t ) {\displaystyle y^{(3)}(t)}. Subtracting the two solutions gives the error estimate: This local error estimate is third-order accurate. The local error estimate can be used to decide how the stepsize h {\displaystyle h} should be modified to achieve the desired accuracy. For example, if a local tolerance of tol {\displaystyle {\text{tol}}}
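The step-doubling comparison behind this estimate can be sketched as follows (all names illustrative); one full Euler step is compared against two half steps, and their difference estimates the local error of the more accurate two-half-step solution:

```python
def euler_step(f, t, y, h):
    # One forward Euler step.
    return y + h * f(t, y)

def double_step_estimate(f, t, y, h):
    # One full step of size h ...
    y_big = euler_step(f, t, y, h)
    # ... versus two half steps of size h/2.
    y_half = euler_step(f, t, y, h / 2)
    y_small = euler_step(f, t + h / 2, y_half, h / 2)
    # Subtracting the two numerical solutions gives an estimate of the
    # local error of y_small (assuming the error factor c is constant).
    return y_small, y_small - y_big
```

For y' = y from y(0) = 1 with h = 0.1, this yields y_small = 1.1025 with an error estimate of 0.0025, close to the true local error of about 0.00267.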
is the error in the numerical solution. For a sequence ( t n ) of values of t, with t n = a + nh, the Euler method gives approximations y n + 1 = y n + h f ( t n , y n ) {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n})} to the corresponding values of y ( t n ). The local truncation error of this approximation is defined as the error committed in a single step, and by Taylor's theorem it can be shown that (provided f is sufficiently smooth) the local truncation error is proportional to
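This proportionality to h² can be checked numerically; the sketch below assumes the test equation y' = y with y(0) = 1, whose exact one-step value is e^h:

```python
import math

def euler_local_error(h):
    # One Euler step for y' = y, y(0) = 1; the exact value is e^h.
    y1 = 1.0 + h * 1.0
    return abs(math.exp(h) - y1)

# Halving the step size should roughly quarter the one-step error,
# consistent with a local truncation error proportional to h^2.
ratio = euler_local_error(0.1) / euler_local_error(0.05)
```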
is used in some methods for the numerical solution of ordinary differential equations (including the special case of numerical integration) in order to control the errors of the method and to ensure stability properties such as A-stability. Using an adaptive stepsize is of particular importance when there is a large variation in the size of the derivative. For example, when modeling the motion of
the RKF45 method; it is a method of order O(h^4) with an error estimator of order O(h^5). By performing one extra calculation, the error in the solution can be estimated and controlled by using the higher-order embedded method, which allows an adaptive stepsize to be determined automatically. Any Runge–Kutta method is uniquely identified by its Butcher tableau. For the embedded pair proposed by Fehlberg: The first row of coefficients at
the 4th-order Runge–Kutta method. Also, a global error tolerance can be achieved by scaling the local error to global scope. Adaptive stepsize methods that use a so-called 'embedded' error estimate include the Bogacki–Shampine, Runge–Kutta–Fehlberg, Cash–Karp and Dormand–Prince methods. These methods are considered to be more computationally efficient, but have lower accuracy in their error estimates. To illustrate
the bottom of the table gives the fifth-order accurate method, and the second row gives the fourth-order accurate method. The coefficients found by Fehlberg for Formula 1 (derivation with his parameter α 2 = 1/3) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages: Fehlberg outlines
the completion of the step, a new stepsize is calculated: {\displaystyle h_{\text{new}}=0.9\cdot h\cdot \left({\frac {\varepsilon }{TE}}\right)^{1/5}} If T E > ε {\textstyle \mathrm {TE} >\varepsilon }, then replace h {\textstyle h} with h new {\textstyle h_{\text{new}}} and repeat
the ideas of the embedded method, consider the following scheme which updates y n {\displaystyle y_{n}}: The next step h n {\displaystyle h_{n}} is predicted from the previous information h n = g ( t n , y n , h n − 1 ) {\displaystyle h_{n}=g(t_{n},y_{n},h_{n-1})}. For an embedded RK method, computation of ψ {\displaystyle \psi } includes
the initial value problem where y and f may denote vectors (in which case this equation represents a system of coupled ODEs in several variables). We are given the function f ( t , y ) and the initial conditions ( a , y a ), and we are interested in finding the solution at t = b. Let y ( b ) denote the exact solution at b, and let y b denote the solution that we compute. We write y b + ε = y ( b ) {\displaystyle y_{b}+\varepsilon =y(b)}, where ε {\displaystyle \varepsilon }
the new step size to be one half of the original step size, and apply two steps of Euler's method. This second solution is presumably more accurate. Since we have to apply Euler's method twice, the local error is (in the worst case) twice the original error. Here, we assume the error factor c {\displaystyle c} is constant over the interval [ t , t + h ] {\displaystyle [t,t+h]}. In reality its rate of change
the normalized error E n {\displaystyle E_{n}} against 1 to get the predicted h n {\displaystyle h_{n}}: The parameter q is the order corresponding to the lower-order RK method ψ ~ {\displaystyle {\tilde {\psi }}}. The above prediction formula is plausible in
the spacecraft gets close to colliding with one of the planetary bodies, then small time steps are needed. Romberg's method and Runge–Kutta–Fehlberg are examples of numerical integration methods which use an adaptive stepsize. For simplicity, the following example uses the simplest integration method, the Euler method; in practice, higher-order methods such as Runge–Kutta methods are preferred due to their superior convergence and stability properties. Consider
the square of the step size, c h 2 {\displaystyle ch^{2}}, where c is some constant of proportionality. We have marked this solution and its error with a ( 0 ) {\displaystyle (0)}. The value of c is not known to us. Let us now apply Euler's method again with a different step size to generate a second approximation to y ( t n +1 ). We get a second solution, which we label with a ( 1 ) {\displaystyle (1)}. Take
the step successful, and the error estimate is used to improve the solution: This solution is actually third-order accurate in the local scope (second-order in the global scope), but since there is no error estimate for it, this doesn't help in reducing the number of steps. This technique is called Richardson extrapolation. Beginning with an initial stepsize of h = b −
the step. If T E ⩽ ε {\textstyle TE\leqslant \varepsilon }, then the step is completed; replace h {\textstyle h} with h new {\textstyle h_{\text{new}}} for the next step. The coefficients found by Fehlberg for Formula 2 (derivation with his parameter α 2 = 3/8) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages: In another table in Fehlberg, coefficients for an RKF4(5) derived by D. Sarafyan are given:

Adaptive stepsize

In mathematics and numerical analysis, an adaptive step size
the truncation error is: {\displaystyle \mathrm {TE} =\left|\mathrm {CT} (1)\cdot k_{1}+\mathrm {CT} (2)\cdot k_{2}+\mathrm {CT} (3)\cdot k_{3}+\mathrm {CT} (4)\cdot k_{4}+\mathrm {CT} (5)\cdot k_{5}+\mathrm {CT} (6)\cdot k_{6}\right|} At
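The six increments, the weighted average, the truncation-error estimate, and the stepsize formula can be assembled into a single step as sketched below. The tableau values are the widely published standard Fehlberg RKF4(5) coefficients; this is an illustrative sketch under that assumption, not Fehlberg's original formulation, and the fallback when TE is exactly zero is an arbitrary guard:

```python
import math

# Standard Fehlberg RKF4(5) tableau: A holds the nodes, B the stage
# weights, CH the fifth-order weights, C4 the embedded fourth-order
# weights (so CT(i) = CH(i) - C4(i)).
A = [0, 1/4, 3/8, 12/13, 1, 1/2]
B = [[],
     [1/4],
     [3/32, 9/32],
     [1932/2197, -7200/2197, 7296/2197],
     [439/216, -8, 3680/513, -845/4104],
     [-8/27, 2, -3544/2565, 1859/4104, -11/40]]
CH = [16/135, 0, 6656/12825, 28561/56430, -9/50, 2/55]
C4 = [25/216, 0, 1408/2565, 2197/4104, -1/5, 0]

def rkf45_step(f, x, y, h, eps):
    # Build the six increments k_1..k_6 of the text.
    k = []
    for i in range(6):
        yi = y + sum(b * kj for b, kj in zip(B[i], k))
        k.append(h * f(x + A[i] * h, yi))
    # Weighted average: the fifth-order accurate solution.
    y_new = y + sum(c * ki for c, ki in zip(CH, k))
    # Truncation error TE = |sum CT(i) * k_i|.
    te = abs(sum((c5 - c4) * ki for c5, c4, ki in zip(CH, C4, k)))
    # New stepsize; if te > eps the step should be redone with h_new.
    h_new = 0.9 * h * (eps / te) ** 0.2 if te > 0 else 2 * h
    return y_new, te, h_new
```

A caller would loop: take a step, and if the returned TE exceeds ε, retry from the same point with h_new; otherwise accept y_new and continue with h_new.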
the weighted average is: {\displaystyle y(x+h)=y(x)+CH(1)\cdot k_{1}+CH(2)\cdot k_{2}+CH(3)\cdot k_{3}+CH(4)\cdot k_{4}+CH(5)\cdot k_{5}+CH(6)\cdot k_{6}} The estimate of