
Runge–Kutta–Fehlberg method


In mathematics, the Runge–Kutta–Fehlberg method (or Fehlberg method) is an algorithm in numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is based on the large class of Runge–Kutta methods.


The novelty of Fehlberg's method is that it is an embedded method from the Runge–Kutta family, meaning that identical function evaluations are used in conjunction with each other to create methods of varying order and similar error constants. The method presented in Fehlberg's 1969 paper has been dubbed the RKF45 method, and is a method of order O(h^4) with an error estimator of order O(h^5). By performing one extra calculation,

{\displaystyle {\begin{array}{c|cccc}c_{1}&a_{11}&a_{12}&\dots &a_{1s}\\c_{2}&a_{21}&a_{22}&\dots &a_{2s}\\\vdots &\vdots &\vdots &\ddots &\vdots \\c_{s}&a_{s1}&a_{s2}&\dots &a_{ss}\\\hline &{\bar {b}}_{1}&{\bar {b}}_{2}&\dots &{\bar {b}}_{s}\\&b_{1}&b_{2}&\dots &b_{s}\\\end{array}}={\begin{array}{c|c}\mathbf {c} &\mathbf {A} \\\hline &\mathbf {\bar {b}} ^{\top }\\&\mathbf {b} ^{\top }\end{array}}}. Two fourth-order explicit RKN methods are given by

a Butcher table with the form

a minimum of 9 and 11 stages, respectively. An example of an explicit method of order 6 with 7 stages can be found in Ref. Explicit methods of order 7 with 9 stages and explicit methods of order 8 with 11 stages are also known. See Refs. for a summary. The RK4 method falls in this framework. Its tableau is A slight variation of "the" Runge–Kutta method is also due to Kutta in 1901 and

a step with the higher-order method. During the integration, the step size is adapted such that the estimated error stays below a user-defined threshold: if the error is too high, the step is repeated with a smaller step size; if the error is much smaller than the threshold, the step size is increased to save time. This results in an (almost) optimal step size, which saves computation time. Moreover, the user does not have to spend time finding an appropriate step size. The lower-order step

is: In this family, α = 1/2 gives the midpoint method, α = 1 is Heun's method, and α = 2/3 is Ralston's method. As an example, consider
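The one-parameter family just described can be sketched directly in code. Nothing below is specific to this article beyond α itself; the function name is ours:

```python
def rk2_family_step(f, t, y, h, alpha):
    """One step of the generic two-stage second-order Runge-Kutta method.

    alpha = 1/2 gives the midpoint method, alpha = 1 Heun's method,
    and alpha = 2/3 Ralston's method.
    """
    k1 = f(t, y)
    k2 = f(t + alpha * h, y + alpha * h * k1)
    # Weights b1 = 1 - 1/(2*alpha), b2 = 1/(2*alpha) satisfy the
    # order-2 conditions b1 + b2 = 1 and b2 * c2 = 1/2.
    return y + h * ((1 - 1 / (2 * alpha)) * k1 + k2 / (2 * alpha))
```

For a linear problem such as y′ = y every member of the family produces the same step, y_n(1 + h + h²/2), since all members agree up to second order.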

is Simpson's rule. The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of O(h^5), while the total accumulated error is on the order of O(h^4). In many practical applications the function f

is a generalization of the RK4 method mentioned above. It is given by where To specify a particular method, one needs to provide the integer s (the number of stages) and the coefficients a_ij (for 1 ≤ j < i ≤ s), b_i (for i = 1, 2, ..., s) and c_i (for i = 2, 3, ..., s). The matrix [a_ij] is called the Runge–Kutta matrix, while the b_i and c_i are known as

is an adaptive stepsize to be determined algorithmically. The solution is the weighted average of six increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by function f on the right-hand side of the differential equation. {\displaystyle {\begin{aligned}k_{1}&=h\cdot f(x+A(1)\cdot h,y)\\k_{2}&=h\cdot f(x+A(2)\cdot h,y+B(2,1)\cdot k_{1})\\k_{3}&=h\cdot f(x+A(3)\cdot h,y+B(3,1)\cdot k_{1}+B(3,2)\cdot k_{2})\\k_{4}&=h\cdot f(x+A(4)\cdot h,y+B(4,1)\cdot k_{1}+B(4,2)\cdot k_{2}+B(4,3)\cdot k_{3})\\k_{5}&=h\cdot f(x+A(5)\cdot h,y+B(5,1)\cdot k_{1}+B(5,2)\cdot k_{2}+B(5,3)\cdot k_{3}+B(5,4)\cdot k_{4})\\k_{6}&=h\cdot f(x+A(6)\cdot h,y+B(6,1)\cdot k_{1}+B(6,2)\cdot k_{2}+B(6,3)\cdot k_{3}+B(6,4)\cdot k_{4}+B(6,5)\cdot k_{5})\end{aligned}}} Then

is called the 3/8-rule. The primary advantage of this method is that almost all of the error coefficients are smaller than in the popular method, but it requires slightly more FLOPs (floating-point operations) per time step. Its Butcher tableau is However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n})}. This

is for an explicit Runge–Kutta method to have order p. Some values which are known are: The provable bounds above imply that we cannot find methods of orders p = 1, 2, …, 6 that require fewer stages than the methods we already know for these orders. The work of Butcher also proves that 7th- and 8th-order methods have


is given by where the k_i are the same as for the higher-order method. Then the error is {\displaystyle e_{n+1}=y_{n+1}-y_{n+1}^{*}=h\sum _{i=1}^{s}(b_{i}-b_{i}^{*})k_{i},} which is O(h^p). The Butcher tableau for this kind of method is extended to give the values of b_i^*: The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher tableau is: However,

is independent of t (a so-called autonomous system, or time-invariant system, especially in physics); the increments of t are then not computed at all and not passed to the function f, with only the final formula for t_{n+1} used. The family of explicit Runge–Kutta methods

is no explicit method with s = p + 1 stages. Butcher also proved that for p > 7, there is no explicit Runge–Kutta method with p + 2 stages. In general, however, it remains an open problem what the precise minimum number of stages s

is the RK4 approximation of y(t_{n+1}), and the next value (y_{n+1}) is determined by the present value (y_n) plus the weighted average of four increments, where each increment is the product of

is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is An example of a second-order method with two stages is provided by the explicit midpoint method: The corresponding tableau is The midpoint method is not the only second-order Runge–Kutta method with two stages; there is a family of such methods, parameterized by α and given by the formula Its Butcher tableau

is of the form {\displaystyle {\begin{cases}g_{i}=y_{m}+c_{i}h{\dot {y}}_{m}+h^{2}\sum _{j=1}^{s}a_{ij}f(g_{j}),&i=1,2,\cdots ,s\\y_{m+1}=y_{m}+h{\dot {y}}_{m}+h^{2}\sum _{j=1}^{s}{\bar {b}}_{j}f(g_{j})\\{\dot {y}}_{m+1}={\dot {y}}_{m}+h\sum _{j=1}^{s}b_{j}f(g_{j})\end{cases}}} which forms

the c_i, i = 1, 2, …, s are distinct. Runge–Kutta–Nyström methods are specialized Runge–Kutta methods that are optimized for second-order differential equations. A general Runge–Kutta–Nyström method for a second-order ODE system {\displaystyle {\ddot {y}}_{i}=f_{i}(y_{1},y_{2},\cdots ,y_{n})} with order s

the Runge–Kutta methods (English: /ˈrʊŋə ˈkʊtɑː/ RUUNG-ə-KUUT-tah) are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of ordinary differential equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta. The most widely known member of

the initial conditions t_0, y_0 are given. Now we pick a step size h > 0 and define: for n = 0, 1, 2, 3, …, using (Note: the above equations have different but equivalent definitions in different texts.) Here y_{n+1}

the weights and the nodes. These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher): A Taylor series expansion shows that the Runge–Kutta method is consistent if and only if There are also accompanying requirements if one requires the method to have a certain order p, meaning that the local truncation error is O(h^{p+1}). These can be derived from


the Runge–Kutta family is generally referred to as "RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method". Let an initial value problem be specified as follows: Here y is an unknown function (scalar or vector) of time t, which we would like to approximate; we are told that dy/dt,

the completion of the step, a new stepsize is calculated: {\displaystyle h_{\text{new}}=0.9\cdot h\cdot \left({\frac {\varepsilon }{TE}}\right)^{1/5}} If TE > ε, then replace h with h_new and repeat

the definition of the truncation error itself. For example, a two-stage method has order 2 if b_1 + b_2 = 1, b_2 c_2 = 1/2, and b_2 a_21 = 1/2. Note that a popular condition for determining coefficients is This condition alone, however, is neither sufficient nor necessary for consistency. In general, if an explicit s-stage Runge–Kutta method has order p, then it can be proven that

the error in the solution can be estimated and controlled by using the higher-order embedded method, which allows an adaptive stepsize to be determined automatically. Any Runge–Kutta method is uniquely identified by its Butcher tableau. The embedded pair proposed by Fehlberg The first row of coefficients at the bottom of the table gives the fifth-order accurate method, and the second row gives

the following Butcher tables: {\displaystyle {\begin{array}{c|ccc}c_{i}&&a_{ij}&\\{\frac {3+{\sqrt {3}}}{6}}&0&0&0\\{\frac {3-{\sqrt {3}}}{6}}&{\frac {2-{\sqrt {3}}}{12}}&0&0\\{\frac {3+{\sqrt {3}}}{6}}&0&{\frac {\sqrt {3}}{6}}&0\\\hline {\overline {b_{i}}}&{\frac {5-3{\sqrt {3}}}{24}}&{\frac {3+{\sqrt {3}}}{12}}&{\frac {1+{\sqrt {3}}}{24}}\\\hline b_{i}&{\frac {3-2{\sqrt {3}}}{12}}&{\frac {1}{2}}&{\frac {3+2{\sqrt {3}}}{12}}\end{array}}}

the fourth-order accurate method. The coefficients found by Fehlberg for Formula 1 (derivation with his parameter α₂ = 1/3) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages. Fehlberg outlines a solution for a system of n differential equations of the form {\displaystyle {\frac {dy_{i}}{dx}}=f_{i}(x,y_{1},y_{2},\ldots ,y_{n}),i=1,2,\ldots ,n} to iteratively solve for y_i(x + h), i = 1, 2, …, n, where h

the local truncation error of a single Runge–Kutta step. This is done by having two methods, one with order p and one with order p − 1. These methods are interwoven, i.e., they have common intermediate steps. Thanks to this, estimating the error has little or negligible computational cost compared to

the number of stages must satisfy s ≥ p, and if p ≥ 5, then s ≥ p + 1. However, it is not known whether these bounds are sharp in all cases. In some cases, it is proven that the bound cannot be achieved. For instance, Butcher proved that for p > 6, there

the rate at which y changes, is a function of t and of y itself. At the initial time t_0 the corresponding y value is y_0. The function f and

the simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is: Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4). A Runge–Kutta method is said to be nonconfluent if all


the size of the interval, h, and an estimated slope specified by function f on the right-hand side of the differential equation. In averaging the four slopes, greater weight is given to the slopes at the midpoint. If f is independent of y, so that the differential equation is equivalent to a simple integral, then RK4
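This reduction to Simpson's rule is easy to check numerically; the choice f(t, y) = cos(t) below is our illustrative example:

```python
import math

# With f(t, y) = cos(t) (no y-dependence), the four RK4 slopes are
# cos(t), cos(t+h/2), cos(t+h/2), cos(t+h), so the weighted average
# h/6 * (k1 + 2*k2 + 2*k3 + k4) is exactly Simpson's rule on [t, t+h].
t, h = 0.0, 0.3
k1 = math.cos(t)
k2 = math.cos(t + h/2)
k3 = math.cos(t + h/2)
k4 = math.cos(t + h)
rk4_increment = h * (k1 + 2*k2 + 2*k3 + k4) / 6
simpson = h / 6 * (math.cos(t) + 4 * math.cos(t + h/2) + math.cos(t + h))
# Both approximate the integral of cos over [0, 0.3], i.e. sin(0.3).
```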

the step. If TE ≤ ε, then the step is completed. Replace h with h_new for the next step. The coefficients found by Fehlberg for Formula 2 (derivation with his parameter α₂ = 3/8) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages. In another table in Fehlberg, coefficients for an RKF4(5) derived by D. Sarafyan are given. Runge–Kutta methods In numerical analysis,

the truncation error is: {\displaystyle \mathrm {TE} =\left|\mathrm {CT} (1)\cdot k_{1}+\mathrm {CT} (2)\cdot k_{2}+\mathrm {CT} (3)\cdot k_{3}+\mathrm {CT} (4)\cdot k_{4}+\mathrm {CT} (5)\cdot k_{5}+\mathrm {CT} (6)\cdot k_{6}\right|} At

the two-stage second-order Runge–Kutta method with α = 2/3, also known as Ralston's method. It is given by the tableau with the corresponding equations This method is used to solve the initial-value problem with step size h = 0.025, so the method needs to take four steps. The method proceeds as follows: The numerical solutions correspond to the underlined values. Adaptive methods are designed to produce an estimate of

the weighted average is: {\displaystyle y(x+h)=y(x)+CH(1)\cdot k_{1}+CH(2)\cdot k_{2}+CH(3)\cdot k_{3}+CH(4)\cdot k_{4}+CH(5)\cdot k_{5}+CH(6)\cdot k_{6}} The estimate of
