
Runge–Kutta methods

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

In numerical analysis, the Runge–Kutta methods (English: /ˈrʊŋə ˈkʊtɑː/ RUUNG-ə-KUUT-tah) are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of ordinary differential equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta.


The most widely known member of the Runge–Kutta family is generally referred to as "RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method". Let an initial value problem be specified as follows:

$$\frac{dy}{dt} = f(t, y), \qquad y(t_0) = y_0.$$

Here $y$ is an unknown function (scalar or vector) of time $t$, which we would like to approximate; we are told that $\frac{dy}{dt}$,

$$
\begin{array}{c|ccc}
c_i & & a_{ij} & \\
\frac{3-\sqrt{3}}{6} & 0 & 0 & 0 \\
\frac{3+\sqrt{3}}{6} & \frac{2+\sqrt{3}}{12} & 0 & 0 \\
\frac{3-\sqrt{3}}{6} & 0 & -\frac{\sqrt{3}}{6} & 0 \\
\hline
\overline{b_i} & \frac{5+3\sqrt{3}}{24} & \frac{3-\sqrt{3}}{12} & \frac{1-\sqrt{3}}{24} \\
\hline
b_i & \frac{3+2\sqrt{3}}{12} & \frac{1}{2} & \frac{3-2\sqrt{3}}{12}
\end{array}
$$

A Butcher table with the form

$$
\begin{array}{c|cccc}
c_1 & a_{11} & a_{12} & \dots & a_{1s} \\
c_2 & a_{21} & a_{22} & \dots & a_{2s} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_s & a_{s1} & a_{s2} & \dots & a_{ss} \\
\hline
 & \bar{b}_1 & \bar{b}_2 & \dots & \bar{b}_s \\
 & b_1 & b_2 & \dots & b_s
\end{array}
=
\begin{array}{c|c}
\mathbf{c} & \mathbf{A} \\
\hline
 & \mathbf{\bar{b}}^\top \\
 & \mathbf{b}^\top
\end{array}.
$$

Two fourth-order explicit RKN methods are given by

a different probability distribution with known variance $\sigma_i^2$, all having the same mean, one possible choice for the weights is given by the reciprocal of variance:

$$w_i = \frac{1}{\sigma_i^2}.$$

The weighted mean in this case is:

$$\bar{x} = \frac{\sum_{i=1}^{n} x_i/\sigma_i^2}{\sum_{i=1}^{n} 1/\sigma_i^2},$$

and the standard error of the weighted mean (with inverse-variance weights) is:

$$\sigma_{\bar{x}}^2 = \frac{1}{\sum_{i=1}^{n} 1/\sigma_i^2}.$$

Note this reduces to $\sigma_{\bar{x}}^2 = \sigma_0^2/n$ when all $\sigma_i = \sigma_0$. It
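As a quick illustration of the inverse-variance weighting just described, the sketch below computes the weighted mean and its standard error in Python; the measurement values and uncertainties are made up for illustration.

```python
import math

def inverse_variance_mean(x, sigma):
    """Weighted mean with inverse-variance weights w_i = 1 / sigma_i**2,
    plus the standard error of that mean."""
    w = [1.0 / s**2 for s in sigma]
    mean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    se = math.sqrt(1.0 / sum(w))  # reduces to sigma0 / sqrt(n) when all sigma_i are equal
    return mean, se

# Three made-up measurements of the same quantity with different uncertainties
print(inverse_variance_mean([10.2, 9.8, 10.5], [0.1, 0.2, 0.4]))
```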

a few counterintuitive properties, as captured for instance in Simpson's paradox. Given two school classes, one with 20 students and one with 30 students, and test grades in each class as follows: The mean for the morning class is 80 and the mean of the afternoon class is 90. The unweighted mean of the two means is 85. However, this does not account for

a linear combination is called a convex combination. Using the previous example, we would get the following weights: $w_1 = \frac{20}{50} = 0.4$ and $w_2 = \frac{30}{50} = 0.6$. Then, apply the weights like this: $\bar{x} = 0.4 \times 80 + 0.6 \times 90 = 86$. Formally, the weighted mean of a non-empty finite tuple of data $(x_1, x_2, \dots, x_n)$, with corresponding non-negative weights $(w_1, w_2, \dots, w_n)$

a minimum of 9 and 11 stages, respectively. An example of an explicit method of order 6 with 7 stages can be found in Ref. Explicit methods of order 7 with 9 stages and explicit methods of order 8 with 11 stages are also known. See Refs. for a summary. The RK4 method falls in this framework. Its tableau is

$$
\begin{array}{c|cccc}
0 & & & & \\
1/2 & 1/2 & & & \\
1/2 & 0 & 1/2 & & \\
1 & 0 & 0 & 1 & \\
\hline
 & 1/6 & 1/3 & 1/3 & 1/6
\end{array}
$$

A slight variation of "the" Runge–Kutta method is also due to Kutta in 1901 and is called

a sample, is denoted as $P(I_i = 1 \mid \text{some sample of size } n) = \pi_i$, and the one-draw probability of selection is $P(I_i = 1 \mid \text{one sample draw}) = p_i \approx \frac{\pi_i}{n}$ (if $N$

a tick mark if multiplying by the indicator function, i.e.: $\check{y}'_i = I_i \check{y}_i = \frac{I_i y_i}{\pi_i}$. In this design-based perspective,

is

$$
\begin{array}{c|cc}
0 & & \\
\alpha & \alpha & \\
\hline
 & 1 - \tfrac{1}{2\alpha} & \tfrac{1}{2\alpha}
\end{array}
$$

In this family, $\alpha = \tfrac{1}{2}$ gives the midpoint method, $\alpha = 1$ is Heun's method, and $\alpha = \tfrac{2}{3}$ is Ralston's method. As an example, consider the two-stage second-order Runge–Kutta method with $\alpha = 2/3$, also known as Ralston's method. It
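The whole α-family can be written as one small routine. The sketch below is a minimal Python implementation under the standard convention (k1 evaluated at the current point, weights 1 − 1/(2α) and 1/(2α)); the function and argument names are illustrative.

```python
def rk2_step(f, t, y, h, alpha=2/3):
    """One step of the generic two-stage, second-order explicit Runge-Kutta family.

    alpha = 1/2 gives the midpoint method, alpha = 1 gives Heun's method,
    and alpha = 2/3 (the default here) gives Ralston's method.
    """
    k1 = f(t, y)
    k2 = f(t + alpha * h, y + alpha * h * k1)
    b2 = 1.0 / (2.0 * alpha)
    b1 = 1.0 - b2
    return y + h * (b1 * k1 + b2 * k2)
```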


is

$$\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i},$$

which expands to:

$$\bar{x} = \frac{w_1 x_1 + w_2 x_2 + \cdots + w_n x_n}{w_1 + w_2 + \cdots + w_n}.$$

Therefore, data elements with a high weight contribute more to the weighted mean than do elements with a low weight. The weights may not be negative in order for the equation to work. Some may be zero, but not all of them (since division by zero is not allowed). The formulas are simplified when the weights are normalized such that they sum up to 1, i.e., $\sum_{i=1}^{n} w_i' = 1$. For such normalized weights,

is Simpson's rule. The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of $O(h^5)$, while the total accumulated error is on the order of $O(h^4)$. In many practical applications the function $f$
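The connection to Simpson's rule is easy to check numerically: when f depends only on t, the four RK4 slopes collapse to the Simpson quadrature of f over [t, t + h]. A short sketch, assuming the standard RK4 stage formulas; the integrand cos(t) is an arbitrary illustrative choice.

```python
import math

def rk4_step(f, t, y, h):
    """One classic RK4 step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

g = math.cos                       # integrand that depends on t only
t, h = 0.3, 0.2
rk4 = rk4_step(lambda t, y: g(t), t, 0.0, h)
simpson = h / 6 * (g(t) + 4 * g(t + h / 2) + g(t + h))
print(abs(rk4 - simpson))          # 0.0 up to rounding: the two rules coincide
```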

is a generalization of the RK4 method mentioned above. It is given by

$$y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i,$$

where

$$k_i = f\!\left(t_n + c_i h,\; y_n + h \sum_{j=1}^{i-1} a_{ij} k_j\right), \qquad i = 1, \dots, s.$$

To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients $a_{ij}$ (for $1 \le j < i \le s$), $b_i$ (for $i = 1, 2, \dots, s$) and $c_i$ (for $i = 2, 3, \dots, s$). The matrix $[a_{ij}]$ is called the Runge–Kutta matrix, while the $b_i$ and $c_i$ are known as

is a special case of the general formula in the previous section. The equations above can be combined to obtain:

$$\bar{x} = \frac{\sum_{i=1}^{n} x_i/\sigma_i^2}{\sum_{i=1}^{n} 1/\sigma_i^2} = \sigma_{\bar{x}}^2 \sum_{i=1}^{n} \frac{x_i}{\sigma_i^2}.$$

The significance of this choice is that this weighted mean is the maximum likelihood estimator of the mean of the probability distributions under the assumption that they are independent and normally distributed with the same mean. The weighted sample mean, $\bar{x}$,

is called a ratio estimator and it is approximately unbiased for R. In this case, the variability of the ratio depends on the variability of the random variables in both the numerator and the denominator, as well as their correlation. Since there is no closed analytical form to compute this variance, various methods are used for approximate estimation, primarily Taylor series first-order linearization, asymptotics, and bootstrap/jackknife. The Taylor linearization method could lead to under-estimation of

is considered constant, and the variability comes from the selection procedure. This is in contrast to "model-based" approaches, in which the randomness is often described in the y values. The survey sampling procedure yields a series of Bernoulli indicator values ($I_i$) that equal 1 if observation i is in the sample and 0 if it was not selected. This can occur with fixed sample size, or with varied sample size sampling (e.g., Poisson sampling). The probability of some element being chosen, given

is done by having two methods, one with order $p$ and one with order $p-1$. These methods are interwoven, i.e., they have common intermediate steps. Thanks to this, estimating the error has little or negligible computational cost compared to a step with the higher-order method. During the integration,

is fixed, and the randomness comes from it being included in the sample or not ($I_i$), we often talk about the multiplication of the two, which is a random variable. To avoid confusion in the following section, let's call this term: $y'_i = y_i I_i$. With

is for an explicit Runge–Kutta method to have order $p$. Some values which are known are: for orders $p = 1, 2, \ldots, 8$ the minimum numbers of stages are $1, 2, 3, 4, 6, 7, 9, 11$, respectively. The provable bounds above imply that we cannot find methods of orders $p = 1, 2, \ldots, 6$ that require fewer stages than the methods we already know for these orders. The work of Butcher also proves that 7th and 8th order methods have

is given by

$$y^*_{n+1} = y_n + h \sum_{i=1}^{s} b^*_i k_i,$$

where the $k_i$ are the same as for the higher-order method. Then the error is

$$e_{n+1} = y_{n+1} - y^*_{n+1} = h \sum_{i=1}^{s} (b_i - b^*_i) k_i,$$

which is $O(h^p)$. The Butcher tableau for this kind of method is extended to give the values of $b_i^*$: The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher tableau is: However,


is given by the tableau

$$
\begin{array}{c|cc}
0 & & \\
2/3 & 2/3 & \\
\hline
 & 1/4 & 3/4
\end{array}
$$

with the corresponding equations

$$
\begin{aligned}
k_1 &= f(t_n, y_n), \\
k_2 &= f\!\left(t_n + \tfrac{2}{3}h,\; y_n + \tfrac{2}{3}h k_1\right), \\
y_{n+1} &= y_n + h\left(\tfrac{1}{4}k_1 + \tfrac{3}{4}k_2\right).
\end{aligned}
$$

This method is used to solve the initial-value problem with step size h = 0.025, so the method needs to take four steps. The method proceeds as follows: The numerical solutions correspond to the underlined values. Adaptive methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step. This

is independent of $t$ (a so-called autonomous system, or time-invariant system, especially in physics), and their increments are not computed at all and not passed to function $f$, with only the final formula for $t_{n+1}$ used. The family of explicit Runge–Kutta methods

is itself a random variable. Its expected value and standard deviation are related to the expected values and standard deviations of the observations, as follows. For simplicity, we assume normalized weights (weights summing to one). If the observations have expected values $E(x_i) = \mu_i$, then

is known, we can estimate the population mean using

$$\hat{\bar{Y}}_{\text{known } N} = \frac{\hat{Y}_{pwr}}{N} \approx \frac{\sum_{i=1}^{n} w_i y'_i}{N}.$$

If

is no explicit method with $s = p + 1$ stages. Butcher also proved that for $p > 7$, there is no explicit Runge–Kutta method with $p + 2$ stages. In general, however, it remains an open problem what the precise minimum number of stages $s$

is rather limited due to the strong assumption about the y observations. This has led to the development of alternative, more general, estimators. From a model-based perspective, we are interested in estimating the variance of the weighted mean when the different $y_i$ are not i.i.d. random variables. An alternative perspective for this problem

is that of some arbitrary sampling design of the data, in which units are selected with unequal probabilities (with replacement). In survey methodology, the population mean of some quantity of interest y is calculated by taking an estimate of the total of y over all elements in the population (Y or sometimes T) and dividing it by the population size, either known ($N$) or estimated ($\hat{N}$). In this context, each value of y

is the RK4 approximation of $y(t_{n+1})$, and the next value ($y_{n+1}$) is determined by the present value ($y_n$) plus the weighted average of four increments, where each increment is the product of

is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is

$$\begin{array}{c|c} 0 & \\ \hline & 1 \end{array}$$

An example of a second-order method with two stages is provided by the explicit midpoint method:

$$y_{n+1} = y_n + h f\!\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} f(t_n, y_n)\right).$$

The corresponding tableau is

$$\begin{array}{c|cc} 0 & & \\ 1/2 & 1/2 & \\ \hline & 0 & 1 \end{array}$$

The midpoint method is not the only second-order Runge–Kutta method with two stages; there is a family of such methods, parameterized by α and given by the formula

$$y_{n+1} = y_n + h\left(\left(1 - \tfrac{1}{2\alpha}\right) f(t_n, y_n) + \tfrac{1}{2\alpha}\, f\!\left(t_n + \alpha h,\, y_n + \alpha h f(t_n, y_n)\right)\right).$$

Its Butcher tableau

is the probability of selecting both i and j. And $\check{\Delta}_{ij} = 1 - \frac{\pi_i \pi_j}{\pi_{ij}}$, and for $i = j$: $\check{\Delta}_{ii} = 1 - \frac{\pi_i \pi_i}{\pi_i} = 1 - \pi_i$. If


is very large and each $p_i$ is very small). For the following derivation we'll assume that the probability of selecting each element is fully represented by these probabilities, i.e., selecting some element will not influence the probability of drawing another element (this doesn't apply for things such as cluster sampling design). Since each element ($y_i$)

is of the form

$$
\begin{cases}
g_i = y_m + c_i h \dot{y}_m + h^2 \sum_{j=1}^{s} a_{ij} f(g_j), & i = 1, 2, \cdots, s \\
y_{m+1} = y_m + h \dot{y}_m + h^2 \sum_{j=1}^{s} \bar{b}_j f(g_j) \\
\dot{y}_{m+1} = \dot{y}_m + h \sum_{j=1}^{s} b_j f(g_j)
\end{cases}
$$

which forms
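A minimal sketch of one step following the stage equations above, assuming a scalar equation ÿ = f(y) and a strictly lower-triangular coefficient matrix A (the explicit case, as in the tableaux shown here); the helper name and argument layout are illustrative.

```python
def rkn_step(f, y, ydot, h, A, b_bar, b, c):
    """One step of an explicit Runge-Kutta-Nystrom method for y'' = f(y).

    A       : s-by-s list of lists, strictly lower triangular
    b_bar, b: weights for the position and velocity updates
    c       : nodes
    """
    s = len(b)
    fg = [0.0] * s
    for i in range(s):
        g_i = y + c[i] * h * ydot + h * h * sum(A[i][j] * fg[j] for j in range(i))
        fg[i] = f(g_i)                      # stage value f(g_i)
    y_new = y + h * ydot + h * h * sum(b_bar[j] * fg[j] for j in range(s))
    ydot_new = ydot + h * sum(b[j] * fg[j] for j in range(s))
    return y_new, ydot_new
```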

the $c_i,\, i = 1, 2, \ldots, s$ are distinct. Runge–Kutta–Nyström methods are specialized Runge–Kutta methods that are optimized for second-order differential equations. A general Runge–Kutta–Nyström method for a second-order ODE system $\ddot{y}_i = f_i(y_1, y_2, \cdots, y_n)$ with order $s$

the $\pi$-estimator. This estimator can itself be estimated using the pwr-estimator (i.e., the $p$-expanded with replacement estimator, or "probability with replacement" estimator). With the above notation, it is:

$$\hat{Y}_{pwr} = \frac{1}{n} \sum_{i=1}^{n} \frac{y'_i}{p_i} = \sum_{i=1}^{n} \frac{y'_i}{n p_i} \approx \sum_{i=1}^{n} \frac{y'_i}{\pi_i} = \sum_{i=1}^{n} w_i y'_i.$$

The estimated variance of

the initial conditions $t_0$, $y_0$ are given. Now we pick a step-size h > 0 and define:

$$y_{n+1} = y_n + \frac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right), \qquad t_{n+1} = t_n + h$$

for n = 0, 1, 2, 3, ..., using

$$
\begin{aligned}
k_1 &= f(t_n, y_n), \\
k_2 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + h \tfrac{k_1}{2}\right), \\
k_3 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + h \tfrac{k_2}{2}\right), \\
k_4 &= f\!\left(t_n + h,\; y_n + h k_3\right).
\end{aligned}
$$

(Note: the above equations have different but equivalent definitions in different texts.) Here $y_{n+1}$
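A minimal Python sketch of the resulting RK4 iteration, using the common convention in which the four slopes k1..k4 are combined with weights 1/6, 1/3, 1/3, 1/6 (some texts fold h into the k_i; the result is the same). The test problem dy/dt = y is an arbitrary illustration.

```python
def rk4_step(f, t, y, h):
    """One step of the classic fourth-order Runge-Kutta method for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate dy/dt = y from t = 0 to t = 1 with h = 0.1 (exact answer: e ~ 2.71828)
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y)  # ~ 2.71828, close to e, as expected for a fourth-order method
```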

the pwr-estimator is given by:

$$\operatorname{Var}(\hat{Y}_{pwr}) = \frac{n}{n-1} \sum_{i=1}^{n} \left(w_i y_i - \overline{wy}\right)^2$$

where $\overline{wy} = \sum_{i=1}^{n} \frac{w_i y_i}{n}$. The above formula
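The pwr-estimator and the variance formula just given translate directly into code. The sketch below assumes the weights are the inverse selection probabilities w_i = 1/π_i for the sampled units; the numeric values are made up for illustration.

```python
def pwr_total_and_variance(y, w):
    """pwr ("probability with replacement") estimate of a population total and
    the variance estimate quoted above, for sampled values y with weights w_i = 1/pi_i."""
    n = len(y)
    wy = [wi * yi for wi, yi in zip(w, y)]          # the terms w_i * y_i
    total_hat = sum(wy)                             # estimated population total
    wy_bar = sum(wy) / n                            # mean of the w_i * y_i terms
    var_hat = n / (n - 1) * sum((v - wy_bar) ** 2 for v in wy)
    return total_hat, var_hat

# Four sampled units with made-up values and inverse selection probabilities
print(pwr_total_and_variance([3.0, 5.0, 4.0, 6.0], [10.0, 12.0, 8.0, 11.0]))
```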

the sampling design is one that results in a fixed sample size n (such as in pps sampling), then the variance of this estimator is: The general formula can be developed like this: The population total is denoted as $Y = \sum_{i=1}^{N} y_i$ and it may be estimated by the (unbiased) Horvitz–Thompson estimator, also called

the weights and the nodes. These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher): A Taylor series expansion shows that the Runge–Kutta method is consistent if and only if $\sum_{i=1}^{s} b_i = 1$. There are also accompanying requirements if one requires the method to have a certain order p, meaning that the local truncation error is $O(h^{p+1})$. These can be derived from
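Because an explicit method is fully specified by its Butcher tableau (A, b, c), a single generic stepper can realize any of them. The sketch below is one such Python implementation, instantiated with the RK4 tableau; the consistency condition on the weights (they sum to 1) holds for the values used.

```python
def explicit_rk_step(f, t, y, h, A, b, c):
    """One step of a generic explicit Runge-Kutta method given its Butcher tableau.

    A : s-by-s list of lists of coefficients a_ij (strictly lower triangular)
    b : list of s weights, c : list of s nodes
    """
    s = len(b)
    k = [None] * s
    for i in range(s):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))   # stage value
        k[i] = f(t + c[i] * h, yi)
    return y + h * sum(b[i] * k[i] for i in range(s))

# RK4 expressed through its tableau
A = [[0, 0, 0, 0],
     [0.5, 0, 0, 0],
     [0, 0.5, 0, 0],
     [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 0.5, 0.5, 1.0]
print(explicit_rk_step(lambda t, y: y, 0.0, 1.0, 0.1, A, b, c))  # ~ exp(0.1)
```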

the 3/8-rule. The primary advantage this method has is that almost all of the error coefficients are smaller than in the popular method, but it requires slightly more FLOPs (floating-point operations) per time step. Its Butcher tableau is

$$
\begin{array}{c|cccc}
0 & & & & \\
1/3 & 1/3 & & & \\
2/3 & -1/3 & 1 & & \\
1 & 1 & -1 & 1 & \\
\hline
 & 1/8 & 3/8 & 3/8 & 1/8
\end{array}
$$

However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula $y_{n+1} = y_n + h f(t_n, y_n)$. This

Weighted average

The weighted arithmetic mean is similar to an ordinary arithmetic mean (the most common type of average), except that instead of each of


the above notation, the parameter we care about is the ratio of the sums of $y_i$'s and of 1's, i.e.:

$$R = \bar{Y} = \frac{\sum_{i=1}^{N} \frac{y_i}{\pi_i}}{\sum_{i=1}^{N} \frac{1}{\pi_i}} = \frac{\sum_{i=1}^{N} \check{y}_i}{\sum_{i=1}^{N} \check{1}_i} = \frac{\sum_{i=1}^{N} w_i y_i}{\sum_{i=1}^{N} w_i}.$$

We can estimate it using our sample with:

$$\hat{R} = \hat{\bar{Y}} = \frac{\sum_{i=1}^{N} I_i \frac{y_i}{\pi_i}}{\sum_{i=1}^{N} I_i \frac{1}{\pi_i}} = \frac{\sum_{i=1}^{N} \check{y}'_i}{\sum_{i=1}^{N} \check{1}'_i} = \frac{\sum_{i=1}^{N} w_i y'_i}{\sum_{i=1}^{N} w_i 1'_i} = \frac{\sum_{i=1}^{n} w_i y'_i}{\sum_{i=1}^{n} w_i 1'_i} = \bar{y}_w.$$

As we moved from using N to using n, we actually know that all

the class means by the number of students in each class. The larger class is given more "weight":

$$\bar{x} = \frac{20 \times 80 + 30 \times 90}{20 + 30} = 86.$$

Thus, the weighted mean makes it possible to find the average student grade without knowing each student's score. Only the class means and the number of students in each class are needed. Since only the relative weights are relevant, any weighted mean can be expressed using coefficients that sum to one. Such

the data elements are independent and identically distributed random variables with variance $\sigma^2$, the standard error of the weighted mean, $\sigma_{\bar{x}}$, can be shown via uncertainty propagation to be:

$$\sigma_{\bar{x}} = \sigma \sqrt{\sum_{i=1}^{n} w_i'^2}.$$

For the weighted mean of a list of data for which each element $x_i$ potentially comes from

the data points contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role in descriptive statistics and also occurs in a more general form in several other areas of mathematics. If all the weights are equal, then the weighted mean is the same as the arithmetic mean. While weighted means generally behave in a similar fashion to arithmetic means, they do have

the definition of the truncation error itself. For example, a two-stage method has order 2 if $b_1 + b_2 = 1$, $b_2 c_2 = 1/2$, and $b_2 a_{21} = 1/2$. Note that a popular condition for determining coefficients is

$$\sum_{j=1}^{i-1} a_{ij} = c_i, \qquad i = 2, \ldots, s.$$

This condition alone, however, is neither sufficient nor necessary for consistency. In general, if an explicit $s$-stage Runge–Kutta method has order $p$, then it can be proven that

the difference in the number of students in each class (20 versus 30); hence the value of 85 does not reflect the average student grade (independent of class). The average student grade can be obtained by averaging all the grades, without regard to classes (add all the grades up and divide by the total number of students):

$$\bar{x} = \frac{4300}{50} = 86.$$

Or, this can be accomplished by weighting
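The same computation in a few lines of Python, using only the class sizes and class means from the example above:

```python
sizes = [20, 30]      # number of students in the morning and afternoon classes
means = [80, 90]      # class mean grades

unweighted = sum(means) / len(means)                                # 85.0
weighted = sum(n * m for n, m in zip(sizes, means)) / sum(sizes)    # 86.0, matches 4300/50
print(unweighted, weighted)
```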

the expectation of the weighted sample mean will be that value, $E(\bar{x}) = \mu$. When treating the weights as constants, and having a sample of n observations from uncorrelated random variables, all with the same variance and expectation (as is the case for i.i.d. random variables), then

the following Butcher tables:

$$
\begin{array}{c|ccc}
c_i & & a_{ij} & \\
\frac{3+\sqrt{3}}{6} & 0 & 0 & 0 \\
\frac{3-\sqrt{3}}{6} & \frac{2-\sqrt{3}}{12} & 0 & 0 \\
\frac{3+\sqrt{3}}{6} & 0 & \frac{\sqrt{3}}{6} & 0 \\
\hline
\overline{b_i} & \frac{5-3\sqrt{3}}{24} & \frac{3+\sqrt{3}}{12} & \frac{1+\sqrt{3}}{24} \\
\hline
b_i & \frac{3-2\sqrt{3}}{12} & \frac{1}{2} & \frac{3+2\sqrt{3}}{12}
\end{array}
$$

the following expectancy: $E[y'_i] = y_i E[I_i] = y_i \pi_i$; and variance: $V[y'_i] = y_i^2 V[I_i] = y_i^2 \pi_i (1 - \pi_i)$. When each element of

the formula from above. An alternative term, for when the sampling has a random sample size (as in Poisson sampling), is presented in Sarndal et al. (1992) as:

$$\operatorname{Var}(\hat{\bar{Y}}_{\text{pwr (known } N\text{)}}) = \frac{1}{N^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \left(\check{\Delta}_{ij} \check{y}_i \check{y}_j\right)$$

with $\check{y}_i = \frac{y_i}{\pi_i}$. Also, $C(I_i, I_j) = \pi_{ij} - \pi_i \pi_j = \Delta_{ij}$, where $\pi_{ij}$


the indicator variables get 1, so we could simply write:

$$\bar{y}_w = \frac{\sum_{i=1}^{n} w_i y_i}{\sum_{i=1}^{n} w_i}.$$

This will be the estimand for specific values of y and w, but

the number of stages must satisfy $s \geq p$, and if $p \geq 5$, then $s \geq p + 1$. However, it is not known whether these bounds are sharp in all cases. In some cases, it is proven that the bound cannot be achieved. For instance, Butcher proved that for $p > 6$, there

the population mean as a ratio of an estimated population total ($\hat{Y}$) with a known population size ($N$), and the variance was estimated in that context. Another common case is that the population size itself ($N$) is unknown and is estimated using

the rate at which $y$ changes, is a function of $t$ and of $y$ itself. At the initial time $t_0$ the corresponding $y$ value is $y_0$. The function $f$ and

the sample (i.e., $\hat{N}$). The estimation of $N$ can be described as the sum of weights. So when $w_i = \frac{1}{\pi_i}$ we get

$$\hat{N} = \sum_{i=1}^{n} w_i I_i = \sum_{i=1}^{n} \frac{I_i}{\pi_i} = \sum_{i=1}^{n} \check{1}'_i.$$

With

the sample is inflated by the inverse of its selection probability, it is termed the $\pi$-expanded y value, i.e., $\check{y}_i = \frac{y_i}{\pi_i}$. A related quantity is the $p$-expanded y value: $\frac{y_i}{p_i} = n \check{y}_i$. As above, we can add

the selection probabilities are uncorrelated (i.e., $\forall i \neq j: C(I_i, I_j) = 0$), and when assuming the probability of each element is very small, then: We assume that $(1 - \pi_i) \approx 1$ and that

$$
\begin{aligned}
\operatorname{Var}(\hat{Y}_{\text{pwr (known } N\text{)}})
&= \frac{1}{N^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \left(\check{\Delta}_{ij} \check{y}_i \check{y}_j\right) \\
&= \frac{1}{N^2} \sum_{i=1}^{n} \left(\check{\Delta}_{ii} \check{y}_i \check{y}_i\right) \\
&= \frac{1}{N^2} \sum_{i=1}^{n} \left((1 - \pi_i) \frac{y_i}{\pi_i} \frac{y_i}{\pi_i}\right) \\
&= \frac{1}{N^2} \sum_{i=1}^{n} \left(w_i y_i\right)^2
\end{aligned}
$$

The previous section dealt with estimating

the simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is:

$$
\begin{array}{c|cc}
0 & & \\
1 & 1 & \\
\hline
 & 1/2 & 1/2 \\
 & 1 & 0
\end{array}
$$

Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4). A Runge–Kutta method is said to be nonconfluent if all
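A minimal sketch of one adaptive step with this Heun–Euler pair: the order-2 and order-1 solutions share k1, their difference serves as the error estimate, and the step-size update below is one common controller choice rather than a prescribed rule.

```python
def heun_euler_step(f, t, y, h, tol=1e-6):
    """One adaptive step using the embedded Heun (order 2) / Euler (order 1) pair.

    Returns (t_new, y_new, h_next). The error estimate is the difference between
    the two embedded solutions.
    """
    while True:
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_heun = y + h / 2 * (k1 + k2)      # order-2 solution (the one we keep)
        y_euler = y + h * k1                # order-1 solution (used only for the error estimate)
        err = abs(y_heun - y_euler)
        if err <= tol:
            # accept the step and suggest a (cautiously enlarged) step size for next time
            h_next = 0.9 * h * (tol / max(err, 1e-16)) ** 0.5
            return t + h, y_heun, h_next
        # reject: retry with a smaller step size
        h = 0.9 * h * (tol / err) ** 0.5
```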

the size of the interval, h, and an estimated slope specified by the function f on the right-hand side of the differential equation. In averaging the four slopes, greater weight is given to the slopes at the midpoint. If $f$ is independent of $y$, so that the differential equation is equivalent to a simple integral, then RK4

the statistical properties come when including the indicator variable:

$$\bar{y}_w = \frac{\sum_{i=1}^{n} w_i y'_i}{\sum_{i=1}^{n} w_i 1'_i}.$$

This


the step size is adapted such that the estimated error stays below a user-defined threshold: if the error is too high, the step is repeated with a smaller step size; if the error is much smaller, the step size is increased to save time. This results in an (almost) optimal step size, which saves computation time. Moreover, the user does not have to spend time on finding an appropriate step size. The lower-order step

the variance for small sample sizes in general, but that depends on the complexity of the statistic. For the weighted mean, the approximate variance is supposed to be relatively accurate even for medium sample sizes. For when the sampling has a random sample size (as in Poisson sampling), it is as follows: If $\pi_i \approx p_i n$, then either using $w_i = \frac{1}{\pi_i}$ or $w_i = \frac{1}{p_i}$ would give

the variance of the weighted mean can be estimated as the multiplication of the unweighted variance by Kish's design effect (see proof): with $\hat{\sigma}_y^2 = \frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n-1}$, $\bar{w} = \frac{\sum_{i=1}^{n} w_i}{n}$, and $\overline{w^2} = \frac{\sum_{i=1}^{n} w_i^2}{n}$. However, this estimation
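The estimator itself was lost from this snapshot; the standard form of Kish's approximation is $\operatorname{Var}(\bar{y}_w) \approx \frac{\hat{\sigma}_y^2}{n} \cdot \frac{\overline{w^2}}{\bar{w}^2}$, and the sketch below uses that assumed form with made-up values.

```python
def weighted_mean_var_kish(y, w):
    """Approximate variance of the weighted mean via Kish's design effect.

    Uses the assumed standard form Var ~ (sigma_y^2 / n) * (mean(w^2) / mean(w)^2);
    treat this as a sketch, since the exact expression was lost from the snapshot.
    """
    n = len(y)
    y_bar = sum(y) / n
    sigma2 = sum((yi - y_bar) ** 2 for yi in y) / (n - 1)   # unweighted sample variance
    w_bar = sum(w) / n
    w2_bar = sum(wi ** 2 for wi in w) / n
    deff = w2_bar / w_bar ** 2                              # Kish's design effect
    return sigma2 / n * deff

# Made-up observations and weights, purely for illustration
print(weighted_mean_var_kish([2.0, 3.0, 5.0, 4.0], [1.0, 2.0, 1.5, 0.5]))
```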

the weighted mean is equivalently:

$$\bar{x} = \sum_{i=1}^{n} w_i' x_i.$$

One can always normalize the weights by making the following transformation on the original weights:

$$w_i' = \frac{w_i}{\sum_{j=1}^{n} w_j}.$$

The ordinary mean $\frac{1}{n} \sum_{i=1}^{n} x_i$ is a special case of the weighted mean where all data have equal weights. If

the weighted sample mean has expectation $E(\bar{x}) = \sum_{i=1}^{n} w_i' \mu_i$. In particular, if the means are equal, $\mu_i = \mu$, then

the weights, used in the numerator of the weighted mean, are obtained by taking the inverse of the selection probability (i.e., the inflation factor): $w_i = \frac{1}{\pi_i} \approx \frac{1}{n \times p_i}$. If the population size N

was taken from Sarndal et al. (1992) (also presented in Cochran 1977), but was written differently. The left side is how the variance was written and the right side is how we've developed the weighted version:

$$
\begin{aligned}
\operatorname{Var}(\hat{Y}_{\text{pwr}})
&= \frac{1}{n} \frac{1}{n-1} \sum_{i=1}^{n} \left(\frac{y_i}{p_i} - \hat{Y}_{pwr}\right)^2 \\
&= \frac{1}{n} \frac{1}{n-1} \sum_{i=1}^{n} \left(\frac{n}{n} \frac{y_i}{p_i} - \frac{n}{n} \sum_{i=1}^{n} w_i y_i\right)^2
 = \frac{1}{n} \frac{1}{n-1} \sum_{i=1}^{n} \left(n \frac{y_i}{\pi_i} - n \frac{\sum_{i=1}^{n} w_i y_i}{n}\right)^2 \\
&= \frac{n^2}{n} \frac{1}{n-1} \sum_{i=1}^{n} \left(w_i y_i - \overline{wy}\right)^2 \\
&= \frac{n}{n-1} \sum_{i=1}^{n} \left(w_i y_i - \overline{wy}\right)^2
\end{aligned}
$$

And we got to
