In numerical analysis, the Newton–Cotes formulas, also called the Newton–Cotes quadrature rules or simply Newton–Cotes rules, are a group of formulas for numerical integration (also called quadrature) based on evaluating the integrand at equally spaced points. They are named after Isaac Newton and Roger Cotes.
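As a concrete illustration (a minimal sketch of my own; the integrand and interval are arbitrary choices, not from the text), Simpson's rule, the three-point closed Newton–Cotes formula, evaluates the integrand at the two endpoints and the midpoint:

```python
import math

# Simpson's rule: the closed Newton-Cotes formula on three equally spaced
# points a, (a+b)/2, b, with weights (h/3) * (1, 4, 1) where h = (b-a)/2.
def simpson(f, a, b):
    h = (b - a) / 2
    return (h / 3) * (f(a) + 4 * f(a + h) + f(b))

approx = simpson(math.sin, 0.0, math.pi)  # exact value of the integral is 2
```

Even with a single application on the whole interval, the rule integrates any cubic exactly, consistent with its error term involving the fourth derivative.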
Newton–Cotes formulas can be useful if the value of the integrand at equally spaced points is given. If it is possible to change the points at which the integrand is evaluated, then other methods such as Gaussian quadrature and Clenshaw–Curtis quadrature are probably more suitable. It is assumed that the value of a function f defined on {\displaystyle [a,b]}
{\displaystyle \int _{a}^{b}f(x)\,dx\approx \int _{a}^{b}L(x)\,dx=\int _{a}^{b}\left(\sum _{i=0}^{n}f(x_{i})l_{i}(x)\right)\,dx=\sum _{i=0}^{n}f(x_{i})\underbrace {\int _{a}^{b}l_{i}(x)\,dx} _{w_{i}}.} A Newton–Cotes formula of any degree n can be constructed. However, for large n
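These weights can be computed symbolically by expanding each Lagrange basis polynomial and integrating term by term. The sketch below (my own illustration; the function names are not from the text) does this with exact rational arithmetic on the reference interval [0, 1]:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def newton_cotes_weights(n):
    """Closed Newton-Cotes weights w_i = integral_0^1 l_i(x) dx
    for equally spaced nodes x_i = i/n on [0, 1]."""
    xs = [Fraction(i, n) for i in range(n + 1)]
    weights = []
    for i, xi in enumerate(xs):
        l = [Fraction(1)]  # coefficients of the basis polynomial l_i
        for j, xj in enumerate(xs):
            if j != i:
                # multiply by (x - x_j) / (x_i - x_j)
                l = poly_mul(l, [-xj / (xi - xj), Fraction(1) / (xi - xj)])
        # integrate sum_k c_k x^k over [0, 1]: sum_k c_k / (k + 1)
        weights.append(sum(c / (k + 1) for k, c in enumerate(l)))
    return weights
```

For n = 1 this reproduces the trapezoidal weights 1/2, 1/2, and for n = 2 Simpson's weights 1/6, 2/3, 1/6 (on a unit interval).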
{\displaystyle \int _{a}^{b}\omega (x){\frac {p_{n}(x)}{x-x_{i}}}dx={\frac {a_{n}}{a_{n-1}p_{n-1}(x_{i})}}\int _{a}^{b}\omega (x)p_{n-1}(x)^{2}dx} According to equation (2),
{\displaystyle \int _{a}^{b}\omega (x){\frac {x^{k}p_{n}(x)}{x-x_{i}}}dx=x_{i}^{k}\int _{a}^{b}\omega (x){\frac {p_{n}(x)}{x-x_{i}}}dx} provided {\displaystyle k\leq n}, because {\displaystyle {\frac {1-\left({\frac {x}{x_{i}}}\right)^{k}}{x-x_{i}}}}
{\displaystyle \int _{a}^{b}\omega (x){\frac {p_{n}(x)}{x-x_{i}}}dx={\frac {1}{q(x_{i})}}\int _{a}^{b}\omega (x){\frac {q(x)p_{n}(x)}{x-x_{i}}}dx} We can evaluate the integral on the right-hand side for {\displaystyle q(x)=p_{n-1}(x)} as follows. Because {\displaystyle {\frac {p_{n}(x)}{x-x_{i}}}}
{\displaystyle w_{i}=\int _{a}^{b}\omega (x)\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}{\frac {x-x_{j}}{x_{i}-x_{j}}}dx} This integral expression for {\displaystyle w_{i}} can be expressed in terms of
{\displaystyle \int _{a}^{b}\omega (x)\,h(x)\,dx=\int _{a}^{b}\omega (x)\,{\big (}\,p_{n}(x)q(x)+r(x)\,{\big )}\,dx=\int _{a}^{b}\omega (x)\,r(x)\,dx.} Since
{\displaystyle \int _{a}^{b}\omega (x)\,h(x)\,dx=\int _{a}^{b}\omega (x)\,r(x)\,dx=\sum _{i=1}^{n}w_{i}\,r(x_{i})=\sum _{i=1}^{n}w_{i}\,h(x_{i}).} This proves that for any polynomial h(x) of degree 2n − 1 or less, its integral
{\displaystyle \int _{a}^{b}\omega (x){\frac {p_{n}(x)}{x-x_{i}}}dx={\frac {a_{n}}{p_{n-1}(x_{i})}}\int _{a}^{b}\omega (x)p_{n-1}(x)x^{n-1}dx} We can then write
{\displaystyle \int _{a}^{b}\omega (x)\,r(x)\,dx=\int _{a}^{b}\omega (x)\,\sum _{i=1}^{n}l_{i}(x)\,r(x_{i})\,dx=\sum _{i=1}^{n}\,r(x_{i})\,\int _{a}^{b}\omega (x)\,l_{i}(x)\,dx=\sum _{i=1}^{n}\,r(x_{i})\,w_{i},} where {\displaystyle w_{i}},
{\displaystyle x_{0}=a} and {\displaystyle x_{n}=b}, i.e. they use the function values at the interval endpoints, and "open" when {\displaystyle x_{0}>a} and {\displaystyle x_{n}<b}, i.e. they do not use the function values at
{\displaystyle \int _{a}^{b}f(x)\,dx\approx {\frac {b-a}{2}}\sum _{i=1}^{n}w_{i}f\left({\frac {b-a}{2}}\xi _{i}+{\frac {a+b}{2}}\right).} Use the two-point Gauss quadrature rule to approximate the distance in meters covered by a rocket from {\displaystyle t=8\mathrm {s} } to {\displaystyle t=30\mathrm {s} ,} as given by {\displaystyle s=\int _{8}^{30}{\left(2000\ln \left[{\frac {140000}{140000-2100t}}\right]-9.8t\right){dt}}} Change
{\displaystyle \prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}\left(x-x_{j}\right)={\frac {\prod _{1\leq j\leq n}\left(x-x_{j}\right)}{x-x_{i}}}={\frac {p_{n}(x)}{a_{n}\left(x-x_{i}\right)}}} where {\displaystyle a_{n}} is the coefficient of {\displaystyle x^{n}} in {\displaystyle p_{n}(x)}. Taking
{\displaystyle x^{n-1}=\left(x^{n-1}-{\frac {p_{n-1}(x)}{a_{n-1}}}\right)+{\frac {p_{n-1}(x)}{a_{n-1}}}} The term in the brackets is a polynomial of degree {\displaystyle n-2}, which is therefore orthogonal to {\displaystyle p_{n-1}(x)}. The integral can thus be written as
{\displaystyle x_{i}=a+(i+1)h} where {\displaystyle h={\frac {b-a}{n+2}}}, and {\displaystyle f_{i}=f(x_{i})}. For the Newton–Cotes rules to be accurate, the step size h needs to be small, which means that
{\displaystyle \int _{a}^{b}f(x)\,dx=\int _{-1}^{1}f\left({\frac {b-a}{2}}\xi +{\frac {a+b}{2}}\right)\,{\frac {dx}{d\xi }}d\xi } with {\displaystyle {\frac {dx}{d\xi }}={\frac {b-a}{2}}} Applying
a Lagrange interpolating polynomial can be written in terms of the derivatives of the basis polynomials. Recall (see § Definition above) that each Lagrange basis polynomial is {\displaystyle {\begin{aligned}\ell _{j}(x)&=\prod _{\begin{smallmatrix}m=0\\m\neq j\end{smallmatrix}}^{k}{\frac {x-x_{m}}{x_{j}-x_{m}}}.\end{aligned}}} The first derivative can be found using
a Newton–Cotes rule can sometimes suffer from catastrophic Runge's phenomenon, where the error grows exponentially for large n. Methods such as Gaussian quadrature and Clenshaw–Curtis quadrature with unequally spaced points (clustered at the endpoints of the integration interval) are stable and much more accurate, and are normally preferred to Newton–Cotes. If these methods cannot be used, because
a given function f by a polynomial of degree k at the nodes {\displaystyle x_{0},...,x_{k}}, we get the remainder {\displaystyle R(x)=f(x)-L(x)}, which can be expressed as {\displaystyle R(x)=f[x_{0},\ldots ,x_{k},x]\prod _{i=0}^{k}(x-x_{i}),} where {\displaystyle f[x_{0},\ldots ,x_{k},x]}
a new function {\displaystyle F(x)=R(x)-{\tilde {R}}(x)=f(x)-L(x)-{\tilde {R}}(x)} and choose {\textstyle {\tilde {R}}(x)=C\cdot \prod _{i=0}^{k}(x-x_{i})} where {\displaystyle C}
a new node {\displaystyle x_{k+1}} by dividing each of the {\displaystyle w_{j}}, {\displaystyle j=0\dots k}, by {\displaystyle (x_{j}-x_{k+1})} and constructing
a problem in linear algebra amounting to inversion of a matrix. Using a standard monomial basis for our interpolation polynomial {\textstyle L(x)=\sum _{j=0}^{k}x^{j}m_{j}}, we must invert the Vandermonde matrix {\displaystyle (x_{i})^{j}} to solve {\displaystyle L(x_{i})=y_{i}} for
a set of {\textstyle k+1} nodes {\displaystyle \{x_{0},x_{1},\ldots ,x_{k}\}}, which must all be distinct, {\displaystyle x_{j}\neq x_{m}} for indices {\displaystyle j\neq m},
a slightly more general way by introducing a positive weight function ω into the integrand, and allowing an interval other than [−1, 1]. That is, the problem is to calculate {\displaystyle \int _{a}^{b}\omega (x)\,f(x)\,dx} for some choices of a, b, and ω. For a = −1, b = 1, and ω(x) = 1,
is {\displaystyle k+1}-times differentiable. Since {\displaystyle L(x)} and {\displaystyle {\tilde {R}}(x)} are polynomials, and therefore infinitely differentiable, {\displaystyle F(x)} will be {\displaystyle k+1}-times differentiable. By Rolle's theorem, {\displaystyle F^{(1)}(x)} has {\displaystyle k+1} zeroes, {\displaystyle F^{(2)}(x)} has {\displaystyle k} zeroes, and so on; {\displaystyle F^{(k+1)}} has one zero, say {\displaystyle \xi ,\,x_{0}<\xi <x_{k}}. Explicitly writing {\displaystyle F^{(k+1)}(\xi )=f^{(k+1)}(\xi )-C\,(k+1)!=0,} the equation can be rearranged as {\displaystyle C={\frac {f^{(k+1)}(\xi )}{(k+1)!}}.} Since {\displaystyle F(x_{p})=0} we have {\displaystyle R(x_{p})={\tilde {R}}(x_{p})={\frac {f^{(k+1)}(\xi )}{(k+1)!}}\prod _{i=0}^{k}(x_{p}-x_{i})} The d th derivative of
is {\displaystyle \ell (x)=(x-1)(x-2)(x-3).} The barycentric weights are {\displaystyle w_{0}={\tfrac {1}{2}},\quad w_{1}=-1,\quad w_{2}={\tfrac {1}{2}}.} The Lagrange basis polynomials are {\displaystyle \ell _{0}(x)={\tfrac {1}{2}}(x-2)(x-3),\quad \ell _{1}(x)=-(x-1)(x-3),\quad \ell _{2}(x)={\tfrac {1}{2}}(x-1)(x-2).} The Lagrange interpolating polynomial is {\displaystyle L(x)=1\cdot \ell _{0}(x)+4\cdot \ell _{1}(x)+9\cdot \ell _{2}(x)=x^{2}.} In (second) barycentric form, {\displaystyle L(x)={\frac {{\frac {1/2}{x-1}}\cdot 1+{\frac {-1}{x-2}}\cdot 4+{\frac {1/2}{x-3}}\cdot 9}{{\frac {1/2}{x-1}}+{\frac {-1}{x-2}}+{\frac {1/2}{x-3}}}}.} The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from
is a polynomial of degree k − 1, which is then orthogonal to {\displaystyle p_{n}(x)}. So, if q(x) is a polynomial of at most nth degree we have
is a polynomial of degree n − 1, we have {\displaystyle {\frac {p_{n}(x)}{x-x_{i}}}=a_{n}x^{n-1}+s(x)} where s(x) is a polynomial of degree {\displaystyle n-2}. Since s(x)
is called the step size and the {\displaystyle w_{i}} are called weights. The weights can be computed as the integrals of the Lagrange basis polynomials. They depend only on the {\displaystyle x_{i}} and not on the function f. Let {\displaystyle L(x)} be the interpolation polynomial in
is called a composite rule. See Numerical integration.

Gaussian quadrature

In numerical analysis, an n-point Gaussian quadrature rule, named after Carl Friedrich Gauss, is a quadrature rule constructed to yield an exact result for polynomials of degree 2n − 1 or less by a suitable choice of the nodes x_i and weights w_i for i = 1, ..., n. The modern formulation using orthogonal polynomials
is called the second form or true form of the barycentric interpolation formula. This second form has advantages in computation cost and accuracy: it avoids evaluation of {\displaystyle \ell (x)}; the work to compute each term in the denominator {\displaystyle w_{j}/(x-x_{j})} has already been done in computing {\displaystyle {\bigl (}w_{j}/(x-x_{j}){\bigr )}y_{j}} and so computing
is exact for polynomials of degree 2n − 1 or less. This exact rule is known as the Gauss–Legendre quadrature rule. The quadrature rule will only be an accurate approximation to the integral above if f(x) is well-approximated by a polynomial of degree 2n − 1 or less on [−1, 1]. The Gauss–Legendre quadrature rule is not typically used for integrable functions with endpoint singularities. Instead, if
is given exactly by the Gaussian quadrature sum. To prove the second part of the claim, consider the factored form of the polynomial p_n. Any complex conjugate roots will yield a quadratic factor that is either strictly positive or strictly negative over the entire real line. Any factors for roots outside the interval from a to b will not change sign over that interval. Finally, for factors corresponding to roots x_i inside
is known at {\displaystyle n+1} equally spaced points: {\displaystyle a\leq x_{0}<x_{1}<\dots <x_{n}\leq b}. There are two classes of Newton–Cotes quadrature: they are called "closed" when
is orthogonal to {\displaystyle p_{n-1}(x)} we have
is orthogonal to all polynomials of degree n − 1 or less, so the degree of the product {\displaystyle \prod _{i}(x-x_{i})} must be at least n. Therefore p_n has n distinct roots, all real, in the interval from a to b. The weights can be expressed as {\displaystyle w_{i}={\frac {a_{n}}{a_{n-1}}}{\frac {\int _{a}^{b}\omega (x)p_{n-1}(x)^{2}dx}{p'_{n}(x_{i})p_{n-1}(x_{i})}}\qquad (1)} where {\displaystyle a_{k}}
is sometimes mistakenly called Bode's rule, as a result of the propagation of a typographical error in Abramowitz and Stegun, an early reference book. The exponent of the step size h in the error term gives the rate at which the approximation error decreases. The order of the derivative of f in the error term gives the lowest degree of a polynomial which can no longer be integrated exactly (i.e. with error equal to zero) with this rule. The number {\displaystyle \xi } must be taken from
is the coefficient of {\displaystyle x^{k}} in {\displaystyle p_{k}(x)}. To prove this, note that using Lagrange interpolation one can express r(x) in terms of {\displaystyle r(x_{i})} as {\displaystyle r(x)=\sum _{i=1}^{n}r(x_{i})\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}{\frac {x-x_{j}}{x_{i}-x_{j}}}} because r(x) has degree less than n and
is the constant we are required to determine for a given {\displaystyle x_{p}}. We choose {\displaystyle C} so that {\displaystyle F(x)} has {\displaystyle k+2} zeroes (at all nodes and {\displaystyle x_{p}}) between {\displaystyle x_{0}} and {\displaystyle x_{k}} (including endpoints). Assuming that {\displaystyle f(x)}
is the linear combination: {\displaystyle L(x)=\sum _{j=0}^{k}y_{j}\ell _{j}(x).} Each basis polynomial has degree {\textstyle k}, so the sum {\textstyle L(x)} has degree {\textstyle \leq k}, and it interpolates
is the notation for divided differences. Alternatively, the remainder can be expressed as a contour integral in the complex domain, and it can be bounded accordingly. Clearly, {\displaystyle R(x)} is zero at the nodes. To find {\displaystyle R(x)} at a point {\displaystyle x_{p}}, define
is the unique polynomial of degree {\displaystyle \leq k} interpolating the data {\textstyle \{(x_{0},1),(x_{1},1),\ldots ,(x_{k},1)\}.} We can thus further simplify the barycentric formula by dividing {\displaystyle L(x)=L(x)/g(x)\colon } This
is thus fixed by the values it attains at n different points. Multiplying both sides by ω(x) and integrating from a to b yields {\displaystyle \int _{a}^{b}\omega (x)r(x)dx=\sum _{i=1}^{n}r(x_{i})\int _{a}^{b}\omega (x)\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}{\frac {x-x_{j}}{x_{i}-x_{j}}}dx} The weights w_i are thus given by
is unique. Proof: assume the polynomial {\textstyle M(x)} of degree {\textstyle \leq k} interpolates the data. Then the difference {\textstyle M(x)-L(x)} is zero at {\textstyle k+1} distinct nodes {\textstyle \{x_{0},x_{1},\ldots ,x_{k}\}.} But
is well-approximated by a low-degree polynomial, then alternative nodes x_i' and weights w_i' will usually give more accurate quadrature rules. These are known as Gauss–Jacobi quadrature rules, i.e., {\displaystyle \int _{-1}^{1}f(x)\,dx=\int _{-1}^{1}\left(1-x\right)^{\alpha }\left(1+x\right)^{\beta }g(x)\,dx\approx \sum _{i=1}^{n}w_{i}'g\left(x_{i}'\right).} Common weights include {\textstyle {\frac {1}{\sqrt {1-x^{2}}}}} (Chebyshev–Gauss) and {\textstyle {\sqrt {1-x^{2}}}}. One may also want to integrate over semi-infinite (Gauss–Laguerre quadrature) and infinite intervals (Gauss–Hermite quadrature). It can be shown (see Press et al., or Stoer and Bulirsch) that
the {\displaystyle y_{j}} are called values. The Lagrange polynomial {\displaystyle L(x)} has degree {\textstyle \leq k} and assumes each value at the corresponding node, {\displaystyle L(x_{j})=y_{j}.} Although named after Joseph-Louis Lagrange, who published it in 1795,
the {\displaystyle n}-point Gaussian quadrature {\displaystyle (\xi ,w)} rule then results in the following approximation: {\displaystyle \int _{a}^{b}f(x)\,dx\approx {\frac {b-a}{2}}\sum _{i=1}^{n}w_{i}f\left({\frac {b-a}{2}}\xi _{i}+{\frac {a+b}{2}}\right).}
the Kronecker delta this can be written {\textstyle \ell _{j}(x_{m})=\delta _{jm}.} Each basis polynomial can be explicitly described by the product: {\displaystyle {\begin{aligned}\ell _{j}(x)&={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{k})}{(x_{j}-x_{k})}}\\[8mu]&=\prod _{\begin{smallmatrix}0\leq m\leq k\\m\neq j\end{smallmatrix}}{\frac {x-x_{m}}{x_{j}-x_{m}}}.\end{aligned}}} Notice that
the Lagrange basis for polynomials of degree {\textstyle \leq k} for those nodes is the set of polynomials {\textstyle \{\ell _{0}(x),\ell _{1}(x),\ldots ,\ell _{k}(x)\}} each of degree {\textstyle k} which take values {\textstyle \ell _{j}(x_{m})=0} if {\textstyle m\neq j} and {\textstyle \ell _{j}(x_{j})=1}. Using
the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data. Given a data set of coordinate pairs {\displaystyle (x_{j},y_{j})} with {\displaystyle 0\leq j\leq k,} the {\displaystyle x_{j}} are called nodes and
the barycentric weight), and a part representing the displacement from {\textstyle x_{j}} to {\textstyle x}: {\displaystyle \ell _{j}(x)=\ell (x){\dfrac {w_{j}}{x-x_{j}}}} By factoring {\textstyle \ell (x)} out from
the identity matrix, {\displaystyle \delta _{ij}}, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix. This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by linears. Furthermore, when
the product rule; the second and third derivatives follow similarly, and likewise for higher derivatives. Note that all of these formulas for derivatives are invalid at or near a node. A method of evaluating all orders of derivatives of a Lagrange polynomial efficiently at all points of the domain, including the nodes, is converting the Lagrange polynomial to power basis form and then evaluating
the 3-term recurrence relation {\displaystyle p_{n+1}(x_{i})=(a)p_{n}(x_{i})+(b)p_{n-1}(x_{i})} the term with {\displaystyle p_{n}(x_{i})} vanishes, so {\displaystyle p_{n-1}(x_{i})} in Eq. (1) can be replaced by {\textstyle {\frac {1}{b}}p_{n+1}\left(x_{i}\right)}.

Lagrange polynomial

In numerical analysis,
the Lagrange form for the given data points {\displaystyle (x_{0},f(x_{0})),(x_{1},f(x_{1})),\ldots ,(x_{n},f(x_{n}))}, then
the associated orthogonal polynomials are Legendre polynomials, denoted by P_n(x). With the n-th polynomial normalized to give P_n(1) = 1, the i-th Gauss node, x_i, is the i-th root of P_n and the weights are given by the formula {\displaystyle w_{i}={\frac {2}{\left(1-x_{i}^{2}\right)\left[P'_{n}(x_{i})\right]^{2}}}.} Some low-order quadrature rules are tabulated below (over interval [−1, 1], see
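The nodes and weights can be computed directly from this characterization: find the roots of P_n with Newton's method and apply the weight formula above. A sketch of my own (the recurrence, derivative identity, and cosine initial guesses are standard choices, not from the text):

```python
import math

def legendre_and_derivative(n, x):
    """P_n(x) via the three-term recurrence, and P_n'(x) from the identity
    P_n'(x) = n * (x * P_n(x) - P_{n-1}(x)) / (x^2 - 1)."""
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    dp = n * (x * p - p_prev) / (x * x - 1.0)
    return p, dp

def gauss_legendre(n):
    """Nodes (roots of P_n) and weights w_i = 2 / ((1 - x_i^2) P_n'(x_i)^2)."""
    nodes, weights = [], []
    for i in range(n):
        x = math.cos(math.pi * (i + 0.75) / (n + 0.5))  # classical initial guess
        for _ in range(50):  # Newton's method on P_n
            p, dp = legendre_and_derivative(n, x)
            step = p / dp
            x -= step
            if abs(step) < 1e-15:
                break
        _, dp = legendre_and_derivative(n, x)
        nodes.append(x)
        weights.append(2.0 / ((1.0 - x * x) * dp * dp))
    return nodes, weights
```

For n = 3 this reproduces the familiar rule with nodes ±√(3/5) and 0 and weights 5/9 and 8/9, which integrates polynomials up to degree 5 exactly.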
the closed type. For {\displaystyle 0\leq i\leq n}, let {\displaystyle x_{i}=a+ih} where {\displaystyle h={\frac {b-a}{n}}}, and {\displaystyle f_{i}=f(x_{i})}. Boole's rule
the coefficients {\displaystyle m_{j}} of {\displaystyle L(x)}. By choosing a better basis, the Lagrange basis, {\textstyle L(x)=\sum _{j=0}^{k}l_{j}(x)y_{j}}, we merely get
the data because {\textstyle L(x_{m})=\sum _{j=0}^{k}y_{j}\ell _{j}(x_{m})=\sum _{j=0}^{k}y_{j}\delta _{mj}=y_{m}.} The interpolating polynomial
the endpoints. Newton–Cotes formulas using {\displaystyle n+1} points can be defined (for both classes) as {\displaystyle \int _{a}^{b}f(x)\,dx\approx \sum _{i=0}^{n}w_{i}\,f(x_{i}),} where The number h
the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes. The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas. When interpolating
the integral {\displaystyle \int _{a}^{b}p_{n}(x)\,\left(\prod _{i}(x-x_{i})\right)\,\omega (x)\,dx\neq 0,} since the weight function ω(x) is always non-negative. But p_n
the integral expression for the weights as {\displaystyle w_{i}={\frac {1}{p'_{n}(x_{i})}}\int _{a}^{b}\omega (x){\frac {p_{n}(x)}{x-x_{i}}}dx.} In the integrand, writing {\displaystyle {\frac {1}{x-x_{i}}}={\frac {1-\left({\frac {x}{x_{i}}}\right)^{k}}{x-x_{i}}}+\left({\frac {x}{x_{i}}}\right)^{k}{\frac {1}{x-x_{i}}}} yields
the integrand can be written as {\displaystyle f(x)=\left(1-x\right)^{\alpha }\left(1+x\right)^{\beta }g(x),\quad \alpha ,\beta >-1,} where g(x)
the integrand is only given at the fixed equidistributed grid, then Runge's phenomenon can be avoided by using a composite rule, as explained below. Alternatively, stable Newton–Cotes formulas can be constructed using least-squares approximation instead of interpolation. This allows building numerically stable formulas even for high degrees. This table lists some of the Newton–Cotes formulas of
the interval (a, b); therefore, the error bound is equal to the error term when {\displaystyle f(\xi )=\max(f(x)),\ a<x<b}. This table lists some of the Newton–Cotes formulas of the open type. For {\displaystyle 0\leq i\leq n}, let
the interval from a to b that are of odd multiplicity, multiply p_n by one more factor to make a new polynomial {\displaystyle p_{n}(x)\,\prod _{i}(x-x_{i}).} This polynomial cannot change sign over the interval from a to b because all its roots there are now of even multiplicity. So the
the interval of integration {\displaystyle [a,b]} must be small itself, which is not true most of the time. For this reason, one usually performs numerical integration by splitting {\displaystyle [a,b]} into smaller subintervals, applying a Newton–Cotes rule on each subinterval, and adding up the results. This
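This splitting can be sketched as follows (an illustrative composite trapezoidal rule; the function names and test integrand are my own choices, not from the text):

```python
import math

def composite_trapezoid(f, a, b, m):
    """Split [a, b] into m equal subintervals and apply the trapezoidal
    rule (the two-point closed Newton-Cotes formula) on each one."""
    h = (b - a) / m
    interior = sum(f(a + i * h) for i in range(1, m))
    return h * (0.5 * (f(a) + f(b)) + interior)

approx = composite_trapezoid(math.sin, 0.0, math.pi, 1000)  # exact value is 2
```

The error of the composite trapezoidal rule falls off as h², so refining the grid by a factor of 10 reduces the error by roughly a factor of 100.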
the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant. But, as can be seen from the construction, each time a node x_k changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see below) or Newton polynomials. Lagrange and other interpolation at equally spaced points, as in
the limit of x to {\displaystyle x_{i}} yields, using L'Hôpital's rule, {\displaystyle \prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq i\end{smallmatrix}}\left(x_{i}-x_{j}\right)={\frac {p'_{n}(x_{i})}{a_{n}}}} We can thus write
the limits so that one can use the weights and abscissae given in Table 1. Also, find the absolute relative true error. The true value is given as 11061.34 m.

Solution: First, changing the limits of integration from {\displaystyle \left[8,30\right]} to {\displaystyle \left[-1,1\right]} gives {\displaystyle {\begin{aligned}\int _{8}^{30}{f(t)dt}&={\frac {30-8}{2}}\int _{-1}^{1}{f\left({\frac {30-8}{2}}x+{\frac {30+8}{2}}\right){dx}}\\&=11\int _{-1}^{1}{f\left(11x+19\right){dx}}\end{aligned}}} Next, get
the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler. Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration, Shamir's secret sharing scheme in cryptography, and Reed–Solomon error correction in coding theory. For equispaced nodes, Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation. Given
the new {\displaystyle w_{k+1}} as above. For any {\textstyle x,} {\textstyle \sum _{j=0}^{k}\ell _{j}(x)=1} because the constant function {\textstyle g(x)=1}
the numerator {\textstyle \prod _{m\neq j}(x-x_{m})} has {\textstyle k} roots at the nodes {\textstyle \{x_{m}\}_{m\neq j}} while the denominator {\textstyle \prod _{m\neq j}(x_{j}-x_{m})} scales
the only polynomial of degree {\textstyle \leq k} with more than {\textstyle k} roots is the constant zero function, so {\textstyle M(x)-L(x)=0,} or {\textstyle M(x)=L(x).} Each Lagrange basis polynomial {\textstyle \ell _{j}(x)} can be rewritten as
the open interval (a, b). To prove the first part of this claim, let h(x) be any polynomial of degree 2n − 1 or less. Divide it by the orthogonal polynomial p_n to get {\displaystyle h(x)=p_{n}(x)\,q(x)+r(x),} where q(x) is the quotient, of degree n − 1 or less (because
the order is large, Fast Fourier transformation can be used to solve for the coefficients of the interpolated polynomial. We wish to interpolate {\displaystyle f(x)=x^{2}} over the domain {\displaystyle 1\leq x\leq 3} at the three nodes {\displaystyle \{1,\,2,\,3\}}: The node polynomial ℓ
the orthogonal polynomials {\displaystyle p_{n}(x)} and {\displaystyle p_{n-1}(x)} as follows. We can write
the orthogonal polynomials above, because each p_n is constructed to be orthogonal to the other polynomials p_j for j < n, and x^k is in the span of that set. If we pick the n nodes x_i to be the zeros of p_n, then there exist n weights w_i which make the Gaussian quadrature computed integral exact for all polynomials h(x) of degree 2n − 1 or less. Furthermore, all these nodes x_i will lie in
the problem is the same as that considered above. Other choices lead to other integration rules. Some of these are tabulated below. Equation numbers are given for Abramowitz and Stegun (A & S). Let p_n be a nontrivial polynomial of degree n such that {\displaystyle \int _{a}^{b}\omega (x)\,x^{k}p_{n}(x)\,dx=0,\quad {\text{for all }}k=0,1,\ldots ,n-1.} Note that this will be true for all
the product of three parts, a function {\textstyle \ell (x)=\prod _{m}(x-x_{m})} common to every basis polynomial, a node-specific constant {\textstyle w_{j}=\prod _{m\neq j}(x_{j}-x_{m})^{-1}} (called
the quadrature nodes x_i are the roots of a polynomial belonging to a class of orthogonal polynomials (the class orthogonal with respect to a weighted inner-product). This is a key observation for computing Gauss quadrature nodes and weights. For the simplest integration problem stated above, i.e., f(x) is well-approximated by polynomials on {\displaystyle [-1,1]},
the remainder r(x) is of degree n − 1 or less, we can interpolate it exactly using n interpolation points with Lagrange polynomials l_i(x), where {\displaystyle l_{i}(x)=\prod _{j\neq i}{\frac {x-x_{j}}{x_{i}-x_{j}}}.} We have {\displaystyle r(x)=\sum _{i=1}^{n}l_{i}(x)\,r(x_{i}).} Then its integral will equal
the resulting polynomial so that {\textstyle \ell _{j}(x_{j})=1.} The Lagrange interpolating polynomial for those nodes through the corresponding values {\displaystyle \{y_{0},y_{1},\ldots ,y_{k}\}}
the section below for other intervals). An integral over [a, b] must be changed into an integral over [−1, 1] before applying the Gaussian quadrature rule. This change of interval can be done in the following way: {\displaystyle \int _{a}^{b}f(x)\,dx=\int _{-1}^{1}f\left({\frac {b-a}{2}}\xi +{\frac {a+b}{2}}\right)\,{\frac {dx}{d\xi }}\,d\xi }
the sum in the denominator costs only {\textstyle k} addition operations; for evaluation points {\textstyle x} which are close to one of the nodes {\textstyle x_{j}}, catastrophic cancellation would ordinarily be a problem for the value {\textstyle (x-x_{j})}; however, this quantity appears in both numerator and denominator and
the sum of its degree and that of the divisor p_n must equal that of the dividend), and r(x) is the remainder, also of degree n − 1 or less (because the degree of the remainder is always less than that of the divisor). Since p_n is by assumption orthogonal to all monomials of degree less than n, it must be orthogonal to the quotient q(x). Therefore
the sum, we can write the Lagrange polynomial in the so-called first barycentric form: {\displaystyle L(x)=\ell (x)\sum _{j=0}^{k}{\frac {w_{j}}{x-x_{j}}}y_{j}.} If the weights {\displaystyle w_{j}} have been pre-computed, this requires only {\displaystyle {\mathcal {O}}(k)} operations compared to {\displaystyle {\mathcal {O}}(k^{2})} for evaluating each Lagrange basis polynomial {\displaystyle \ell _{j}(x)} individually. The barycentric interpolation formula can also easily be updated to incorporate
the true value is 11061.34 m, the absolute relative true error, {\displaystyle \left|\varepsilon _{t}\right|}, is {\displaystyle \left|\varepsilon _{t}\right|=\left|{\frac {11061.34-11058.44}{11061.34}}\right|\times 100\%=0.0262\%} The integration problem can be expressed in
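The arithmetic of this worked rocket example can be checked in a few lines (a sketch reproducing the steps above; the variable names are my own):

```python
import math

def f(t):
    """Rocket velocity integrand from the worked example, in m/s."""
    return 2000 * math.log(140000 / (140000 - 2100 * t)) - 9.8 * t

# Map [8, 30] onto [-1, 1]; the two-point Gauss rule has nodes +/- 1/sqrt(3),
# each with weight 1.
half = (30 - 8) / 2   # 11
mid = (30 + 8) / 2    # 19
node = 1 / math.sqrt(3)
distance = half * (f(-node * half + mid) + f(node * half + mid))
```

This reproduces a distance of roughly 11058.44 m and hence the 0.0262% relative error against the stated true value of 11061.34 m.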
the two cancel, leaving good relative accuracy in the final result. Using this formula to evaluate {\displaystyle L(x)} at one of the nodes {\displaystyle x_{j}} will result in the indeterminate {\displaystyle \infty y_{j}/\infty }; computer implementations must replace such results by {\displaystyle L(x_{j})=y_{j}.} Each Lagrange basis polynomial can also be written in barycentric form: Solving an interpolation problem leads to
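A minimal evaluator along these lines, including the node-coincidence guard just described (a sketch; the names are my own, not from the text):

```python
import math

def barycentric_weights(xs):
    """Pre-compute w_j = 1 / prod_{m != j} (x_j - x_m)."""
    return [
        1.0 / math.prod(xj - xm for m, xm in enumerate(xs) if m != j)
        for j, xj in enumerate(xs)
    ]

def barycentric_eval(xs, ys, ws, x):
    """Second (true) barycentric form; returns y_j directly when x is a
    node, avoiding the 0/0 indeterminate."""
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, ws):
        if x == xj:
            return yj
        t = wj / (x - xj)
        num += t * yj
        den += t
    return num / den
```

With the weights pre-computed once, each evaluation costs O(k) operations, as discussed above.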
the weight associated with the node x_i, is defined to equal the weighted integral of l_i(x) (see below for other formulas for the weights). But all the x_i are roots of p_n, so the division formula above tells us that {\displaystyle h(x_{i})=p_{n}(x_{i})\,q(x_{i})+r(x_{i})=r(x_{i}),} for all i. Thus we finally have
the weighting factors and function argument values from Table 1 for the two-point rule. Now we can use the Gauss quadrature formula {\displaystyle {\begin{aligned}11\int _{-1}^{1}{f\left(11x+19\right){dx}}&\approx 11\left[c_{1}f\left(11x_{1}+19\right)+c_{2}f\left(11x_{2}+19\right)\right]\\&=11\left[f\left(11(-0.5773503)+19\right)+f\left(11(0.5773503)+19\right)\right]\\&=11\left[f(12.64915)+f(25.35085)\right]\\&=11\left[(296.8317)+(708.4811)\right]\\&=11058.44\end{aligned}}} since {\displaystyle {\begin{aligned}f(12.64915)&=2000\ln \left[{\frac {140000}{140000-2100(12.64915)}}\right]-9.8(12.64915)\\&=296.8317\end{aligned}}} {\displaystyle {\begin{aligned}f(25.35085)&=2000\ln \left[{\frac {140000}{140000-2100(25.35085)}}\right]-9.8(25.35085)\\&=708.4811\end{aligned}}} Given that
the weights are obtained by dividing this by {\displaystyle p'_{n}(x_{i})} and that yields the expression in equation (1). {\displaystyle w_{i}} can also be expressed in terms of the orthogonal polynomials {\displaystyle p_{n}(x)} and now {\displaystyle p_{n+1}(x)}. In
was developed by Carl Gustav Jacobi in 1826. The most common domain of integration for such a rule is taken as [−1, 1], so the rule is stated as {\displaystyle \int _{-1}^{1}f(x)\,dx\approx \sum _{i=1}^{n}w_{i}f(x_{i}),} which