
FKF

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

The fast Kalman filter (FKF), devised by Antti Lange (born 1941), is an extension of the Helmert–Wolf blocking (HWB) method from geodesy to safety-critical real-time applications of Kalman filtering (KF), such as GNSS navigation at the centimetre level of accuracy and satellite imaging of the Earth, including atmospheric tomography.


FKF may refer to:

- Fast Kalman filter
- Football Kenya Federation
- Free Knowledge Foundation, in Spain
- Fylkeskommunalt foretak, a county-municipal business enterprise in Norway
- Libertarian Municipal People (Swedish: Frihetliga Kommunalfolket), a Swedish political party

This disambiguation page lists articles associated with the title FKF.

A Numerical Weather Prediction (NWP) system can now forecast observations with confidence intervals, so its operational quality control can be improved. A sudden increase in the uncertainty of predicted observations indicates either that important observations are missing (an observability problem) or that an unpredictable change of weather is taking place (a controllability problem). Remote sensing and imaging from satellites are partly based on forecasted information. Controlling stability of

A bilinear form, it yields the covariance between the two linear combinations: $\mathbf{d}^{\mathsf{T}}\boldsymbol{\Sigma}\mathbf{c} = \operatorname{cov}(\mathbf{d}^{\mathsf{T}}\mathbf{X}, \mathbf{c}^{\mathsf{T}}\mathbf{X})$. The variance of
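
A quick numerical check of this bilinear-form identity (a minimal sketch using NumPy; the array names and the particular covariance matrix are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 100_000                      # dimension and number of samples
X = rng.multivariate_normal(np.zeros(n),
                            [[2.0, 0.5, 0.3],
                             [0.5, 1.0, 0.2],
                             [0.3, 0.2, 1.5]], size=m)   # rows are samples

c = np.array([1.0, -2.0, 0.5])
d = np.array([0.3, 0.7, -1.0])

Sigma = np.cov(X, rowvar=False)        # sample covariance matrix
lhs = d @ Sigma @ c                    # d^T Sigma c
rhs = np.cov(X @ d, X @ c)[0, 1]       # cov(d^T X, c^T X) estimated from samples
print(lhs, rhs)                        # the two values agree
```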

a complex scalar-valued random variable with expected value $\mu$ is conventionally defined using complex conjugation: $\operatorname{var}(Z) = \operatorname{E}\left[(Z-\mu_{Z})\,\overline{(Z-\mu_{Z})}\right]$, where

a shows $\langle \mathbf{X}\mathbf{Y}^{\mathsf{T}}\rangle$, panel b shows $\langle \mathbf{X}\rangle\langle \mathbf{Y}^{\mathsf{T}}\rangle$ and panel c shows their difference, which

a 2-dimensional map. When vectors $\mathbf{X}$ and $\mathbf{Y}$ are discrete random functions, the map shows statistical relations between different regions of the random functions. Statistically independent regions of the functions show up on the map as zero-level flatland, while positive or negative correlations show up, respectively, as hills or valleys. In practice

a covariance matrix in its mechanism. The characteristic mutation operator draws the update step from a multivariate normal distribution using an evolving covariance matrix. There is a formal proof that the evolution strategy's covariance matrix adapts to the inverse of the Hessian matrix of the search landscape, up to a scalar factor and small random fluctuations (proven for a single-parent strategy and

a different point of view, to find an optimal basis for representing the data in a compact way (see Rayleigh quotient for a formal proof and additional properties of covariance matrices). This is called principal component analysis (PCA) and the Karhunen–Loève transform (KL-transform). The covariance matrix plays a key role in financial economics, especially in portfolio theory and its mutual fund separation theorem and in

a linear combination is then $\mathbf{c}^{\mathsf{T}}\boldsymbol{\Sigma}\mathbf{c}$, its covariance with itself. Similarly, the (pseudo-)inverse covariance matrix provides an inner product $\langle c-\mu \mid \Sigma^{+} \mid c-\mu\rangle$, which induces

a narrow window of data (i.e. too few measurements) is continuously used by a Kalman filter. Observing instruments onboard orbiting satellites give an example of optimal Kalman filtering where their calibration is done indirectly on the ground. There may also exist other state parameters that are hardly, or not at all, observable if too small a sample of data is processed at a time by any sort of Kalman filter. The computing load of

a number of other dualities between marginalizing and conditioning for Gaussian random variables. For $\operatorname{K}_{\mathbf{XX}} = \operatorname{var}(\mathbf{X}) = \operatorname{E}\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{\mathsf{T}}\right]$ and $\boldsymbol{\mu}_{\mathbf{X}} = \operatorname{E}[\mathbf{X}]$, where $\mathbf{X} = (X_{1},\ldots,X_{n})^{\mathsf{T}}$


a row vector of explanatory variables $\mathbf{X}^{\mathsf{T}}$ rather than pre-multiplying a column vector $\mathbf{X}$. In this form they correspond to the coefficients obtained by inverting the matrix of the normal equations of ordinary least squares (OLS). A covariance matrix with all non-zero elements tells us that all

a smooth spectrum $\langle \mathbf{X}(t)\rangle$, which is shown in red at the bottom of Fig. 1. The average spectrum $\langle \mathbf{X}\rangle$ reveals several nitrogen ions in the form of peaks broadened by their kinetic energy, but to find

a static model, as the population size increases, relying on the quadratic approximation). Intuitively, this result is supported by the rationale that the optimal covariance distribution can offer mutation steps whose equidensity probability contours match the level sets of the landscape, and so they maximize the progress rate. In covariance mapping the values of the $\operatorname{cov}(\mathbf{X},\mathbf{Y})$ or $\operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I})$ matrix are plotted as

a wide range of systems, including real-time imaging. The ordinary Kalman filter is an optimal filtering algorithm for linear systems. However, an optimal Kalman filter is not stable (i.e. reliable) if Kalman's observability and controllability conditions are not continuously satisfied. These conditions are very challenging to maintain for any larger system, which means that even optimal Kalman filters may start diverging towards false solutions. Fortunately,

is $\operatorname{cov}(\mathbf{X},\mathbf{Y})$ (note a change in the colour scale). Unfortunately, this map is overwhelmed by uninteresting, common-mode correlations induced by laser intensity fluctuating from shot to shot. To suppress such correlations the laser intensity $I_{j}$

is jointly normally distributed, or more generally elliptically distributed, then its probability density function $f(\mathbf{X})$ can be expressed in terms of the covariance matrix $\boldsymbol{\Sigma}$ as follows:

$$f(\mathbf{X}) = (2\pi)^{-n/2}\,|\boldsymbol{\Sigma}|^{-1/2}\exp\left(-\tfrac{1}{2}(\mathbf{X}-\boldsymbol{\mu})^{\mathsf{T}}\boldsymbol{\Sigma}^{-1}(\mathbf{X}-\boldsymbol{\mu})\right),$$

where $\boldsymbol{\mu} = \operatorname{E}[\mathbf{X}]$ and $|\boldsymbol{\Sigma}|$
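
The density above can be evaluated directly from the formula; a minimal sketch (assuming NumPy and SciPy are available, with illustrative values for $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$) that cross-checks against SciPy's reference implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 0.5]])
x = np.array([0.5, -1.5])

# Density written out exactly as in the displayed formula.
n = len(mu)
diff = x - mu
density = ((2 * np.pi) ** (-n / 2)
           * np.linalg.det(Sigma) ** (-0.5)
           * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)))

# Cross-check against SciPy's multivariate normal PDF.
print(density, multivariate_normal(mean=mu, cov=Sigma).pdf(x))
```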

is an $n$-dimensional random variable, the following basic properties apply. The joint mean $\boldsymbol{\mu}$ and joint covariance matrix $\boldsymbol{\Sigma}$ of $\mathbf{X}$ and $\mathbf{Y}$ can be written in block form

$$\boldsymbol{\mu} = \begin{bmatrix}\boldsymbol{\mu}_{X}\\ \boldsymbol{\mu}_{Y}\end{bmatrix}, \qquad \boldsymbol{\Sigma} = \begin{bmatrix}\operatorname{K}_{\mathbf{XX}} & \operatorname{K}_{\mathbf{XY}}\\ \operatorname{K}_{\mathbf{YX}} & \operatorname{K}_{\mathbf{YY}}\end{bmatrix},$$

where $\operatorname{K}_{\mathbf{XX}} = \operatorname{var}(\mathbf{X})$, $\operatorname{K}_{\mathbf{YY}} = \operatorname{var}(\mathbf{Y})$ and $\operatorname{K}_{\mathbf{XY}} = \operatorname{K}_{\mathbf{YX}}^{\mathsf{T}} = \operatorname{cov}(\mathbf{X},\mathbf{Y})$. $\operatorname{K}_{\mathbf{XX}}$ and $\operatorname{K}_{\mathbf{YY}}$ can be identified as

is a $p\times p$ symmetric positive-semidefinite matrix. From the finite-dimensional case of the spectral theorem, it follows that $M$ has a nonnegative symmetric square root, which can be denoted by $M^{1/2}$. Let $\mathbf{X}$ be any $p\times 1$ column vector-valued random variable whose covariance matrix

is a column vector of complex-valued random variables, then the conjugate transpose $\mathbf{Z}^{\mathsf{H}}$ is formed by both transposing and conjugating. In the following expression, the product of a vector with its conjugate transpose results in a square matrix called the covariance matrix, as its expectation:

$$\operatorname{K}_{\mathbf{ZZ}} = \operatorname{cov}[\mathbf{Z},\mathbf{Z}] = \operatorname{E}\left[(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}})(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}})^{\mathsf{H}}\right].$$

The matrix so obtained will be Hermitian positive-semidefinite, with real numbers in
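
A small numerical illustration of this definition (a sketch with simulated complex data; the sample layout and names are assumptions, not from the article): estimate $\operatorname{K}_{\mathbf{ZZ}}$ with the conjugate transpose and confirm it is Hermitian with a real diagonal.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50_000, 3                                   # samples, dimension
Z = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))   # complex samples, one per row

Zc = Z - Z.mean(axis=0)                            # centre each component
K_ZZ = Zc.T @ Zc.conj() / (m - 1)                  # sample version of E[(Z-mu)(Z-mu)^H]

print(np.allclose(K_ZZ, K_ZZ.conj().T))            # Hermitian
print(np.allclose(K_ZZ.diagonal().imag, 0.0))      # real numbers on the main diagonal
```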

is an inversion method for solving sparse linear equations (Wolf, 1978). The sparse coefficient matrix to be inverted often has either a bordered block-diagonal or a band-diagonal (BBD) structure. If it is band-diagonal, it can be transformed into block-diagonal form, e.g. by means of a generalized canonical correlation analysis (gCCA). Such a large matrix can thus be inverted most efficiently in a blockwise manner by using
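
The blockwise idea can be sketched for the simplest, purely block-diagonal case (an illustration only, not the bordered HWB/FKF formulas themselves; block sizes and values are arbitrary): each block is solved independently, and the result matches a single solve of the full system.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(2)

# Three independent diagonal blocks (illustrative sizes).
blocks = [rng.normal(size=(k, k)) + k * np.eye(k) for k in (3, 4, 2)]
A = block_diag(*blocks)                        # the full sparse system matrix
b = rng.normal(size=A.shape[0])

# Blockwise solve: each block is handled on its own.
x_blockwise, start = [], 0
for B in blocks:
    k = B.shape[0]
    x_blockwise.append(np.linalg.solve(B, b[start:start + k]))
    start += k
x_blockwise = np.concatenate(x_blockwise)

# Same answer as solving the full system in one go.
print(np.allclose(x_blockwise, np.linalg.solve(A, b)))
```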


is effectively the simple covariance matrix $\operatorname{K}_{\mathbf{XY}}$ as if the uninteresting random variables $\mathbf{I}$ were held constant. If a column vector $\mathbf{X}$ of $n$ possibly correlated random variables

is known as the matrix of regression coefficients, while in linear algebra $\operatorname{K}_{\mathbf{Y|X}}$ is the Schur complement of $\operatorname{K}_{\mathbf{XX}}$ in $\boldsymbol{\Sigma}$. The matrix of regression coefficients may often be given in transpose form, $\operatorname{K}_{\mathbf{XX}}^{-1}\operatorname{K}_{\mathbf{XY}}$, suitable for post-multiplying

is no ambiguity between them. The matrix $\operatorname{K}_{\mathbf{XX}}$ is also often called the variance-covariance matrix, since the diagonal terms are in fact variances. By comparison, the notation for the cross-covariance matrix between two vectors is

$$\operatorname{cov}(\mathbf{X},\mathbf{Y}) = \operatorname{K}_{\mathbf{XY}} = \operatorname{E}\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^{\mathsf{T}}\right].$$

The auto-covariance matrix $\operatorname{K}_{\mathbf{XX}}$

is obtained by matrix inversion from the respective system of normal equations. Its coefficient matrix is usually sparse, and the exact solution of all the estimated parameters can be computed with the HWB (and FKF) method. The optimal solution may also be obtained by Gaussian elimination using other sparse-matrix techniques or some iterative methods based e.g. on variational calculus. However, these latter methods may solve

is recorded at every shot, put into $\mathbf{I}$, and $\operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I})$ is calculated as panels d and e show. The suppression of the uninteresting correlations is, however, imperfect because there are other sources of common-mode fluctuations than

is related to the autocorrelation matrix $\operatorname{R}_{\mathbf{XX}}$ by

$$\operatorname{K}_{\mathbf{XX}} = \operatorname{E}\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{\mathsf{T}}\right] = \operatorname{R}_{\mathbf{XX}} - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathsf{T}},$$

where
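
A quick numerical check of this relation between the covariance and autocorrelation matrices (a sketch; the simulated distribution is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.multivariate_normal([1.0, -2.0, 0.5],
                            [[1.0, 0.4, 0.0],
                             [0.4, 2.0, 0.3],
                             [0.0, 0.3, 0.7]], size=200_000)   # rows are samples

m = X.shape[0]
R_XX = X.T @ X / m                   # autocorrelation matrix E[X X^T]
mean = X.mean(axis=0)
K_XX = R_XX - np.outer(mean, mean)   # K_XX = R_XX - E[X] E[X]^T

print(np.allclose(K_XX, np.cov(X, rowvar=False, bias=True)))
```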

is the $p\times p$ identity matrix. Then $\operatorname{var}(\mathbf{M}^{1/2}\mathbf{X}) = \mathbf{M}^{1/2}\,\operatorname{var}(\mathbf{X})\,\mathbf{M}^{1/2} = \mathbf{M}$. The variance of
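
This construction, which shows that every symmetric positive-semidefinite matrix is a covariance matrix, can be illustrated numerically (a sketch; $M$ and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
M = np.array([[4.0, 1.0],
              [1.0, 3.0]])                      # symmetric positive-semidefinite

# Symmetric square root via the spectral theorem.
eigval, eigvec = np.linalg.eigh(M)
M_sqrt = eigvec @ np.diag(np.sqrt(eigval)) @ eigvec.T

X = rng.normal(size=(500_000, 2))               # var(X) = identity
Y = X @ M_sqrt.T                                # Y = M^{1/2} X, sample by sample

print(np.cov(Y, rowvar=False))                  # approaches M as the sample grows
```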

is the determinant of $\boldsymbol{\Sigma}$. Applied to one vector, the covariance matrix maps a linear combination $\mathbf{c}$ of the random variables $\mathbf{X}$ onto a vector of covariances with those variables: $\mathbf{c}^{\mathsf{T}}\boldsymbol{\Sigma} = \operatorname{cov}(\mathbf{c}^{\mathsf{T}}\mathbf{X}, \mathbf{X})$. Treated as

is the $i$-th discrete value in sample $j$ of the random function $X(t)$. The expected values needed in the covariance formula are estimated using the sample mean, e.g. $\langle\mathbf{X}\rangle = \frac{1}{n}\sum_{j=1}^{n}\mathbf{X}_{j}$, and

is the time-of-flight spectrum of ions from a Coulomb explosion of nitrogen molecules multiply ionised by a laser pulse. Since only a few hundred molecules are ionised at each laser pulse, the single-shot spectra are highly fluctuating. However, collecting typically $m = 10^{4}$ such spectra, $\mathbf{X}_{j}(t)$, and averaging them over $j$ produces


is the variance of a real-valued random variable, so a covariance matrix is always a positive-semidefinite matrix. The above argument can be expanded as follows:

$$\begin{aligned} w^{\mathsf{T}}\operatorname{E}\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{\mathsf{T}}\right]w &= \operatorname{E}\left[w^{\mathsf{T}}(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{\mathsf{T}}w\right] \\ &= \operatorname{E}\left[\left(w^{\mathsf{T}}(\mathbf{X}-\operatorname{E}[\mathbf{X}])\right)^{2}\right] \geq 0, \end{aligned}$$

where
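
Positive semi-definiteness can also be checked empirically on a sample covariance matrix (a sketch; the simulated data and tolerances are illustrative): the quadratic form $w^{\mathsf{T}}\operatorname{K}w$ is nonnegative for random $w$, and all eigenvalues of $\operatorname{K}$ are nonnegative.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(1_000, 5)) @ rng.normal(size=(5, 5))   # correlated data, rows are samples
K = np.cov(X, rowvar=False)                                 # sample covariance matrix

# The quadratic form w^T K w is nonnegative for any w ...
w = rng.normal(size=(1_000, 5))
print((np.einsum('ij,jk,ik->i', w, K, w) >= -1e-12).all())

# ... equivalently, all eigenvalues of K are nonnegative.
print(np.linalg.eigvalsh(K).min() >= -1e-12)
```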

is the matrix of the diagonal elements of $\operatorname{K}_{\mathbf{XX}}$ (i.e., a diagonal matrix of the variances of $X_{i}$ for $i = 1,\dots,n$). Equivalently, the correlation matrix can be seen as

is the matrix whose $(i,j)$ entry is the covariance $\operatorname{K}_{X_{i}X_{j}} = \operatorname{cov}[X_{i},X_{j}] = \operatorname{E}\left[(X_{i}-\operatorname{E}[X_{i}])(X_{j}-\operatorname{E}[X_{j}])\right]$, where

the precision matrix (or concentration matrix). Just as the covariance matrix can be written as the rescaling of a correlation matrix by the marginal variances:

$$\operatorname{cov}(\mathbf{X}) = \begin{bmatrix}\sigma_{x_1} & & & 0\\ & \sigma_{x_2} & & \\ & & \ddots & \\ 0 & & & \sigma_{x_n}\end{bmatrix} \begin{bmatrix}1 & \rho_{x_1,x_2} & \cdots & \rho_{x_1,x_n}\\ \rho_{x_2,x_1} & 1 & \cdots & \rho_{x_2,x_n}\\ \vdots & \vdots & \ddots & \vdots\\ \rho_{x_n,x_1} & \rho_{x_n,x_2} & \cdots & 1\end{bmatrix} \begin{bmatrix}\sigma_{x_1} & & & 0\\ & \sigma_{x_2} & & \\ & & \ddots & \\ 0 & & & \sigma_{x_n}\end{bmatrix}$$

So, using

the Mahalanobis distance, a measure of the "unlikelihood" of $\mathbf{c}$. From basic property 4 above, let $\mathbf{b}$ be a $(p\times 1)$ real-valued vector; then $\operatorname{var}(\mathbf{b}^{\mathsf{T}}\mathbf{X}) = \mathbf{b}^{\mathsf{T}}\operatorname{var}(\mathbf{X})\,\mathbf{b}$, which must always be nonnegative, since it

the capital asset pricing model. The matrix of covariances among various assets' returns is used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification. The evolution strategy, a particular family of randomized search heuristics, fundamentally relies on

the conditional distribution for $\mathbf{Y}$ given $\mathbf{X}$ is given by

$$\mathbf{Y}\mid\mathbf{X} \sim \mathcal{N}\!\left(\boldsymbol{\mu}_{\mathbf{Y|X}}, \operatorname{K}_{\mathbf{Y|X}}\right),$$

defined by the conditional mean

$$\boldsymbol{\mu}_{\mathbf{Y}|\mathbf{X}} = \boldsymbol{\mu}_{\mathbf{Y}} + \operatorname{K}_{\mathbf{YX}}\operatorname{K}_{\mathbf{XX}}^{-1}\left(\mathbf{X}-\boldsymbol{\mu}_{\mathbf{X}}\right)$$

and the conditional variance

$$\operatorname{K}_{\mathbf{Y|X}} = \operatorname{K}_{\mathbf{YY}} - \operatorname{K}_{\mathbf{YX}}\operatorname{K}_{\mathbf{XX}}^{-1}\operatorname{K}_{\mathbf{XY}}.$$

The matrix $\operatorname{K}_{\mathbf{YX}}\operatorname{K}_{\mathbf{XX}}^{-1}$
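
These conditioning formulas translate directly into code; a minimal sketch (block values and names are illustrative) computing the conditional mean and covariance from a partitioned joint covariance:

```python
import numpy as np

# Joint covariance of (X, Y), partitioned into blocks (illustrative values).
K_XX = np.array([[2.0, 0.3],
                 [0.3, 1.0]])
K_XY = np.array([[0.5],
                 [0.2]])
K_YY = np.array([[1.5]])
mu_X = np.array([0.0, 1.0])
mu_Y = np.array([-1.0])

x_obs = np.array([0.4, 0.9])           # observed value of X

K_YX = K_XY.T
# Conditional mean and covariance (the Schur complement of K_XX).
mu_Y_given_X = mu_Y + K_YX @ np.linalg.solve(K_XX, x_obs - mu_X)
K_Y_given_X = K_YY - K_YX @ np.linalg.solve(K_XX, K_XY)

print(mu_Y_given_X, K_Y_given_X)
```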

the inverse problem of an ordinary Kalman recursion is roughly proportional to the cube of the number of measurements processed simultaneously. This number can always be set to 1 by processing each scalar measurement independently and (if necessary) performing a simple pre-filtering algorithm to de-correlate these measurements. However, for any large and complex system this pre-filtering may need
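
A minimal sketch of processing measurements one scalar at a time in a standard Kalman measurement update, under the assumption of uncorrelated measurement noise (otherwise the de-correlating pre-filtering mentioned above would be needed first); the function and variable names are illustrative, not from the FKF literature:

```python
import numpy as np

def scalar_updates(x, P, z, H, r):
    """Apply Kalman measurement updates one scalar observation at a time.

    x : state estimate (n,), P : state covariance (n, n)
    z : measurements (m,), H : measurement matrix (m, n)
    r : measurement noise variances (m,), assumed uncorrelated
    """
    for zi, hi, ri in zip(z, H, r):
        s = hi @ P @ hi + ri              # innovation variance (a scalar)
        k = P @ hi / s                    # Kalman gain (n,)
        x = x + k * (zi - hi @ x)         # state update
        P = P - np.outer(k, hi) @ P       # covariance update
    return x, P

# Tiny usage example.
x0, P0 = np.zeros(2), np.eye(2)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([0.9, -0.2, 0.8])
r = np.array([0.1, 0.1, 0.2])
print(scalar_updates(x0, P0, z, H, r))
```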

the variance of the random vector $\mathbf{X}$, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector $\mathbf{X}$:

$$\operatorname{var}(\mathbf{X}) = \operatorname{cov}(\mathbf{X},\mathbf{X}) = \operatorname{E}\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{\mathsf{T}}\right].$$

Both forms are quite standard, and there

the HWB computing. Any continued use of a too-narrow window of input data weakens the observability of the calibration parameters and, in the long run, may lead to serious controllability problems that are totally unacceptable in safety-critical applications. Even when many measurements are processed simultaneously, it is not unusual for the linearized equation system to become sparse, because some measurements turn out to be independent of some state or calibration parameters. In problems of satellite geodesy,


the angular brackets denote sample averaging as before, except that Bessel's correction should be made to avoid bias. Using this estimation the partial covariance matrix can be calculated as

$$\operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I}) = \operatorname{cov}(\mathbf{X},\mathbf{Y}) - \operatorname{cov}(\mathbf{X},\mathbf{I})\left(\operatorname{cov}(\mathbf{I},\mathbf{I}) \backslash \operatorname{cov}(\mathbf{I},\mathbf{Y})\right),$$

where
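
In NumPy the left matrix division $\operatorname{cov}(\mathbf{I},\mathbf{I}) \backslash \operatorname{cov}(\mathbf{I},\mathbf{Y})$ corresponds to np.linalg.solve; a minimal sketch with simulated common-mode fluctuations (the data model and names are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
m = 20_000                                    # number of samples (e.g. laser shots)
I = rng.normal(size=(m, 1)) ** 2              # common-mode fluctuation (e.g. intensity)
X = 2.0 * I + rng.normal(size=(m, 3))         # observables partly driven by I
Y = -1.5 * I + rng.normal(size=(m, 4))

def cov(A, B):
    """Sample cross-covariance matrix with Bessel's correction."""
    Ac, Bc = A - A.mean(axis=0), B - B.mean(axis=0)
    return Ac.T @ Bc / (len(A) - 1)

# pcov(X, Y | I) = cov(X, Y) - cov(X, I) (cov(I, I) \ cov(I, Y))
pcov = cov(X, Y) - cov(X, I) @ np.linalg.solve(cov(I, I), cov(I, Y))
print(np.abs(pcov).max())    # near zero: the I-induced correlations are removed
```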

the autocorrelation matrix is defined as $\operatorname{R}_{\mathbf{XX}} = \operatorname{E}[\mathbf{X}\mathbf{X}^{\mathsf{T}}]$. An entity closely related to the covariance matrix is the matrix of Pearson product-moment correlation coefficients between each of

the backslash denotes the left matrix division operator, which bypasses the requirement to invert a matrix and is available in some computational packages such as Matlab. Fig. 1 illustrates how a partial covariance map is constructed on an example of an experiment performed at the FLASH free-electron laser in Hamburg. The random function $X(t)$

the best out of FKF.

Covariance matrix

In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector. Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example,

the column vectors $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{I}$ are acquired experimentally as rows of $n$ samples, e.g.

$$\left[\mathbf{X}_{1},\mathbf{X}_{2},\dots,\mathbf{X}_{n}\right] = \begin{bmatrix} X_{1}(t_{1}) & X_{2}(t_{1}) & \cdots & X_{n}(t_{1})\\ X_{1}(t_{2}) & X_{2}(t_{2}) & \cdots & X_{n}(t_{2})\\ \vdots & \vdots & \ddots & \vdots\\ X_{1}(t_{m}) & X_{2}(t_{m}) & \cdots & X_{n}(t_{m}) \end{bmatrix},$$

where $X_{j}(t_{i})$

the complex conjugate of a complex number $z$ is denoted $\overline{z}$; thus the variance of a complex random variable is a real number. If $\mathbf{Z} = (Z_{1},\ldots,Z_{n})^{\mathsf{T}}$

the computing load of the HWB (and FKF) method is roughly proportional to the square of the total number of the state and calibration parameters only, not to the number of measurements, which may run into the billions. Reliable operational Kalman filtering requires continuous fusion of data in real time. Its optimality depends essentially on the use of exact variances and covariances between all measurements and the estimated state and calibration parameters. This large error covariance matrix

the correlations between the ionisation stages and the ion momenta requires calculating a covariance map. In the example of Fig. 1, spectra $\mathbf{X}_{j}(t)$ and $\mathbf{Y}_{j}(t)$ are the same, except that the range of the time-of-flight $t$ differs. Panel

the covariance matrix defined above, Hermitian transposition gets replaced by transposition in the definition. Its diagonal elements may be complex valued; it is a complex symmetric matrix. If $\mathbf{M}_{\mathbf{X}}$ and $\mathbf{M}_{\mathbf{Y}}$ are centered data matrices of dimension $p\times n$ and $q\times n$ respectively, i.e. with $n$ columns of observations of $p$ and $q$ rows of variables, from which

the covariance matrix is estimated by the sample covariance matrix

$$\operatorname{cov}(\mathbf{X},\mathbf{Y}) \approx \langle\mathbf{X}\mathbf{Y}^{\mathsf{T}}\rangle - \langle\mathbf{X}\rangle\langle\mathbf{Y}^{\mathsf{T}}\rangle,$$

where
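
This estimator is how a covariance map is built from repeated single-shot spectra; a minimal sketch with simulated spectra (the shot count, spectrum lengths and common-mode model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
m, nx, ny = 5_000, 64, 64                     # shots and spectrum lengths
common = rng.normal(size=(m, 1))              # shot-to-shot common-mode signal
X = common + rng.normal(size=(m, nx))         # X_j, one spectrum per row
Y = common + rng.normal(size=(m, ny))         # Y_j

# cov(X, Y) ~ <X Y^T> - <X><Y^T>, averaging the outer products over shots.
XY_mean = np.einsum('ji,jk->ik', X, Y) / m    # <X Y^T>, an (nx, ny) map
cov_map = XY_mean - np.outer(X.mean(axis=0), Y.mean(axis=0))
print(cov_map.shape)                          # (64, 64) covariance map
```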


the covariance matrix of the standardized random variables $X_{i}/\sigma(X_{i})$ for $i = 1,\dots,n$:

$$\operatorname{corr}(\mathbf{X}) = \begin{bmatrix} 1 & \dfrac{\operatorname{E}[(X_{1}-\mu_{1})(X_{2}-\mu_{2})]}{\sigma(X_{1})\sigma(X_{2})} & \cdots & \dfrac{\operatorname{E}[(X_{1}-\mu_{1})(X_{n}-\mu_{n})]}{\sigma(X_{1})\sigma(X_{n})}\\ \dfrac{\operatorname{E}[(X_{2}-\mu_{2})(X_{1}-\mu_{1})]}{\sigma(X_{2})\sigma(X_{1})} & 1 & \cdots & \dfrac{\operatorname{E}[(X_{2}-\mu_{2})(X_{n}-\mu_{n})]}{\sigma(X_{2})\sigma(X_{n})}\\ \vdots & \vdots & \ddots & \vdots\\ \dfrac{\operatorname{E}[(X_{n}-\mu_{n})(X_{1}-\mu_{1})]}{\sigma(X_{n})\sigma(X_{1})} & \dfrac{\operatorname{E}[(X_{n}-\mu_{n})(X_{2}-\mu_{2})]}{\sigma(X_{n})\sigma(X_{2})} & \cdots & 1 \end{bmatrix}.$$

Each element on

the entries in the column vector $\mathbf{X} = (X_{1},X_{2},\dots,X_{n})^{\mathsf{T}}$ are random variables, each with finite variance and expected value, then the covariance matrix $\operatorname{K}_{\mathbf{XX}}$

the feedback between these forecasts and the satellite images requires a sensor fusion technique that is both fast and robust, which the FKF fulfills. The computational advantage of FKF is marginal for applications using only small amounts of data in real time. Therefore, improved built-in calibration and data communication infrastructures need to be developed first and introduced to public use before personal gadgets and machine-to-machine devices can make

the following analytic inversion formula of Frobenius. This is the FKF method that may make it computationally possible to estimate a much larger number of state and calibration parameters than an ordinary Kalman recursion can. Their operational accuracies may also be reliably estimated from the theory of minimum-norm quadratic unbiased estimation (MINQUE) of C. R. Rao and used for controlling
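
For reference, the generic Frobenius (Schur-complement) block-inversion identity has the form below. It is quoted here as a standard linear-algebra illustration only; the block names $A$, $B$, $C$, $D$ and the Schur complement $S$ are generic and not necessarily the exact notation or formulation used in the FKF literature:

$$\begin{bmatrix} A & B\\ C & D\end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1}B\,S^{-1}CA^{-1} & -A^{-1}B\,S^{-1}\\ -S^{-1}CA^{-1} & S^{-1}\end{bmatrix}, \qquad S = D - CA^{-1}B,$$

valid when $A$ and $S$ are invertible; when $A$ is block-diagonal, $A^{-1}$ can be computed block by block.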

the idea of partial correlation, and partial variance, the inverse covariance matrix can be expressed analogously:

$$\operatorname{cov}(\mathbf{X})^{-1} = \begin{bmatrix}\frac{1}{\sigma_{x_1|x_2\ldots}} & & & 0\\ & \frac{1}{\sigma_{x_2|x_1,x_3\ldots}} & & \\ & & \ddots & \\ 0 & & & \frac{1}{\sigma_{x_n|x_1\ldots x_{n-1}}}\end{bmatrix} \begin{bmatrix}1 & -\rho_{x_1,x_2\mid x_3\ldots} & \cdots & -\rho_{x_1,x_n\mid x_2\ldots x_{n-1}}\\ -\rho_{x_2,x_1\mid x_3\ldots} & 1 & \cdots & -\rho_{x_2,x_n\mid x_1,x_3\ldots x_{n-1}}\\ \vdots & \vdots & \ddots & \vdots\\ -\rho_{x_n,x_1\mid x_2\ldots x_{n-1}} & -\rho_{x_n,x_2\mid x_1,x_3\ldots x_{n-1}} & \cdots & 1\end{bmatrix} \begin{bmatrix}\frac{1}{\sigma_{x_1|x_2\ldots}} & & & 0\\ & \frac{1}{\sigma_{x_2|x_1,x_3\ldots}} & & \\ & & \ddots & \\ 0 & & & \frac{1}{\sigma_{x_n|x_1\ldots x_{n-1}}}\end{bmatrix}$$

This duality motivates
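
The standard way to read partial correlations off the precision matrix is $\rho_{ij\mid\text{rest}} = -P_{ij}/\sqrt{P_{ii}P_{jj}}$; a minimal sketch (the covariance values are illustrative):

```python
import numpy as np

Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])      # an illustrative covariance matrix

P = np.linalg.inv(Sigma)                 # precision (inverse covariance) matrix

# Partial correlations read off the precision matrix:
# rho_{ij | rest} = -P_ij / sqrt(P_ii * P_jj)
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(partial_corr)
```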

the individual random variables are interrelated. This means that the variables are not only directly correlated, but also correlated via other variables indirectly. Often such indirect, common-mode correlations are trivial and uninteresting. They can be suppressed by calculating the partial covariance matrix, that is, the part of the covariance matrix that shows only the interesting part of the correlations. If two vectors of random variables $\mathbf{X}$ and $\mathbf{Y}$ are correlated via another vector $\mathbf{I}$,

the large matrix of all the error variances and covariances only approximately, and the data fusion would then not be performed in a strictly optimal fashion. Consequently, the long-term stability of Kalman filtering becomes uncertain even if Kalman's observability and controllability conditions were permanently satisfied. The fast Kalman filter applies only to systems with sparse matrices, since HWB

the laser intensity, and in principle all these sources should be monitored in vector $\mathbf{I}$. Yet in practice it is often sufficient to overcompensate the partial covariance correction as panel f shows, where interesting correlations of ion momenta are now clearly visible as straight lines centred on ionisation stages of atomic nitrogen. Two-dimensional infrared spectroscopy employs correlation analysis to obtain 2D spectra of

the last inequality follows from the observation that $w^{\mathsf{T}}(\mathbf{X}-\operatorname{E}[\mathbf{X}])$ is a scalar. Conversely, every symmetric positive semi-definite matrix is a covariance matrix. To see this, suppose $M$

the latter correlations are suppressed in a matrix

$$\operatorname{K}_{\mathbf{XY\mid I}} = \operatorname{pcov}(\mathbf{X},\mathbf{Y}\mid\mathbf{I}) = \operatorname{cov}(\mathbf{X},\mathbf{Y}) - \operatorname{cov}(\mathbf{X},\mathbf{I})\operatorname{cov}(\mathbf{I},\mathbf{I})^{-1}\operatorname{cov}(\mathbf{I},\mathbf{Y}).$$

The partial covariance matrix $\operatorname{K}_{\mathbf{XY\mid I}}$


the main diagonal and complex numbers off-diagonal. For complex random vectors, another kind of second central moment, the pseudo-covariance matrix (also called the relation matrix), is defined as follows:

$$\operatorname{J}_{\mathbf{ZZ}} = \operatorname{cov}[\mathbf{Z},\overline{\mathbf{Z}}] = \operatorname{E}\left[(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}})(\mathbf{Z}-\boldsymbol{\mu}_{\mathbf{Z}})^{\mathsf{T}}\right].$$

In contrast to

the most straightforward and most often used estimators for the covariance matrices, but other estimators also exist, including regularised or shrinkage estimators, which may have better properties. The covariance matrix is a useful tool in many different areas. From it a transformation matrix can be derived, called a whitening transformation, that allows one to completely decorrelate the data or, from

the operator $\operatorname{E}$ denotes the expected value (mean) of its argument. Nomenclatures differ. Some statisticians, following the probabilist William Feller in his two-volume book An Introduction to Probability Theory and Its Applications, call the matrix $\operatorname{K}_{\mathbf{XX}}$

the principal diagonal of a correlation matrix is the correlation of a random variable with itself, which always equals 1. Each off-diagonal element is between −1 and +1 inclusive. The inverse of this matrix, $\operatorname{K}_{\mathbf{XX}}^{-1}$, if it exists, is the inverse covariance matrix (or inverse concentration matrix), also known as

the random variables in the random vector $\mathbf{X}$, which can be written as

$$\operatorname{corr}(\mathbf{X}) = \big(\operatorname{diag}(\operatorname{K}_{\mathbf{XX}})\big)^{-\frac{1}{2}}\,\operatorname{K}_{\mathbf{XX}}\,\big(\operatorname{diag}(\operatorname{K}_{\mathbf{XX}})\big)^{-\frac{1}{2}},$$

where $\operatorname{diag}(\operatorname{K}_{\mathbf{XX}})$
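
A minimal sketch of this rescaling (the covariance values are illustrative): pre- and post-multiply by the inverse square roots of the diagonal to obtain the correlation matrix.

```python
import numpy as np

K = np.array([[ 4.0, 1.2, -0.8],
              [ 1.2, 1.0,  0.3],
              [-0.8, 0.3,  2.5]])                 # an illustrative covariance matrix

D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(K)))   # (diag K)^(-1/2)
corr = D_inv_sqrt @ K @ D_inv_sqrt

print(np.round(corr, 3))
print(np.allclose(np.diag(corr), 1.0))            # unit diagonal; off-diagonals lie in [-1, 1]
```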

the row means have been subtracted, then, if the row means were estimated from the data, sample covariance matrices $\mathbf{Q}_{\mathbf{XX}}$ and $\mathbf{Q}_{\mathbf{XY}}$ can be defined to be

$$\mathbf{Q}_{\mathbf{XX}} = \frac{1}{n-1}\mathbf{M}_{\mathbf{X}}\mathbf{M}_{\mathbf{X}}^{\mathsf{T}}, \qquad \mathbf{Q}_{\mathbf{XY}} = \frac{1}{n-1}\mathbf{M}_{\mathbf{X}}\mathbf{M}_{\mathbf{Y}}^{\mathsf{T}}$$

or, if

the row means were known a priori,

$$\mathbf{Q}_{\mathbf{XX}} = \frac{1}{n}\mathbf{M}_{\mathbf{X}}\mathbf{M}_{\mathbf{X}}^{\mathsf{T}}, \qquad \mathbf{Q}_{\mathbf{XY}} = \frac{1}{n}\mathbf{M}_{\mathbf{X}}\mathbf{M}_{\mathbf{Y}}^{\mathsf{T}}.$$

These empirical sample covariance matrices are
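
A minimal sketch of the data-matrix estimators above (variables in rows, observations in columns; sizes and the dependence between the rows are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
p, q, n = 3, 2, 10_000                     # p and q variables in rows, n observations in columns
X = rng.normal(size=(p, n))
Y = X[:q] + 0.5 * rng.normal(size=(q, n))  # Y correlated with the first q rows of X

# Centre the rows (row means estimated from the data).
M_X = X - X.mean(axis=1, keepdims=True)
M_Y = Y - Y.mean(axis=1, keepdims=True)

Q_XX = M_X @ M_X.T / (n - 1)
Q_XY = M_X @ M_Y.T / (n - 1)

print(np.allclose(Q_XX, np.cov(X)))        # matches NumPy's covariance estimator
print(Q_XY.shape)                          # (p, q) cross-covariance matrix
```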

the stability of an optimal Kalman filter can be controlled by monitoring its error variances, provided these can be reliably estimated (e.g. by MINQUE). Their precise computation is, however, much more demanding than the optimal Kalman filtering itself. The FKF computing method often provides the required speed-up in this respect as well. Calibration parameters are a typical example of those state parameters that may create serious observability problems if

the stability of this optimal fast Kalman filtering. The FKF method extends the very high accuracies of satellite geodesy to Virtual Reference Station (VRS) Real Time Kinematic (RTK) surveying, mobile positioning and ultra-reliable navigation. The first important applications will be real-time optimum calibration of global observing systems in meteorology, geophysics, astronomy, etc. For example,

Fast Kalman filter

Kalman filters are an important filtering technique for building fault-tolerance into

the two-dimensional variation. Any covariance matrix is symmetric and positive semi-definite, and its main diagonal contains variances (i.e., the covariance of each element with itself). The covariance matrix of a random vector $\mathbf{X}$ is typically denoted by $\operatorname{K}_{\mathbf{XX}}$, $\Sigma$ or $S$. Throughout this article, boldfaced unsubscripted $\mathbf{X}$ and $\mathbf{Y}$ are used to refer to random vectors, and Roman subscripted $X_{i}$ and $Y_{i}$ are used to refer to scalar random variables. If

the variance matrices of the marginal distributions for $\mathbf{X}$ and $\mathbf{Y}$ respectively. If $\mathbf{X}$ and $\mathbf{Y}$ are jointly normally distributed,

$$\mathbf{X},\mathbf{Y} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma}),$$

then

the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the $x$ and $y$ directions contain all of the necessary information; a $2\times 2$ matrix would be necessary to fully characterize
