Rare or extreme events are events that occur with low frequency; the term often refers to infrequent events that have a widespread effect and that might destabilize systems (for example, stock markets, ocean wave intensity, optical fibers, or society). Rare events encompass natural phenomena (major earthquakes, tsunamis, hurricanes, floods, asteroid impacts, solar flares, etc.), anthropogenic hazards (warfare and related forms of violent conflict, acts of terrorism, industrial accidents, financial and commodity market crashes, etc.), as well as phenomena for which natural and anthropogenic factors interact in complex ways (epidemic disease spread, global warming-related changes in climate and weather, etc.).
Rare or extreme events are discrete occurrences of infrequently observed events. Despite being statistically improbable, such events are plausible insofar as historical instances of the event (or a similar event) have been documented. Scholarly and popular analyses of rare events often focus on those events that could be reasonably expected to have a substantial negative effect on a society, either economically or in terms of human casualties (typically both). Examples of such events might include an 8.0+ Richter magnitude earthquake,
A yes–no question. Such questions lead to outcomes that are Boolean-valued: a single bit whose value is success/yes/true/one with probability p and failure/no/false/zero with probability q. It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent "heads" and "tails", respectively, and p would be the probability of the coin landing on heads (or vice versa, where 1 would represent tails and p would be
A Bernoulli distributed X {\displaystyle X} is Var[X] = pq = p(1 − p). We first find E[X²] = Pr(X = 1)·1² + Pr(X = 0)·0² = p·1 + q·0 = p. From this follows Var[X] = E[X²] − E[X]² = p − p² = p(1 − p) = pq. With this result it is easy to prove that, for any Bernoulli distribution, its variance will have a value inside [ 0 , 1 / 4 ] {\displaystyle [0,1/4]} . The skewness is q − p p q = 1 − 2 p p q {\displaystyle {\frac {q-p}{\sqrt {pq}}}={\frac {1-2p}{\sqrt {pq}}}} . When we take
A continuum of extremity, with more extreme-magnitude cases being statistically more infrequent. Therefore, rather than viewing rare event data as its own class of information, data concerning "rare" events often exists as a subset of data within a broader parent event class (e.g., a seismic activity data set would include instances of extreme earthquakes, as well as data on much lower-intensity seismic events). The following
A nuclear incident that kills thousands of people, or a 10%+ single-day change in the value of a stock market index. Rare event modeling (REM) refers to efforts to characterize the statistical distribution parameters, generative processes, or dynamics that govern the occurrence of statistically rare events, including but not limited to highly influential natural or human-made catastrophes. Such "modeling" may include
A random sample is the sample mean. The expected value of a Bernoulli random variable X {\displaystyle X} is E[X] = p. This is due to the fact that for a Bernoulli distributed random variable X {\displaystyle X} with Pr ( X = 1 ) = p {\displaystyle \Pr(X=1)=p} and Pr ( X = 0 ) = q {\displaystyle \Pr(X=0)=q} we find E[X] = Pr(X = 1)·1 + Pr(X = 0)·0 = p·1 + q·0 = p. The variance of
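As a quick numerical illustration of that point, the following Python sketch draws simulated Bernoulli data and checks that the sample mean recovers the true parameter; the value p = 0.3, the sample size, and the variable names are arbitrary choices for the example, not part of any source.

```python
# Minimal sketch: the maximum likelihood estimate of p from Bernoulli draws
# is the sample mean. p_true and the sample size are arbitrary example values.
import numpy as np

rng = np.random.default_rng(4)
p_true = 0.3
sample = rng.binomial(n=1, p=p_true, size=10_000)   # Bernoulli(p) draws

p_hat = sample.mean()                               # MLE of p
print(p_hat)                                        # should be close to 0.3
```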
A wide range of approaches, including, most notably, statistical models for analyzing historical event data and computational software models that attempt to simulate rare event processes and dynamics. REM also encompasses efforts to forecast the occurrence of similar events over some future time horizon, which may be of interest for both scholarly and applied purposes (e.g., risk mitigation and planning). Novel data collection techniques can be used for learning about rare event data. In many cases, rare and catastrophic events can be regarded as extreme-magnitude instances of more mundane phenomena. For example, seismic activity, stock market fluctuations, and acts of organized violence all occur along
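One widely used family of techniques for this kind of modeling is extreme value analysis. The sketch below illustrates a peaks-over-threshold fit with a generalized Pareto distribution using scipy.stats.genpareto; the synthetic data, the 99th-percentile threshold, and the query level are arbitrary choices made for illustration, not a method prescribed by the sources above.

```python
# Illustrative sketch: peaks-over-threshold analysis of a heavy-tailed series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for a "mundane" parent process with occasional extreme values.
daily_magnitudes = rng.pareto(a=2.5, size=10_000) + 1.0

threshold = np.quantile(daily_magnitudes, 0.99)      # treat the top 1% as "rare"
exceedances = daily_magnitudes[daily_magnitudes > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)

# Estimated probability of exceeding an extreme level, combining the empirical
# exceedance rate with the fitted tail model.
x_extreme = threshold + 10.0
p_exceed_threshold = (daily_magnitudes > threshold).mean()
p_extreme = p_exceed_threshold * stats.genpareto.sf(x_extreme - threshold, shape, loc, scale)
print(f"shape={shape:.3f}, scale={scale:.3f}, P(X > {x_extreme:.2f}) = {p_extreme:.2e}")
```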
is consistent. This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko–Cantelli theorem, which states that the convergence in fact happens uniformly over t: ‖F̂_n − F‖_∞ = sup_{t ∈ R} |F̂_n(t) − F(t)| → 0 almost surely. The sup-norm in this expression is called the Kolmogorov–Smirnov statistic for testing the goodness-of-fit between
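To make the sup-norm concrete, the following sketch computes the Kolmogorov–Smirnov statistic for a simulated sample against a hypothesized standard normal CDF, both directly from the definition and via scipy.stats.kstest; the sample size and the choice of null distribution are arbitrary example values.

```python
# Minimal sketch: the Kolmogorov–Smirnov statistic as the sup-norm distance
# between the empirical CDF of a sample and a hypothesized CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(size=500)

# Direct computation of sup_t |F_n(t) - F(t)|, evaluated at the sample points.
x = np.sort(sample)
cdf = stats.norm.cdf(x)
n = len(x)
d_plus = np.max(np.arange(1, n + 1) / n - cdf)
d_minus = np.max(cdf - np.arange(0, n) / n)
d_manual = max(d_plus, d_minus)

# The same statistic via scipy, together with its p-value.
result = stats.kstest(sample, "norm")
print(d_manual, result.statistic, result.pvalue)
```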
is a binomial random variable with mean nF ( t ) and variance nF ( t )(1 − F ( t )) . This implies that F ^ n ( t ) {\displaystyle {\widehat {F}}_{n}(t)} is an unbiased estimator for F ( t ) . However, in some textbooks, the definition is given as F̂_n(t) = (1/(n + 1)) Σ_{i=1}^{n} 1_{X_i ≤ t}. Since the ratio ( n + 1)/ n approaches 1 as n goes to infinity,
is a list of data sets focusing on domains that are of broad scholarly and policy interest, and where "rare" (extreme-magnitude) cases may be of particularly keen interest due to their potentially devastating consequences. Descriptions of the data sets are extracted from the source websites or providers.

Statistical distribution

In statistics, an empirical distribution function (commonly also called an empirical cumulative distribution function, eCDF)
is a measure of uncertainty or randomness in a probability distribution. For a Bernoulli random variable X {\displaystyle X} with success probability p {\displaystyle p} and failure probability q = 1 − p {\displaystyle q=1-p} , the entropy H ( X ) {\displaystyle H(X)}
is a random variable with a Bernoulli distribution, then: The probability mass function f {\displaystyle f} of this distribution, over possible outcomes k, is f(k; p) = p if k = 1, and f(k; p) = q = 1 − p if k = 0. This can also be expressed as f(k; p) = p^k (1 − p)^{1 − k} for k ∈ {0, 1}, or as f(k; p) = pk + (1 − p)(1 − k) for k ∈ {0, 1}. The Bernoulli distribution is a special case of the binomial distribution with n = 1. {\displaystyle n=1.} The kurtosis goes to infinity for high and low values of p , {\displaystyle p,} but for p = 1 / 2 {\displaystyle p=1/2}
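A minimal Python check of the probability mass function, mean, and variance stated above, using scipy.stats.bernoulli; the value p = 0.3 is an arbitrary example choice.

```python
# Minimal sketch checking the Bernoulli pmf, mean, and variance against scipy.
from scipy import stats

p, q = 0.3, 0.7
dist = stats.bernoulli(p)

# pmf: f(k; p) = p^k * (1 - p)^(1 - k) for k in {0, 1}
for k in (0, 1):
    assert abs(dist.pmf(k) - p**k * (1 - p) ** (1 - k)) < 1e-12

# Mean E[X] = p and variance Var[X] = p*q, which always lies in [0, 1/4].
assert abs(dist.mean() - p) < 1e-12
assert abs(dist.var() - p * q) < 1e-12
print(dist.pmf(0), dist.pmf(1), dist.mean(), dist.var())
```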
is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko–Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function. Let ( X 1 , …, X n ) be independent, identically distributed real random variables with
is defined as: H(X) = −p log p − q log q. The entropy is maximized when p = 0.5 {\displaystyle p=0.5} , indicating the highest level of uncertainty when both outcomes are equally likely. The entropy is zero when p = 0 {\displaystyle p=0} or p = 1 {\displaystyle p=1} , where one outcome is certain. Fisher information measures the amount of information that an observable random variable X {\displaystyle X} carries about an unknown parameter p {\displaystyle p} upon which
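The following sketch evaluates the Bernoulli entropy H(p) = −p log p − q log q and the Fisher information 1/(pq) on a grid of p values, confirming numerically that the entropy peaks at p = 0.5 while the Fisher information is smallest there; the grid resolution is an arbitrary choice.

```python
# Minimal sketch: Bernoulli entropy and Fisher information over a grid of p.
import numpy as np

p = np.linspace(0.01, 0.99, 99)
q = 1.0 - p
entropy = -(p * np.log(p) + q * np.log(q))       # maximal at p = 0.5
fisher_information = 1.0 / (p * q)               # grows as p approaches 0 or 1

print("argmax of entropy:", p[np.argmax(entropy)])
print("argmin of Fisher information:", p[np.argmin(fisher_information)])
```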
is specified as F̂_n(x) − ε ≤ F(x) ≤ F̂_n(x) + ε, where ε = √(ln(2/α)/(2n)). As per the above bounds, we can plot the empirical CDF, the CDF, and confidence intervals for different distributions by using any one of several statistical software implementations of the empirical distribution function.

Bernoulli distribution

[Figure: three examples of the Bernoulli distribution, 0 ≤ p ≤ 1 {\displaystyle 0\leq p\leq 1} ]

In probability theory and statistics,
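As one concrete illustration of such an implementation, the sketch below uses the ECDF class from statsmodels together with matplotlib to plot an empirical CDF, the true CDF of the generating distribution, and a Dvoretzky–Kiefer–Wolfowitz confidence band; the standard normal data, sample size, and α = 0.05 are arbitrary example choices.

```python
# Illustrative sketch: empirical CDF with a DKW confidence band, compared
# against the true CDF of the (here, standard normal) generating distribution.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from statsmodels.distributions.empirical_distribution import ECDF

rng = np.random.default_rng(2)
sample = rng.normal(size=200)
alpha = 0.05
n = len(sample)

ecdf = ECDF(sample)
grid = np.linspace(-4, 4, 400)
eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))   # DKW band half-width

plt.step(grid, ecdf(grid), where="post", label="empirical CDF")
plt.plot(grid, stats.norm.cdf(grid), label="true CDF")
plt.fill_between(grid,
                 np.clip(ecdf(grid) - eps, 0, 1),
                 np.clip(ecdf(grid) + eps, 0, 1),
                 alpha=0.2, label=f"{1 - alpha:.0%} DKW band")
plt.legend()
plt.show()
```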
is the distribution function associated with the empirical measure of a sample. This cumulative distribution function is a step function that jumps up by 1/ n at each of the n data points. Its value at any specified value of the measured variable is the fraction of observations of the measured variable that are less than or equal to the specified value. The empirical distribution function
the central limit theorem states that pointwise, F ^ n ( t ) {\displaystyle \scriptstyle {\widehat {F}}_{n}(t)} has an asymptotically normal distribution with the standard n {\displaystyle {\sqrt {n}}} rate of convergence: √n (F̂_n(t) − F(t)) converges in distribution to N(0, F(t)(1 − F(t))). This result is extended by Donsker's theorem, which asserts that
the empirical process n ( F ^ n − F ) {\displaystyle \scriptstyle {\sqrt {n}}({\widehat {F}}_{n}-F)} , viewed as a function indexed by t ∈ R {\displaystyle \scriptstyle t\in \mathbb {R} } , converges in distribution in the Skorokhod space D [ − ∞ , + ∞ ] {\displaystyle \scriptstyle D[-\infty ,+\infty ]} to
the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli, is the discrete probability distribution of a random variable which takes the value 1 with probability p {\displaystyle p} and the value 0 with probability q = 1 − p {\displaystyle q=1-p} . Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks
the Kolmogorov distribution that does not depend on the form of F. Another result, which follows from the law of the iterated logarithm, is that limsup_{n→∞} √n ‖F̂_n − F‖_∞ / √(2 ln ln n) ≤ 1/2 almost surely, and liminf_{n→∞} √(2n ln ln n) ‖F̂_n − F‖_∞ = π/2 almost surely. As per the Dvoretzky–Kiefer–Wolfowitz inequality, the interval that contains the true CDF, F ( x ) {\displaystyle F(x)} , with probability 1 − α {\displaystyle 1-\alpha }
the asymptotic behavior of the sup-norm of this expression. A number of results exist in this vein; for example, the Dvoretzky–Kiefer–Wolfowitz inequality provides a bound on the tail probabilities of n ‖ F ^ n − F ‖ ∞ {\displaystyle \scriptstyle {\sqrt {n}}\|{\widehat {F}}_{n}-F\|_{\infty }} : Pr(√n ‖F̂_n − F‖_∞ > z) ≤ 2e^{−2z²}. In fact, Kolmogorov has shown that if
the asymptotic properties of the two definitions that are given above are the same. By the strong law of large numbers, the estimator F ^ n ( t ) {\displaystyle \scriptstyle {\widehat {F}}_{n}(t)} converges to F ( t ) as n → ∞ almost surely, for every value of t: thus the estimator F ^ n ( t ) {\displaystyle \scriptstyle {\widehat {F}}_{n}(t)}
the common cumulative distribution function F ( t ) . Then the empirical distribution function is defined as F̂_n(t) = (number of elements in the sample that are ≤ t)/n = (1/n) Σ_{i=1}^{n} 1_{X_i ≤ t}, where 1 A {\displaystyle \mathbf {1} _{A}} is the indicator of event A . For a fixed t , the indicator 1 X i ≤ t {\displaystyle \mathbf {1} _{X_{i}\leq t}} is a Bernoulli random variable with parameter p = F ( t ) ; hence n F ^ n ( t ) {\displaystyle n{\widehat {F}}_{n}(t)}
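A minimal from-scratch rendering of that definition in Python: the empirical distribution function at t is simply the fraction of sample points less than or equal to t. The helper name empirical_cdf and the example data are ours, chosen only for illustration.

```python
# Minimal sketch of the definition: F_n(t) = (1/n) * sum_i 1{X_i <= t}.
import numpy as np

def empirical_cdf(sample: np.ndarray, t: float) -> float:
    """Value of the empirical distribution function of `sample` at `t`."""
    sample = np.asarray(sample)
    return np.mean(sample <= t)

x = np.array([2.0, 5.0, 1.0, 7.0, 3.0])
print(empirical_cdf(x, 3.0))   # 3 of 5 observations are <= 3.0, so 0.6
```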
the cumulative distribution function F is continuous, then the expression n ‖ F ^ n − F ‖ ∞ {\displaystyle \scriptstyle {\sqrt {n}}\|{\widehat {F}}_{n}-F\|_{\infty }} converges in distribution to ‖ B ‖ ∞ {\displaystyle \scriptstyle \|B\|_{\infty }} , which has
the empirical distribution F ^ n ( t ) {\displaystyle \scriptstyle {\widehat {F}}_{n}(t)} and the assumed true cumulative distribution function F . Other norm functions may reasonably be used here instead of the sup-norm. For example, the L² norm gives rise to the Cramér–von Mises statistic. The asymptotic distribution can be further characterized in several different ways. First,
the fact that 1 k = 1 {\displaystyle 1^{k}=1} and 0 k = 0 {\displaystyle 0^{k}=0} , so that E[X^k] = E[X] = p for every k ≥ 1. The central moment of order k {\displaystyle k} is given by μ_k = q(−p)^k + p q^k. The first six central moments are μ₁ = 0, μ₂ = pq, μ₃ = pq(q − p), μ₄ = pq(1 − 3pq), μ₅ = pq(q − p)(1 − 2pq), μ₆ = pq(1 − 5pq(1 − pq)). The higher central moments can be expressed more compactly in terms of μ 2 {\displaystyle \mu _{2}} and μ 3 {\displaystyle \mu _{3}} : μ₄ = μ₂(1 − 3μ₂), μ₅ = μ₃(1 − 2μ₂), μ₆ = μ₂(1 − 5μ₂(1 − μ₂)). The first six cumulants are κ₁ = p, κ₂ = pq, κ₃ = pq(q − p), κ₄ = pq(1 − 6pq), κ₅ = pq(q − p)(1 − 12pq), κ₆ = pq(1 − 30pq + 120p²q²). Entropy
the mean-zero Gaussian process G F = B ∘ F {\displaystyle \scriptstyle G_{F}=B\circ F} , where B is the standard Brownian bridge . The covariance structure of this Gaussian process is E[G_F(t₁) G_F(t₂)] = F(min(t₁, t₂)) − F(t₁)F(t₂). The uniform rate of convergence in Donsker's theorem can be quantified by the result known as the Hungarian embedding, a strong approximation of the empirical process by a sequence of Brownian bridges. Alternatively, the rate of convergence of n ( F ^ n − F ) {\displaystyle \scriptstyle {\sqrt {n}}({\widehat {F}}_{n}-F)} can also be quantified in terms of
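The covariance formula above can be checked by simulation. For Uniform(0, 1) data, F(t) = t, so the empirical process √n(F̂_n(t) − t) should have covariance close to min(s, t) − st at two fixed points; the sketch below estimates this over repeated samples, with the sample size, evaluation points, and replication count chosen arbitrarily for the example.

```python
# Illustrative sketch: for Uniform(0, 1) samples the limiting process is a
# standard Brownian bridge, whose covariance at points s, t is min(s, t) - s*t.
import numpy as np

rng = np.random.default_rng(5)
n, reps = 2_000, 2_000
s, t = 0.25, 0.75          # two fixed evaluation points

vals = np.empty((reps, 2))
for r in range(reps):
    x = rng.uniform(size=n)
    f_s = np.mean(x <= s)   # empirical CDF at s
    f_t = np.mean(x <= t)   # empirical CDF at t
    vals[r] = np.sqrt(n) * np.array([f_s - s, f_t - t])

# Sample covariance matrix; compare with
# [[s(1-s), min(s,t)-s*t], [min(s,t)-s*t, t(1-t)]] = [[0.1875, 0.0625], [0.0625, 0.1875]]
print(np.cov(vals, rowvar=False))
```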
the probability of X {\displaystyle X} depends. For the Bernoulli distribution, the Fisher information with respect to the parameter p {\displaystyle p} is given by: I(p) = 1/(pq) = 1/(p(1 − p)). Proof: The likelihood of a single observation is f(x; p) = p^x (1 − p)^{1 − x}. This represents the probability of observing X {\displaystyle X} given the parameter p {\displaystyle p} . The Fisher information is minimized when p = 0.5 {\displaystyle p=0.5} , where the outcome is most uncertain, and it grows without bound as p approaches 0 or 1, where each observation carries more information about the
the probability of tails). In particular, unfair coins would have p ≠ 1 / 2. {\displaystyle p\neq 1/2.} The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so n would be 1 for such a binomial distribution). It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1. If X {\displaystyle X}
the standardized Bernoulli distributed random variable X − E [ X ] Var [ X ] {\displaystyle {\frac {X-\operatorname {E} [X]}{\sqrt {\operatorname {Var} [X]}}}} we find that this random variable attains q p q {\displaystyle {\frac {q}{\sqrt {pq}}}} with probability p {\displaystyle p} and attains − p p q {\displaystyle -{\frac {p}{\sqrt {pq}}}} with probability q {\displaystyle q} . Thus we get the skewness as p·(q/√(pq))³ + q·(−p/√(pq))³ = (q − p)/√(pq). The raw moments are all equal due to
the two-point distributions including the Bernoulli distribution have a lower excess kurtosis, namely −2, than any other probability distribution. The Bernoulli distributions for 0 ≤ p ≤ 1 {\displaystyle 0\leq p\leq 1} form an exponential family. The maximum likelihood estimator of p {\displaystyle p} based on