The Boltzmann equation or Boltzmann transport equation (BTE) describes the statistical behaviour of a thermodynamic system not in a state of equilibrium; it was devised by Ludwig Boltzmann in 1872. The classic example of such a system is a fluid with temperature gradients in space causing heat to flow from hotter regions to colder ones, by the random but biased transport of the particles making up that fluid. In the modern literature the term Boltzmann equation is often used in a more general sense, referring to any kinetic equation that describes the change of a macroscopic quantity in a thermodynamic system, such as energy, charge or particle number.
The equation arises not by analyzing the individual positions and momenta of each particle in the fluid but rather by considering a probability distribution for the position and momentum of a typical particle—that is, the probability that the particle occupies a given very small region of space (mathematically the volume element d³r) centered at
a momentum space element d³p about p, at time t. Integrating over a region of position space and momentum space gives the total number of particles which have positions and momenta in that region: {\displaystyle {\begin{aligned}N&=\int \limits _{\mathrm {momenta} }d^{3}\mathbf {p} \int \limits _{\mathrm {positions} }d^{3}\mathbf {r} \,f(\mathbf {r} ,\mathbf {p} ,t)\\[5pt]&=\iiint \limits _{\mathrm {momenta} }\quad \iiint \limits _{\mathrm {positions} }f(x,y,z,p_{x},p_{y},p_{z},t)\,dx\,dy\,dz\,dp_{x}\,dp_{y}\,dp_{z}\end{aligned}}} which
a monotonic function, then the resulting density function is {\displaystyle f_{Y}(y)=f_{X}{\big (}g^{-1}(y){\big )}\left|{\frac {d}{dy}}{\big (}g^{-1}(y){\big )}\right|.} Here g^{-1} denotes
a collapsed random variable with probability density function p_Z(z) = δ(z) (i.e., a constant equal to zero). Let the random vector X̃ and the transform H be defined as {\displaystyle H(Z,X)={\begin{bmatrix}Z+V(X)\\X\end{bmatrix}}={\begin{bmatrix}Y\\{\tilde {X}}\end{bmatrix}}.} It
a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point. A distribution has a density function if and only if its cumulative distribution function F(x) is absolutely continuous. In this case: F is almost everywhere differentiable, and its derivative can be used as probability density: {\displaystyle {\frac {d}{dx}}F(x)=f(x).} If
a differentiable function and X be a random vector taking values in R^n, f_X be the probability density function of X and δ(·) be
a discrete variable can take n different values among real numbers, then the associated probability density function is: {\displaystyle f(t)=\sum _{i=1}^{n}p_{i}\,\delta (t-x_{i}),} where x_1, …, x_n are
a fluid consisting of only one kind of particle, the number density n is given by {\displaystyle n=\int f\,d^{3}\mathbf {p} .} The average value of any function A is {\displaystyle \langle A\rangle ={\frac {1}{n}}\int Af\,d^{3}\mathbf {p} .} Since
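In practice these moment formulas are evaluated by quadrature over momentum. The sketch below is a minimal one-dimensional illustration; the Maxwellian form, mass, temperature and grid are assumptions made for the example, not taken from the text.

```python
import numpy as np

# Illustration of n = ∫ f d^3p and <A> = (1/n) ∫ A f d^3p in one momentum dimension,
# using an assumed Maxwellian with number density 5 (all values are arbitrary).
m, kT = 1.0, 1.0
p = np.linspace(-10.0, 10.0, 4001)
dp = p[1] - p[0]
f = 5.0 * np.exp(-p**2 / (2.0 * m * kT)) / np.sqrt(2.0 * np.pi * m * kT)

n = np.sum(f) * dp                  # number density
A = p**2 / (2.0 * m)                # choose A(p) = kinetic energy
A_avg = np.sum(A * f) * dp / n      # average value <A>

print(n)      # ≈ 5.0
print(A_avg)  # ≈ kT/2, as expected for one translational degree of freedom
```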
a fluid is in transport. One may also derive other properties characteristic to fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). See also convection–diffusion equation. The equation is a nonlinear integro-differential equation, and the unknown function in the equation is a probability density function in six-dimensional space of
a force F instantly acts on each particle, then at time t + Δt their position will be {\displaystyle \mathbf {r} +\Delta \mathbf {r} =\mathbf {r} +{\frac {\mathbf {p} }{m}}\,\Delta t} and momentum p + Δp = p + F Δt. Then, in the absence of collisions, f must satisfy {\displaystyle f\left(\mathbf {r} +{\frac {\mathbf {p} }{m}}\,\Delta t,\mathbf {p} +\mathbf {F} \,\Delta t,t+\Delta t\right)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} =f(\mathbf {r} ,\mathbf {p} ,t)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} } Note that we have used
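The collisionless statement above can be checked with the method of characteristics: trace each phase-space point backward along its trajectory and evaluate the initial distribution there. The sketch below assumes one spatial dimension, a constant force and an arbitrary Gaussian initial f; the residual it prints is of order Δt², consistent with the first-order argument in the text.

```python
import numpy as np

# Collisionless motion in 1D with constant force F: f is constant along characteristics.
m, F = 1.0, 0.3                                        # assumed mass and force
f0 = lambda x, p: np.exp(-(x**2 + (p - 1.0)**2))       # arbitrary initial phase-space density

def f(x, p, t):
    """Exact collisionless solution: trace (x, p) back to its starting point at t = 0."""
    p_start = p - F * t
    x_start = x - p * t / m + F * t**2 / (2.0 * m)
    return f0(x_start, p_start)

rng = np.random.default_rng(0)
x, p = rng.normal(size=1000), rng.normal(size=1000)
t, dt = 0.7, 1e-3
lhs = f(x + p / m * dt, p + F * dt, t + dt)            # f at the displaced arguments
rhs = f(x, p, t)
print(np.max(np.abs(lhs - rhs)))                       # small, of order dt**2
```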
a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of something in the domain occurring—equals 1). This normalization factor is outside the kernel of the distribution. Since
a joint density are all independent from each other if and only if {\displaystyle f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=f_{X_{1}}(x_{1})\cdots f_{X_{n}}(x_{n}).} If
a number of parameters t. One parameter x_i(t) would describe a curved 1D path, two parameters x_i(t_1, t_2) describe a curved 2D surface, three x_i(t_1, t_2, t_3) describe a curved 3D volume of space, and so on. The linear span of a basis set B = {e_1, e_2, …, e_n} equals the position space R^n, denoted span(B) = R^n. Position vector fields are used to describe continuous and differentiable space curves, in which case
a particle position and momentum. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising. The set of all possible positions r and momenta p is called the phase space of the system; in other words, a set of three coordinates for each position coordinate x, y, z, and three more for each momentum component p_x, p_y, p_z. The entire space
a particular range of values, as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range—that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and the area under the entire curve is equal to 1. The terms probability distribution function and probability function have also sometimes been used to denote
a point Q with respect to point P is the Euclidean vector resulting from the subtraction of the two absolute position vectors (each with respect to the origin), where {\displaystyle \mathbf {s} ={\overrightarrow {OQ}}}. The relative direction between two points is their relative position normalized as a unit vector. In three dimensions, any set of three-dimensional coordinates and their corresponding basis vectors can be used to define
a probability distribution admits a density, then the probability of every one-point set {a} is zero; the same holds for finite and countable sets. Two probability densities f and g represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero. In the field of statistical physics, a non-formal reformulation of the relation above between
a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as {\displaystyle \operatorname {E} [X]=\int _{-\infty }^{\infty }x\,f(x)\,dx.} Not every probability distribution has
a random variable (or vector) X is given as f_X(x), it is possible (but often not necessary; see below) to calculate the probability density function of some variable Y = g(X). This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape f_{g(X)} = f_Y using a known (for instance, uniform) random number generator. It
a reference for a continuous random variable). Furthermore, when it does exist, the density is almost unique, meaning that any two such densities coincide almost everywhere. Unlike a probability, a probability density function can take on values greater than one; for example, the continuous uniform distribution on the interval [0, 1/2] has probability density f(x) = 2 for 0 ≤ x ≤ 1/2 and f(x) = 0 elsewhere. The standard normal distribution has probability density {\displaystyle f(x)={\frac {1}{\sqrt {2\pi }}}\,e^{-x^{2}/2}.} If
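The uniform example above is easy to verify numerically: the PDF value exceeds one on part of the axis, yet the total area under the curve is still one. A small sketch (the grid resolution is an arbitrary choice):

```python
import numpy as np

# Continuous uniform distribution on [0, 1/2]: density 2 on the interval, 0 elsewhere.
x = np.linspace(-0.5, 1.0, 150001)
dx = x[1] - x[0]
f = np.where((x >= 0.0) & (x <= 0.5), 2.0, 0.0)

print(f.max())          # 2.0 -- a density value greater than one
print(np.sum(f) * dx)   # ≈ 1.0 -- but the area under the whole curve is one
```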
a sequence of successive spatial locations given by the coordinates, the continuum limit of many successive locations is a path the particle traces. In the case of one dimension, the position has only one component, so it effectively degenerates to a scalar coordinate. It could be, say, a vector in the x direction, or the radial r direction. Equivalent notations also exist. For a position vector r that
a whole, often called joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X_1, ..., X_n, the probability that a realisation of the set variables falls inside the domain D is {\displaystyle \Pr \left(X_{1},\ldots ,X_{n}\in D\right)=\int _{D}f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})\,dx_{1}\cdots dx_{n}.} If F(x_1, ..., x_n) = Pr(X_1 ≤ x_1, ..., X_n ≤ x_n)
is {\displaystyle \Pr \left(X>0,Y>0\right)=\int _{0}^{\infty }\int _{0}^{\infty }f_{X,Y}(x,y)\,dx\,dy.} If the probability density function of
is (2 hour^{-1}) × (1 nanosecond) ≈ 6 × 10^{-13} (using the unit conversion 3.6 × 10^{12} nanoseconds = 1 hour). There is a probability density function f with f(5 hours) = 2 hour^{-1}. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window. A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable X has density f_X, where f_X
is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample. More precisely, the PDF is used to specify the probability of the random variable falling within
is 6-dimensional: a point in this space is (r, p) = (x, y, z, p_x, p_y, p_z), and each coordinate is parameterized by time t. The small volume ("differential volume element") is written {\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} =dx\,dy\,dz\,dp_{x}\,dp_{y}\,dp_{z}.} Since
is a 6-fold integral. While f is associated with a number of particles, the phase space is for one particle (not all of them, which is usually the case with deterministic many-body systems), since only one r and p is in question. It is not part of the analysis to use r_1, p_1 for particle 1, r_2, p_2 for particle 2, etc. up to r_N, p_N for particle N. It
is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample. Probability density is the probability per unit length, in other words, while the absolute likelihood for a continuous random variable to take on any particular value
is a probability density function: f(r, p, t), defined so that {\displaystyle dN=f(\mathbf {r} ,\mathbf {p} ,t)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} } is the number of molecules which all have positions lying within a volume element d³r about r and momenta lying within
is a function of time t, the time derivatives can be computed with respect to t. These derivatives have common utility in the study of kinematics, control theory, engineering and other sciences. The names for the first, second and third derivative of position—velocity, acceleration and jerk—are commonly used in basic kinematics. By extension, the higher-order derivatives can be computed in a similar fashion. Study of these higher-order derivatives can improve approximations of
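As a concrete illustration of these time derivatives, the sketch below differentiates a sampled position function numerically with finite differences; the path r(t) and the grid are arbitrary choices for the example.

```python
import numpy as np

# Finite-difference estimates of the first few time derivatives of a position function r(t).
t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]
r = np.stack([np.cos(t), np.sin(t), 0.5 * t**2], axis=1)   # assumed 3D path

v = np.gradient(r, dt, axis=0)   # velocity,     dr/dt
a = np.gradient(v, dt, axis=0)   # acceleration, d^2r/dt^2
j = np.gradient(a, dt, axis=0)   # jerk,         d^3r/dt^3

# Compare with the analytic values at t = 1: v = (-sin 1, cos 1, 1), a = (-cos 1, -sin 1, 1).
print(v[1000], a[1000], j[1000])
```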
is a non-negative Lebesgue-integrable function, if: {\displaystyle \Pr[a\leq X\leq b]=\int _{a}^{b}f_{X}(x)\,dx.} Hence, if F_X is the cumulative distribution function of X, then: {\displaystyle F_{X}(x)=\int _{-\infty }^{x}f_{X}(u)\,du,} and (if f_X
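Both relations can be spot-checked numerically for a concrete distribution; the sketch below uses the standard normal and assumes SciPy is available (the grid is an arbitrary choice).

```python
import numpy as np
from scipy.stats import norm

# Check F_X(x) = ∫_{-∞}^{x} f_X(u) du and f_X(x) = dF_X/dx for the standard normal.
x = np.linspace(-5.0, 5.0, 20001)
dx = x[1] - x[0]

cdf_from_pdf = np.cumsum(norm.pdf(x)) * dx      # crude numerical integral of the density
pdf_from_cdf = np.gradient(norm.cdf(x), dx)     # numerical derivative of the CDF

print(np.max(np.abs(cdf_from_pdf - norm.cdf(x))))   # small discretization error
print(np.max(np.abs(pdf_from_cdf - norm.pdf(x))))   # small discretization error
```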
is a shorthand for the momentum analogue of ∇, and ê_x, ê_y, ê_z are Cartesian unit vectors. Dividing (3) by dt and substituting into (2) gives: {\displaystyle {\frac {\partial f}{\partial t}}+{\frac {\mathbf {p} }{m}}\cdot \nabla f+\mathbf {F} \cdot {\frac {\partial f}{\partial \mathbf {p} }}=\left({\frac {\partial f}{\partial t}}\right)_{\mathrm {coll} }} In this context, F(r, t)
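To see the left-hand side of this equation at work, the sketch below integrates the collisionless 1D analogue ∂f/∂t + (p/m) ∂f/∂x + F ∂f/∂p = 0 with a first-order upwind scheme. Everything here (grid sizes, the constant positive force, the initial Gaussian, the CFL factor) is an illustrative assumption rather than part of the article's derivation.

```python
import numpy as np

# First-order upwind integration of the collisionless equation
#   df/dt + (p/m) df/dx + F df/dp = 0
# on a periodic x-grid and a truncated p-grid. All parameters are illustrative.
m, F = 1.0, 0.2                                        # F > 0 is assumed below
nx, npts = 128, 129
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
p = np.linspace(-4.0, 4.0, npts)
dx, dp = x[1] - x[0], p[1] - p[0]
X, P = np.meshgrid(x, p, indexing="ij")

f = np.exp(-((X - np.pi) ** 2) - (P - 1.0) ** 2)       # arbitrary initial phase-space density
dt = 0.2 * min(dx / np.max(np.abs(p) / m), dp / abs(F))   # CFL-limited time step

for _ in range(200):
    # upwind x-derivative, direction chosen by the sign of the advection speed p/m
    dfdx_back = (f - np.roll(f, 1, axis=0)) / dx
    dfdx_fwd = (np.roll(f, -1, axis=0) - f) / dx
    dfdx = np.where(P / m > 0, dfdx_back, dfdx_fwd)
    # upwind p-derivative for constant F > 0 (backward difference), zero-gradient at the edge
    dfdp = np.empty_like(f)
    dfdp[:, 1:] = (f[:, 1:] - f[:, :-1]) / dp
    dfdp[:, 0] = 0.0
    f = f - dt * (P / m * dfdx + F * dfdp)

print(f.sum() * dx * dp)   # total particle number is approximately conserved
```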
is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables. Suppose bacteria of a certain species typically live 20 to 30 hours. The probability that a bacterium lives exactly 5 hours
is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that {\displaystyle f_{Y,X}(y,x)=f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )},} which if marginalized over x leads to
is assumed the particles in the system are identical (so each has an identical mass m). For a mixture of more than one chemical species, one distribution is needed for each; see below. The general equation can then be written as {\displaystyle {\frac {df}{dt}}=\left({\frac {\partial f}{\partial t}}\right)_{\text{force}}+\left({\frac {\partial f}{\partial t}}\right)_{\text{diff}}+\left({\frac {\partial f}{\partial t}}\right)_{\text{coll}},} where
is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour^{-1}) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond),
is clear that H is a bijective mapping, and the Jacobian of H^{-1} is given by: {\displaystyle {\frac {dH^{-1}(y,{\tilde {\mathbf {x} }})}{dy\,d{\tilde {\mathbf {x} }}}}={\begin{bmatrix}1&-{\frac {dV({\tilde {\mathbf {x} }})}{d{\tilde {\mathbf {x} }}}}\\\mathbf {0} _{n\times 1}&\mathbf {I} _{n\times n}\end{bmatrix}},} which
is continuous at x) {\displaystyle f_{X}(x)={\frac {d}{dx}}F_{X}(x).} Intuitively, one can think of f_X(x) dx as being the probability of X falling within
is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval
is not necessarily a density) then the n variables in the set are all independent from each other, and the marginal probability density function of each of them is given by {\displaystyle f_{X_{i}}(x_{i})={\frac {f_{i}(x_{i})}{\int f_{i}(x)\,dx}}.} This elementary example illustrates
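A small numerical sketch of this statement: factor a joint density on a grid, normalize each factor, and check that the product of the resulting marginals reproduces the joint density (the particular factors below are arbitrary choices).

```python
import numpy as np

# Joint density that factors: f(x1, x2) ∝ f1(x1) * f2(x2) on a grid.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
f1 = np.exp(-x**2)                  # unnormalized factor in x1
f2 = np.exp(-np.abs(x))             # unnormalized factor in x2

joint = np.outer(f1, f2)
joint /= joint.sum() * dx * dx      # normalize the joint density

marg1 = joint.sum(axis=1) * dx      # marginal of X1, equals f1 / ∫ f1
marg2 = joint.sum(axis=0) * dx      # marginal of X2, equals f2 / ∫ f2

print(np.allclose(marg1, f1 / (f1.sum() * dx)))        # True
print(np.allclose(joint, np.outer(marg1, marg2)))      # True -> independence
```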
is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on. In this example, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour^{-1}). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour^{-1}. This quantity 2 hour^{-1}
is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function using the Dirac delta function. (This is not possible with a probability density function in the sense defined above; it may be done with a distribution.) For example, consider a binary discrete random variable having
is tempting to think that in order to find the expected value E(g(X)), one must first find the probability density f_{g(X)} of the new random variable Y = g(X). However, rather than computing {\displaystyle \operatorname {E} {\big (}g(X){\big )}=\int _{-\infty }^{\infty }yf_{g(X)}(y)\,dy,} one may find instead {\displaystyle \operatorname {E} {\big (}g(X){\big )}=\int _{-\infty }^{\infty }g(x)f_{X}(x)\,dx.} The values of
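The practical content of the second integral is that E[g(X)] can be computed against f_X directly. A Monte Carlo and quadrature sketch (the choice of g and of the distribution of X is purely illustrative):

```python
import numpy as np

# Check E[g(X)] = ∫ g(x) f_X(x) dx without ever deriving the density of Y = g(X).
rng = np.random.default_rng(1)
x_samples = rng.normal(size=1_000_000)           # X ~ standard normal
g = lambda t: np.exp(-np.abs(t))                 # illustrative choice of g

mc_estimate = g(x_samples).mean()                # sample-based estimate of E[g(X)]

xs = np.linspace(-10.0, 10.0, 200001)
f_X = np.exp(-xs**2 / 2.0) / np.sqrt(2.0 * np.pi)   # standard normal density
quadrature = np.sum(g(xs) * f_X) * (xs[1] - xs[0])

print(mc_estimate, quadrature)                   # both ≈ 0.52, agreeing to Monte Carlo error
```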
is the Radon–Nikodym derivative: {\displaystyle f={\frac {dX_{*}P}{d\mu }}.} That is, f is any measurable function with the property that: {\displaystyle \Pr[X\in A]=\int _{X^{-1}A}\,dP=\int _{A}f\,d\mu } for any measurable set {\displaystyle A\in {\mathcal {A}}.} In
is the cumulative distribution function of the vector (X_1, ..., X_n), then the joint probability density function can be computed as a partial derivative {\displaystyle f(x)=\left.{\frac {\partial ^{n}F}{\partial x_{1}\cdots \partial x_{n}}}\right|_{x}} For i = 1, 2, ..., n, let f_{X_i}(x_i) be
is the force field acting on the particles in the fluid, and m is the mass of the particles. The term on the right-hand side is added to describe the effect of collisions between particles; if it is zero then the particles do not collide. The collisionless Boltzmann equation, where individual collisions are replaced with long-range aggregated interactions, e.g. Coulomb interactions, is often called
is the gradient operator, · is the dot product, {\displaystyle {\frac {\partial f}{\partial \mathbf {p} }}=\mathbf {\hat {e}} _{x}{\frac {\partial f}{\partial p_{x}}}+\mathbf {\hat {e}} _{y}{\frac {\partial f}{\partial p_{y}}}+\mathbf {\hat {e}} _{z}{\frac {\partial f}{\partial p_{z}}}=\nabla _{\mathbf {p} }f}
is the total change in f. Dividing (1) by d³r d³p Δt and taking the limits Δt → 0 and Δf → 0, we have {\displaystyle {\frac {df}{dt}}=\left({\frac {\partial f}{\partial t}}\right)_{\mathrm {coll} }} The total differential of f is: {\displaystyle {\begin{aligned}df&={\frac {\partial f}{\partial t}}\,dt+\left({\frac {\partial f}{\partial x}}\,dx+{\frac {\partial f}{\partial y}}\,dy+{\frac {\partial f}{\partial z}}\,dz\right)+\left({\frac {\partial f}{\partial p_{x}}}\,dp_{x}+{\frac {\partial f}{\partial p_{y}}}\,dp_{y}+{\frac {\partial f}{\partial p_{z}}}\,dp_{z}\right)\\[5pt]&={\frac {\partial f}{\partial t}}dt+\nabla f\cdot d\mathbf {r} +{\frac {\partial f}{\partial \mathbf {p} }}\cdot d\mathbf {p} \\[5pt]&={\frac {\partial f}{\partial t}}dt+\nabla f\cdot {\frac {\mathbf {p} }{m}}dt+{\frac {\partial f}{\partial \mathbf {p} }}\cdot \mathbf {F} \,dt\end{aligned}}} where ∇
is the differential cross-section, as before, between particles i and j. The integration is over the momentum components in the integrand (which are labelled i and j). The sum of integrals describes the entry and exit of particles of species i in or out of the phase-space element. The Boltzmann equation can be used to derive the fluid dynamic conservation laws for mass, charge, momentum, and energy. For a
is the magnitude of the relative momenta (see relative velocity for more on this concept), and I(g, Ω) is the differential cross section of the collision, in which the relative momenta of the colliding particles turn through an angle θ into the element of the solid angle dΩ, due to the collision. Since much of the challenge in solving the Boltzmann equation originates with the complex collision term, attempts have been made to "model" and simplify
is the mass density, and V_i = ⟨v_i⟩ is the average fluid velocity. Letting A = m(v_i)^1 = p_i, the momentum of the particle, the integrated Boltzmann equation becomes
is the molecular collision frequency, and f_0 is the local Maxwellian distribution function given the gas temperature at this point in space. This is also called the "relaxation time approximation". For a mixture of chemical species labelled by indices i = 1, 2, 3, ..., n the equation for species i is {\displaystyle {\frac {\partial f_{i}}{\partial t}}+{\frac {\mathbf {p} _{i}}{m_{i}}}\cdot \nabla f_{i}+\mathbf {F} \cdot {\frac {\partial f_{i}}{\partial \mathbf {p} _{i}}}=\left({\frac {\partial f_{i}}{\partial t}}\right)_{\text{coll}},} where f_i = f_i(r, p_i, t), and
is the number of solutions in x for the equation g(x) = y, and g_k^{-1}(y) are these solutions. Suppose x is an n-dimensional random variable with joint density f. If y = G(x), where G is a bijective, differentiable function, then y has density p_Y: {\displaystyle p_{Y}(\mathbf {y} )=f{\Bigl (}G^{-1}(\mathbf {y} ){\Bigr )}\left|\det \left[\left.{\frac {dG^{-1}(\mathbf {z} )}{d\mathbf {z} }}\right|_{\mathbf {z} =\mathbf {y} }\right]\right|} with
is the particle velocity vector. Define A(p_i) as some function of momentum p_i only, whose total value is conserved in a collision. Assume also that the force F_i is a function of position only, and that f
is the pressure tensor (the viscous stress tensor plus the hydrostatic pressure). Letting A = m(v_i)²/2 = p_i p_i/(2m), the kinetic energy of the particle, the integrated Boltzmann equation becomes
is therefore modified to the BGK form: {\displaystyle {\frac {\partial f}{\partial t}}+{\frac {\mathbf {p} }{m}}\cdot \nabla f+\mathbf {F} \cdot {\frac {\partial f}{\partial \mathbf {p} }}=\nu (f_{0}-f),} where ν
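In the space-homogeneous, force-free case the BGK right-hand side reduces to ∂f/∂t = ν(f₀ − f), which relaxes any initial distribution toward the Maxwellian f₀ at rate ν. A minimal sketch (the collision frequency, grid and initial condition are assumed values):

```python
import numpy as np

# Space-homogeneous, force-free BGK relaxation: df/dt = nu * (f0 - f).
m, kT, nu = 1.0, 1.0, 2.0                              # assumed constants
p = np.linspace(-6.0, 6.0, 601)
f0 = np.exp(-p**2 / (2.0 * m * kT)) / np.sqrt(2.0 * np.pi * m * kT)  # local Maxwellian
f_init = np.where(np.abs(p) < 2.0, 0.25, 0.0)          # arbitrary non-equilibrium start
f = f_init.copy()

dt, steps = 1e-3, 2000
for _ in range(steps):
    f = f + dt * nu * (f0 - f)                         # forward Euler step

exact = f0 + (f_init - f0) * np.exp(-nu * dt * steps)  # analytic solution of the ODE
print(np.max(np.abs(f - exact)))                       # small: numerical decay tracks exp(-nu t)
```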
is zero for p_i → ±∞. Multiplying the Boltzmann equation by A and integrating over momentum yields four terms, which, using integration by parts, can be expressed as {\displaystyle \int A{\frac {\partial f}{\partial t}}\,d^{3}\mathbf {p} ={\frac {\partial }{\partial t}}(n\langle A\rangle ),} {\displaystyle \int {\frac {p_{j}A}{m}}{\frac {\partial f}{\partial x_{j}}}\,d^{3}\mathbf {p} ={\frac {1}{m}}{\frac {\partial }{\partial x_{j}}}(n\langle Ap_{j}\rangle ),} {\displaystyle \int AF_{j}{\frac {\partial f}{\partial p_{j}}}\,d^{3}\mathbf {p} =-nF_{j}\left\langle {\frac {\partial A}{\partial p_{j}}}\right\rangle ,} {\displaystyle \int A\left({\frac {\partial f}{\partial t}}\right)_{\text{coll}}\,d^{3}\mathbf {p} ={\frac {\partial }{\partial t}}_{\text{coll}}(n\langle A\rangle )=0,} where
the Borel sets as measurable subsets) has as probability distribution the pushforward measure X_*P on {\displaystyle ({\mathcal {X}},{\mathcal {A}})}: the density of X with respect to a reference measure μ on {\displaystyle ({\mathcal {X}},{\mathcal {A}})}
the Dirac delta function. It is possible to use the formulas above to determine f_Y, the probability density function of Y = V(X), which will be given by {\displaystyle f_{Y}(y)=\int _{\mathbb {R} ^{n}}f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,d\mathbf {x} .} This result leads to
the Rademacher distribution—that is, taking −1 or 1 for values, with probability 1⁄2 each. The density of probability associated with this variable is: {\displaystyle f(t)={\frac {1}{2}}(\delta (t+1)+\delta (t-1)).} More generally, if
the Vlasov equation. This equation is more useful than the principal one above, yet still incomplete, since f cannot be solved unless the collision term in f is known. This term cannot be found as easily or generally as the others – it is a statistical term representing the particle collisions, and requires knowledge of the statistics the particles obey, like the Maxwell–Boltzmann, Fermi–Dirac or Bose–Einstein distributions. A key insight applied by Boltzmann
the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof). It is not possible to define a density with reference to an arbitrary measure (e.g. one cannot choose the counting measure as
the inverse function. This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is, {\displaystyle \left|f_{Y}(y)\,dy\right|=\left|f_{X}(x)\,dx\right|,} or {\displaystyle f_{Y}(y)=\left|{\frac {dx}{dy}}\right|f_{X}(x)=\left|{\frac {d}{dy}}(x)\right|f_{X}(x)=\left|{\frac {d}{dy}}{\big (}g^{-1}(y){\big )}\right|f_{X}{\big (}g^{-1}(y){\big )}={\left|\left(g^{-1}\right)'(y)\right|}\cdot f_{X}{\big (}g^{-1}(y){\big )}.} For functions that are not monotonic,
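The invariance argument can be checked by sampling: push uniform samples through a monotonic map and compare a histogram of the result with the density predicted by the formula above (the choice of g and the binning are illustrative assumptions).

```python
import numpy as np

# Monotonic change of variables: X ~ Uniform(0,1), Y = g(X) = -ln(X).
# Then g^{-1}(y) = exp(-y) and f_Y(y) = f_X(exp(-y)) * |d/dy exp(-y)| = exp(-y) for y > 0.
rng = np.random.default_rng(2)
x = rng.uniform(size=1_000_000)
y = -np.log(x)

edges = np.linspace(0.0, 6.0, 61)
hist, _ = np.histogram(y, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
predicted = np.exp(-centers)                 # density from the change-of-variables formula

print(np.max(np.abs(hist - predicted)))      # small sampling/binning error
```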
the law of the unconscious statistician: {\displaystyle \operatorname {E} _{Y}[Y]=\int _{\mathbb {R} }yf_{Y}(y)\,dy=\int _{\mathbb {R} }y\int _{\mathbb {R} ^{n}}f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,d\mathbf {x} \,dy=\int _{{\mathbb {R} }^{n}}\int _{\mathbb {R} }yf_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,dy\,d\mathbf {x} =\int _{\mathbb {R} ^{n}}V(\mathbf {x} )f_{X}(\mathbf {x} )\,d\mathbf {x} =\operatorname {E} _{X}[V(X)].} Proof: Let Z be
the mass of the particle, the integrated Boltzmann equation becomes the conservation of mass equation: {\displaystyle {\frac {\partial }{\partial t}}\rho +{\frac {\partial }{\partial x_{j}}}(\rho V_{j})=0,} where ρ = mn
the mean, variance, and kurtosis), starting from the formulas given for a continuous distribution of the probability. It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by μ and σ² respectively, giving
the "force" term corresponds to the forces exerted on the particles by an external influence (not by the particles themselves), the "diff" term represents the diffusion of particles, and "coll" is the collision term – accounting for the forces acting between particles in collisions. Expressions for each term on the right side are provided below. Note that some authors use the particle velocity v instead of momentum p; they are related in
the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call {\displaystyle {\vec {R}}} a 2-dimensional random vector of coordinates (X, Y): the probability to obtain {\displaystyle {\vec {R}}} in the quarter plane of positive x and y
the collision term is {\displaystyle \left({\frac {\partial f_{i}}{\partial t}}\right)_{\mathrm {coll} }=\sum _{j=1}^{n}\iint g_{ij}I_{ij}(g_{ij},\Omega )[f'_{i}f'_{j}-f_{i}f_{j}]\,d\Omega \,d^{3}\mathbf {p'} ,} where f′ = f′(p′_i, t),
the collision term. The best known model equation is due to Bhatnagar, Gross and Krook. The assumption in the BGK approximation is that the effect of molecular collisions is to force a non-equilibrium distribution function at a point in physical space back to a Maxwellian equilibrium distribution function and that the rate at which this occurs is proportional to the molecular collision frequency. The Boltzmann equation
the conservation equations involve tensors, the Einstein summation convention will be used where repeated indices in a product indicate summation over those indices. Thus x ↦ x_i and p ↦ p_i = m v_i, where v_i
the conservation of energy equation:

Position vector

In geometry, a position or position vector, also known as location vector or radius vector, is a Euclidean vector that represents a point P in space. Its length represents the distance in relation to an arbitrary reference origin O, and its direction represents the angular orientation with respect to given reference axes. Usually denoted x, r, or s, it corresponds to
the conservation of momentum equation: {\displaystyle {\frac {\partial }{\partial t}}(\rho V_{i})+{\frac {\partial }{\partial x_{j}}}(\rho V_{i}V_{j}+P_{ij})-nF_{i}=0,} where P_ij = ρ⟨(v_i − V_i)(v_j − V_j)⟩
the definition of momentum by p = mv. Consider particles described by f, each experiencing an external force F not due to other particles (see the collision term for the latter treatment). Suppose at time t some number of particles all have position r within element d³r and momentum p within d³p. If
the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following: If dt is an infinitely small number, the probability that X is included within the interval (t, t + dt) is equal to f(t) dt, or: {\displaystyle \Pr(t<X<t+dt)=f(t)\,dt.} It
the differential regarded as the Jacobian of the inverse of G(⋅), evaluated at y. For example, in the 2-dimensional case x = (x_1, x_2), suppose the transform G is given as y_1 = G_1(x_1, x_2), y_2 = G_2(x_1, x_2) with inverses x_1 = G_1^{-1}(y_1, y_2), x_2 = G_2^{-1}(y_1, y_2). The joint distribution for y = (y_1, y_2) has density {\displaystyle p_{Y_{1},Y_{2}}(y_{1},y_{2})=f_{X_{1},X_{2}}{\big (}G_{1}^{-1}(y_{1},y_{2}),G_{2}^{-1}(y_{1},y_{2}){\big )}\left\vert {\frac {\partial G_{1}^{-1}}{\partial y_{1}}}{\frac {\partial G_{2}^{-1}}{\partial y_{2}}}-{\frac {\partial G_{1}^{-1}}{\partial y_{2}}}{\frac {\partial G_{2}^{-1}}{\partial y_{1}}}\right\vert .} Let V : R^n → R be
the discrete values accessible to the variable and p_1, …, p_n are the probabilities associated with these values. This substantially unifies the treatment of discrete and continuous probability distributions. The above expression allows for determining statistical characteristics of such a discrete variable (such as
the fact that the phase space volume element d³r d³p is constant, which can be shown using Hamilton's equations (see the discussion under Liouville's theorem). However, since collisions do occur, the particle density in the phase-space volume d³r d³p changes, so {\displaystyle {\begin{aligned}dN_{\mathrm {coll} }&=\left({\frac {\partial f}{\partial t}}\right)_{\mathrm {coll} }\Delta t\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} \\[5pt]&=f\left(\mathbf {r} +{\frac {\mathbf {p} }{m}}\Delta t,\mathbf {p} +\mathbf {F} \Delta t,t+\Delta t\right)d^{3}\mathbf {r} \,d^{3}\mathbf {p} -f(\mathbf {r} ,\mathbf {p} ,t)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} \\[5pt]&=\Delta f\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} \end{aligned}}} where Δf
the family of densities {\displaystyle f(x;\mu ,\sigma ^{2})={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {x-\mu }{\sigma }}\right)^{2}}.} Different values of
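Evaluating this family for a few parameter choices shows how μ and σ² reshape the curve while the normalization factor keeps the total area at one (the parameter values below are arbitrary):

```python
import numpy as np

def normal_pdf(x, mu, sigma2):
    """Density of the normal family f(x; mu, sigma^2) written above."""
    sigma = np.sqrt(sigma2)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]
for mu, sigma2 in [(0.0, 1.0), (2.0, 0.5), (-3.0, 4.0)]:   # illustrative parameter values
    f = normal_pdf(x, mu, sigma2)
    print(mu, sigma2, np.sum(f) * dx)   # each total area ≈ 1, the normalization factor at work
```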
the independent parameter need not be time, but can be (e.g.) arc length of the curve. In any equation of motion, the position vector r(t) is usually the most sought-after quantity because this function defines the motion of a particle (i.e. a point mass) – its location relative to a given coordinate system at some time t. To define motion in terms of position, each coordinate may be parametrized by time; since each successive value of time corresponds to
the infinitesimal interval [x, x + dx]. (This definition may be extended to any probability distribution using the measure-theoretic definition of probability.) A random variable X with values in a measurable space {\displaystyle ({\mathcal {X}},{\mathcal {A}})} (usually R^n with
the joint probability density function of a vector of n random variables can be factored into a product of n functions of one variable {\displaystyle f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=f_{1}(x_{1})\cdots f_{n}(x_{n}),} (where each f_i
the last term is zero, since A is conserved in a collision. The values of A correspond to moments of velocity v_i (and momentum p_i, as they are linearly dependent). Letting A = m(v_i)^0 = m,
the latter case one needs an additional time coordinate). Linear algebra allows for the abstraction of an n-dimensional position vector. A position vector can be expressed as a linear combination of basis vectors: r = x_1 e_1 + x_2 e_2 + ⋯ + x_n e_n. The set of all position vectors forms position space (a vector space whose elements are the position vectors), since positions can be added (vector addition) and scaled in length (scalar multiplication) to obtain another position vector in
the location of a point in space—whichever is the simplest for the task at hand may be used. Commonly, one uses the familiar Cartesian coordinate system, or sometimes spherical polar coordinates, or cylindrical coordinates, where t is a parameter, owing to their rectangular or circular symmetry. These different coordinates and corresponding basis vectors represent the same position vector. More general curvilinear coordinates could be used instead and are used in contexts like continuum mechanics and general relativity (in
the magnitude of the relative momenta is {\displaystyle g_{ij}=|\mathbf {p} _{i}-\mathbf {p} _{j}|=|\mathbf {p'} _{i}-\mathbf {p'} _{j}|,} and I_ij
the momenta of any two particles (labeled as A and B for convenience) before a collision, p′_A and p′_B are the momenta after the collision, {\displaystyle g=|\mathbf {p} _{B}-\mathbf {p} _{A}|=|\mathbf {p'} _{B}-\mathbf {p'} _{A}|}
the original displacement function. Such higher-order terms are required in order to accurately represent the displacement function as a sum of an infinite sequence, enabling several analytical techniques in engineering and physics.

Probability density function

In probability theory, a probability density function (PDF), density function, or density of an absolutely continuous random variable,
the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family means simply substituting the new parameter values into the formula in place of the old ones. For continuous random variables X_1, ..., X_n, it is also possible to define a probability density function associated to the set as
the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of
the position r, and has momentum nearly equal to a given momentum vector p (thus occupying a very small region of momentum space d³p), at an instant of time. The Boltzmann equation can be used to determine how physical quantities change, such as heat energy and momentum, when
the probability density function associated with variable X_i alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables X_1, ..., X_n by integrating over all values of the other n − 1 variables: {\displaystyle f_{X_{i}}(x_{i})=\int f(x_{1},\ldots ,x_{n})\,dx_{1}\cdots dx_{i-1}\,dx_{i+1}\cdots dx_{n}.} Continuous random variables X_1, ..., X_n admitting
the probability density function for y is {\displaystyle \sum _{k=1}^{n(y)}\left|{\frac {d}{dy}}g_{k}^{-1}(y)\right|\cdot f_{X}{\big (}g_{k}^{-1}(y){\big )},} where n(y)
the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. "Density function" itself
the probability of N molecules, which all have r and p within d³r d³p, is in question, at the heart of the equation is a quantity f which gives this probability per unit phase-space volume, or probability per unit length cubed per unit momentum cubed, at an instant of time t. This
the space. The notion of "space" is intuitive, since each x_i (i = 1, 2, …, n) can have any value; the collection of values defines a point in space. The dimension of the position space is n (also denoted dim(R^n) = n). The coordinates of the vector r with respect to the basis vectors e_i are x_i. The vector of coordinates forms the coordinate vector or n-tuple (x_1, x_2, …, x_n). Each coordinate x_i may be parameterized
the straight line segment from O to P. In other words, it is the displacement or translation that maps the origin to P. The term position vector is used mostly in the fields of differential geometry, mechanics and occasionally vector calculus. Frequently this is used in two-dimensional or three-dimensional space, but can be easily generalized to Euclidean spaces and affine spaces of any dimension. The relative position of
the two integrals are the same in all cases in which both X and g(X) actually have probability density functions. It is not necessary that g be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician. Let g : R → R be
was to determine the collision term resulting solely from two-body collisions between particles that are assumed to be uncorrelated prior to the collision. This assumption was referred to by Boltzmann as the "Stosszahlansatz" and is also known as the "molecular chaos assumption". Under this assumption the collision term can be written as a momentum-space integral over the product of one-particle distribution functions: {\displaystyle \left({\frac {\partial f}{\partial t}}\right)_{\text{coll}}=\iint gI(g,\Omega )[f(\mathbf {r} ,\mathbf {p'} _{A},t)f(\mathbf {r} ,\mathbf {p'} _{B},t)-f(\mathbf {r} ,\mathbf {p} _{A},t)f(\mathbf {r} ,\mathbf {p} _{B},t)]\,d\Omega \,d^{3}\mathbf {p} _{B},} where p_A and p_B are
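Collision terms of this kind are usually evaluated stochastically rather than by direct quadrature. The sketch below is a deliberately simplified, DSMC-flavoured illustration of the molecular-chaos idea, not Boltzmann's integral itself and not a production scheme: randomly paired, uncorrelated particles undergo isotropic collisions that conserve momentum and kinetic energy, and an initially anisotropic momentum distribution relaxes toward an isotropic one.

```python
import numpy as np

# Highly simplified collision step in the spirit of the molecular chaos assumption:
# pairs are sampled independently of their history, and each binary collision conserves
# total momentum and kinetic energy (equal masses and isotropic scattering are assumed).
rng = np.random.default_rng(3)
N, m = 20000, 1.0
p = rng.normal(size=(N, 3)) * np.array([1.0, 0.3, 0.3])   # anisotropic initial momenta

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

for _ in range(50):                       # a few sweeps of random binary collisions
    idx = rng.permutation(N)
    a, b = idx[: N // 2], idx[N // 2 :]
    p_cm = 0.5 * (p[a] + p[b])            # centre-of-mass momentum of each pair
    g = p[a] - p[b]                       # relative momentum before the collision
    g_new = np.linalg.norm(g, axis=1, keepdims=True) * random_unit_vectors(len(a))
    p[a], p[b] = p_cm + 0.5 * g_new, p_cm - 0.5 * g_new

# The momentum distribution relaxes toward an isotropic (Maxwellian-like) form.
print(p.var(axis=0))                      # the three components approach a common variance
```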