Catalog of articles in probability theory
This page lists articles related to probability theory. In particular, it lists many articles corresponding to specific probability distributions. Such articles are marked here by a code of the form (X:Y), which refers to the number of random variables involved and the type of the distribution. For example, (2:DC) indicates a distribution with two random variables, discrete or continuous. Other codes are just abbreviations for topics; the list of codes can be found in the table of contents.

Basic notions (bsc)

Random variable

Continuous probability distribution / (1:C)

Cumulative distribution function / (1:DCR)

Discrete probability distribution / (1:D)

Independent and identically distributed random variables / (FS:BDCR)

Joint probability distribution / (F:DC)


Marginal distribution / (2F:DC)

Probability density function / (1:C)

Probability distribution / (1:DCRG)

Probability distribution function

Probability mass function / (1:D)

Sample space


Instructive examples (paradoxes) (iex)

Berkson's paradox / (2:B)

Bertrand's box paradox / (F:B)

Borel–Kolmogorov paradox / cnd (2:CM)

Boy or Girl paradox / (2:B)

Exchange paradox / (2:D)

Monty Hall problem / (F:B)
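The Monty Hall problem above has the famously counter-intuitive answer that switching doors wins 2/3 of the time. A quick way to see this is by simulation; the sketch below is illustrative only (the helper name `monty_hall` is made up, not from the linked article).

```python
import random

def monty_hall(switch, trials=100_000, seed=0):
    """Estimate the win rate of the stay/switch strategies (illustrative sketch)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)    # door hiding the car
        pick = rng.randrange(3)   # contestant's initial choice
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

# Switching wins about 2/3 of the time; staying wins about 1/3.
```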

Necktie paradox

Nontransitive dice

Simpson's paradox

Sleeping Beauty problem

St. Petersburg paradox / mnt (1:D)

Three Prisoners problem

Two envelopes problem


Moments (mnt)

Expected value / (12:DCR)

Canonical correlation / (F:R)

Carleman's condition / anl (1:R)

Central moment / (1:R)

Coefficient of variation / (1:R)

Correlation / (2:R)

Correlation function / (U:R)

Covariance / (2F:R) (1:G)

Covariance function / (U:R)

Covariance matrix / (F:R)

Cumulant / (12F:DCR)

Factorial moment / (1:R)

Factorial moment generating function / anl (1:R)

Fano factor

Geometric standard deviation / (1:R)

Hamburger moment problem / anl (1:R)

Hausdorff moment problem / anl (1:R)

Isserlis Gaussian moment theorem / Gau

Jensen's inequality / (1:DCR)

Kurtosis / (1:CR)

Law of the unconscious statistician / (1:DCR)


Moment / (12FU:CRG)

Law of total covariance / (F:R)

Law of total cumulance / (F:R)

Law of total expectation / (F:DR)

Law of total variance / (F:R)

Logmoment generating function

Marcinkiewicz–Zygmund inequality / inq

Method of moments / lmt (L:R)

Moment problem / anl (1:R)

Moment-generating function / anl (1F:R)

Second moment method / (1FL:DR)

Skewness / (1:R)

St. Petersburg paradox / iex (1:D)

Standard deviation / (1:DCR)

Standardized moment / (1:R)

Stieltjes moment problem / anl (1:R)

Trigonometric moment problem / anl (1:R)

Uncorrelated / (2:R)

Variance / (12F:DCR)

Variance-to-mean ratio / (1:R)


Inequalities (inq)

Chebyshev's inequality / (1:R)
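Chebyshev's inequality guarantees that no more than 1/k² of a distribution's mass lies more than k standard deviations from the mean. A quick empirical check (an illustrative sketch, not from the linked article; the exponential distribution, with μ = σ = 1, is an arbitrary choice):

```python
import random

def chebyshev_check(k=2.0, n=200_000, seed=1):
    """Compare the empirical tail mass P(|X - mu| >= k*sigma) with the bound 1/k^2."""
    rng = random.Random(seed)
    mu = sigma = 1.0  # exponential distribution with rate 1 has mean 1 and variance 1
    tail = sum(abs(rng.expovariate(1.0) - mu) >= k * sigma for _ in range(n)) / n
    return tail, 1.0 / k ** 2

# For k = 2 the Chebyshev bound is 0.25; the actual exponential tail mass
# P(X >= 3) = e^{-3} is about 0.05, comfortably inside the bound.
```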

An inequality on location and scale parameters / (1:R)

Azuma's inequality / (F:BR)

Bennett's inequality / (F:R)

Bernstein inequalities / (F:R)

Bhatia–Davis inequality

Chernoff bound / (F:B)

Doob's martingale inequality / (FU:R)

Dudley's theorem / Gau

Entropy power inequality

Etemadi's inequality / (F:R)

Gauss's inequality

Hoeffding's inequality / (F:R)

Khintchine inequality / (F:B)

Kolmogorov's inequality / (F:R)

Marcinkiewicz–Zygmund inequality / mnt

Markov's inequality / (1:R)

McDiarmid's inequality

Multidimensional Chebyshev's inequality

Paley–Zygmund inequality / (1:R)

Pinsker's inequality / (2:R)

Vysochanskiï–Petunin inequality / (1:C)


Markov chains, processes, fields, networks (Mar)

Markov chain / (FLSU:D)
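A Markov chain's memoryless dynamics are easy to simulate: the next state is drawn from the row of the transition matrix indexed by the current state alone. The sketch below is illustrative only; the two-state matrix P is a made-up example whose stationary distribution (5/6, 1/6) solves π = πP.

```python
import random

def visit_frequencies(P, steps=200_000, seed=2):
    """Simulate a finite-state Markov chain and return empirical state frequencies."""
    rng = random.Random(seed)
    state = 0
    counts = [0] * len(P)
    for _ in range(steps):
        counts[state] += 1
        # Memoryless step: the next state depends only on the current state.
        state = rng.choices(range(len(P)), weights=P[state])[0]
    return [c / steps for c in counts]

# Illustrative two-state chain; row i gives P(next = j | current = i).
P = [[0.9, 0.1],
     [0.5, 0.5]]
# The visit frequencies approach the stationary distribution (5/6, 1/6).
```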

Additive Markov chain

Bayesian network / Bay

Birth-death process / (U:D)

CIR process / scl

Chapman–Kolmogorov equation / (F:DC)

Cheeger bound / (L:D)

Conductance

Contact process

Continuous-time Markov process / (U:D)

Detailed balance / (F:D)

Examples of Markov chains / (FL:D)

Feller process / (U:G)

Fokker–Planck equation / scl anl

Foster's theorem / (L:D)

Gauss–Markov process / Gau

Geometric Brownian motion / scl

Hammersley–Clifford theorem / (F:C)

Harris chain / (L:DC)

Hidden Markov model / (F:D)

Hidden Markov random field

Hunt process / (U:R)

Kalman filter / (F:C)

Kolmogorov backward equation / scl

Kolmogorov's criterion / (F:D)

Kolmogorov's generalized criterion / (U:D)


Krylov–Bogolyubov theorem / anl

Lumpability

Markov additive process

Markov blanket / Bay

Markov chain mixing time / (L:D)

Markov decision process

Markov information source

Markov kernel

Markov logic network

Markov network

Markov process / (U:D)

Markov property / (F:D)

Markov random field

Master equation / phs (U:D)

Milstein method / scl

Moran process

Ornstein–Uhlenbeck process / Gau scl

Partially observable Markov decision process

Product form solution / spr

Quantum Markov chain / phs

Semi-Markov process

Stochastic matrix / anl

Telegraph process / (U:B)

Variable-order Markov model

Wiener process / Gau scl


Gaussian random variables, vectors, functions (Gau)

Normal distribution / spd

Abstract Wiener space

Brownian bridge

Classical Wiener space

Concentration dimension
Concentration dimension
In mathematics — specifically, in probability theory — the concentration dimension of a Banach space-valued random variable is a numerical measure of how “spread out” the random variable is compared to the norm on the space...



Dudley's theorem
Dudley's theorem
In probability theory, Dudley’s theorem is a result relating the expected upper bound and regularity properties of a Gaussian process to its entropy and covariance structure. The result was proved in a landmark 1967 paper of Richard M...

 / inq

Estimation of covariance matrices
Estimation of covariance matrices
In statistics, sometimes the covariance matrix of a multivariate random variable is not known but has to be estimated. Estimation of covariance matrices then deals with the question of how to approximate the actual covariance matrix on the basis of a sample from the multivariate distribution...



Fractional Brownian motion

Gaussian isoperimetric inequality

Gaussian measure
Gaussian measure
In mathematics, Gaussian measure is a Borel measure on finite-dimensional Euclidean space Rn, closely related to the normal distribution in statistics. There is also a generalization to infinite-dimensional spaces...

 / anl

Gaussian random field
Gaussian random field
A Gaussian random field is a random field involving Gaussian probability density functions of the variables. A one-dimensional GRF is also called a Gaussian process....



Gauss–Markov process
Gauss–Markov process
Gauss–Markov stochastic processes are stochastic processes that satisfy the requirements for both Gaussian processes and Markov processes. The stationary Gauss–Markov process is a very special case because it is unique, except for some trivial exceptions...

 / Mar

Integration of the normal density function / spd anl


Gaussian process
Gaussian process
In probability theory and statistics, a Gaussian process is a stochastic process whose realisations consist of random values associated with every point in a range of times such that each such random variable has a normal distribution...



Isserlis Gaussian moment theorem
Isserlis Gaussian moment theorem
In probability theory, Isserlis’ theorem or Wick’s theorem is a formula that allows one to compute higher-order moments of the multivariate normal distribution in terms of its covariance matrix....

 / mnt

Karhunen–Loève theorem

Large deviations of Gaussian random functions
Large deviations of Gaussian random functions
A random function – of either one variable, or two or more variables – is called Gaussian if every finite-dimensional distribution is a multivariate normal distribution. Gaussian random fields on the sphere are useful when analysing the anomalies in the cosmic microwave background...

 / lrd

Lévy's modulus of continuity theorem / (U:R)

Matrix normal distribution
Matrix normal distribution
The matrix normal distribution is a probability distribution that is a generalization of the normal distribution to matrix-valued random variables...

 / spd

Multivariate normal distribution / spd

Ornstein–Uhlenbeck process / Mar scl

Paley–Wiener integral / anl

Pregaussian class
Pregaussian class
In probability theory, a pregaussian class or pregaussian set of functions is a set of functions, square integrable with respect to some probability measure, such that there exists a certain Gaussian process, indexed by this set, satisfying certain conditions...



Schilder's theorem
Schilder's theorem
In mathematics, Schilder's theorem is a result in the large deviations theory of stochastic processes. Roughly speaking, Schilder's theorem gives an estimate for the probability that a sample path of Brownian motion will stray far from the mean path. This statement is made precise using rate...

 / lrd

Wiener process
Wiener process
In mathematics, the Wiener process is a continuous-time stochastic process named in honor of Norbert Wiener. It is often called standard Brownian motion, after Robert Brown...

 / Mar scl


Conditioning (cnd)

Conditioning
Conditioning (probability)
Beliefs depend on the available information. This idea is formalized in probability theory by conditioning. Conditional probabilities, conditional expectations and conditional distributions are treated on three levels: discrete probabilities, probability density functions, and measure theory...

 / (2:BDCR)

Bayes' theorem
Bayes' theorem
In probability theory and applications, Bayes' theorem relates the conditional probabilities P(A|B) and P(B|A). It is commonly used in science and engineering. The theorem is named for Thomas Bayes...

 / (2:BCG)

Borel–Kolmogorov paradox / iex (2:CM)

Conditional expectation
Conditional expectation
In probability theory, a conditional expectation is the expected value of a real random variable with respect to a conditional probability distribution....

 / (2:BDR)

Conditional independence
Conditional independence
In probability theory, two events R and B are conditionally independent given a third event Y precisely if the occurrence or non-occurrence of R and the occurrence or non-occurrence of B are independent events in their conditional probability distribution given Y...

 / (3F:BR)

Conditional probability
Conditional probability
In probability theory, the "conditional probability of A given B" is the probability of A if B is known to occur. It is commonly notated P(A|B), and sometimes P_B(A). P(A|B) can be visualised as the probability of event A when the sample space is restricted to event B...



Conditional probability distribution / (2:DC)


Conditional random field
Conditional random field
A conditional random field is a statistical modelling method often applied in pattern recognition.More specifically it is a type of discriminative undirected probabilistic graphical model. It is used to encode known relationships between observations and construct consistent interpretations...

 / (F:R)

Disintegration theorem
Disintegration theorem
In mathematics, the disintegration theorem is a result in measure theory and probability theory. It rigorously defines the idea of a non-trivial "restriction" of a measure to a measure zero subset of the measure space in question. It is related to the existence of conditional probability measures...

 / anl (2:G)

Inverse probability
Inverse probability
In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable. Today, the problem of determining an unobserved variable is called inferential statistics, the method of inverse probability is called Bayesian probability, the "distribution"...

 / Bay

Luce's choice axiom
Luce's choice axiom
In probability theory, Luce's choice axiom, formulated by R. Duncan Luce, states that the probability of selecting one item over another from a pool of many items is not affected by the presence or absence of other items in the pool...



Regular conditional probability
Regular conditional probability
Regular conditional probability is a concept that has developed to overcome certain difficulties in formally defining conditional probabilities for continuous probability distributions...

 / (2:G)

Rule of succession
Rule of succession
In probability theory, the rule of succession is a formula introduced in the 18th century by Pierre-Simon Laplace in the course of treating the sunrise problem....

 / (F:B)


Specific distributions (spd)

Binomial distribution / (1:D)

(a,b,0) class of distributions
(a,b,0) class of distributions
In probability theory, the distribution of a discrete random variable N is said to be a member of the (a, b, 0) class of distributions if its probability mass function obeys p_k / p_{k−1} = a + b/k for k = 1, 2, …, where p_k = P(N = k)...

 / (1:D)
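The defining recursion is p_k / p_{k−1} = a + b/k. A quick stdlib check that the Poisson(λ) distribution is a member, with a = 0 and b = λ:

```python
import math

# For the Poisson(lam) pmf, p_k / p_{k-1} = lam / k, i.e. the (a, b, 0)
# recursion p_k = (a + b/k) * p_{k-1} holds with a = 0 and b = lam.
lam = 3.0
a, b = 0.0, lam

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Build the pmf from the recursion, starting from p_0 = e^{-lam}.
p = [poisson_pmf(0, lam)]
for k in range(1, 20):
    p.append((a + b / k) * p[-1])

# The recursion reproduces the pmf (up to floating-point rounding).
max_err = max(abs(p[k] - poisson_pmf(k, lam)) for k in range(20))
```

The binomial and negative binomial distributions satisfy the same recursion with other (a, b) pairs.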

Anscombe transform
Anscombe transform
In statistics, the Anscombe transform, named after Francis Anscombe, is a variance-stabilizing transformation that transforms a random variable with a Poisson distribution into one with an approximately standard Gaussian distribution. The Anscombe transform is widely used in photon-limited imaging ...

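A small numerical check of the variance-stabilizing property: after x ↦ 2√(x + 3/8), Poisson samples with quite different means all have variance close to 1. The sample sizes and means below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def anscombe(x):
    # Variance-stabilizing transform for Poisson counts.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# Raw Poisson variance equals the mean and so grows with it; after the
# transform the variance is close to 1 for all sufficiently large means.
variances = []
for mean in (10.0, 30.0, 100.0):
    x = rng.poisson(mean, size=200000)
    variances.append(anscombe(x).var())
```

This is why the transform is popular in photon-limited imaging: it lets Gaussian denoising methods be applied to Poisson-noise data.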


Bernoulli distribution / (1:B)

Beta distribution / (1:C)

Bose–Einstein statistics / (F:D)

Cantor distribution / (1:C)

Cauchy distribution
Cauchy distribution
The Cauchy–Lorentz distribution, named after Augustin Cauchy and Hendrik Lorentz, is a continuous probability distribution. As a probability distribution, it is known as the Cauchy distribution, while among physicists, it is known as the Lorentz distribution, Lorentz function, or Breit–Wigner...

 / (1:C)

Chi-squared distribution / (1:C)

Compound Poisson distribution
Compound Poisson distribution
In probability theory, a compound Poisson distribution is the probability distribution of the sum of a "Poisson-distributed number" of independent identically-distributed random variables...

 / (F:DR)
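Such a sum can be simulated directly. With Exponential(1) summands, the compound Poisson identities E[S] = λ·E[X] and Var[S] = λ·E[X²] give mean λ and variance 2λ; the λ and summand law below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

lam = 4.0            # rate of the Poisson count N
n_samples = 100000

# Summands X_i ~ Exponential(1), so E[X] = 1 and E[X^2] = 2.
N = rng.poisson(lam, size=n_samples)
S = np.array([rng.exponential(1.0, size=n).sum() for n in N])

# Compound Poisson: E[S] = lam * E[X] = 4, Var[S] = lam * E[X^2] = 8.
mean_S, var_S = S.mean(), S.var()
```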

Degenerate distribution / (1:D)

Dirichlet distribution / (F:C)

Discrete phase-type distribution
Discrete phase-type distribution
The discrete phase-type distribution is a probability distribution that results from a system of one or more inter-related geometric distributions occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process...

 / (1:D)

Erlang distribution / (1:C)

Exponential-logarithmic distribution / (1:C)

Exponential distribution
Exponential distribution
In probability theory and statistics, the exponential distribution is a family of continuous probability distributions. It describes the time between events in a Poisson process, i.e...

 / (1:C)

F-distribution / (1:C)

Fermi–Dirac statistics / (1F:D)

Fisher–Tippett distribution / (1:C)

Gamma distribution / (1:C)

Generalized normal distribution / (1:C)

Geometric distribution / (1:D)

Half circle distribution / (1:C)

Hypergeometric distribution / (1:D)


Normal distribution / Gau

Integration of the normal density function / Gau anl

Lévy distribution / (1:C)

Matrix normal distribution
Matrix normal distribution
The matrix normal distribution is a probability distribution that is a generalization of the normal distribution to matrix-valued random variables...

 / Gau

Maxwell–Boltzmann statistics / (F:D)

McCullagh's parametrization of the Cauchy distributions / (1:C)

Multinomial distribution / (F:D)

Multivariate normal distribution / Gau

Negative binomial distribution
Negative binomial distribution
In probability theory and statistics, the negative binomial distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified number of failures occurs...

 / (1:D)

Pareto distribution / (1:C)

Phase-type distribution
Phase-type distribution
A phase-type distribution is a probability distribution that results from a system of one or more inter-related Poisson processes occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process. The distribution can be represented by a random...

 / (1:C)

Poisson distribution
Poisson distribution
In probability theory and statistics, the Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since...

 / (1:D)

Power law
Power law
A power law is a special kind of mathematical relationship between two quantities. When the frequency of an event varies as a power of some attribute of that event, the frequency is said to follow a power law. For instance, the number of cities having a certain population size is found to vary...

 / (1:C)

Skew normal distribution / (1:C)

Stable distribution / (1:C)

Student's t-distribution / (1:C)

Tracy–Widom distribution
Tracy–Widom distribution
The Tracy–Widom distribution, introduced by Craig Tracy and Harold Widom, is the probability distribution of the largest eigenvalue of a random Hermitian matrix in the edge scaling limit. It also appears in the distribution of the length of the longest increasing subsequence of random permutations and in current fluctuations...

 / rmt

Triangular distribution / (1:C)

Weibull distribution / (1:C)

Wigner semicircle distribution / (1:C)

Wishart distribution / (F:C)

Zeta distribution / (1:D)

Zipf's law / (1:D)


Empirical measure (emm)

Donsker's theorem
Donsker's theorem
In probability theory, Donsker's theorem, named after M. D. Donsker, identifies a certain stochastic process as a limit of empirical processes. It is sometimes called the functional central limit theorem....

 / (LU:C)

Empirical distribution function
Empirical distribution function
In statistics, the empirical distribution function, or empirical cdf, is the cumulative distribution function associated with the empirical measure of the sample. This cdf is a step function that jumps up by 1/n at each of the n data points. The empirical distribution function estimates the true...

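The step-function description translates directly into code; this minimal sketch uses sorting plus binary search rather than any particular statistics library:

```python
import numpy as np

def ecdf(sample):
    """Return the empirical cdf of `sample` as a function.

    The ecdf jumps up by 1/n at each of the n data points:
    F_n(x) = (number of sample points <= x) / n.
    """
    xs = np.sort(np.asarray(sample))
    n = len(xs)
    def F(x):
        # side="right" counts points equal to x as <= x.
        return np.searchsorted(xs, x, side="right") / n
    return F

F = ecdf([3.0, 1.0, 2.0, 2.0])
```

For the four-point sample above, F jumps by 1/4 at 1, by 2/4 at the repeated point 2, and by 1/4 at 3.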


Empirical measure
Empirical measure
In probability theory, an empirical measure is a random measure arising from a particular realization of a sequence of random variables. The precise definition is found below. Empirical measures are relevant to mathematical statistics....

 / (FL:RG) (U:D)

Empirical process
Empirical process
The study of empirical processes is a branch of mathematical statistics and a sub-area of probability theory. It is a generalization of the central limit theorem for empirical measures...

 / (FL:RG) (U:D)


Glivenko–Cantelli theorem / (FL:RG) (U:D)

Khmaladze transformation
Khmaladze transformation
The Khmaladze transformation is a statistical tool. Consider the sequence of empirical distribution functions F_n based on a sequence of i.i.d. random variables X_1, …, X_n, as n increases. Suppose F is the hypothetical distribution function of...

 / (FL:RG) (U:D)

Vapnik–Chervonenkis theory


Limit theorems (lmt)

Central limit theorem
Central limit theorem
In probability theory, the central limit theorem states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed. The central limit theorem has a number of variants. In its common...

 / (L:R)

Berry–Esseen theorem
Berry–Esséen theorem
The central limit theorem in probability theory and statistics states that under certain circumstances the sample mean, considered as a random quantity, becomes more normally distributed as the sample size is increased...

 / (F:R)

Characteristic function
Characteristic function (probability theory)
In probability theory and statistics, the characteristic function of any random variable completely defines its probability distribution. Thus it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative...

 / anl (1F:DCR)

De Moivre–Laplace theorem
De Moivre–Laplace theorem
In probability theory, the de Moivre–Laplace theorem is a normal approximation to the binomial distribution. It is a special case of the central limit theorem...

 / (L:BD)
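The approximation can be checked numerically: for large n, the Binomial(n, p) pmf at k near np is close to the normal density with mean np and variance np(1 − p). The n, p, k below are illustrative:

```python
import math

# de Moivre–Laplace: Binomial(n, p) pmf at k is approximately the
# normal density with mean n*p and variance n*p*(1-p).
n, p = 1000, 0.3
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

def binom_pmf(k):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x):
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Relative error near the mean is small for n = 1000.
k = 300
rel_err = abs(binom_pmf(k) - normal_pdf(k)) / binom_pmf(k)
```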

Helly–Bray theorem
Helly–Bray theorem
In probability theory, the Helly–Bray theorem relates the weak convergence of cumulative distribution functions to the convergence of expectations of certain measurable functions. It is named after Eduard Helly and Hubert Evelyn Bray....

 / anl (L:R)

Illustration of the central limit theorem
Illustration of the central limit theorem
This article gives two concrete illustrations of the central limit theorem. Both involve the sum of independent and identically-distributed random variables and show how the probability distribution of the sum approaches the normal distribution as the number of terms in the sum increases. The first...

 / (L:DC)

Lindeberg's condition
Lindeberg's condition
In probability theory, Lindeberg's condition is a sufficient condition for the central limit theorem to hold for a sequence of independent random variables...




Lyapunov's central limit theorem / (L:R)

Lévy's continuity theorem
Lévy's continuity theorem
In probability theory, Lévy's continuity theorem, named after the French mathematician Paul Lévy, connects convergence in distribution of a sequence of random variables with pointwise convergence of their characteristic functions...

 / anl (L:R)

Lévy's convergence theorem / (S:R)

Martingale central limit theorem
Martingale central limit theorem
In probability theory, the central limit theorem says that, under certain conditions, the sum of many independent identically-distributed random variables, when scaled appropriately, converges in distribution to a standard normal distribution...

 / (S:R)

Method of moments / mnt (L:R)

Slutsky's theorem
Slutsky's theorem
In probability theory, Slutsky’s theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables.The theorem was named after Eugen Slutsky. Slutsky’s theorem is also attributed to Harald Cramér....

 / anl

Weak convergence of measures / anl


Large deviations (lrd)

Large deviations theory
Large deviations theory
In probability theory, the theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions. Some basic ideas of the theory can be traced back to Laplace and Cramér, although a clear unified formal definition was introduced in 1966 by Varadhan...



Contraction principle
Contraction principle (large deviations theory)
In mathematics — specifically, in large deviations theory — the contraction principle is a theorem that states how a large deviation principle on one space "pushes forward" to a large deviation principle on another space via a continuous function...



Cramér's theorem
Cramér's theorem
In mathematical statistics, Cramér's theorem is one of several theorems of Harald Cramér, a Swedish statistician and probabilist...



Exponentially equivalent measures
Exponentially equivalent measures
In mathematics, the notion of exponential equivalence of measures is a concept that describes how two sequences or families of probability measures are “the same” from the point of view of large deviations theory...



Freidlin–Wentzell theorem

Laplace principle
Laplace principle (large deviations theory)
In mathematics, Laplace's principle is a basic theorem in large deviations theory, similar to Varadhan's lemma. It gives an asymptotic expression for the Lebesgue integral of exp(−θφ(x)) over a fixed set A as θ becomes large...




Large deviations of Gaussian random functions
Large deviations of Gaussian random functions
A random function – of either one variable, or two or more variables – is called Gaussian if every finite-dimensional distribution is a multivariate normal distribution. Gaussian random fields on the sphere are useful when analysing the anomalies in the cosmic microwave background...

 / Gau

Rate function
Rate function
In mathematics — specifically, in large deviations theory — a rate function is a function used to quantify the probabilities of rare events. It is required to have several "nice" properties which assist in the formulation of the large deviation principle...



Schilder's theorem
Schilder's theorem
In mathematics, Schilder's theorem is a result in the large deviations theory of stochastic processes. Roughly speaking, Schilder's theorem gives an estimate for the probability that a sample path of Brownian motion will stray far from the mean path. This statement is made precise using rate...

 / Gau

Tilted large deviation principle
Tilted large deviation principle
In mathematics — specifically, in large deviations theory — the tilted large deviation principle is a result that allows one to generate a new large deviation principle from an old one by "tilting", i.e. integration against an exponential functional...



Varadhan's lemma
Varadhan's lemma
In mathematics, Varadhan's lemma is a result in large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic distribution of a statistic φ of a family of random variables Zε as ε becomes small in terms of a rate function for the variables...




Random graphs (rgr)

Random graph
Random graph
In mathematics, a random graph is a graph that is generated by some random process. The theory of random graphs lies at the intersection between graph theory and probability theory, and studies the properties of typical random graphs...



BA model
BA model
The Barabási–Albert model is an algorithm for generating random scale-free networks using a preferential attachment mechanism. Scale-free networks are widely observed in natural and man-made systems, including the Internet, the World Wide Web, citation networks, and some social...



Barabási–Albert model

Erdős–Rényi model
Erdős–Rényi model
In graph theory, the Erdős–Rényi model, named for Paul Erdős and Alfréd Rényi, is either of two models for generating random graphs, including one that sets an edge between each pair of nodes with equal probability, independently of the other edges...

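The G(n, p) variant mentioned above is one line of logic: include each of the n(n − 1)/2 possible edges independently with probability p. A stdlib sketch with arbitrary n and p:

```python
import random

random.seed(0)

def gnp(n, p):
    """Sample an Erdos–Renyi G(n, p) graph: each of the n(n-1)/2
    possible edges is included independently with probability p.
    Returns the edge list as (i, j) pairs with i < j."""
    return [(i, j)
            for i in range(n)
            for j in range(i + 1, n)
            if random.random() < p]

edges = gnp(200, 0.1)
expected = 0.1 * 200 * 199 / 2   # E[number of edges] = p * C(n, 2)
```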


Percolation theory
Percolation theory
In mathematics, percolation theory describes the behavior of connected clusters in a random graph. The applications of percolation theory to materials science and other domains are discussed in the article percolation...

 / phs (L:B)


Percolation threshold
Percolation threshold
Percolation threshold is a mathematical term related to percolation theory, which is the formation of long-range connectivity in random systems. Below the threshold a giant connected component does not exist while above it, there exists a giant component of the order of system size...

 / phs

Random geometric graph
Random geometric graph
In graph theory, a random geometric graph is a random undirected graph drawn on a bounded region, e.g. the unit torus [0, 1)². It is generated by placing vertices uniformly and independently at random on the region...



Random regular graph

Watts and Strogatz model
Watts and Strogatz model
The Watts and Strogatz model is a random graph generation model that produces graphs with small-world properties, including short average path lengths and high clustering. It was proposed by Duncan J. Watts and Steven Strogatz in their joint 1998 Nature paper...




Random matrices (rmt)

Random matrix
Random matrix
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable. Many important properties of physical systems can be represented mathematically as matrix problems...



Circular ensemble
Circular ensemble
In the theory of random matrices, the circular ensembles are measures on spaces of unitary matrices introduced by Freeman Dyson as modifications of the Gaussian matrix ensembles...



Gaussian matrix ensemble


Tracy–Widom distribution
Tracy–Widom distribution
The Tracy–Widom distribution, introduced by Craig Tracy and Harold Widom, is the probability distribution of the largest eigenvalue of a random Hermitian matrix in the edge scaling limit. It also appears in the distribution of the length of the longest increasing subsequence of random permutations and in current fluctuations...

 / spd

Weingarten function
Weingarten function
In mathematics, Weingarten functions are rational functions indexed by partitions of integers that can be used to calculate integrals of products of matrix coefficients over classical groups...

 / anl


Stochastic calculus (scl)

Itô calculus
Ito calculus
Itō calculus, named after Kiyoshi Itō, extends the methods of calculus to stochastic processes such as Brownian motion . It has important applications in mathematical finance and stochastic differential equations....



Bessel process
Bessel process
In mathematics, a Bessel process, named after Friedrich Bessel, is a type of stochastic process. The n-dimensional Bessel process is the real-valued process X given by X_t = \| W_t \|, ...



CIR process
CIR process
The CIR process is a Markov process with continuous paths defined by the following stochastic differential equation: dr_t = \theta(\mu - r_t)\,dt + \sigma\sqrt{r_t}\,dW_t ...

 / Mar

Doléans-Dade exponential
Doléans-Dade exponential
In stochastic calculus, the Doléans-Dade exponential, Doléans exponential, or stochastic exponential, of a semimartingale X is defined to be the solution to the stochastic differential equation dY = Y dX with initial condition Y_0 = 1. The concept is named after Catherine Doléans-Dade...



Dynkin's formula
Dynkin's formula
In mathematics — specifically, in stochastic analysis — Dynkin's formula is a theorem giving the expected value of any suitably smooth statistic of an Itō diffusion at a stopping time. It may be seen as a stochastic generalization of the fundamental theorem of calculus...



Euler–Maruyama method

Feynman–Kac formula

Filtering problem
Filtering problem (stochastic processes)
In the theory of stochastic processes, the filtering problem is a mathematical model for a number of filtering problems in signal processing and the like. The general idea is to form some kind of "best estimate" for the true value of some system, given only some observations of that system...



Fokker–Planck equation / Mar anl

Geometric Brownian motion
Geometric Brownian motion
A geometric Brownian motion is a continuous-time stochastic process in which the logarithm of the randomly varying quantity follows a Brownian motion, also called a Wiener process...

 / Mar
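Because log S_t is a Brownian motion with drift, geometric Brownian motion can be sampled exactly at a fixed time, with no SDE discretization; the parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

S0, mu, sigma, t = 1.0, 0.05, 0.2, 1.0
n_paths = 500000

# log S_t follows a Brownian motion with drift, so GBM can be sampled
# exactly: S_t = S0 * exp((mu - sigma^2/2) * t + sigma * W_t).
W_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
S_t = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W_t)

# Theory: E[S_t] = S0 * exp(mu * t), about 1.0513 here.
mean_S = S_t.mean()
```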

Girsanov theorem
Girsanov theorem
In probability theory, the Girsanov theorem describes how the dynamics of stochastic processes change when the original measure is changed to an equivalent probability measure...



Green measure
Green measure
In mathematics — specifically, in stochastic analysis — the Green measure is a measure associated to an Itō diffusion. There is an associated Green formula representing suitably smooth functions in terms of the Green measure and first exit times of the diffusion...



Heston model
Heston model
In finance, the Heston model, named after Steven Heston, is a mathematical model describing the evolution of the volatility of an underlying asset...

 / fnc

Hörmander's condition
Hörmander's condition
In mathematics, Hörmander's condition is a property of vector fields that, if satisfied, has many useful consequences in the theory of partial and stochastic differential equations...

 / anl

Infinitesimal generator
Infinitesimal generator (stochastic processes)
In mathematics — specifically, in stochastic analysis — the infinitesimal generator of a stochastic process is a partial differential operator that encodes a great deal of information about the process...



Itō calculus
Ito calculus
Itō calculus, named after Kiyoshi Itō, extends the methods of calculus to stochastic processes such as Brownian motion . It has important applications in mathematical finance and stochastic differential equations....



Itō diffusion
Ito diffusion
In mathematics — specifically, in stochastic analysis — an Itō diffusion is a solution to a specific type of stochastic differential equation. That equation is similar to the Langevin equation, used in physics to describe the Brownian motion of a particle subjected to a potential in a...



Itō isometry
Ito isometry
In mathematics, the Itō isometry, named after Kiyoshi Itō, is a crucial fact about Itō stochastic integrals. One of its main applications is to enable the computation of variances for stochastic processes....

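A Monte Carlo sketch of the isometry E[(∫₀ᵀ H dW)²] = E[∫₀ᵀ H² dt] for the integrand H_t = W_t, where both sides equal T²/2; note that the Itô sum must evaluate the integrand at the left endpoint of each step:

```python
import numpy as np

rng = np.random.default_rng(11)

T, n_steps, n_paths = 1.0, 200, 20000
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1) - dW          # left endpoint W_{t_k} of each step

stoch_int = (W * dW).sum(axis=1)        # Ito (left-point) sum for ∫ W dW
lhs = (stoch_int**2).mean()             # E[(∫ W dW)^2]
rhs = (W**2 * dt).sum(axis=1).mean()    # E[∫ W^2 dt] = ∫ t dt = T^2/2
```

Both estimates should be close to 0.5 for T = 1, up to Monte Carlo and discretization error.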


Itō's lemma
Ito's lemma
In mathematics, Itō's lemma is used in Itō stochastic calculus to find the differential of a function of a particular type of stochastic process. It is named after its discoverer, Kiyoshi Itō...




Kolmogorov backward equation
Kolmogorov backward equation
The Kolmogorov backward equation and its adjoint, sometimes known as the Kolmogorov forward equation, are partial differential equations that arise in the theory of continuous-time continuous-state Markov processes. Both were published by Andrey Kolmogorov in 1931...

 / Mar

Local time
Local time (mathematics)
In the mathematical theory of stochastic processes, local time is a stochastic process associated with diffusion processes such as Brownian motion, that characterizes the amount of time a particle has spent at a given level...



Milstein method
Milstein method
In mathematics, the Milstein method, named after Grigori N. Milstein, is a technique for the approximate numerical solution of a stochastic differential equation. Consider the Itō stochastic differential equation...

 / Mar
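A sketch of the scheme on the standard test equation dX = μX dt + σX dW, whose exact solution is known, so the pathwise error of the approximation can be measured directly (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Milstein scheme for dX = mu*X dt + sigma*X dW.  The exact solution
# X_T = X0 * exp((mu - sigma^2/2) T + sigma W_T) on the same Brownian
# path gives the scheme's pathwise (strong) error.
mu, sigma, X0, T, n_steps = 0.06, 0.5, 1.0, 1.0, 1000
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
X = X0
for dw in dW:
    # Euler term plus Milstein's correction 0.5*b*b'*(dW^2 - dt);
    # here b(x) = sigma*x, so b*b' = sigma^2 * x.
    X += mu * X * dt + sigma * X * dw + 0.5 * sigma**2 * X * (dw**2 - dt)

W_T = dW.sum()
X_exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)
err = abs(X - X_exact)
```

Milstein's extra term raises the strong convergence order from 1/2 (Euler–Maruyama) to 1.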

Novikov's condition
Novikov's condition
In probability theory, Novikov's condition is a sufficient condition for a stochastic process taking the form of the Radon–Nikodym derivative in Girsanov's theorem to be a martingale...



Ornstein–Uhlenbeck process / Gau Mar

Quadratic variation
Quadratic variation
In mathematics, quadratic variation is used in the analysis of stochastic processes such as Brownian motion and martingales. Quadratic variation is just one kind of variation of a process...



Random dynamical system
Random dynamical system
In mathematics, a random dynamical system is a measure-theoretic formulation of a dynamical system with an element of "randomness", such as the dynamics of solutions to a stochastic differential equation...

 / rds

Reversible diffusion
Reversible diffusion
In mathematics, a reversible diffusion is a specific example of a reversible stochastic process. Reversible diffusions have an elegant characterization due to the Russian mathematician Andrey Nikolaevich Kolmogorov....



Runge–Kutta method
Runge–Kutta method (SDE)
In mathematics, the Runge–Kutta method is a technique for the approximate numerical solution of a stochastic differential equation. It is a generalization of the Runge–Kutta method for ordinary differential equations to stochastic differential equations....



Russo–Vallois integral

Schramm–Loewner evolution

Semimartingale
Semimartingale
In probability theory, a real-valued process X is called a semimartingale if it can be decomposed as the sum of a local martingale and an adapted finite-variation process...



Stochastic calculus
Stochastic calculus
Stochastic calculus is a branch of mathematics that operates on stochastic processes. It allows a consistent theory of integration to be defined for integrals of stochastic processes with respect to stochastic processes...



Stochastic differential equation
Stochastic differential equation
A stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process, thus resulting in a solution which is itself a stochastic process....



Stochastic processes and boundary value problems
Stochastic processes and boundary value problems
In mathematics, some boundary value problems can be solved using the methods of stochastic analysis. Perhaps the most celebrated example is Shizuo Kakutani's 1944 solution of the Dirichlet problem for the Laplace operator using Brownian motion...

 / anl

Stratonovich integral
Stratonovich integral
In stochastic processes, the Stratonovich integral is a stochastic integral, the most common alternative to the Itō integral...



Tanaka equation
Tanaka equation
In mathematics, Tanaka's equation is an example of a stochastic differential equation which admits a weak solution but has no strong solution. It is named after the Japanese mathematician Tanaka Hiroshi....



Tanaka's formula

Wiener process
Wiener process
In mathematics, the Wiener process is a continuous-time stochastic process named in honor of Norbert Wiener. It is often called standard Brownian motion, after Robert Brown...

 / Gau Mar

Wiener sausage
Wiener sausage
In the mathematical field of probability, the Wiener sausage is a neighborhood of the trace of a Brownian motion up to a time t, given by taking all points within a fixed distance of Brownian motion. It can be visualized as a sausage of fixed radius whose centerline is Brownian motion...




Malliavin calculus (Mal)

Malliavin calculus
Malliavin calculus
The Malliavin calculus, named after Paul Malliavin, is a theory of variational stochastic calculus. In other words it provides the mechanics to compute derivatives of random variables....



Clark–Ocone theorem

H-derivative

Integral representation theorem for classical Wiener space
Integral representation theorem for classical Wiener space
In mathematics, the integral representation theorem for classical Wiener space is a result in the fields of measure theory and stochastic analysis...



Integration by parts operator
Integration by parts operator
In mathematics, an integration by parts operator is a linear operator used to formulate integration by parts formulae; the most interesting examples of integration by parts operators occur in infinite-dimensional settings and find uses in stochastic analysis and its applications....




Malliavin derivative
Malliavin derivative
In mathematics, the Malliavin derivative is a notion of derivative in the Malliavin calculus. Intuitively, it is the notion of derivative appropriate to paths in classical Wiener space, which are "usually" not differentiable in the usual sense...



Malliavin's absolute continuity lemma
Malliavin's absolute continuity lemma
In mathematics — specifically, in measure theory — Malliavin's absolute continuity lemma is a result due to the French mathematician Paul Malliavin that plays a foundational rôle in the regularity theorems of the Malliavin calculus...



Ornstein–Uhlenbeck operator
Ornstein–Uhlenbeck operator
In mathematics, the Ornstein–Uhlenbeck operator can be thought of as a generalization of the Laplace operator to an infinite-dimensional setting...



Skorokhod integral
Skorokhod integral
In mathematics, the Skorokhod integral, often denoted δ, is an operator of great importance in the theory of stochastic processes. It is named after the Ukrainian mathematician Anatoliy Skorokhod...




Random dynamical systems (rds)

Random dynamical system
Random dynamical system
In mathematics, a random dynamical system is a measure-theoretic formulation of a dynamical system with an element of "randomness", such as the dynamics of solutions to a stochastic differential equation...

 / scl

Absorbing set
Absorbing set (random dynamical systems)
In mathematics, an absorbing set for a random dynamical system is a subset of the phase space that eventually contains the image of any bounded set under the cocycle of the random dynamical system...



Base flow

Pullback attractor
Pullback attractor
In mathematics, the attractor of a random dynamical system may be loosely thought of as a set to which the system evolves after a long enough time. The basic idea is the same as for a deterministic dynamical system, but requires careful treatment because random dynamical systems are necessarily...


Analytic aspects (including measure theoretic) (anl)

Probability space
Probability space
In probability theory, a probability space or a probability triple is a mathematical construct that models a real-world process consisting of states that occur randomly. A probability space is constructed with a specific kind of situation or experiment in mind...



Carleman's condition
Carleman's condition
In mathematics, Carleman's condition is a sufficient condition for the determinacy of the moment problem. For the Hamburger moment problem, the theorem, proved by Torsten Carleman, states the following:...

 / mnt (1:R)

Characteristic function
Characteristic function (probability theory)
In probability theory and statistics, the characteristic function of any random variable completely defines its probability distribution. Thus it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative...

 / lmt (1F:DCR)

Contiguity#Probability theory

Càdlàg
Càdlàg
In mathematics, a càdlàg , RCLL , or corlol function is a function defined on the real numbers that is everywhere right-continuous and has left limits everywhere...



Disintegration theorem
Disintegration theorem
In mathematics, the disintegration theorem is a result in measure theory and probability theory. It rigorously defines the idea of a non-trivial "restriction" of a measure to a measure zero subset of the measure space in question. It is related to the existence of conditional probability measures...

 / cnd (2:G)

Dynkin system
Dynkin system
A Dynkin system, named after Eugene Dynkin, is a collection of subsets of another universal set Ω satisfying a set of axioms weaker than those of a σ-algebra. Dynkin systems are sometimes referred to as λ-systems or d-systems...



Exponential family
Exponential family
In probability and statistics, an exponential family is an important class of probability distributions sharing a certain form, specified below. This special form is chosen for mathematical convenience, on account of some useful algebraic properties, as well as for generality, as exponential...



Factorial moment generating function / mnt (1:R)

Filtration

Fokker–Planck equation / scl Mar

Gaussian measure
Gaussian measure
In mathematics, Gaussian measure is a Borel measure on finite-dimensional Euclidean space Rn, closely related to the normal distribution in statistics. There is also a generalization to infinite-dimensional spaces...

 / Gau

Hamburger moment problem / mnt (1:R)

Hausdorff moment problem / mnt (1:R)

Helly–Bray theorem
Helly–Bray theorem
In probability theory, the Helly–Bray theorem relates the weak convergence of cumulative distribution functions to the convergence of expectations of certain measurable functions. It is named after Eduard Helly and Hubert Evelyn Bray....

 / lmt (L:R)

Hörmander's condition
Hörmander's condition
In mathematics, Hörmander's condition is a property of vector fields that, if satisfied, has many useful consequences in the theory of partial and stochastic differential equations...

 / scl

Integration of the normal density function / spd Gau

Kolmogorov extension theorem
Kolmogorov extension theorem
In mathematics, the Kolmogorov extension theorem is a theorem that guarantees that a suitably "consistent" collection of finite-dimensional distributions will define a stochastic process...

 / (SU:R)


Krylov–Bogolyubov theorem / Mar

Law (stochastic processes)
Law (stochastic processes)
In mathematics, the law of a stochastic process is the measure that the process induces on the collection of functions from the index set into the state space...

 / (U:G)

Location-scale family
Location-scale family
In probability theory, especially as that field is used in statistics, a location-scale family is a family of univariate probability distributions parametrized by a location parameter and a non-negative scale parameter; if X is any random variable whose probability distribution belongs to such a...



Lévy's continuity theorem
Lévy's continuity theorem
In probability theory, Lévy's continuity theorem, named after the French mathematician Paul Lévy, connects convergence in distribution of a sequence of random variables with pointwise convergence of their characteristic functions...

 / lmt (L:R)

Minlos' theorem
Minlos' theorem
In mathematics, Minlos' theorem states that a cylindrical measure on the dual of a nuclear space is a Radon measure if its Fourier transform is continuous. It can be proved using Sazonov's theorem....



Moment problem / mnt (1:R)

Moment-generating function
Moment-generating function
In probability theory and statistics, the moment-generating function of any random variable is an alternative definition of its probability distribution. Thus, it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or...

 / mnt (1F:R)
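The moment-generating function M(t) = E[exp(tX)] can be illustrated with a minimal Python sketch; the fair six-sided die used here is an illustrative assumption, not taken from any listed article. Its first derivative at zero recovers the mean, approximated below by a central difference.

```python
import math

def mgf(t, values, probs):
    """Moment-generating function M(t) = E[exp(tX)] of a discrete variable."""
    return sum(p * math.exp(t * x) for x, p in zip(values, probs))

# Fair six-sided die (an assumed example distribution).
faces = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

# M'(0) is the first moment; approximate the derivative numerically.
h = 1e-6
mean_estimate = (mgf(h, faces, probs) - mgf(-h, faces, probs)) / (2 * h)
# mean_estimate is close to 3.5, the mean of a fair die
```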

Natural filtration
Natural filtration
In the theory of stochastic processes in mathematics and statistics, the natural filtration associated to a stochastic process is a filtration associated to the process which records its "past behaviour" at each time...

 / (U:G)

Paley–Wiener integral / Gau

Sazonov's theorem
Sazonov's theorem
In mathematics, Sazonov's theorem, named after Vyacheslav Vasilievich Sazonov, is a theorem in functional analysis. It states that a bounded linear operator between two Hilbert spaces is γ-radonifying if it is Hilbert–Schmidt...



Slutsky's theorem
Slutsky's theorem
In probability theory, Slutsky's theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables. The theorem was named after Eugen Slutsky. Slutsky's theorem is also attributed to Harald Cramér....

 / lmt

Standard probability space
Standard probability space
In probability theory, a standard probability space is a probability space satisfying certain assumptions introduced by Vladimir Rokhlin in 1940...



Stieltjes moment problem / mnt (1:R)

Stochastic matrix
Stochastic matrix
In mathematics, a stochastic matrix is a matrix used to describe the transitions of a Markov chain. It has found use in probability theory, statistics and linear algebra, as well as computer science...

 / Mar
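A minimal Python sketch of a stochastic matrix in use, with an assumed 2-state transition matrix: each row is a probability vector, and a distribution over states evolves by right-multiplication.

```python
# Assumed (right) stochastic matrix of a 2-state Markov chain:
# each row is a probability vector of transition probabilities.
P = [
    [0.9, 0.1],   # transitions out of state 0
    [0.5, 0.5],   # transitions out of state 1
]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)  # rows sum to one

def step(pi, P):
    """One step of the chain: multiply the row vector pi by P."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

pi = [1.0, 0.0]           # start surely in state 0
for _ in range(100):      # iterate; pi approaches the stationary distribution
    pi = step(pi, P)
# pi converges to [5/6, 1/6] for this matrix
```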

Stochastic processes and boundary value problems
Stochastic processes and boundary value problems
In mathematics, some boundary value problems can be solved using the methods of stochastic analysis. Perhaps the most celebrated example is Shizuo Kakutani's 1944 solution of the Dirichlet problem for the Laplace operator using Brownian motion...

 / scl

Trigonometric moment problem / mnt (1:R)

Weak convergence of measures / lmt

Weingarten function
Weingarten function
In mathematics, Weingarten functions are rational functions indexed by partitions of integers that can be used to calculate integrals of products of matrix coefficients over classical groups...

 / rmt


Binary (1:B)

Bernoulli trial
Bernoulli trial
In the theory of probability and statistics, a Bernoulli trial is an experiment whose outcome is random and can be either of two possible outcomes, "success" and "failure"....

 / (1:B)

Complementary event
Complementary event
In probability theory, the complement of any event A is the event [not A], i.e. the event that A does not occur. The event A and its complement [not A] are mutually exclusive and exhaustive. Generally, there is only one event B such that A and B are both mutually exclusive and...

 / (1:B)

Entropy / (1:BDC)


Event
Event (probability theory)
In probability theory, an event is a set of outcomes to which a probability is assigned. Typically, when the sample space is finite, any subset of the sample space is an event...

 / (1:B)

Indecomposable distribution
Indecomposable distribution
In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: Z ≠ X + Y. If it can be so expressed, it is decomposable:...

 / (1:BDCR)

Indicator function / (1F:B)


Discrete (1:D)

Binomial probability
Binomial probability
Binomial probability typically deals with the probability of several successive decisions, each of which has two possible outcomes....

 / (1:D)

Continuity correction
Continuity correction
In probability theory, if a random variable X has a binomial distribution with parameters n and p, i.e., X is distributed as the number of "successes" in n independent Bernoulli trials with probability p of success on each trial, then...

 / (1:DC)

Entropy / (1:BDC)

Equiprobable
Equiprobable
Equiprobability is a philosophical concept in probability theory that allows one to assign equal probabilities to outcomes when they are judged to be equipossible or to be "equally likely" in some sense...

 / (1:D)

Hann function
Hann function
The Hann function, named after the Austrian meteorologist Julius von Hann, is a discrete probability mass function given by w(n) = 0.5 [1 − cos(2πn/(N − 1))], or equivalently w(n) = sin^2(πn/(N − 1))...

 / (1:D)
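A short Python sketch of the Hann window as a probability mass function; the window length N is an assumed parameter, and the raw weights are normalized so they sum to one.

```python
import math

# Hann window w(n) = 0.5 * (1 - cos(2*pi*n / (N - 1))) on n = 0..N-1;
# dividing by the total weight turns it into a probability mass function.
N = 8  # assumed window length
w = [0.5 * (1 - math.cos(2 * math.pi * n / (N - 1))) for n in range(N)]
total = sum(w)
pmf = [x / total for x in w]
```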

Indecomposable distribution
Indecomposable distribution
In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: Z ≠ X + Y. If it can be so expressed, it is decomposable:...

 / (1:BDCR)

Infinite divisibility
Infinite divisibility (probability)
The concepts of infinite divisibility and the decomposition of distributions arise in probability and statistics in relation to seeking families of probability distributions that might be a natural choice in certain applications, in the same way that the normal distribution is...

 / (1:DCR)


Le Cam's theorem
Le Cam's theorem
In probability theory, Le Cam's theorem, named after Lucien Le Cam, is as follows. Suppose: X1, ..., Xn are independent random variables, each with a Bernoulli distribution, not necessarily identically distributed; Pr(Xi = 1) = pi for i = 1, 2, 3, ...; λn = p1 + ... + pn; Sn = X1...

 / (F:B) (1:D)

Limiting density of discrete points
Limiting density of discrete points
In information theory, the limiting density of discrete points is an adjustment to the formula of Claude Elwood Shannon for differential entropy.It was formulated by Edwin Thompson Jaynes to address defects in the initial definition of differential entropy....

 / (1:DC)

Mean difference
Mean difference
The mean difference is a measure of statistical dispersion equal to the average absolute difference of two independent values drawn from a probability distribution. A related statistic is the relative mean difference, which is the mean difference divided by the arithmetic mean...

 / (1:DCR)

Memorylessness
Memorylessness
In probability and statistics, memorylessness is a property of certain probability distributions: the exponential distributions of non-negative real numbers and the geometric distributions of non-negative integers....

 / (1:DCR)
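The memorylessness property P(X > s + t | X > s) = P(X > t) can be checked directly for a geometric distribution; the success probability p and the values of s and t below are assumed for illustration.

```python
# Memorylessness of a geometric distribution (counting trials until the
# first success): the tail is P(X > k) = (1 - p)**k, so conditioning on
# X > s leaves the remaining wait distributed like a fresh copy of X.
p = 0.3  # assumed success probability

def tail(k):
    """P(X > k) for a geometric variable with success probability p."""
    return (1 - p) ** k

s, t = 4, 7
lhs = tail(s + t) / tail(s)   # conditional tail P(X > s + t | X > s)
rhs = tail(t)
```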

Probability vector
Probability vector
In mathematics and statistics, a probability vector or stochastic vector is a vector with non-negative entries that add up to one....

 / (1:D)

Probability-generating function
Probability-generating function
In probability theory, the probability-generating function of a discrete random variable is a power series representation of the probability mass function of the random variable...

 / (1:D)

Tsallis entropy
Tsallis entropy
In physics, the Tsallis entropy is a generalization of the standard Boltzmann-Gibbs entropy. In the scientific literature, the physical relevance of the Tsallis entropy is highly debated...

 / (1:DC)


Continuous (1:C)

Almost surely
Almost surely
In probability theory, one says that an event happens almost surely if it happens with probability one. The concept is analogous to the concept of "almost everywhere" in measure theory...

 / (1:C) (LS:D)

Continuity correction
Continuity correction
In probability theory, if a random variable X has a binomial distribution with parameters n and p, i.e., X is distributed as the number of "successes" in n independent Bernoulli trials with probability p of success on each trial, then...

 / (1:DC)

Edgeworth series
Edgeworth series
The Gram–Charlier A series , and the Edgeworth series are series that approximate a probability distribution in terms of its cumulants...

 / (1:C)

Entropy / (1:BDC)

Indecomposable distribution
Indecomposable distribution
In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: Z ≠ X + Y. If it can be so expressed, it is decomposable:...

 / (1:BDCR)

Infinite divisibility
Infinite divisibility (probability)
The concepts of infinite divisibility and the decomposition of distributions arise in probability and statistics in relation to seeking families of probability distributions that might be a natural choice in certain applications, in the same way that the normal distribution is...

 / (1:DCR)

Limiting density of discrete points
Limiting density of discrete points
In information theory, the limiting density of discrete points is an adjustment to the formula of Claude Elwood Shannon for differential entropy.It was formulated by Edwin Thompson Jaynes to address defects in the initial definition of differential entropy....

 / (1:DC)

Location parameter
Location parameter
In statistics, a location family is a class of probability distributions that is parametrized by a scalar- or vector-valued parameter μ, which determines the "location" or shift of the distribution...

 / (1:C)


Mean difference
Mean difference
The mean difference is a measure of statistical dispersion equal to the average absolute difference of two independent values drawn from a probability distribution. A related statistic is the relative mean difference, which is the mean difference divided by the arithmetic mean...

 / (1:DCR)

Memorylessness
Memorylessness
In probability and statistics, memorylessness is a property of certain probability distributions: the exponential distributions of non-negative real numbers and the geometric distributions of non-negative integers....

 / (1:DCR)

Monotone likelihood ratio / (1:C)

Scale parameter
Scale parameter
In probability theory and statistics, a scale parameter is a special kind of numerical parameter of a parametric family of probability distributions...

 / (1:C)

Stability
Stability (probability)
In probability theory, the stability of a random variable is the property that a linear combination of two independent copies of the variable has the same distribution, up to location and scale parameters. The distributions of random variables having this property are said to be "stable...

 / (1:C)

Stein's lemma
Stein's lemma
Stein's lemma, named in honor of Charles Stein, is a theorem of probability theory that is of interest primarily because of its applications to statistical inference — in particular, to James–Stein estimation and empirical Bayes methods — and its applications to portfolio choice...

 / (12:C)

Truncated distribution
Truncated distribution
In statistics, a truncated distribution is a conditional distribution that results from restricting the domain of some other probability distribution. Truncated distributions arise in practical statistics in cases where the ability to record, or even to know about, occurrences is limited to values...

 / (1:C)

Tsallis entropy
Tsallis entropy
In physics, the Tsallis entropy is a generalization of the standard Boltzmann-Gibbs entropy. In the scientific literature, the physical relevance of the Tsallis entropy is highly debated...

 / (1:DC)


Real-valued, arbitrary (1:R)

Heavy-tailed distribution
Heavy-tailed distribution
In probability theory, heavy-tailed distributions are probability distributions whose tails are not exponentially bounded: that is, they have heavier tails than the exponential distribution...

 / (1:R)

Indecomposable distribution
Indecomposable distribution
In probability theory, an indecomposable distribution is a probability distribution that cannot be represented as the distribution of the sum of two or more non-constant independent random variables: Z ≠ X + Y. If it can be so expressed, it is decomposable:...

 / (1:BDCR)

Infinite divisibility
Infinite divisibility (probability)
The concepts of infinite divisibility and the decomposition of distributions arise in probability and statistics in relation to seeking families of probability distributions that might be a natural choice in certain applications, in the same way that the normal distribution is...

 / (1:DCR)

Locality / (1:R)

Mean difference
Mean difference
The mean difference is a measure of statistical dispersion equal to the average absolute difference of two independent values drawn from a probability distribution. A related statistic is the relative mean difference, which is the mean difference divided by the arithmetic mean...

 / (1:DCR)


Memorylessness
Memorylessness
In probability and statistics, memorylessness is a property of certain probability distributions: the exponential distributions of non-negative real numbers and the geometric distributions of non-negative integers....

 / (1:DCR)

Quantile
Quantile
Quantiles are points taken at regular intervals from the cumulative distribution function of a random variable. Dividing ordered data into q essentially equal-sized data subsets is the motivation for q-quantiles; the quantiles are the data values marking the boundaries between consecutive subsets...

 / (1:R)
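A minimal Python sketch of an empirical quantile, using one common convention among several: the q-quantile of a sample is the smallest value at which the empirical CDF reaches q. The sample data are an assumed example.

```python
import math

def quantile(data, q):
    """Smallest x in the sample whose empirical CDF reaches q (0 < q <= 1)."""
    xs = sorted(data)
    k = math.ceil(q * len(xs)) - 1
    return xs[max(k, 0)]

data = [3, 1, 4, 1, 5, 9, 2, 6]   # assumed sample
median = quantile(data, 0.5)      # → 3 under this convention
```

Other conventions interpolate between order statistics, so library functions may return slightly different values for the same data.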

Survival function
Survival function
The survival function, also known as a survivor function or reliability function, is a property of any random variable that maps a set of events, usually associated with mortality or failure of some system, onto time. It captures the probability that the system will survive beyond a specified time...

 / (1:R)

Taylor expansions for the moments of functions of random variables
Taylor expansions for the moments of functions of random variables
In probability theory, it is possible to approximate the moments of a function f of a random variable X using Taylor expansions, provided that f is sufficiently differentiable and that the moments of X are finite...

 / (1:R)


Random point of a manifold (1:M)

Bertrand's paradox
Bertrand's paradox (probability)
The Bertrand paradox is a problem within the classical interpretation of probability theory. Joseph Bertrand introduced it in his work Calcul des probabilités as an example to show that probabilities may not be well defined if the mechanism or method that produces the random variable is not...

 / (1:M)

General (random element of an abstract space) (1:G)

Pitman–Yor process
Pitman–Yor process
In probability theory, a Pitman–Yor process, denoted PY, is a stochastic process whose sample path is a probability distribution. A random sample from this process is a finite-dimensional Pitman–Yor distribution, named after Jim Pitman and Marc Yor...

 / (1:G)

Random compact set
Random compact set
In mathematics, a random compact set is essentially a compact set-valued random variable. Random compact sets are useful in the study of attractors for random dynamical systems....

 / (1:G)

Random element
Random element
In probability theory, random element is a generalization of the concept of random variable to more complicated spaces than the simple real line...

 / (1:G)

Binary (2:B)

Coupling
Coupling (probability)
In probability theory, coupling is a proof technique that allows one to compare two unrelated variables by "forcing" them to be related in some way....

 / (2:BRG)

Craps principle
Craps principle
In probability theory, the craps principle is a theorem about event probabilities under repeated iid trials. Let E1 and E2 denote two mutually exclusive events which might occur on a given trial...

 / (2:B)

Discrete (2:D)

Kullback–Leibler divergence
Kullback–Leibler divergence
In probability theory and information theory, the Kullback–Leibler divergence is a non-symmetric measure of the difference between two probability distributions P and Q...

 / (2:DCR)
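For discrete distributions the Kullback–Leibler divergence is D(P‖Q) = Σ P(i) log(P(i)/Q(i)); a short Python sketch with two assumed example distributions shows its non-negativity and its asymmetry.

```python
import math

def kl(p, q):
    """D(P || Q) for two discrete distributions on a common support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Assumed example distributions on three outcomes.
P = [0.5, 0.3, 0.2]
Q = [0.4, 0.4, 0.2]

d_pq = kl(P, Q)
d_qp = kl(Q, P)
```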

Mutual information
Mutual information
In probability theory and information theory, the mutual information of two random variables is a quantity that measures the mutual dependence of the two random variables...

 / (23F:DC)

Continuous (2:C)

Copula
Copula (statistics)
In probability theory and statistics, a copula can be used to describe the dependence between random variables. Copulas derive their name from linguistics....

 / (2F:C)

Cramér's theorem
Cramér's theorem
In mathematical statistics, Cramér's theorem is one of several theorems of Harald Cramér, a Swedish statistician and probabilist....

 / (2:C)

Kullback–Leibler divergence
Kullback–Leibler divergence
In probability theory and information theory, the Kullback–Leibler divergence is a non-symmetric measure of the difference between two probability distributions P and Q...

 / (2:DCR)

Mutual information
Mutual information
In probability theory and information theory, the mutual information of two random variables is a quantity that measures the mutual dependence of the two random variables...

 / (23F:DC)


Normally distributed and uncorrelated does not imply independent
Normally distributed and uncorrelated does not imply independent
In probability theory, two random variables being uncorrelated does not imply their independence. In some contexts, uncorrelatedness implies at least pairwise independence....

 / (2:C)

Posterior probability
Posterior probability
In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence is taken into account...

 / Bay (2:C)

Stein's lemma
Stein's lemma
Stein's lemma, named in honor of Charles Stein, is a theorem of probability theory that is of interest primarily because of its applications to statistical inference — in particular, to James–Stein estimation and empirical Bayes methods — and its applications to portfolio choice...

 / (12:C)


Real-valued, arbitrary (2:R)

Coupling
Coupling (probability)
In probability theory, coupling is a proof technique that allows one to compare two unrelated variables by "forcing" them to be related in some way....

 / (2:BRG)

Hellinger distance
Hellinger distance
In probability and statistics, the Hellinger distance is used to quantify the similarity between two probability distributions. It is a type of f-divergence...

 / (2:R)

Kullback–Leibler divergence
Kullback–Leibler divergence
In probability theory and information theory, the Kullback–Leibler divergence is a non-symmetric measure of the difference between two probability distributions P and Q...

 / (2:DCR)


Lévy metric
Lévy metric
In mathematics, the Lévy metric is a metric on the space of cumulative distribution functions of one-dimensional random variables. It is a special case of the Lévy–Prokhorov metric, and is named after the French mathematician Paul Pierre Lévy....

 / (2:R)

Total variation#Total variation distance in probability theory / (2:R)


General (random element of an abstract space) (2:G)

Coupling
Coupling (probability)
In probability theory, coupling is a proof technique that allows one to compare two unrelated variables by "forcing" them to be related in some way....

 / (2:BRG)

Lévy–Prokhorov metric / (2:G)

Wasserstein metric
Wasserstein metric
In mathematics, the Wasserstein metric is a distance function defined between probability distributions on a given metric space M....

 / (2:G)

Binary (F:B)

Bertrand's ballot theorem / (F:B)

Boole's inequality
Boole's inequality
In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events...

 / (FS:B)

Coin flipping
Coin flipping
Coin flipping or coin tossing or heads or tails is the practice of throwing a coin in the air to choose between two alternatives, sometimes to resolve a dispute between two parties...

 / (F:B)

Collectively exhaustive events / (F:B)

Inclusion-exclusion principle
Inclusion-exclusion principle
In combinatorics, the inclusion–exclusion principle is an equation relating the sizes of two sets and their union...

 / (F:B)
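The inclusion–exclusion identity for three sets, |A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |A ∩ C| − |B ∩ C| + |A ∩ B ∩ C|, can be verified on assumed example sets:

```python
# Assumed example sets of integers.
A = set(range(0, 10))
B = set(range(5, 15))
C = set(range(8, 12))

lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(A & C) - len(B & C)
       + len(A & B & C))
# lhs == rhs == 15 for these sets
```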

Independence / (F:BR)

Indicator function / (1F:B)


Law of total probability
Law of total probability
In probability theory, the law of total probability is a fundamental rule relating marginal probabilities to conditional probabilities. It is the proposition that if {Bn : n = 1, 2, 3, ...} is a finite or countable partition of a sample space...

 / (F:B)

Le Cam's theorem
Le Cam's theorem
In probability theory, Le Cam's theorem, named after Lucien Le Cam, is as follows. Suppose: X1, ..., Xn are independent random variables, each with a Bernoulli distribution, not necessarily identically distributed; Pr(Xi = 1) = pi for i = 1, 2, 3, ...; λn = p1 + ... + pn; Sn = X1...

 / (F:B) (1:D)

Leftover hash-lemma
Leftover hash-lemma
The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby. Imagine that you have a secret key X that has n uniform random bits, and you would like to use this secret key to encrypt a message. Unfortunately, you were a bit careless with the...

 / (F:B)

Lovász local lemma
Lovász local lemma
In probability theory, if a large number of events are all independent of one another and each has probability less than 1, then there is a positive probability that none of the events will occur...

 / (F:B)

Mutually exclusive
Mutually exclusive
In layman's terms, two events are mutually exclusive if they cannot occur at the same time. An example is tossing a coin once, which can result in either heads or tails, but not both....

 / (F:B)

Random walk
Random walk
A random walk, sometimes denoted RW, is a mathematical formalisation of a trajectory that consists of taking successive random steps. For example, the path traced by a molecule as it travels in a liquid or a gas, the search path of a foraging animal, the price of a fluctuating stock and the...

 / (FLS:BD) (U:C)
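A simple symmetric random walk of ±1 steps can be simulated in a few lines of Python; the seed and walk length are assumed here for reproducibility. The position after n steps is the running sum of the steps, and it necessarily has the same parity as n.

```python
import random

random.seed(0)           # assumed seed, for a reproducible path
n = 1000                 # assumed number of steps
steps = [random.choice((-1, 1)) for _ in range(n)]

position = 0
path = []
for s in steps:
    position += s
    path.append(position)
```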

Schuette–Nesbitt formula
Schuette–Nesbitt formula
In probability theory, the Schuette–Nesbitt formula is a generalization of the probabilistic version of the inclusion-exclusion principle. It is named after Donald R. Schuette and Cecil J...

 / (F:B)


Discrete (F:D)

Coupon collector's problem
Coupon collector's problem
In probability theory, the coupon collector's problem describes the "collect all coupons and win" contests. It asks the following question: Suppose that there are n coupons, from which coupons are being collected with replacement...

 / gmb (F:D)
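The expected number of draws to collect all n coupons is n·H_n, where H_n is the n-th harmonic number; a seeded Python simulation (the parameters here are assumed for illustration) agrees with the formula to within sampling error.

```python
import random

n = 10
harmonic = sum(1 / k for k in range(1, n + 1))
expected = n * harmonic                     # ≈ 29.29 for n = 10

random.seed(1)                              # assumed seed

def draws_to_collect(n):
    """Draw uniform coupons with replacement until all n have been seen."""
    seen, count = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        count += 1
    return count

trials = 2000
average = sum(draws_to_collect(n) for _ in range(trials)) / trials
```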

Coupon collector's problem (generating function approach)
Coupon collector's problem (generating function approach)
The coupon collector's problem can be solved in several different ways. The generating function approach is a combinatorial technique that allows one to obtain precise results....

 / gmb (F:D)

Graphical model
Graphical model
A graphical model is a probabilistic model for which a graph denotes the conditional independence structure between random variables. They are commonly used in probability theory, statistics—particularly Bayesian statistics—and machine learning....

 / (F:D)

Kirkwood approximation
Kirkwood approximation
The Kirkwood superposition approximation was introduced by Matsuda as a means of representing a discrete probability distribution. The name apparently refers to a 1942 paper by John G. Kirkwood...

 / (F:D)


Mutual information
Mutual information
In probability theory and information theory, the mutual information of two random variables is a quantity that measures the mutual dependence of the two random variables...

 / (23F:DC)

Random field
Random field
A random field is a generalization of a stochastic process such that the underlying parameter need no longer be a simple real or integer valued "time", but can instead take values that are multidimensional vectors, or points on some manifold....

 / (F:D)

Random walk
Random walk
A random walk, sometimes denoted RW, is a mathematical formalisation of a trajectory that consists of taking successive random steps. For example, the path traced by a molecule as it travels in a liquid or a gas, the search path of a foraging animal, the price of a fluctuating stock and the...

 / (FLS:BD) (U:C)
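
The simplest case, the ±1 walk on the integers, can be simulated in a few lines (an illustrative sketch only; the function name is invented here):

```python
import random

def simple_random_walk(steps, rng):
    """Positions visited by a +/-1 simple random walk on the integers, starting at 0."""
    position, path = 0, [0]
    for _ in range(steps):
        position += rng.choice((-1, 1))  # each step is +1 or -1 with probability 1/2
        path.append(position)
    return path
```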

Stopped process
Stopped process
In mathematics, a stopped process is a stochastic process that is forced to assume the same value after a prescribed time...

 / (FU:DG)


Continuous (F:C)

Anderson's theorem#Application to probability theory / (F:C)

Autoregressive integrated moving average
Autoregressive integrated moving average
In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average model is a generalization of an autoregressive moving average model. These models are fitted to time series data either to better understand the data or to predict future points...

 / (FS:C)

Autoregressive model
Autoregressive model
In statistics and signal processing, an autoregressive model is a type of random process which is often used to model and predict various types of natural phenomena...

 / (FS:C)
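
A first-order autoregression x_t = φ·x_{t-1} + ε_t is easy to simulate; for |φ| < 1 the process is stationary with variance σ²/(1 − φ²). A hypothetical sketch (names invented here, Gaussian noise assumed):

```python
import random

def ar1(phi, sigma, n, rng):
    """Simulate x_t = phi * x_{t-1} + eps_t with eps_t ~ N(0, sigma^2), x_0 = 0."""
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        series.append(x)
    return series
```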

Autoregressive moving average model
Autoregressive moving average model
In statistics and signal processing, autoregressive–moving-average models, sometimes called Box–Jenkins models after the iterative Box–Jenkins methodology usually used to estimate them, are typically applied to autocorrelated time series data. Given a time series of data Xt, the ARMA model is a...

 / (FS:C)

Copula
Copula (statistics)
In probability theory and statistics, a copula can be used to describe the dependence between random variables. Copulas derive their name from linguistics....

 / (2F:C)


Maxwell's theorem
Maxwell's theorem
In probability theory, Maxwell's theorem, named in honor of James Clerk Maxwell, states that if the probability distribution of a vector-valued random variable X = (X1, ..., Xn)^T is the same as the distribution of GX for every n×n orthogonal matrix G and the components are independent, then the components...

 / (F:C)

Moving average model
Moving average model
In time series analysis, the moving-average model is a common approach for modeling univariate time series. The notation MA refers to the moving average model of order q:...

 / (FS:C)

Mutual information
Mutual information
In probability theory and information theory, the mutual information of two random variables is a quantity that measures the mutual dependence of the two random variables...

 / (23F:DC)

Schrödinger method
Schrödinger method
In combinatorial mathematics and probability theory, the Schrödinger method, named after the Austrian physicist Erwin Schrödinger, is used to solve some problems of distribution and occupancy. Suppose X_1, ..., X_n...

 / (F:C)


Real-valued, arbitrary (F:R)

Bapat–Beg theorem / (F:R)

Comonotonicity
Comonotonicity
In probability theory, comonotonicity mainly refers to the perfect positive dependence between the components of a random vector, essentially saying that they can be represented as increasing functions of a single random variable...

 / (F:R)

Doob martingale
Doob martingale
A Doob martingale is a mathematical construction of a stochastic process which approximates a given random variable and has the martingale property with respect to the given filtration...

 / (F:R)

Independence / (F:BR)

Littlewood–Offord problem / (F:R)

Lévy flight
Lévy flight
A Lévy flight is a random walk in which the step-lengths have a probability distribution that is heavy-tailed. When defined as a walk in a space of dimension greater than one, the steps made are in isotropic random directions...

 / (F:R) (U:C)

Martingale
Martingale (probability theory)
In probability theory, a martingale is a model of a fair game where no knowledge of past events can help to predict future winnings. In particular, a martingale is a sequence of random variables for which, at a particular time in the realized sequence, the expectation of the next value in the...

 / (FU:R)

Martingale difference sequence
Martingale difference sequence
In probability theory, a martingale difference sequence is related to the concept of the martingale. A stochastic series Y is an MDS if its expectation with respect to past values of another stochastic series X is zero...

 / (F:R)


Maximum likelihood
Maximum likelihood
In statistics, maximum-likelihood estimation is a method of estimating the parameters of a statistical model. When applied to a data set and given a statistical model, maximum-likelihood estimation provides estimates for the model's parameters....

 / (FL:R)

Multivariate random variable
Multivariate random variable
In mathematics, probability, and statistics, a multivariate random variable or random vector is a list of mathematical variables each of whose values is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value.More formally, a multivariate random...

 / (F:R)

Optional stopping theorem
Optional stopping theorem
In probability theory, the optional stopping theorem says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial value...

 / (FS:R)

Pairwise independence
Pairwise independence
In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent. Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent...

 / (3:B) (F:R)

Stopping time / (FU:R)

Time series
Time series
In statistics, signal processing, econometrics and mathematical finance, a time series is a sequence of data points, measured typically at successive times spaced at uniform time intervals. Examples of time series are the daily closing value of the Dow Jones index or the annual flow volume of the...

 / (FS:R)

Wald's equation
Wald's equation
In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities...

 / (FS:R)

Wick product
Wick product
In probability theory, the Wick product ⟨X1, ..., Xk⟩, named after physicist Gian-Carlo Wick, is a sort of product of the random variables X1, ..., Xk, defined recursively as follows: ⟨⟩ = 1...

 / (F:R)


General (random element of an abstract space) (F:G)

Finite-dimensional distribution
Finite-dimensional distribution
In mathematics, finite-dimensional distributions are a tool in the study of measures and stochastic processes. A lot of information can be gained by studying the "projection" of a measure onto a finite-dimensional vector space...

 / (FU:G)

Hitting time
Hitting time
In the study of stochastic processes in mathematics, a hitting time is a particular instance of a stopping time, the first time at which a given process "hits" a given subset of the state space...

 / (FU:G)

Stopped process
Stopped process
In mathematics, a stopped process is a stochastic process that is forced to assume the same value after a prescribed time...

 / (FU:DG)

Discrete (L:D)

Almost surely
Almost surely
In probability theory, one says that an event happens almost surely if it happens with probability one. The concept is analogous to the concept of "almost everywhere" in measure theory...

 / (1:C) (LS:D)

Gambler's ruin
Gambler's ruin
The term gambler's ruin is used for a number of related statistical ideas. The original meaning is that a gambler who raises his bet to a fixed fraction of his bankroll when he wins, but does not reduce it when he loses, will eventually go broke, even if he has a positive expected value on each bet...

 / gmb (L:D)
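
In the simplest fair-coin version, a gambler who starts with k units and plays ±1 bets until reaching 0 or a target N wins with probability exactly k/N. A hypothetical Monte Carlo sketch (function names invented here):

```python
import random

def ruin_game(start, goal, rng):
    """Play a fair +/-1 game until the bankroll hits 0 (ruin) or goal (win)."""
    x = start
    while 0 < x < goal:
        x += rng.choice((-1, 1))
    return x == goal

def win_probability(start, goal, trials, rng):
    """Monte Carlo estimate of P(reach goal before ruin); start/goal for a fair game."""
    return sum(ruin_game(start, goal, rng) for _ in range(trials)) / trials
```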

Loop-erased random walk
Loop-erased random walk
In mathematics, loop-erased random walk is a model for a random simple path with important applications in combinatorics and, in physics, quantum field theory. It is intimately connected to the uniform spanning tree, a model for a random tree...

 / (L:D) (U:C)


Preferential attachment
Preferential attachment
A preferential attachment process is any of a class of processes in which some quantity, typically some form of wealth or credit, is distributed among a number of individuals or objects according to how much they already have, so that those who are already wealthy receive more than those who are not...

 / (L:D)

Random walk
Random walk
A random walk, sometimes denoted RW, is a mathematical formalisation of a trajectory that consists of taking successive random steps. For example, the path traced by a molecule as it travels in a liquid or a gas, the search path of a foraging animal, the price of a fluctuating stock and the...

 / (FLS:BD) (U:C)

Typical set
Typical set
In information theory, the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property which is a kind of law...

 / (L:D)


Real-valued, arbitrary (L:R)

Convergence of random variables
Convergence of random variables
In probability theory, there exist several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an important concept in probability theory, and its applications to statistics and stochastic processes...

 / (LS:R)

Law of large numbers
Law of large numbers
In probability theory, the law of large numbers is a theorem that describes the result of performing the same experiment a large number of times...

 / (LS:R)
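
The law is easy to see numerically: the sample mean of many fair-die rolls settles near the expected value 3.5. A hypothetical sketch (function name invented here):

```python
import random

def sample_mean_of_die(n, rng):
    """Average of n fair six-sided die rolls; by the LLN this tends to 3.5 as n grows."""
    return sum(rng.randint(1, 6) for _ in range(n)) / n
```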


Maximum likelihood
Maximum likelihood
In statistics, maximum-likelihood estimation is a method of estimating the parameters of a statistical model. When applied to a data set and given a statistical model, maximum-likelihood estimation provides estimates for the model's parameters....

 / (FL:R)

Stochastic convergence / (LS:R)


Binary (S:B)

Bernoulli process
Bernoulli process
In probability and statistics, a Bernoulli process is a finite or infinite sequence of binary random variables, so it is a discrete-time stochastic process that takes only two values, canonically 0 and 1. The component Bernoulli variables Xi are independent and identically distributed...

 / (S:B)

Boole's inequality
Boole's inequality
In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events...

 / (FS:B)

Borel–Cantelli lemma / (S:B)


De Finetti's theorem
De Finetti's theorem
In probability theory, de Finetti's theorem explains why exchangeable observations are conditionally independent given some latent variable to which an epistemic probability distribution would then be assigned...

 / (S:B)

Exchangeable random variables / (S:BR)

Random walk
Random walk
A random walk, sometimes denoted RW, is a mathematical formalisation of a trajectory that consists of taking successive random steps. For example, the path traced by a molecule as it travels in a liquid or a gas, the search path of a foraging animal, the price of a fluctuating stock and the...

 / (FLS:BD) (U:C)


Discrete (S:D)

Almost surely
Almost surely
In probability theory, one says that an event happens almost surely if it happens with probability one. The concept is analogous to the concept of "almost everywhere" in measure theory...

 / (1:C) (LS:D)

Asymptotic equipartition property
Asymptotic equipartition property
In information theory, the asymptotic equipartition property is a general property of the output samples of a stochastic source. It is fundamental to the concept of typical set used in theories of compression....

 / (S:DC)

Bernoulli scheme
Bernoulli scheme
In mathematics, the Bernoulli scheme or Bernoulli shift is a generalization of the Bernoulli process to more than two possible outcomes. Bernoulli schemes are important in the study of dynamical systems, as most such systems exhibit a repellor that is the product of the Cantor set and a smooth...

 / (S:D)

Branching process
Branching process
In probability theory, a branching process is a Markov process that models a population in which each individual in generation n produces some random number of individuals in generation n + 1, according to a fixed probability distribution that does not vary from individual to...

 / (S:D)
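
A branching (Galton–Watson) process can be simulated generation by generation; when the mean offspring number is below 1, the population almost surely dies out. A hypothetical sketch (names invented here):

```python
import random

def galton_watson(offspring_dist, max_generations, rng):
    """Generation sizes of a Galton-Watson branching process started from one
    individual; offspring_dist maps an offspring count k to its probability."""
    ks = list(offspring_dist)
    weights = [offspring_dist[k] for k in ks]
    size, sizes = 1, [1]
    for _ in range(max_generations):
        # each of the `size` individuals reproduces independently
        size = sum(rng.choices(ks, weights)[0] for _ in range(size))
        sizes.append(size)
        if size == 0:  # extinction: no individuals left to reproduce
            break
    return sizes
```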


Chinese restaurant process / (S:D)

Galton–Watson process / (S:D)

Information source
Information source (mathematics)
In mathematics, an information source is a sequence of random variables ranging over a finite alphabet Γ, having a stationary distribution. The uncertainty, or entropy rate, of an information source is defined as...

 / (S:D)

Random walk
Random walk
A random walk, sometimes denoted RW, is a mathematical formalisation of a trajectory that consists of taking successive random steps. For example, the path traced by a molecule as it travels in a liquid or a gas, the search path of a foraging animal, the price of a fluctuating stock and the...

 / (FLS:BD) (U:C)


Continuous (S:C)

Asymptotic equipartition property
Asymptotic equipartition property
In information theory, the asymptotic equipartition property is a general property of the output samples of a stochastic source. It is fundamental to the concept of typical set used in theories of compression....

 / (S:DC)

Autoregressive integrated moving average
Autoregressive integrated moving average
In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average model is a generalization of an autoregressive moving average model. These models are fitted to time series data either to better understand the data or to predict future points...

 / (FS:C)

Autoregressive model
Autoregressive model
In statistics and signal processing, an autoregressive model is a type of random process which is often used to model and predict various types of natural phenomena...

 / (FS:C)


Autoregressive moving average model
Autoregressive moving average model
In statistics and signal processing, autoregressive–moving-average models, sometimes called Box–Jenkins models after the iterative Box–Jenkins methodology usually used to estimate them, are typically applied to autocorrelated time series data. Given a time series of data Xt, the ARMA model is a...

 / (FS:C)

Moving average model
Moving average model
In time series analysis, the moving-average model is a common approach for modeling univariate time series. The notation MA refers to the moving average model of order q:...

 / (FS:C)


Real-valued, arbitrary (S:R)

Big O in probability notation
Big O in probability notation
The order in probability notation is used in probability theory and statistical theory in direct parallel to the big-O notation which is standard in mathematics...

 / (S:R)

Convergence of random variables
Convergence of random variables
In probability theory, there exist several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an important concept in probability theory, and its applications to statistics and stochastic processes...

 / (LS:R)

Doob's martingale convergence theorems
Doob's martingale convergence theorems
In mathematics — specifically, in stochastic analysis — Doob's martingale convergence theorems are a collection of results on the long-time limits of supermartingales, named after the American mathematician Joseph Leo Doob....

 / (SU:R)

Ergodic theory
Ergodic theory
Ergodic theory is a branch of mathematics that studies dynamical systems with an invariant measure and related problems. Its initial development was motivated by problems of statistical physics....

 / (S:R)

Exchangeable random variables / (S:BR)

Hewitt–Savage zero-one law / (S:RG)

Kolmogorov's zero-one law
Kolmogorov's zero-one law
In probability theory, Kolmogorov's zero-one law, named in honor of Andrey Nikolaevich Kolmogorov, specifies that a certain type of event, called a tail event, will either almost surely happen or almost surely not happen; that is, the probability of such an event occurring is zero or one. Tail...

 / (S:R)

Law of large numbers
Law of large numbers
In probability theory, the law of large numbers is a theorem that describes the result of performing the same experiment a large number of times...

 / (LS:R)

Law of the iterated logarithm
Law of the iterated logarithm
In probability theory, the law of the iterated logarithm describes the magnitude of the fluctuations of a random walk. The original statement of the law of the iterated logarithm is due to A. Y. Khinchin. Another statement was given by A.N...

 / (S:R)


Maximal ergodic theorem / (S:R)

Op (statistics) / (S:R)

Optional stopping theorem
Optional stopping theorem
In probability theory, the optional stopping theorem says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial value...

 / (FS:R)

Stationary process
Stationary process
In the mathematical sciences, a stationary process is a stochastic process whose joint probability distribution does not change when shifted in time or space...

 / (SU:R)

Stochastic convergence / (LS:R)

Stochastic process
Stochastic process
In probability theory, a stochastic process , or sometimes random process, is the counterpart to a deterministic process...

 / (SU:RG)

Time series
Time series
In statistics, signal processing, econometrics and mathematical finance, a time series is a sequence of data points, measured typically at successive times spaced at uniform time intervals. Examples of time series are the daily closing value of the Dow Jones index or the annual flow volume of the...

 / (FS:R)

Uniform integrability / (S:R)

Wald's equation
Wald's equation
In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities...

 / (FS:R)


General (random element of an abstract space) (S:G)

Hewitt–Savage zero-one law / (S:RG)

Mixing
Mixing (mathematics)
In mathematics, mixing is an abstract concept originating from physics: the attempt to describe the irreversible thermodynamic process of mixing in the everyday world: mixing paint, mixing drinks, etc....

 / (S:G)


Skorokhod's representation theorem
Skorokhod's representation theorem
In mathematics and statistics, Skorokhod's representation theorem is a result that shows that a weakly convergent sequence of probability measures whose limit measure is sufficiently well-behaved can be represented as the distribution/law of a pointwise convergent sequence of random variables...

 / (S:G)

Stochastic process
Stochastic process
In probability theory, a stochastic process , or sometimes random process, is the counterpart to a deterministic process...

 / (SU:RG)


Discrete (U:D)

Counting process / (U:D)

Cox process
Cox process
A Cox process, also known as a doubly stochastic Poisson process or mixed Poisson process, is a stochastic process which is a generalization of a Poisson process...

 / (U:D)

Dirichlet process
Dirichlet process
In probability theory, a Dirichlet process is a stochastic process that can be thought of as a probability distribution whose domain is itself a random distribution...

 / (U:D)

Lévy process
Lévy process
In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is any continuous-time stochastic process that starts at 0, admits a càdlàg modification and has "stationary independent increments" — this phrase will be explained below...

 / (U:DC)

Non-homogeneous Poisson process
Non-homogeneous Poisson process
In probability theory, a non-homogeneous Poisson process is a Poisson process with rate parameter λ such that the rate parameter of the process is a function of time...

 / (U:D)

Point process
Point process
In statistics and probability theory, a point process is a type of random process for which any one realisation consists of a set of isolated points either in time or geographical space, or in even more general spaces...

 / (U:D)


Poisson process
Poisson process
A Poisson process, named after the French mathematician Siméon-Denis Poisson, is a stochastic process in which events occur continuously and independently of one another...

 / (U:D)
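
One standard construction generates arrival times from i.i.d. exponential interarrival gaps; the number of arrivals in [0, T] is then Poisson with mean λT. A hypothetical sketch (function name invented here):

```python
import random

def poisson_arrival_times(rate, horizon, rng):
    """Arrival times of a rate-`rate` Poisson process on (0, horizon], built
    from i.i.d. Exponential(rate) interarrival gaps."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)   # next gap ~ Exponential(rate)
        if t > horizon:
            return times
        times.append(t)
```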

Poisson random measure / (U:D)

Random measure / (U:D)

Renewal theory
Renewal theory
Renewal theory is the branch of probability theory that generalizes Poisson processes for arbitrary holding times. Applications include calculating the expected time for a monkey who is randomly tapping at a keyboard to type the word Macbeth and comparing the long-term benefits of different...

 / (U:D)

Stopped process
Stopped process
In mathematics, a stopped process is a stochastic process that is forced to assume the same value after a prescribed time...

 / (FU:DG)


Continuous (U:C)

Brownian motion
Brownian motion
Brownian motion, or pedesis, is the seemingly random drifting of particles suspended in a fluid, or the mathematical model used to describe such random movements. The mathematical model of Brownian motion has several real-world applications...

 / phs (U:C)

Gamma process / (U:C)

Loop-erased random walk
Loop-erased random walk
In mathematics, loop-erased random walk is a model for a random simple path with important applications in combinatorics and, in physics, quantum field theory. It is intimately connected to the uniform spanning tree, a model for a random tree...

 / (L:D) (U:C)

Lévy flight
Lévy flight
A Lévy flight is a random walk in which the step-lengths have a probability distribution that is heavy-tailed. When defined as a walk in a space of dimension greater than one, the steps made are in isotropic random directions...

 / (F:R) (U:C)


Lévy process
Lévy process
In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is any continuous-time stochastic process that starts at 0, admits a càdlàg modification and has "stationary independent increments" — this phrase will be explained below...

 / (U:DC)

Martingale representation theorem
Martingale representation theorem
In probability theory, the martingale representation theorem states that a random variable which is measurable with respect to the filtration generated by a Brownian motion can be written in terms of an Itô integral with respect to this Brownian motion....

 / (U:C)

Random walk
Random walk
A random walk, sometimes denoted RW, is a mathematical formalisation of a trajectory that consists of taking successive random steps. For example, the path traced by a molecule as it travels in a liquid or a gas, the search path of a foraging animal, the price of a fluctuating stock and the...

 / (FLS:BD) (U:C)

Skorokhod's embedding theorem
Skorokhod's embedding theorem
In mathematics and probability theory, Skorokhod's embedding theorem is either or both of two theorems that allow one to regard any suitable collection of random variables as a Wiener process evaluated at a collection of stopping times. Both results are named for the Ukrainian mathematician A.V...

 / (U:C)


Real-valued, arbitrary (U:R)

Compound Poisson process
Compound Poisson process
A compound Poisson process is a continuous-time stochastic process with jumps. The jumps arrive randomly according to a Poisson process and the size of the jumps is also random, with a specified probability distribution...

 / (U:R)

Continuous stochastic process
Continuous stochastic process
In probability theory, a continuous stochastic process is a type of stochastic process that may be said to be "continuous" as a function of its "time" or index parameter. Continuity is a nice property for a process to have, since it implies that the process is well-behaved in some sense, and,...

 / (U:RG)

Doob's martingale convergence theorems
Doob's martingale convergence theorems
In mathematics — specifically, in stochastic analysis — Doob's martingale convergence theorems are a collection of results on the long-time limits of supermartingales, named after the American mathematician Joseph Leo Doob....

 / (SU:R)

Doob–Meyer decomposition theorem / (U:R)

Feller-continuous process
Feller-continuous process
In mathematics, a Feller-continuous process is a continuous-time stochastic process for which the expected value of suitable statistics of the process at a given time in the future depends continuously on the initial condition of the process...

 / (U:R)

Kolmogorov continuity theorem
Kolmogorov continuity theorem
In mathematics, the Kolmogorov continuity theorem is a theorem that guarantees that a stochastic process that satisfies certain constraints on the moments of its increments will be continuous...

 / (U:R)


Local martingale
Local martingale
In mathematics, a local martingale is a type of stochastic process, satisfying the localized version of the martingale property. Every martingale is a local martingale; every bounded local martingale is a martingale; however, in general a local martingale is not a martingale, because its...

 / (U:R)

Martingale
Martingale (probability theory)
In probability theory, a martingale is a model of a fair game where no knowledge of past events can help to predict future winnings. In particular, a martingale is a sequence of random variables for which, at a particular time in the realized sequence, the expectation of the next value in the...

 / (FU:R)

Stationary process
Stationary process
In the mathematical sciences, a stationary process is a stochastic process whose joint probability distribution does not change when shifted in time or space...

 / (SU:R)

Stochastic process
Stochastic process
In probability theory, a stochastic process , or sometimes random process, is the counterpart to a deterministic process...

 / (SU:RG)

Stopping time / (FU:R)


General (random element of an abstract space) (U:G)

Adapted process
Adapted process
In the study of stochastic processes, an adapted process is one that cannot "see into the future". An informal interpretation is that X is adapted if and only if, for every realisation and every n, Xn is known at time n...

 / (U:G)

Continuous stochastic process
Continuous stochastic process
In probability theory, a continuous stochastic process is a type of stochastic process that may be said to be "continuous" as a function of its "time" or index parameter. Continuity is a nice property for a process to have, since it implies that the process is well-behaved in some sense, and,...

 / (U:RG)

Finite-dimensional distribution
Finite-dimensional distribution
In mathematics, finite-dimensional distributions are a tool in the study of measures and stochastic processes. A lot of information can be gained by studying the "projection" of a measure onto a finite-dimensional vector space...

 / (FU:G)

Hitting time
Hitting time
In the study of stochastic processes in mathematics, a hitting time is a particular instance of a stopping time, the first time at which a given process "hits" a given subset of the state space...

 / (FU:G)

Killed process
Killed process
In probability theory — specifically, in stochastic analysis — a killed process is a stochastic process that is forced to assume an undefined or "killed" state at some time...

 / (U:G)


Progressively measurable process
Progressively measurable process
In mathematics, progressive measurability is a property of stochastic processes. A progressively measurable process is one for which events defined in terms of values of the process across a range of times can be assigned probabilities. Being progressively measurable is a strictly stronger...

 / (U:G)

Sample-continuous process / (U:G)

Stochastic process
Stochastic process
In probability theory, a stochastic process , or sometimes random process, is the counterpart to a deterministic process...

 / (SU:RG)

Stopped process
Stopped process
In mathematics, a stopped process is a stochastic process that is forced to assume the same value after a prescribed time...

 / (FU:DG)


General aspects (grl)

Aleatoric

Average
Average
In mathematics, an average, or central tendency of a data set is a measure of the "middle" value of the data set. Average is one form of central tendency. Not all central tendencies should be considered definitions of average....



Bean machine
Bean machine
The bean machine, also known as the quincunx or Galton box, is a device invented by Sir Francis Galton to demonstrate the central limit theorem, in particular that the normal distribution approximates the binomial distribution....
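
The device is easy to mimic numerically: each ball bounces left or right at every pin, so its final bin is Binomial(rows, 1/2), and with many balls the bin counts trace out the familiar bell shape. A hypothetical sketch (function name invented here):

```python
import random

def bean_machine(rows, balls, rng):
    """Drop balls through a Galton box with `rows` rows of pins; a ball's bin
    is the number of rightward bounces, so bins follow Binomial(rows, 1/2)."""
    bins = [0] * (rows + 1)
    for _ in range(balls):
        bins[sum(rng.randrange(2) for _ in range(rows))] += 1
    return bins
```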



Cox's theorem
Cox's theorem
Cox's theorem, named after the physicist Richard Threlkeld Cox, is a derivation of the laws of probability theory from a certain set of postulates. This derivation justifies the so-called "logical" interpretation of probability. As the laws of probability derived by Cox's theorem are applicable to...



Equipossible
Equipossible
Equipossibility is a philosophical concept in possibility theory that is a precursor to the notion of equiprobability in probability theory. It is used to distinguish what can occur in a probability experiment...



Exotic probability
Exotic probability
Exotic probability is a branch of probability theory that deals with probabilities which are outside the normal range of [0, 1]. The most common author of papers on exotic probability theory is Saul Youssef...



Extractor

Free probability
Free probability
Free probability is a mathematical theory that studies non-commutative random variables. The "freeness" or free independence property is the analogue of the classical notion of independence, and it is connected with free products....



Frequency
Frequency (statistics)
In statistics, the frequency of an event i is the number n_i of times the event occurred in the experiment or the study. These frequencies are often graphically represented in histograms....



Frequency probability
Frequency probability
Frequency probability is the interpretation of probability that defines an event's probability as the limit of its relative frequency in a large number of trials. The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the...



Impossible event
Impossible event
In the mathematics of probability, an impossible event is an event A with probability zero, or Pr(A) = 0. See in particular almost surely. An impossible event is not the same as the stronger concept of logical impossibility...



Infinite monkey theorem
Infinite monkey theorem
The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare....



Information geometry
Information geometry
Information geometry is a branch of mathematics that applies the techniques of differential geometry to the field of probability theory. It derives its name from the fact that the Fisher information is used as the Riemannian metric when considering the geometry of probability distribution families...



Law of Truly Large Numbers
Law of Truly Large Numbers
The law of truly large numbers, attributed to Persi Diaconis and Frederick Mosteller, states that with a sample size large enough, any outrageous thing is likely to happen. Because we never find it notable when likely events occur, we highlight unlikely events and notice them more...



Littlewood's law
Littlewood's law
Littlewood's Law states that individuals can expect a "miracle" to happen to them at the rate of about one per month. The law was framed by Cambridge University Professor J. E...




Observational error
Observational error
Observational error is the difference between a measured value of a quantity and its true value. In statistics, an error is not a "mistake". Variability is an inherent part of things being measured and of the measurement process....



Principle of indifference
Principle of indifference
The principle of indifference is a rule for assigning epistemic probabilities. Suppose that there are n > 1 mutually exclusive and collectively exhaustive possibilities....



Principle of maximum entropy
Principle of maximum entropy
In Bayesian probability, the principle of maximum entropy is a postulate which states that, subject to known constraints, the probability distribution which best represents the current state of knowledge is the one with largest entropy. Let some testable information about a probability distribution...



Probability
Probability
Probability is ordinarily used to describe an attitude of mind towards some proposition of whose truth we are not certain. The proposition of interest is usually of the form "Will a specific event occur?" The attitude of mind is of the form "How certain are we that the event will occur?" The...



Probability interpretations
Probability interpretations
The word probability has been used in a variety of ways since it was first coined in relation to games of chance. Does probability measure the real, physical tendency of something to occur, or is it just a measure of how strongly one believes it will occur? In answering such questions, we...



Propensity probability
Propensity probability
The propensity theory of probability is one interpretation of the concept of probability. Theorists who adopt this interpretation think of probability as a physical propensity, or disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind, or to yield a...



Random number generator

Random sequence
Random sequence
The concept of a random sequence is essential in probability theory and statistics. The concept generally relies on the notion of a sequence of random variables and many statistical discussions begin with the words "let X1,...,Xn be independent random variables...". Yet as D. H. Lehmer stated in...



Randomization
Randomization
Randomization is the process of making something random; this means generating a random permutation of a sequence, or selecting a random sample of a population....



Randomness
Randomness
Randomness has somewhat differing meanings as used in various fields. It also has common meanings which are connected to the notion of predictability of events....



Statistical dispersion
Statistical dispersion
In statistics, statistical dispersion is variability or spread in a variable or a probability distribution...



Statistical regularity
Statistical regularity
Statistical regularity is a notion in statistics and probability theory that random events exhibit regularity when repeated enough times or that enough sufficiently similar random events exhibit regularity...



Uncertainty
Uncertainty
Uncertainty is a term used in subtly different ways in a number of fields, including physics, philosophy, statistics, economics, finance, insurance, psychology, sociology, engineering, and information science...



Upper and lower probabilities
Upper and lower probabilities
Upper and lower probabilities are representations of imprecise probability. Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, this method uses two numbers: the upper probability of the event and the lower probability of the event. Because...



Urn problem
Urn problem
In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest are represented as colored balls in an urn or other container....




Foundations of probability theory (fnd)

Algebra of random variables
Algebra of random variables
In the algebraic axiomatization of probability theory, the primary concept is not that of probability of an event, but rather that of a random variable. Probability distributions are determined by assigning an expectation to each random variable...



Belief propagation
Belief propagation
Belief propagation is a message passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node, conditional on any observed nodes...



Dempster–Shafer theory

Dutch book
Dutch book
In gambling, a Dutch book or lock is a set of odds and bets which guarantees a profit, regardless of the outcome of the gamble. It is associated with probabilities implied by the odds not being coherent....



Elementary event
Elementary event
In probability theory, an elementary event or atomic event is a singleton of a sample space. An outcome is an element of a sample space. An elementary event is a set containing exactly one outcome, not the outcome itself...




Normalizing constant
Normalizing constant
The concept of a normalizing constant arises in probability theory and a variety of other areas of mathematics. In probability theory, a normalizing constant is a constant by which an everywhere non-negative function must be multiplied so that the area under its graph is 1, e.g.,...



Possibility theory
Possibility theory
Possibility theory is a mathematical theory for dealing with certain types of uncertainty and is an alternative to probability theory. Professor Lotfi Zadeh first introduced possibility theory in 1978 as an extension of his theory of fuzzy sets and fuzzy logic. D. Dubois and H. Prade further...



Probability axioms
Probability axioms
In probability theory, the probability P of some event E, denoted P(E), is usually defined in such a way that P satisfies the Kolmogorov axioms, named after Andrey Kolmogorov, which are described below....



Transferable belief model
Transferable belief model
The transferable belief model is an elaboration on the Dempster-Shafer theory of evidence. Consider the following classical problem of information fusion. A patient has an illness that can be caused by three different factors A, B and C...



Unit measure
Unit measure
Unit measure is an axiom of probability theory that states that the probability of the entire sample space is equal to one; that is, P(S) = 1, where S is the sample space. Loosely speaking, it means that S must be chosen so that when the experiment is performed, something happens...




Gambling (gmb)

Betting

Bookmaker
Bookmaker
A bookmaker, or bookie, is an organization or a person that takes bets on sporting and other events at agreed-upon odds....



Coherence
Coherence (philosophical gambling strategy)
In a thought experiment proposed by the Italian probabilist Bruno de Finetti in order to justify Bayesian probability, an array of wagers is coherent precisely if it does not expose the wagerer to certain loss regardless of the outcomes of events on which he is wagering, even if his opponent makes...



Coupon collector's problem
Coupon collector's problem
In probability theory, the coupon collector's problem describes the "collect all coupons and win" contests. It asks the following question: Suppose that there are n coupons, from which coupons are being collected with replacement...

 / (F:D)
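The standard result behind the problem above: with n equally likely coupons, the expected number of draws to collect them all is n · H_n, where H_n is the n-th harmonic number. A minimal Python sketch (the function name is illustrative, not from the article):

```python
from fractions import Fraction

def expected_draws(n):
    """Expected draws to collect all n equally likely coupons: n * H_n."""
    return n * sum(Fraction(1, k) for k in range(1, n + 1))

# For n = 2 coupons the expectation is 2 * (1 + 1/2) = 3 draws.
print(expected_draws(2))              # 3
print(float(expected_draws(50)))      # about 225 draws for 50 coupons
```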

Coupon collector's problem (generating function approach)
Coupon collector's problem (generating function approach)
The coupon collector's problem can be solved in several different ways. The generating function approach is a combinatorial technique that allows one to obtain precise results....

 / (F:D)

Gambler's fallacy
Gambler's fallacy
The Gambler's fallacy, also known as the Monte Carlo fallacy, and also referred to as the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are...



Gambler's ruin
Gambler's ruin
The term gambler's ruin is used for a number of related statistical ideas. The original meaning is that a gambler who raises his bet to a fixed fraction of bankroll when he wins, but does not reduce it when he loses, will eventually go broke, even if he has a positive expected value on each bet....

 / (L:D)
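In the random-walk formulation of gambler's ruin, a gambler with bankroll i betting one unit at a time, with win probability p per bet, plays until reaching either 0 or a target N; the ruin probability has a standard closed form. A Python sketch (function name illustrative):

```python
def ruin_probability(i, N, p):
    """Probability a gambler starting with i units goes broke before
    reaching N, winning each unit bet with probability p (classical
    gambler's ruin formula)."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:            # fair game: linear in the bankroll
        return 1.0 - i / N
    r = q / p
    return (r ** i - r ** N) / (1.0 - r ** N)

print(ruin_probability(5, 10, 0.5))   # 0.5: fair game, halfway to target
print(ruin_probability(5, 10, 0.45))  # a small house edge pushes ruin above 0.7
```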

Game of chance
Game of chance
A game of chance is a game whose outcome is strongly influenced by some randomizing device, and upon which contestants may or may not wager money or anything of monetary value...



Inverse gambler's fallacy
Inverse gambler's fallacy
The inverse gambler's fallacy, named by philosopher Ian Hacking, is a formal fallacy of Bayesian inference which is similar to the better known gambler's fallacy. It is the fallacy of concluding, on the basis of an unlikely outcome of a random process, that the process is likely to have occurred...



Lottery
Lottery
A lottery is a form of gambling which involves the drawing of lots for a prize.Lottery is outlawed by some governments, while others endorse it to the extent of organizing a national or state lottery. It is common to find some degree of regulation of lottery by governments...



Lottery machine
Lottery machine
A lottery machine is the machine used to draw the winning numbers for a lottery. Early lotteries were done by drawing numbers, or winning tickets, from a container...



Luck
Luck
Luck or fortuity is good fortune which occurs beyond one's control, without regard to one's will, intention, or desired result. There are at least two senses people usually mean when they use the term, the prescriptive sense and the descriptive sense...



Martingale
Martingale (betting system)
Originally, martingale referred to a class of betting strategies popular in 18th century France. The simplest of these strategies was designed for a game in which the gambler wins his stake if a coin comes up heads and loses it if the coin comes up tails...
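The doubling version described above has a simple arithmetic core: after k consecutive losses the gambler has staked 2^k - 1 units in total, and the next (doubled) bet recovers everything plus one unit. A sketch of the stake sequence (function name illustrative):

```python
def martingale_stakes(losses):
    """Stake sequence when doubling after each loss, starting from 1 unit."""
    return [2 ** k for k in range(losses + 1)]

stakes = martingale_stakes(5)          # stakes for rounds 1..6 after 5 losses
print(stakes)                          # [1, 2, 4, 8, 16, 32]
print(sum(stakes[:-1]))                # 31 units already lost after 5 losses
print(stakes[-1] - sum(stakes[:-1]))   # a win on the 6th bet nets just 1 unit
```

The exponential growth of the stakes is why the strategy fails in practice: a finite bankroll or table limit is reached after surprisingly few losses.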




Odds
Odds
The odds in favor of an event or a proposition are expressed as the ratio of a pair of integers, which is the ratio of the probability that an event will happen to the probability that it will not happen...



Pachinko
Pachinko
Pachinko is a type of game originating in Japan, and used as both a form of recreational arcade game and, much more frequently, as a gambling device, filling a niche in gambling in Japan comparable to that of the slot machine in Western gambling. A pachinko machine resembles a vertical pinball machine, but...



Parimutuel betting
Parimutuel betting
Parimutuel betting is a betting system in which all bets of a particular type are placed together in a pool; taxes and the "house-take" or "vig" are removed, and payoff odds are calculated by sharing the pool among all winning bets...



Parrondo's paradox
Parrondo's paradox
Parrondo's paradox, a paradox in game theory, has been described as "a losing strategy that wins". It is named after its creator, Spanish physicist Juan Parrondo, who discovered the paradox in 1996...



Pascal's wager
Pascal's Wager
Pascal's Wager, also known as Pascal's Gambit, is a suggestion posed by the French philosopher, mathematician, and physicist Blaise Pascal that even if the existence of God could not be determined through reason, a rational person should wager as though God exists, because one living life...



Poker probability
Poker probability
In poker, the probability of each type of 5-card hand can be computed by calculating the proportion of hands of that type among all possible hands....
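The counting described above reduces to binomial coefficients; for example, the probability of a flush (including straight flushes) is 4·C(13, 5) / C(52, 5). A Python sketch:

```python
from math import comb

total_hands = comb(52, 5)            # 2,598,960 possible 5-card hands
flushes = 4 * comb(13, 5)            # all five cards share one of 4 suits
print(total_hands)                   # 2598960
print(flushes)                       # 5148
print(flushes / total_hands)         # about 0.00198, straight flushes included
```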



Poker probability (Omaha)
Poker probability (Omaha)
In poker, the probability of many events can be determined by direct calculation. This article discusses how to compute the probabilities for many commonly occurring events in the game of Omaha hold 'em and provides some probabilities and odds for specific situations...



Poker probability (Texas hold 'em)
Poker probability (Texas hold 'em)
In poker, the probability of many events can be determined by direct calculation. This article discusses computing probabilities for many commonly occurring events in the game of Texas hold 'em and provides some probabilities and odds for specific situations...



Pot odds
Pot odds
In poker, pot odds are the ratio of the current size of the pot to the cost of a contemplated call. Pot odds are often compared to the probability of winning a hand with a future card in order to estimate the call's expected value....



Proebsting's paradox
Proebsting's paradox
In probability theory, Proebsting's paradox is an argument that appears to show that the Kelly criterion can lead to ruin. Although it can be resolved mathematically, it raises some interesting issues about the practical application of Kelly, especially in investing. It was named and first...



Roulette
Roulette
Roulette is a casino game named after a French diminutive for little wheel. In the game, players may choose to place bets on either a single number or a range of numbers, the colors red or black, or whether the number is odd or even....



Spread betting
Spread betting
Spread betting is any of various types of wagering on the outcome of an event, where the pay-off is based on the accuracy of the wager, rather than a simple "win or lose" outcome, such as fixed-odds betting or parimutuel betting. A spread is a range of outcomes and the bet is whether the outcome...



The man who broke the bank at Monte Carlo
The Man Who Broke the Bank at Monte Carlo
The Man Who Broke the Bank at Monte Carlo is a 1935 American romantic comedy film made by 20th Century Fox. It was directed by Stephen Roberts, and starred Ronald Colman, Joan Bennett, and Colin Clive. The screenplay was written by Nunnally Johnson and Howard Smith, based on a play by Ilya Surguchev...




Coincidence (cnc)

Bible code
Bible code
The Bible code, also known as the Torah code, is a purported set of secret messages encoded within the text of the Hebrew Bible and describing prophecies and other guidance regarding the future. This hidden code has been described as a method by which specific letters from the text can be selected to...



Birthday paradox
Birthday paradox
In probability theory, the birthday problem or birthday paradox pertains to the probability that, in a set of n randomly chosen people, some pair of them will have the same birthday. By the pigeonhole principle, the probability reaches 100% when the number of people reaches 366. However, 99%...
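The standard calculation multiplies the probabilities that each successive person misses all earlier birthdays; with 23 people the chance of a shared birthday already exceeds one half. A Python sketch (function name illustrative; leap years ignored):

```python
def no_shared_birthday(n):
    """Probability that n people all have distinct birthdays (365-day year)."""
    p = 1.0
    for k in range(n):
        p *= (365 - k) / 365
    return p

print(1 - no_shared_birthday(23))   # about 0.507: a shared birthday is likely
print(no_shared_birthday(366))      # 0.0: pigeonhole forces a collision
```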



Birthday problem


Index of coincidence
Index of coincidence
In cryptography, coincidence counting is the technique of putting two texts side-by-side and counting the number of times that identical letters appear in the same position in both texts...
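The usual single-text form of the statistic is the probability that two randomly chosen letter positions hold the same letter. A Python sketch (function name illustrative; typical English text scores near 0.067, uniformly random letters near 1/26):

```python
from collections import Counter

def index_of_coincidence(text):
    """Probability that two distinct random positions in the text match."""
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(k * (k - 1) for k in counts.values()) / (n * (n - 1))

print(index_of_coincidence("AAAA"))   # 1.0: any two positions match
print(index_of_coincidence("ABCD"))   # 0.0: no repeated letters at all
```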



Spurious relationship
Spurious relationship
In statistics, a spurious relationship is a mathematical relationship in which two events or variables have no direct causal connection, yet it may be wrongly inferred that they do, due to either coincidence or the presence of a certain third, unseen factor....




Algorithmics (alg)

Algorithmic Lovász local lemma
Algorithmic Lovász local lemma
In theoretical computer science, the algorithmic Lovász local lemma gives an algorithmic way of constructing objects that obey a system of constraints with limited dependence....



Box–Muller transform

Gibbs sampling
Gibbs sampling
In statistics and in statistical physics, Gibbs sampling or a Gibbs sampler is an algorithm to generate a sequence of samples from the joint probability distribution of two or more random variables...



Inverse transform sampling method
Inverse transform sampling method
Inverse transform sampling, also known as the inverse probability integral transform or inverse transformation method or Smirnov transform or even golden rule, is a basic method for pseudo-random number sampling, i.e. for generating sample numbers at random from any probability distribution given...



Las Vegas algorithm
Las Vegas algorithm
In computing, a Las Vegas algorithm is a randomized algorithm that always gives correct results; that is, it either produces the correct result or reports failure. In other words, a Las Vegas algorithm does not gamble with the verity of the result; it gambles only with the resources...



Metropolis algorithm

Monte Carlo method
Monte Carlo method
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in computer simulations of physical and mathematical systems...
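A minimal illustration of the idea: estimate π from the fraction of random points in the unit square that fall inside the quarter circle (a Python sketch; names are illustrative):

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo: the fraction of uniform points (x, y) in [0, 1)^2
    with x^2 + y^2 <= 1 approximates pi / 4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

print(estimate_pi(100_000))   # roughly 3.14, improving as n grows
```

The error shrinks like 1/sqrt(n), independent of dimension, which is why the method is attractive for high-dimensional integrals.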




Panjer recursion
Panjer recursion
The Panjer recursion is an algorithm to compute the probability distribution of a compound random variable S = X_1 + ... + X_N, where both N and the X_i are random variables of special types. In more general cases the distribution of S is a compound distribution. The recursion for the special cases considered was...



Probabilistic Turing machine
Probabilistic Turing machine
In computability theory, a probabilistic Turing machine is a non-deterministic Turing machine which randomly chooses between the available transitions at each point according to some probability distribution....



Probabilistic algorithm

Probabilistically checkable proof

Probable prime
Probable prime
In number theory, a probable prime is an integer that satisfies a specific condition also satisfied by all prime numbers. Different types of probable primes have different specific conditions...



Stochastic programming
Stochastic programming
Stochastic programming is a framework for modeling optimization problems that involve uncertainty. Whereas deterministic optimization problems are formulated with known parameters, real world problems almost invariably include some unknown parameters. When the parameters are known only within...




Bayesian approach (Bay)

Bayes factor
Bayes factor
In statistics, the use of Bayes factors is a Bayesian alternative to classical hypothesis testing. Bayesian model comparison is a method of model selection based on Bayes factors....



Bayesian model comparison

Bayesian network
Bayesian network
A Bayesian network, Bayes network, belief network or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph. For example, a Bayesian network could represent the probabilistic...

 / Mar

Bayesian probability
Bayesian probability
Bayesian probability is one of the different interpretations of the concept of probability and belongs to the category of evidential probabilities. The Bayesian interpretation of probability can be seen as an extension of logic that enables reasoning with propositions whose truth or falsity is...



Bayesianism

Checking if a coin is fair
Checking if a coin is fair
In statistics, the question of checking whether a coin is fair is one whose importance lies, firstly, in providing a simple problem on which to illustrate basic ideas of statistical inference and, secondly, in providing a simple problem that can be used to compare various competing methods of...



Conjugate prior
Conjugate prior
In Bayesian probability theory, if the posterior distribution p(θ | x) is in the same family as the prior probability distribution p(θ), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood...
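The textbook example of conjugacy is the Beta prior for a binomial likelihood: a Beta(a, b) prior on a coin's head-probability, updated with observed heads and tails, stays in the Beta family. A Python sketch (function name illustrative):

```python
def beta_binomial_update(a, b, heads, tails):
    """Conjugate update: Beta(a, b) prior + binomial data -> Beta posterior."""
    return a + heads, b + tails

prior = (1, 1)                       # uniform prior Beta(1, 1)
post = beta_binomial_update(*prior, heads=7, tails=3)
print(post)                          # (8, 4), i.e. Beta(8, 4)
mean = post[0] / (post[0] + post[1])
print(mean)                          # posterior mean 8/12, about 0.667
```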



Factor graph
Factor graph
In probability theory and its applications, a factor graph is a particular type of graphical model, with applications in Bayesian inference, that enables efficient computation of marginal distributions through the sum-product algorithm...



Good–Turing frequency estimation


Imprecise probability
Imprecise probability
Imprecise probability generalizes probability theory to allow for partial probability specifications, and is applicable when information is scarce, vague, or conflicting, in which case a unique probability distribution may be hard to identify...



Inverse probability
Inverse probability
In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable. Today, the problem of determining an unobserved variable is called inferential statistics, the method of inverse probability is called Bayesian probability, the "distribution"...

 / cnd

Marginal likelihood
Marginal likelihood
In statistics, a marginal likelihood function, or integrated likelihood, is a likelihood function in which some parameter variables have been marginalised...



Markov blanket
Markov blanket
In machine learning, the Markov blanket for a node A in a Bayesian network is the set of nodes \partial A composed of A's parents, its children, and its children's other parents. In a Markov network, the Markov blanket of a node is its set of neighbouring nodes...

 / Mar

Posterior probability
Posterior probability
In Bayesian statistics, the posterior probability of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence is taken into account...

 / (2:C)

Prior probability
Prior probability
In Bayesian statistical inference, a prior probability distribution, often called simply the prior, of an uncertain quantity p is the probability distribution that would express one's uncertainty about p before the "data"...



SIPTA
SIPTA
The Society for Imprecise Probability: Theories and Applications was created in February 2002, with the aim of promoting research on imprecise probability...



Subjective logic
Subjective logic
Subjective logic is a type of probabilistic logic that explicitly takes uncertainty and belief ownership into account. In general, subjective logic is suitable for modeling and analysing situations involving uncertainty and incomplete knowledge...



Subjectivism#Subjectivism in probability / hst


Financial mathematics (fnc)

Allais paradox
Allais paradox
The Allais paradox is a choice problem designed by Maurice Allais to show an inconsistency of actual observed choices with the predictions of expected utility theory....



Black–Scholes

Cox–Ingersoll–Ross model

Forward measure
Forward measure
In finance, a T-forward measure is a pricing measure absolutely continuous with respect to a risk-neutral measure, but rather than using the money market as numeraire, it uses a bond with maturity T...



Heston model
Heston model
In finance, the Heston model, named after Steven Heston, is a mathematical model describing the evolution of the volatility of an underlying asset...

 / scl

Jump process
Jump process
A jump process is a type of stochastic process that has discrete movements, called jumps, rather than small continuous movements. In physics, jump processes result in diffusion...



Jump-diffusion model

Kelly criterion
Kelly criterion
In probability theory, the Kelly criterion, or Kelly strategy or Kelly formula, or Kelly bet, is a formula used to determine the optimal size of a series of bets. In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any...
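For the simplest case of a bet that pays b units per unit staked and wins with probability p, the Kelly fraction is f* = p - (1 - p)/b. A Python sketch (function name illustrative):

```python
def kelly_fraction(p, b):
    """Kelly bet: fraction of bankroll to stake when the winning
    probability is p and a win pays b units per unit staked."""
    return p - (1.0 - p) / b

print(kelly_fraction(0.6, 1.0))   # 0.2: stake 20% on a 60/40 even-money bet
print(kelly_fraction(0.5, 1.0))   # 0.0: no edge, no bet
```

A non-positive f* means the bet has no edge and the criterion says to stake nothing.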



Market risk
Market risk
Market risk is the risk that the value of a portfolio, either an investment portfolio or a trading portfolio, will decrease due to the change in value of the market risk factors. The four standard market risk factors are stock prices, interest rates, foreign exchange rates, and commodity prices...



Mathematics of bookmaking
Mathematics of bookmaking
In betting parlance, making a book is the practice of laying bets on the various possible outcomes of a single event. The term originates from the practice of recording such wagers in a hard-bound ledger and gives the English language the term bookmaker for the person laying the bets and thus...




Risk
Risk
Risk is the potential that a chosen action or activity will lead to a loss. The notion implies that a choice having an influence on the outcome exists. Potential losses themselves may also be called "risks"...



Risk-neutral measure
Risk-neutral measure
In mathematical finance, a risk-neutral measure, is a prototypical case of an equivalent martingale measure. It is heavily used in the pricing of financial derivatives due to the fundamental theorem of asset pricing, which implies that in a complete market a derivative's price is the discounted...



Ruin theory
Ruin theory
Ruin theory, sometimes referred to as collective risk theory, is a branch of actuarial science that studies an insurer's vulnerability to insolvency based on mathematical modeling of the insurer's surplus....



Sethi model
Sethi model
The Sethi model was developed by Suresh P. Sethi and describes the process of how sales evolve over time in response to advertising. The rate of change in sales depends on three effects: response to advertising that acts positively on the unsold portion of the market, the loss due to forgetting or...



Technical analysis
Technical analysis
In finance, technical analysis is a security analysis discipline for forecasting the direction of prices through the study of past market data, primarily price and volume. Behavioral economics and quantitative analysis incorporate technical analysis, which, being an aspect of active management, stands...



Value at risk
Value at risk
In financial mathematics and financial risk management, Value at Risk is a widely used risk measure of the risk of loss on a specific portfolio of financial assets...



Variance gamma process
Variance gamma process
In the theory of stochastic processes, a part of the mathematical theory of probability, the variance gamma process, also known as Laplace motion, is a Lévy process determined by a random time change. The process has finite moments, distinguishing it from many Lévy processes. There is no diffusion...

 / spr

Vasicek model
Vasicek model
In finance, the Vasicek model is a mathematical model describing the evolution of interest rates. It is a type of "one-factor model" as it describes interest rate movements as driven by only one source of market risk...



Volatility
Volatility (finance)
In finance, volatility is a measure for variation of price of a financial instrument over time. Historic volatility is derived from time series of past market prices...




Physics (phs)

Boltzmann factor
Boltzmann factor
In physics, the Boltzmann factor is a weighting factor that determines the relative probability of a particle to be in a state i in a multi-state system in thermodynamic equilibrium at temperature T...



Brownian motion
Brownian motion
Brownian motion or pedesis is the presumably random drifting of particles suspended in a fluid or the mathematical model used to describe such random movements, which is often called a particle theory.The mathematical model of Brownian motion has several real-world applications...

 / (U:C)

Brownian ratchet
Brownian ratchet
In the philosophy of thermal and statistical physics, the Brownian ratchet, or Feynman-Smoluchowski ratchet is a thought experiment about an apparent perpetual motion machine first analysed in 1912 by Polish physicist Marian Smoluchowski and popularised by American Nobel laureate physicist Richard...



Cosmic variance
Cosmic variance
Cosmic variance is the statistical uncertainty inherent in observations of the universe at extreme distances. It is based on the idea that it is only possible to observe part of the universe at one particular time, so it is difficult to make statistical statements about cosmology on the scale of...



Critical phenomena
Critical phenomena
In physics, critical phenomena is the collective name associated with the physics of critical points. Most of them stem from the divergence of the correlation length, but the dynamics also slows down...



Diffusion-limited aggregation
Diffusion-limited aggregation
Diffusion-limited aggregation is the process whereby particles undergoing a random walk due to Brownian motion cluster together to form aggregates of such particles. This theory, proposed by Witten and Sander in 1981, is applicable to aggregation in any system where diffusion is the primary means...



Fluctuation theorem
Fluctuation theorem
The fluctuation theorem , which originated from statistical mechanics, deals with the relative probability that the entropy of a system which is currently away from thermodynamic equilibrium will increase or decrease over a given amount of time...



Gibbs state
Gibbs state
In probability theory and statistical mechanics, a Gibbs state is an equilibrium probability distribution which remains invariant under future evolution of the system...



Information entropy
Information entropy
In information theory, entropy is a measure of the uncertainty associated with a random variable. In this context, the term usually refers to the Shannon entropy, which quantifies the expected value of the information contained in a message, usually in units such as bits...



Lattice model
Lattice model (physics)
In physics, a lattice model is a physical model that is defined on a lattice, as opposed to the continuum of space or spacetime. Lattice models originally occurred in the context of condensed matter physics, where the atoms of a crystal automatically form a lattice. Currently, lattice models are...



Master equation
Master equation
In physics and chemistry and related fields, master equations are used to describe the time-evolution of a system that can be modelled as being in exactly one of a countable number of states at any given time, and where switching between states is treated probabilistically...

 / Mar (U:D)

Negative probability
Negative probability
In 1942, Paul Dirac wrote a paper "The Physical Interpretation of Quantum Mechanics" where he introduced the concept of negative energies and negative probabilities:...




Nonextensive entropy
Nonextensive entropy
Entropy is considered to be an extensive property, i.e., its value depends on the amount of material present. Constantino Tsallis has proposed a nonextensive entropy, which is a generalization of the traditional Boltzmann-Gibbs entropy....



Partition function
Partition function (mathematics)
The partition function or configuration integral, as used in probability theory, information science and dynamical systems, is an abstraction of the definition of a partition function in statistical mechanics. It is a special case of a normalizing constant in probability theory, for the Boltzmann...



Percolation theory
Percolation theory
In mathematics, percolation theory describes the behavior of connected clusters in a random graph. The applications of percolation theory to materials science and other domains are discussed in the article percolation....

 / rgr (L:B)

Percolation threshold
Percolation threshold
Percolation threshold is a mathematical term related to percolation theory, which is the formation of long-range connectivity in random systems. Below the threshold a giant connected component does not exist while above it, there exists a giant component of the order of system size...

 / rgr

Probability amplitude
Probability amplitude
In quantum mechanics, a probability amplitude is a complex number whose modulus squared represents a probability or probability density.For example, if the probability amplitude of a quantum state is \alpha, the probability of measuring that state is |\alpha|^2...



Quantum Markov chain
Quantum Markov chain
In mathematics, the quantum Markov chain is a reformulation of the ideas of a classical Markov chain, replacing the classical definitions of probability with quantum probability...

 / Mar

Quantum probability
Quantum probability
Quantum probability was developed in the 1980s as a noncommutative analog of the Kolmogorovian theory of stochastic processes. One of its aims is to clarify the mathematical foundations of quantum theory and its statistical interpretation....



Scaling limit
Scaling limit
In physics or mathematics, the scaling limit is a term applied to the behaviour of a lattice model in the limit of the lattice spacing going to zero. A lattice model which approximates a continuum quantum field theory in the limit as the lattice spacing goes to zero corresponds to finding a second...



Statistical mechanics
Statistical mechanics
Statistical mechanics or statistical thermodynamics (the terms statistical mechanics and statistical thermodynamics are used interchangeably)...



Statistical physics
Statistical physics
Statistical physics is the branch of physics that uses methods of probability theory and statistics, and particularly the mathematical tools for dealing with large populations and approximations, in solving physical problems. It can describe a wide variety of fields with an inherently stochastic...



Vacuum expectation value
Vacuum expectation value
In quantum field theory the vacuum expectation value of an operator is its average (expected) value in the vacuum. The vacuum expectation value of an operator O is usually denoted by \langle O\rangle...




Genetics (gnt)

Ewens's sampling formula
Ewens's sampling formula
In population genetics, Ewens's sampling formula describes the probabilities associated with counts of how many different alleles are observed a given number of times in the sample....
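The formula itself is not quoted in the excerpt above; the following Python sketch (function name is illustrative) evaluates the standard form, in which the probability of observing a[j-1] allele types represented exactly j times depends on the scaled mutation rate theta:

```python
from math import factorial

def ewens_probability(a, theta):
    """Ewens's sampling formula: probability that a sample of
    n = sum(j * a[j-1]) genes contains a[j-1] allele types each
    represented exactly j times, for scaled mutation rate theta."""
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = 1.0
    for k in range(n):
        rising *= theta + k        # theta * (theta+1) * ... * (theta+n-1)
    prob = factorial(n) / rising
    for j, aj in enumerate(a, start=1):
        prob *= theta ** aj / (j ** aj * factorial(aj))
    return prob
```

For a sample of two genes the two possible configurations have probabilities theta/(theta+1) and 1/(theta+1), which sum to one.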



Hardy–Weinberg principle

Population genetics
Population genetics
Population genetics is the study of allele frequency distribution and change under the influence of the four main evolutionary processes: natural selection, genetic drift, mutation and gene flow. It also takes into account the factors of recombination, population subdivision and population...




Punnett square
Punnett square
The Punnett square is a diagram that is used to predict an outcome of a particular cross or breeding experiment. It is named after Reginald C. Punnett, who devised the approach, and is used by biologists to determine the probability of an offspring's having a particular genotype...
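The prediction can be reproduced by enumerating every pairing of parental alleles; a minimal single-gene sketch (function name is this sketch's, not standard):

```python
from collections import Counter
from itertools import product

def punnett(parent1, parent2):
    """Genotype probabilities for a single-gene cross such as 'Aa' x 'Aa',
    obtained by pairing each allele of one parent with each of the other."""
    counts = Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))
    total = sum(counts.values())
    return {genotype: c / total for genotype, c in counts.items()}
```

The classic monohybrid cross 'Aa' x 'Aa' yields the familiar 1:2:1 ratio.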



Ronald Fisher
Ronald Fisher
Sir Ronald Aylmer Fisher FRS was an English statistician, evolutionary biologist, eugenicist and geneticist. Among other things, Fisher is well known for his contributions to statistics by creating Fisher's exact test and Fisher's equation...




Stochastic process (spr)

Anomaly time series
Anomaly time series
In atmospheric sciences and some other applications of statistics, an anomaly time series is the time series of deviations of a quantity from some mean. Similarly a standardized anomaly series contains values of deviations divided by a standard deviation...
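A standardized anomaly series, as described, is simply deviations from the mean divided by the standard deviation; a minimal sketch (using the sample standard deviation, an assumption of this sketch):

```python
def standardized_anomalies(series):
    """Standardized anomaly series: deviations from the mean,
    divided by the sample standard deviation."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / (n - 1)) ** 0.5
    return [(x - mean) / sd for x in series]
```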



Arrival theorem

Beverton–Holt model

Burke's theorem
Burke's theorem
In probability theory, Burke's theorem is a theorem in queueing theory, proved by Paul J. Burke while working at Bell Telephone Laboratories, which states that for an M/M/1, M/M/m or M/M/∞ queue in the steady state with arrivals forming a Poisson process with rate parameter λ: the departure process is a Poisson...



Buzen's algorithm
Buzen's algorithm
In queueing theory, a discipline within the mathematical theory of probability, Buzen's algorithm is an algorithm for calculating the normalization constant G in the Gordon–Newell theorem. This method was first proposed by Jeffrey P. Buzen in 1973. Once G is computed the probability distributions...
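The convolution at the heart of the algorithm can be sketched in Python (the function name and the restriction to single-server stations are this sketch's assumptions):

```python
def buzen_G(X, N):
    """Buzen's convolution algorithm: normalizing constants G(0..N) for a
    closed Gordon-Newell network of single-server stations with relative
    utilizations X[i], in O(len(X) * N) time and O(N) space."""
    g = [1.0] + [0.0] * N          # network with zero stations
    for x in X:                    # fold in one station at a time
        for n in range(1, N + 1):
            g[n] += x * g[n - 1]   # g[n-1] already reflects this station
    return g
```

For two stations with relative utilizations 1 and 2, the states with two customers contribute 1 + 2 + 4 = 7, matching the direct enumeration.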



Disorder problem
Disorder problem
In the study of stochastic processes in mathematics, a disorder problem has been formulated by Kolmogorov. Specifically, the problem is to use ongoing observations of a stochastic process to decide whether or not to raise an alarm that the probabilistic properties of the process have changed. An...



Erlang unit
Erlang unit
The erlang is a dimensionless unit that is used in telephony as a statistical measure of offered load or carried load on service-providing elements such as telephone circuits or telephone switching equipment. It is named after the Danish telephone engineer A. K...



G-network

Gordon–Newell theorem
Gordon–Newell theorem
In queueing theory, a discipline within the mathematical theory of probability, the Gordon–Newell theorem is an extension of Jackson's theorem from open queueing networks to closed queueing networks of exponential servers. We cannot apply Jackson's theorem to closed networks because the queue...



Innovation
Innovation (signal processing)
In time series analysis — as conducted in statistics, signal processing, and many other fields — the innovation is the difference between the observed value of a variable at time t and the optimal forecast of that value based on information available prior to time t...



Jump diffusion

M/M/1 model
M/M/1 model
In queueing theory, a discipline within the mathematical theory of probability, an M/M/1 queue represents the queue length in a system having a single server, where arrivals are determined by a Poisson process and job service times have an exponential distribution. The model name is written in...
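With arrival rate λ below service rate μ, the standard steady-state metrics follow directly; a minimal sketch (function and key names are illustrative):

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics of an M/M/1 queue; requires lam < mu."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = lam / mu                                   # server utilization
    return {
        "utilization": rho,
        "mean_number_in_system": rho / (1 - rho),    # L
        "mean_time_in_system": 1 / (mu - lam),       # W; Little's law gives L = lam * W
    }
```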



M/M/c model
M/M/c model
In the mathematical theory of random processes, the M/M/c queue is a multi-server queue model. It is a generalisation of the M/M/1 queue.Following Kendall's notation it indicates a system where:*Arrivals are a Poisson process...




Mark V Shaney
Mark V Shaney
Mark V Shaney is a fake Usenet user whose postings were generated by using Markov chain techniques. The name is a play on the words "Markov chain". Many readers were fooled into thinking that the quirky, sometimes uncannily topical posts were written by a real person. Bruce Ellis did the...
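The underlying technique can be sketched in a few lines of Python: train a word-level Markov chain on a corpus, then walk the chain (function names are illustrative; the real program's details differ):

```python
import random
from collections import defaultdict

def train(words, order=2):
    """Word-level Markov model: map each k-word state to its observed followers."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length, seed=0):
    """Walk the chain from a random starting state, stopping early at a dead end."""
    rng = random.Random(seed)
    state = rng.choice(list(model))
    out = list(state)
    for _ in range(length - len(state)):
        followers = model.get(state)
        if not followers:
            break
        nxt = rng.choice(followers)
        out.append(nxt)
        state = state[1:] + (nxt,)
    return " ".join(out)
```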



Markov chain Monte Carlo
Markov chain Monte Carlo
Markov chain Monte Carlo methods are a class of algorithms for sampling from probability distributions based on constructing a Markov chain that has the desired distribution as its equilibrium distribution. The state of the chain after a large number of steps is then used as a sample of the...
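The simplest member of the class is random-walk Metropolis; a minimal sketch targeting an unnormalized log-density (function name and parameters are this sketch's choices):

```python
import math
import random

def metropolis(logp, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis: a Markov chain whose equilibrium distribution
    has (unnormalized) log-density logp. Gaussian proposals of width scale."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        delta = logp(prop) - logp(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = prop                 # accept the proposal
        samples.append(x)            # otherwise keep the current state
    return samples
```

Run against logp(x) = -x^2/2, the chain's samples approximate a standard normal distribution.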



Markov switching multifractal
Markov switching multifractal
In financial econometrics, the Markov-switching multifractal is a model of asset returns that incorporates stochastic volatility components of heterogeneous durations. MSM captures the outliers, long-memory-like volatility persistence and power variation of financial returns...



Oscillator linewidth
Oscillator linewidth
The concept of a linewidth is borrowed from laser spectroscopy. The linewidth of a laser is a measure of its phase noise. The spectrogram of a laser is produced by passing its light through a prism. The spectrogram of the output of a pure noise-free laser will consist of a single infinitely thin...



Poisson hidden Markov model
Poisson hidden Markov model
In statistics, Poisson hidden Markov models are a special case of hidden Markov models where a Poisson process has a rate which varies in association with changes between the different states of a Markov model...



Population process
Population process
In applied probability, a population process is a Markov chain in which the state of the chain is analogous to the number of individuals in a population, and changes to the state are analogous to the addition or removal of individuals from the population. Although named by analogy to biological...



Product form solution
Product form solution
In probability theory, a product form solution is a particularly efficient form of solution for determining some metric of a system with distinct sub-components, where the metric for the collection of components can be written as a product of the metric across the different components...

 / Mar

Quasireversibility
Quasireversibility
In probability theory, specifically queueing theory, quasireversibility is a property of some queues. The concept was first identified by Richard R. Muntz and further developed by Frank Kelly. Quasireversibility differs from reversibility in that a stronger condition is imposed on arrival rates...



Queueing theory
Queueing theory
Queueing theory is the mathematical study of waiting lines, or queues. The theory enables mathematical analysis of several related processes, including arriving at the queue, waiting in the queue, and being served at the front of the queue...



Recurrence period density entropy
Recurrence period density entropy
Recurrence period density entropy is a method, in the fields of dynamical systems, stochastic processes, and time series analysis, for determining the periodicity, or repetitiveness, of a signal....



Variance gamma process
Variance gamma process
In the theory of stochastic processes, a part of the mathematical theory of probability, the variance gamma process, also known as Laplace motion, is a Lévy process determined by a random time change. The process has finite moments, distinguishing it from many Lévy processes. There is no diffusion...

 / fnc

Wiener equation


Geometric probability (geo)

Boolean model

Buffon's needle
Buffon's needle
In mathematics, Buffon's needle problem is a question first posed in the 18th century by Georges-Louis Leclerc, Comte de Buffon. Buffon's needle was the earliest problem in geometric probability to be solved; it can be solved using integral geometry...
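For a needle no longer than the line spacing, the crossing probability is 2*l/(pi*t), so repeated drops yield a Monte Carlo estimate of pi; a minimal sketch (function name and defaults are illustrative):

```python
import math
import random

def estimate_pi_buffon(trials, needle=1.0, spacing=2.0, seed=1):
    """Estimate pi by simulating needle drops onto ruled lines.
    Requires needle <= spacing; P(cross) = 2*needle / (pi*spacing)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = rng.uniform(0, spacing / 2)       # needle center to nearest line
        theta = rng.uniform(0, math.pi / 2)   # acute angle with the lines
        if y <= (needle / 2) * math.sin(theta):
            hits += 1
    return 2 * needle * trials / (spacing * hits)
```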



Geometric probability
Geometric probability
Problems of the following type, and their solution techniques, were first studied in the 19th century, and the general topic became known as geometric probability....



Hadwiger's theorem
Hadwiger's theorem
In integral geometry, Hadwiger's theorem characterises the valuations on convex bodies in Rn. It was proved by Hugo Hadwiger....




Integral geometry
Integral geometry
In mathematics, integral geometry is the theory of measures on a geometrical space invariant under the symmetry group of that space. In more recent times, the meaning has been broadened to include a view of invariant transformations from the space of functions on one geometrical space to the...



Random coil
Random coil
A random coil is a polymer conformation where the monomer subunits are oriented randomly while still being bonded to adjacent units. It is not one specific shape, but a statistical distribution of shapes for all the chains in a population of macromolecules...



Stochastic geometry
Stochastic geometry
In mathematics, stochastic geometry is the study of random spatial patterns. At the heart of the subject lies the study of random point patterns...



Vitale's random Brunn–Minkowski inequality


Historical (hst)

History of probability
History of probability
Probability has a dual aspect: on the one hand the probability or likelihood of hypotheses given the evidence for them, and on the other hand the behavior of stochastic processes such as the throwing of dice or coins...



Newton–Pepys problem
Newton–Pepys problem
The Newton–Pepys problem is a probability problem concerning the probability of throwing sixes from a certain number of dice. In 1693 Samuel Pepys and Isaac Newton corresponded over a problem posed by Pepys in relation to a wager he planned to make...



Problem of points
Problem of points
The problem of points, also called the problem of division of the stakes, is a classical problem in probability theory. One of the famous problems that motivated the beginnings of modern probability theory in the 17th century, it led Blaise Pascal to the first explicit reasoning about what today is...



Subjectivism#Subjectivism in probability / Bay


Sunrise problem
Sunrise problem
The sunrise problem can be expressed as follows: "What is the probability that the sun will rise tomorrow?" The sunrise problem illustrates the difficulty of using probability theory when evaluating the plausibility of statements or beliefs....



The Doctrine of Chances
The Doctrine of Chances
The Doctrine of Chances was the first textbook on probability theory, written by 18th-century French mathematician Abraham de Moivre and first published in 1718. De Moivre wrote in English because he resided in England at the time, having fled France to escape the persecution of Huguenots...




Miscellany (msc)

B-convex space
B-convex space
In functional analysis, the class of B-convex spaces is a class of Banach spaces. The concept of B-convexity was defined, and used to characterize Banach spaces that have the strong law of large numbers, by Anatole Beck in 1962; accordingly, "B-convexity" is understood as an abbreviation of Beck...



Conditional event algebra
Conditional event algebra
A conditional event algebra is an algebraic structure whose domain consists of logical objects described by statements of forms such as "If A, then B," "B, given A," and "B, in case A." Unlike the standard Boolean algebra of events, a CEA allows the defining of a probability function, P, which...



Error function
Error function
In mathematics, the error function is a special function of sigmoid shape which occurs in probability, statistics and partial differential equations...
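Its role in probability is tied to the standard normal distribution via Phi(x) = (1 + erf(x/sqrt(2)))/2; a minimal sketch using the standard-library erf:

```python
import math

def normal_cdf(x):
    """Standard normal CDF expressed through the error function:
    Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```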



Goodman–Nguyen–van Fraassen algebra

List of mathematical probabilists


Nuisance variable
Nuisance variable
In statistics, a nuisance parameter is any parameter which is not of immediate interest but which must be accounted for in the analysis of those parameters which are of interest...



Probabilistic encryption
Probabilistic encryption
Probabilistic encryption is the use of randomness in an encryption algorithm, so that when encrypting the same message several times it will, in general, yield different ciphertexts...



Probabilistic logic
Probabilistic logic
The aim of a probabilistic logic is to combine the capacity of probability theory to handle uncertainty with the capacity of deductive logic to exploit structure. The result is a richer and more expressive formalism with a broad range of possible application areas...



Probabilistic proofs of non-probabilistic theorems
Probabilistic proofs of non-probabilistic theorems
Probability theory routinely uses results from other fields of mathematics. The opposite cases, collected below, are relatively rare; however, probability theory is used systematically in combinatorics via the probabilistic method. They are particularly used for non-constructive proofs....



Pseudocount
Pseudocount
A pseudocount is an amount added to the number of observed cases in order to change the expected probability in a model of those data, when not known to be zero. Depending on the prior knowledge, which is sometimes a subjective value, a pseudocount may have any non-negative finite value...
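A common instance is additive (Laplace) smoothing of category counts; a minimal sketch (function name is illustrative):

```python
def smoothed_probs(counts, alpha=1.0):
    """Additive (Laplace) smoothing: add pseudocount alpha to every
    category so that unseen outcomes receive nonzero probability."""
    total = sum(counts.values()) + alpha * len(counts)
    return {k: (c + alpha) / total for k, c in counts.items()}
```

With alpha = 1, a category observed zero times out of four observations over three categories gets probability 1/7 rather than 0.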




Counters of articles

"Core": 455 (570)

"Around": 198 (200)


"Core selected": 311 (358)

"Core others": 144 (212)

Here k(n) means: n links to k articles. (Some articles are linked more than once.)
The source of this article is wikipedia, the free encyclopedia.  The text of this article is licensed under the GFDL.
 