# Jensen's inequality

*For Jensen's inequality for analytic functions, see Jensen's formula.*

In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function
of an integral
to the integral of the convex function. It was proved by Jensen in 1906. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean after convex transformation; it is a simple corollary that the opposite is true of concave transformations. Jensen's inequality generalizes the statement that the secant line
of a convex function lies above the graph of the function, which is Jensen's inequality for two points: the secant line consists of weighted means of the convex function, $t f(x_1) + (1-t) f(x_2),$ while the graph of the function is the convex function of the weighted means, $f(t x_1 + (1-t) x_2).$ There are also converses of Jensen's inequality, which give upper bounds on the integral of the convex function. In the context of probability theory, it is generally stated in the following form: if X is a random variable
and $\varphi$ is a convex function, then $\varphi(\mathbb{E}[X]) \leq \mathbb{E}[\varphi(X)].$

## Statements

The classical form of Jensen's inequality involves several numbers and weights. The inequality can be stated quite generally using either the language of measure theory
or (equivalently) probability. In the probabilistic setting, the inequality can be further generalized to its full strength.

### Finite form

For a real convex function $\varphi$, numbers x1, x2, ..., xn in its domain, and positive weights ai, Jensen's inequality can be stated as: $\varphi\left(\frac{\sum a_i x_i}{\sum a_j}\right) \le \frac{\sum a_i \varphi(x_i)}{\sum a_j} \qquad\qquad (1)$ and the inequality is reversed if $\varphi$ is concave, which is $\varphi\left(\frac{\sum a_i x_i}{\sum a_j}\right) \geq \frac{\sum a_i \varphi(x_i)}{\sum a_j}. \qquad\qquad (2)$ As a particular case, if the weights ai are all equal, then (1) and (2) become $\varphi\left(\frac{\sum x_i}{n}\right) \le \frac{\sum \varphi(x_i)}{n} \qquad\qquad (3)$ $\varphi\left(\frac{\sum x_i}{n}\right) \geq \frac{\sum \varphi(x_i)}{n} \qquad\qquad (4)$ For instance, the function log(x) is concave (note that the two-point case of the inequality is exactly the definition of convexity or concavity), so substituting $\varphi(x) = \log(x)$ in (4) establishes the logarithm of the familiar arithmetic mean-geometric mean inequality: $\frac{x_1 + x_2 + \cdots + x_n}{n} \geq \sqrt[n]{x_1 x_2 \cdots x_n}.$ The variable x may, if required, be a function of another variable (or set of variables) t, so that xi = g(ti). All of this carries over directly to the general continuous case: the weights ai are replaced by a non-negative integrable function f(x), such as a probability distribution, and the summations are replaced by integrals.
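The finite form lends itself to a direct numerical check. The Python sketch below (the function names and the sample data are illustrative) verifies (1) for a convex φ, (2) for a concave φ, and the AM-GM consequence of (4):

```python
import math
import random

def weighted_jensen_gap(phi, xs, weights):
    """Return the weighted mean of phi(x) minus phi of the weighted mean.

    By inequality (1) this is >= 0 for convex phi; by (2) it is <= 0
    for concave phi."""
    total = sum(weights)
    mean_x = sum(w * x for w, x in zip(weights, xs)) / total
    mean_phi = sum(w * phi(x) for w, x in zip(weights, xs)) / total
    return mean_phi - phi(mean_x)

random.seed(0)
xs = [random.uniform(0.1, 10.0) for _ in range(8)]
weights = [random.uniform(0.1, 1.0) for _ in range(8)]

gap_convex = weighted_jensen_gap(lambda x: x * x, xs, weights)  # phi convex
gap_concave = weighted_jensen_gap(math.log, xs, weights)        # phi concave

# Equal weights with phi = log give (4), i.e. the AM-GM inequality.
am = sum(xs) / len(xs)
gm = math.prod(xs) ** (1.0 / len(xs))
```

Strict convexity of x² and strict concavity of log make both gaps strictly nonzero for non-constant data.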

### Measure-theoretic and probabilistic form

Let (Ω, A, μ) be a measure space, such that μ(Ω) = 1. If g is a real-valued function that is μ-integrable, and if $\varphi$ is a convex function
on the real line, then: $\varphi\left(\int_\Omega g\, d\mu\right) \le \int_\Omega \varphi \circ g\, d\mu.$ In real analysis, we may require an estimate on $\varphi\left(\int_a^b f(x)\, dx\right)$ where $a,b$ are real numbers, and $f:[a,b]\to\mathbb{R}$ is a non-negative real-valued function that is Lebesgue-integrable. In this case, the Lebesgue measure of $[a,b]$ need not be unity. However, by integration by substitution, the interval can be rescaled so that it has measure unity. Then Jensen's inequality can be applied to get $\varphi\left(\int_a^b f(x)\, dx\right) \le \frac{1}{b-a}\int_a^b \varphi((b-a)f(x))\,dx.$ The same result can be equivalently stated in a probability theory
setting, by a simple change of notation. Let $(\Omega, \mathfrak{F}, \mathbb{P})$ be a probability space, X an integrable real-valued random variable
and $\varphi$ a convex function. Then: $\varphi(\mathbb{E}[X]) \leq \mathbb{E}[\varphi(X)].$ In this probability setting, the measure μ is intended as a probability $\mathbb{P}$, the integral with respect to μ as an expected value
$\mathbb{E}$, and the function g as a random variable
X.
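A Monte Carlo sketch of this probabilistic form (the distribution and the convex function are arbitrary illustrative choices): for $\varphi(x) = x^2$, the Jensen gap $\mathbb{E}[\varphi(X)] - \varphi(\mathbb{E}[X])$ is exactly the variance of X.

```python
import random

random.seed(1)

# X ~ Normal(0, 1); phi(x) = x^2 is convex, and for this choice the gap
# E[phi(X)] - phi(E[X]) equals Var(X) = 1.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
phi = lambda x: x * x

mean_x = sum(samples) / len(samples)
lhs = phi(mean_x)                                  # phi(E[X]), close to 0
rhs = sum(phi(x) for x in samples) / len(samples)  # E[phi(X)], close to 1
```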

### General inequality in a probabilistic setting

More generally, let T be a real topological vector space, and X a T-valued integrable random variable. In this general setting, integrable means that there exists an element $\mathbb{E}[X]$ in T, such that for any element z in the dual space
of T: $\mathbb{E}|\langle z, X \rangle| < \infty$ and $\langle z, \mathbb{E}[X]\rangle = \mathbb{E}[\langle z, X \rangle]$. Then, for any measurable convex function φ and any sub-σ-algebra $\mathfrak{G}$ of $\mathfrak{F}$: $\varphi\left(\mathbb{E}[X|\mathfrak{G}]\right) \leq \mathbb{E}[\varphi(X)|\mathfrak{G}].$ Here $\mathbb{E}[\,\cdot\,|\mathfrak{G}]$ stands for the expectation conditioned
on the σ-algebra $\mathfrak{G}$. This general statement reduces to the previous ones when the topological vector space T is the real axis and $\mathfrak{G}$ is the trivial σ-algebra $\{\varnothing, \Omega\}$. In case the sub-σ-algebra is generated by a measurable function $Y$, the statement can be given as $\varphi\left(\mathbb{E}[X|Y]\circ Y\right) \leq \mathbb{E}[\varphi(X)|Y]\circ Y$ or $\varphi\circ\mathbb{E}[X|Y] \leq \mathbb{E}[\varphi\circ X|Y].$
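A small discrete sketch of the conditional statement: partition a finite sample space by the value of Y (which generates the sub-σ-algebra) and check $\varphi(\mathbb{E}[X|Y]) \le \mathbb{E}[\varphi(X)|Y]$ on each cell. All the specific numbers below are arbitrary.

```python
from collections import defaultdict

# Equally likely outcomes; the first component is the value of Y,
# the second is the value of X.
outcomes = [("a", 1.0), ("a", 3.0), ("b", -2.0), ("b", 4.0), ("b", 0.5)]
phi = lambda x: x * x  # a convex function

cells = defaultdict(list)
for y, x in outcomes:
    cells[y].append(x)

# Conditional Jensen holds cell by cell: phi(E[X|Y=y]) <= E[phi(X)|Y=y].
conditional_ok = all(
    phi(sum(xs) / len(xs)) <= sum(phi(x) for x in xs) / len(xs)
    for xs in cells.values()
)
```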

## Proofs

Jensen's inequality can be proved in several ways, and three proofs corresponding to the different statements above will be offered. Before embarking on these mathematical derivations, however, it is worth analyzing an intuitive graphical argument based on the probabilistic case where X is a real number (see figure). Assuming a hypothetical distribution of X values, one can immediately identify the position of $\mathbb{E}[X]$ and its image $\varphi(\mathbb{E}[X])$ in the graph. Noticing that for convex mappings $Y = \varphi(X)$ the corresponding distribution of Y values is increasingly "stretched out" for increasing values of X, it is easy to see that the distribution of Y is broader in the interval corresponding to X > X0 and narrower in X < X0 for any X0; in particular, this is also true for $X_0 = \mathbb{E}[X]$. Consequently, in this picture the expectation of Y will always shift upwards with respect to the position of $\varphi(\mathbb{E}[X])$, and this "proves" the inequality, i.e. $\mathbb{E}[Y] = \mathbb{E}[\varphi(X)] \geq \varphi(\mathbb{E}[X]),$ with equality when φ is not strictly convex, e.g. when it is a straight line, or when X follows a degenerate distribution (i.e. is a constant). The proofs below formalize this intuitive notion.

### Proof 1 (finite form)

If λ1 and λ2 are two arbitrary positive real numbers such that λ1 + λ2 = 1, then convexity of $\varphi$ implies $\varphi(\lambda_1 x_1+\lambda_2 x_2)\leq \lambda_1\,\varphi(x_1)+\lambda_2\,\varphi(x_2) \text{ for any } x_1,\,x_2.$ This can be easily generalized: if λ1, λ2, ..., λn are positive real numbers such that λ1 + ... + λn = 1, then $\varphi(\lambda_1 x_1+\lambda_2 x_2+\cdots+\lambda_n x_n)\leq \lambda_1\,\varphi(x_1)+\lambda_2\,\varphi(x_2)+\cdots+\lambda_n\,\varphi(x_n),$ for any x1, ..., xn. This finite form of Jensen's inequality can be proved by induction: by the convexity hypothesis, the statement is true for n = 2. Suppose it is true for some n; we prove it for n + 1. At least one of the λi is strictly positive, say λ1; therefore by the convexity inequality: \begin{align} \varphi\left(\sum_{i=1}^{n+1}\lambda_i x_i\right) & = \varphi\left(\lambda_1 x_1+(1-\lambda_1)\sum_{i=2}^{n+1} \frac{\lambda_i}{1-\lambda_1} x_i\right) \\ & \leq \lambda_1\,\varphi(x_1)+(1-\lambda_1) \varphi\left(\sum_{i=2}^{n+1} \frac{\lambda_i}{1-\lambda_1} x_i\right). \end{align} Since $\sum_{i=2}^{n+1} \lambda_i/(1-\lambda_1) = 1$, one can apply the induction hypothesis to the last term in the previous formula to obtain the result, namely the finite form of Jensen's inequality. In order to obtain the general inequality from this finite form, one needs to use a density argument. The finite form can be rewritten as: $\varphi\left(\int x\,d\mu_n(x) \right)\leq \int \varphi(x)\,d\mu_n(x),$ where μn is a measure given by an arbitrary convex combination
of Dirac deltas: $\mu_n=\sum_{i=1}^n \lambda_i \delta_{x_i}.$ Since convex functions are continuous, and since convex combinations of Dirac deltas are weakly
dense
in the set of probability measures (as could be easily verified), the general statement is obtained simply by a limiting procedure.

### Proof 2 (measure-theoretic form)

Let g be a real-valued μ-integrable function on a probability space Ω, and let φ be a convex function on the real numbers. Since φ is convex, at each real number x we have a nonempty set of subderivatives, which may be thought of as lines touching the graph of φ at x but lying at or below the graph of φ at all points. Now, if we define $x_0:=\int_\Omega g\, d\mu,$ then, because of the existence of subderivatives for convex functions, we may choose a and b such that $ax + b \leq \varphi(x)$ for all real x and $ax_0+ b = \varphi(x_0)$. But then we have $\varphi(g(x)) \geq ag(x)+ b$ for all x. Since we have a probability measure with μ(Ω) = 1, the integral is monotone, so that $\int_\Omega \varphi\circ g\, d\mu \geq \int_\Omega (ag + b)\, d\mu = a\int_\Omega g\, d\mu + b = ax_0 + b = \varphi(x_0) = \varphi\left(\int_\Omega g\, d\mu\right),$ which proves the claim.
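The supporting-line idea can be checked numerically. Here φ = exp and the tangent at x0 plays the role of the subderivative line; x0 and the grid are arbitrary illustrative choices.

```python
import math

phi = math.exp
x0 = 0.7                   # stands in for the integral of g with respect to mu
a = math.exp(x0)           # phi'(x0) is a valid subderivative at x0
b = phi(x0) - a * x0       # chosen so that a*x0 + b == phi(x0)

# The line a*x + b touches the graph at x0 and stays at or below it elsewhere.
grid = [x0 + 0.01 * k for k in range(-300, 301)]
line_supports = all(a * x + b <= phi(x) + 1e-12 for x in grid)
line_touches = abs((a * x0 + b) - phi(x0)) < 1e-12
```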

### Proof 3 (general inequality in a probabilistic setting)

Let X be an integrable random variable that takes values in a real topological vector space T. Since $\varphi: T \to \mathbb{R}$ is convex, for any $x,y \in T$, the quantity $\frac{\varphi(x+\theta\,y)-\varphi(x)}{\theta}$ is decreasing as θ approaches 0+. In particular, the subdifferential of φ evaluated at x in the direction y is well-defined by $(D\varphi)(x)\cdot y:=\lim_{\theta \downarrow 0} \frac{\varphi(x+\theta\,y)-\varphi(x)}{\theta}=\inf_{\theta \neq 0} \frac{\varphi(x+\theta\,y)-\varphi(x)}{\theta}.$ It is easily seen that the subdifferential is linear in y and, since the infimum taken on the right-hand side of the previous formula is smaller than the value of the same term for θ = 1, one gets $\varphi(x)\leq \varphi(x+y)-(D\varphi)(x)\cdot y.$ In particular, for an arbitrary sub-σ-algebra $\mathfrak{G}$ we can evaluate the last inequality when $x = \mathbb{E}[X|\mathfrak{G}]$ and $y = X-\mathbb{E}[X|\mathfrak{G}]$ to obtain $\varphi(\mathbb{E}[X|\mathfrak{G}])\leq \varphi(X)-(D\varphi)(\mathbb{E}[X|\mathfrak{G}])\cdot (X-\mathbb{E}[X|\mathfrak{G}]).$ Now, if we take the expectation conditioned on $\mathfrak{G}$ on both sides of the previous expression, we get the result, since $\mathbb{E}\left[(D\varphi)(\mathbb{E}[X|\mathfrak{G}])\cdot (X-\mathbb{E}[X|\mathfrak{G}])\,\middle|\,\mathfrak{G}\right]=(D\varphi)(\mathbb{E}[X|\mathfrak{G}])\cdot \mathbb{E}\left[X-\mathbb{E}[X|\mathfrak{G}]\,\middle|\,\mathfrak{G}\right]=0,$ by the linearity of the subdifferential in the y variable, and the following well-known property of the conditional expectation: $\mathbb{E}\left[\mathbb{E}[X|\mathfrak{G}]\,\middle|\,\mathfrak{G}\right]=\mathbb{E}[X|\mathfrak{G}].$

### Form involving a probability density function

Suppose Ω is a measurable subset of the real line and f(x) is a non-negative function such that $\int_{-\infty}^\infty f(x)\,dx = 1.$ In probabilistic language, f is a probability density function. Then Jensen's inequality becomes the following statement about convex integrals: if g is any real-valued measurable function and φ is convex over the range of g, then $\varphi\left(\int_{-\infty}^\infty g(x)f(x)\, dx\right) \le \int_{-\infty}^\infty \varphi(g(x)) f(x)\, dx.$ If g(x) = x, then this form of the inequality reduces to a commonly used special case: $\varphi\left(\int_{-\infty}^\infty x\, f(x)\, dx\right) \le \int_{-\infty}^\infty \varphi(x)\,f(x)\, dx.$
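A midpoint-rule sketch of the density form, with f the Uniform(0, 1) density, g(x) = x and φ(t) = t² (illustrative choices): the left side is (1/2)² = 1/4 and the right side is 1/3.

```python
# Midpoint-rule integration over [0, 1]; the density f vanishes elsewhere.
n = 100_000
dx = 1.0 / n
xs = [(i + 0.5) * dx for i in range(n)]

f = lambda x: 1.0        # Uniform(0, 1) density
g = lambda x: x
phi = lambda t: t * t    # convex

lhs = phi(sum(g(x) * f(x) * dx for x in xs))   # phi of the integral: 1/4
rhs = sum(phi(g(x)) * f(x) * dx for x in xs)   # integral of phi(g) f: 1/3
```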

### Alternative finite form

If $\Omega$ is some finite set $\{x_1,x_2,\ldots,x_n\}$, and if $\mu$ is a counting measure
on $\Omega$, then the general form reduces to a statement about sums: $\varphi\left(\sum_{i=1}^{n} g(x_i)\lambda_i \right) \le \sum_{i=1}^{n} \varphi(g(x_i))\lambda_i,$ provided that $\lambda_1 + \lambda_2 + \cdots + \lambda_n = 1$ and $\lambda_i \ge 0.$ There is also an infinite discrete form.

### Statistical physics

Jensen's inequality is of particular importance in statistical physics when the convex function is an exponential, giving: $e^{\langle X \rangle} \leq \left\langle e^X \right\rangle,$ where angle brackets denote expected values with respect to some probability distribution
in the random variable
X. The proof in this case is very simple (cf. Chandler, Sec. 5.5). The desired inequality follows directly, by writing $\left\langle e^X \right\rangle = e^{\langle X \rangle} \left\langle e^{X - \langle X \rangle} \right\rangle$ and then applying the inequality $e^X \geq 1+X$ to the final exponential.
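A Monte Carlo check of $e^{\langle X \rangle} \leq \langle e^X \rangle$ with X uniform on [-1, 1] (an arbitrary illustrative choice): the left side tends to e⁰ = 1 and the right side to sinh(1) ≈ 1.175.

```python
import math
import random

random.seed(2)

xs = [random.uniform(-1.0, 1.0) for _ in range(50_000)]

avg_x = sum(xs) / len(xs)
lhs = math.exp(avg_x)                          # e^<X>, close to 1
rhs = sum(math.exp(x) for x in xs) / len(xs)   # <e^X>, close to sinh(1)
```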

### Information theory

If p(x) is the true probability distribution for x, and q(x) is another distribution, then applying Jensen's inequality for the random variable Y(x) = q(x)/p(x) and the convex function $\varphi(y) = -\log(y)$ gives $\mathbb{E}[\varphi(Y)] \ge \varphi(\mathbb{E}[Y])$ $\Rightarrow \int p(x) \log \frac{p(x)}{q(x)} \, dx \ge - \log \int p(x) \frac{q(x)}{p(x)} \, dx$ $\Rightarrow \int p(x) \log \frac{p(x)}{q(x)} \, dx \ge 0$ $\Rightarrow - \int p(x) \log q(x) \, dx \ge - \int p(x) \log p(x) \, dx,$ a result called Gibbs' inequality. It shows that the average message length is minimised when codes are assigned on the basis of the true probabilities p rather than any other distribution q. The quantity that is non-negative is called the Kullback–Leibler divergence of q from p.
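A discrete sketch of Gibbs' inequality: for any two distributions p and q over the same finite alphabet, the Kullback–Leibler divergence is non-negative and the cross entropy dominates the entropy. The specific distributions below are arbitrary.

```python
import math

p = [0.5, 0.3, 0.2]     # "true" distribution
q = [0.25, 0.25, 0.5]   # any other distribution on the same alphabet

# KL(p || q), the non-negative quantity from the derivation above
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Coding interpretation: average message length is minimised by p itself.
cross_entropy = -sum(pi * math.log(qi) for pi, qi in zip(p, q))
entropy = -sum(pi * math.log(pi) for pi in p)
```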

### Rao–Blackwell theorem

*Main article: Rao–Blackwell theorem*

If L is a convex function, then from Jensen's inequality we get $L(\mathbb{E}[\delta(X)]) \le \mathbb{E}[L(\delta(X))] \quad \Rightarrow \quad \mathbb{E}[L(\mathbb{E}[\delta(X)])] \le \mathbb{E}[L(\delta(X))].$ So if δ(X) is some estimator
of an unobserved parameter θ given a vector of observables X, and if T(X) is a sufficient statistic for θ, then an improved estimator, in the sense of having a smaller expected loss L, can be obtained by calculating $\delta_1(X) = \mathbb{E}_{\theta}[\delta(X') \,|\, T(X')= T(X)],$ the expected value of δ with respect to θ, taken over all possible vectors of observations X compatible with the same value of T(X) as that observed. This result is known as the Rao–Blackwell theorem.
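A simulation sketch of the Rao–Blackwell improvement for Bernoulli(θ) data: the crude unbiased estimator δ(X) = X₁ is conditioned on the sufficient statistic T = ΣXᵢ, which gives the sample mean. With squared-error loss the improvement shows up as a variance reduction; θ, n and reps below are illustrative.

```python
import random

random.seed(3)

def rao_blackwell_demo(theta=0.3, n=10, reps=20_000):
    """Compare delta(X) = X1 with E[delta | T] = mean(X) for Bernoulli data."""
    crude, improved = [], []
    for _ in range(reps):
        xs = [1 if random.random() < theta else 0 for _ in range(n)]
        crude.append(xs[0])            # delta(X) = X1: unbiased but noisy
        improved.append(sum(xs) / n)   # E[X1 | sum(X)] = sample mean
    def variance(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    return variance(crude), variance(improved)

var_crude, var_improved = rao_blackwell_demo()
```

The crude estimator's variance is about θ(1-θ) = 0.21, while the Rao–Blackwellized version's is about θ(1-θ)/n = 0.021.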

## See also

• Karamata's inequality, for a more general inequality.
• Law of averages
• The operator Jensen inequality of Hansen and Pedersen.