Inverse distribution


Figure: Graph of the inverse normal distribution [1].

An inverse distribution is the distribution of the reciprocal of a random variable. For example, if X is a random variable with a given distribution, the corresponding inverse distribution is that of

Y = 1/X.
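As a concrete illustration (a sketch of my own, not from the article): if X ~ Uniform(1, 2), the standard change-of-variables formula gives the density of Y = 1/X as f_Y(y) = f_X(1/y) / y², and a quick simulation, assuming NumPy is available, confirms it:

```python
# Sketch: empirically check the reciprocal transformation Y = 1/X against
# the change-of-variables density f_Y(y) = f_X(1/y) / y^2, with X ~ Uniform(1, 2).
import numpy as np

rng = np.random.default_rng(0)

x = rng.uniform(1.0, 2.0, size=100_000)  # X ~ Uniform(1, 2), density 1 on [1, 2]
y = 1.0 / x                              # Y = 1/X, supported on [1/2, 1]

# Analytic density of Y: f_Y(y) = f_X(1/y) / y^2 = 1 / y^2 on [1/2, 1]
hist, edges = np.histogram(y, bins=50, range=(0.5, 1.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
analytic = 1.0 / centers**2

print(np.max(np.abs(hist - analytic)))   # small, up to sampling noise
```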

Inverse distributions are particularly relevant to Bayesian prior and posterior distributions, especially for scale parameters. The term "inverse probability" refers to the probability of unobserved things, such as the probability distribution of an unobserved variable. Although the term is now considered obsolete, the basic problem of inverse probability (determining an unobserved variable) remains a crucial component of inferential statistics; finding a probability distribution for an unobserved variable is what we now call Bayesian probability. In the algebra of random variables, inverse distributions are a special case of ratio distributions in which the numerator random variable has a degenerate distribution.

Types of inverse distribution

Inverse distributions include:

  • The inverse-gamma distribution (the distribution of the reciprocal of a gamma random variable);
  • The inverse-chi-squared distribution (the reciprocal of a chi-squared random variable);
  • The reciprocal (inverse) normal distribution (the reciprocal of a normal random variable).

Inverse probability distribution vs. inverse distribution function

In addition to specific inverse probability distributions, there are several definitions for the term "inverse distribution," each with a specific meaning depending on context. These definitions include:

  • Sampling until a fixed number of successes is reached, known as inverse sampling (see the sketch after this list);
  • Distributions whose frequencies are reciprocal quantities, such as the factorial distribution;
  • The inverse distribution function (quantile function): a technique of working backwards from a probability to the corresponding x-value via the distribution function F(x).
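The first definition, inverse sampling, can be made concrete with a small simulation. This is a hedged sketch of my own (the helper function is hypothetical): draw Bernoulli trials until a fixed number of successes r is reached, and the number of failures observed follows a negative binomial distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def failures_until_r_successes(r: int, p: float) -> int:
    """Count failures before the r-th success in a run of Bernoulli(p) trials."""
    successes = failures = 0
    while successes < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

samples = [failures_until_r_successes(r=5, p=0.3) for _ in range(10_000)]
# The negative binomial mean (counting failures) is r * (1 - p) / p, about 11.67
print(np.mean(samples))
```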

While all of these definitions are valid, the term inverse distribution function in elementary statistics typically refers to finding percentiles (the quantile function), rather than to a probability distribution in its own right (illustrated in the sketch after this list). For example:

  • The inverse normal function is used to find the percentile of a normal random variable.
  • The inverse gamma function is used to find the percentile of a gamma random variable.
  • The inverse chi-squared function is used to find the percentile of a chi-squared random variable.
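In statistical software these percentile lookups are exposed as inverse CDFs. A brief sketch using SciPy's percent point function (`.ppf`, the name scipy.stats uses for the inverse CDF):

```python
# .ppf maps a probability back to the corresponding x-value (the quantile)
from scipy.stats import norm, gamma, chi2

print(norm.ppf(0.95))        # 95th percentile of the standard normal, ~1.645
print(gamma.ppf(0.95, a=2))  # 95th percentile of a gamma variable with shape a=2
print(chi2.ppf(0.95, df=3))  # 95th percentile of chi-squared with 3 df, ~7.815
```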

Inverse distribution functions are a versatile tool in probability and statistics. They can recover the value that a random variable falls below with a given probability (its percentile), and they underpin inverse transform sampling: applying the inverse CDF to uniform random numbers produces draws from the target distribution, as sketched below. Reciprocal distributions are equally direct to use in simulation: because Y = 1/X implies X = 1/Y, taking reciprocals of draws from an inverse distribution recovers draws from the original distribution.
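Inverse transform sampling, mentioned above, is short enough to sketch directly (my example, assuming SciPy; any distribution with an invertible CDF works the same way):

```python
# Inverse transform sampling: applying the inverse CDF to Uniform(0, 1)
# draws yields samples from the target distribution.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

u = rng.uniform(0.0, 1.0, size=100_000)  # U ~ Uniform(0, 1)
x = norm.ppf(u)                          # X = F^{-1}(U) ~ Normal(0, 1)

print(x.mean(), x.std())                 # close to 0 and 1, up to sampling noise
```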

Usage in Bayesian probability

Bayesian probability is a way of reasoning about uncertainty that takes our prior beliefs about an event's likelihood into account. In addition to analyzing the data at hand, it also considers what we already know or believe.

Consider the example of flipping a coin. If you don’t know anything about the coin, you might assign a 50/50 probability to each side landing face up. But if you know the coin is weighted, you might assign a higher probability to one side landing face up.

Bayesian probability can also update our beliefs with new evidence. For instance, if you flip a coin 10 times and it comes up heads 8 times, you might adjust your belief about the probability of heads, assigning it a higher value such as 60%.
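One standard way to formalize this update (my choice of model here; the article's 60% figure is an informal illustration) is a Beta prior on the heads probability: a uniform Beta(1, 1) prior combined with 8 heads and 2 tails gives a Beta(9, 3) posterior, whose mean is 0.75.

```python
# Beta-Binomial updating for the coin example: with a uniform Beta(1, 1)
# prior and 8 heads / 2 tails observed, the posterior is Beta(9, 3).
alpha_prior, beta_prior = 1, 1   # uniform prior on P(heads)
heads, tails = 8, 2              # observed flips

alpha_post = alpha_prior + heads  # 9
beta_post = beta_prior + tails    # 3

posterior_mean = alpha_post / (alpha_post + beta_post)
print(posterior_mean)             # 0.75: the data pull the belief above 50%
```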

This makes Bayesian probability useful for decisions that involve weighing new evidence against prior beliefs: making predictions, diagnosing diseases, assessing risks, and similar tasks.

For example, suppose that a screening test for a specific genetic abnormality is highly effective. For those with the abnormality, it gives 99% true positive results. For those who don't have it, it gives 99% true negative results (that is, a 1% false-positive rate). Also suppose that only 0.001% of the population carry this genetic aberration.

Bayes' rule and inverse probability allow us to calculate the probability that a randomly selected person carries the genetic abnormality, given a positive test. Our hypothesis H is that the abnormality is present, and the data D is the positive test. Bayes' rule can be written as P(H|D) = P(D|H) P(H) / P(D), and to calculate P(D), the probability of a positive test, we sum two terms.

For this example, the first term is P(D|H), the probability of a positive test given that the hypothesis is true (the abnormality exists), times P(H), the prior probability of the abnormality. The second term is the probability of a positive test given no abnormality, times the probability of having no abnormality. Thus:

P(D) = P(D|H) P(H) + P(D|¬H) P(¬H) = 0.99 × 0.00001 + 0.01 × 0.99999 ≈ 0.0100098.

Plugging these numbers into Bayes' rule, we get:

P(H|D) = (0.99 × 0.00001) / 0.0100098 ≈ 0.00099

A positive test therefore gives only about a 0.1 percent likelihood (roughly one in a thousand) that a person has this rare genetic abnormality.
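A short sketch reproducing this arithmetic in plain Python:

```python
# Bayes' rule for the screening-test example
p_h = 0.00001            # prior: 0.001% of the population carry the abnormality
p_d_given_h = 0.99       # true-positive rate
p_d_given_not_h = 0.01   # false-positive rate (1 - true-negative rate)

# Total probability of a positive test, P(D)
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Posterior probability of the abnormality given a positive test, P(H|D)
p_h_given_d = p_d_given_h * p_h / p_d
print(p_d)          # ~0.0100098
print(p_h_given_d)  # ~0.00099, i.e., about 0.1%
```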

References

[1] Wingedsubmariner, CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0), via Wikimedia Commons.
