
In statistics, a **marginal distribution** refers to the distribution of values for a single variable in a dataset while ignoring the other related variables. Although the definition sounds complex, the concept is straightforward. When analyzing a larger set of related variables, researchers may choose to focus on one variable to answer a specific question. For example, in a study on car sales, if both car type and gender were measured, a researcher might want to know the distribution of car types without considering gender – this would be a marginal distribution.

**Marginal variables** are the variables in the subset that is retained. The terminology originates from the fact that marginal totals can be calculated by adding up the values in a table along rows or columns and writing the sums in the table’s margins [1]. The marginal distribution – the distribution of the marginal variables – is found by summing the joint distribution over the unwanted (discarded) variables, a process called marginalization, leaving only the marginal sums in the table. After removal, the discarded variables are said to have been *marginalized out*.
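Summing along the rows or columns of a joint probability table can be sketched in a few lines of plain Python. The joint probabilities below are illustrative values, not taken from the article:

```python
# Hypothetical joint probability table P(X=i, Y=j).
# Rows index X (3 values), columns index Y (2 values); all entries sum to 1.
joint = [
    [0.10, 0.20],
    [0.15, 0.25],
    [0.05, 0.25],
]

# Marginalize out Y: sum across each row to get P(X=i).
p_x = [sum(row) for row in joint]          # approximately [0.30, 0.40, 0.30]

# Marginalize out X: sum down each column to get P(Y=j).
p_y = [sum(col) for col in zip(*joint)]    # approximately [0.30, 0.70]
```

Each marginal is itself a valid probability distribution: its entries are non-negative and sum to 1.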

## Marginal distribution vs. conditional distribution

The marginal distribution of a subset of random variables is the probability distribution of the variables contained in that subset. It gives the probabilities of the different values those variables can take, regardless of the values of the other variables. This differs from a *conditional distribution*, which gives probabilities contingent on the values of the other variables.

Marginal distributions follow a couple of rules. First, they must be derived from bivariate data, which involves two variables, such as *X* and *Y*. Marginal distributions arise when you are only interested in examining the distribution of *one* of the random variables, either *X* or *Y*. The probability table above lists the sum probabilities of one variable (in this case, “*j*”) in the bottom row and the sum probabilities of the other (“*i*” in this example) in the right column, resulting in two marginal distributions.

A *conditional distribution* refers to a specific subset of a larger dataset of interest. In a dice-rolling example, this might mean examining the probability of rolling a two or a six. The image above depicts two highlighted sub-populations or subsets, and thus two distinct conditional distributions. The conditional distribution of a variable given another variable is the joint distribution of both variables divided by the marginal distribution of the other variable [2].
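The division described above, P(X | Y = y) = P(X, Y = y) / P(Y = y), can be sketched in plain Python. The joint table below is illustrative, not taken from the article:

```python
# Hypothetical joint probability table P(X=i, Y=j); rows index X, columns index Y.
joint = [
    [0.10, 0.20],
    [0.15, 0.25],
    [0.05, 0.25],
]

# Condition X on a chosen value of Y (here the first column, y = 0).
y = 0
p_y = sum(row[y] for row in joint)                 # marginal P(Y=0), about 0.30

# Conditional distribution: each joint entry divided by the marginal.
cond_x_given_y = [row[y] / p_y for row in joint]   # about [0.333, 0.5, 0.167]
```

Note that the conditional probabilities sum to 1, as any distribution must, even though the joint entries in the selected column do not.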

## How to find marginal distribution probability

The most straightforward way to understand and compute marginal distributions is by using a two-way contingency table, which is the origin of the term. As an example, the following table shows preferences for pet ownership between men and women.

|       | Cats | Fish | Dogs | Total |
|-------|------|------|------|-------|
| Men   | 2    | 4    | 6    | 12    |
| Women | 5    | 3    | 2    | 10    |
| Total | 7    | 7    | 8    | 22    |


Consider the following example question: What is the marginal distribution probability of pet preference for men and women? The solution can be obtained by referring to the table above and following these steps:

- Calculate the total number of people from the given data (22 people in this case, the sum of 12 and 10 from the right-hand column).
- Count the number of people who prefer each type of pet, and convert each count into a probability:
- People who prefer cats: 7/22 ≈ 0.32
- People who prefer dogs: 8/22 ≈ 0.36
- People who prefer fish: 7/22 ≈ 0.32

These are our marginal probabilities, which should add up to 1 (here 0.32 + 0.36 + 0.32 = 1.00, allowing for rounding).
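The steps above can be sketched in plain Python, using the pet-preference table reproduced as a nested dictionary:

```python
# The pet-preference counts from the table above.
counts = {
    "Men":   {"Cats": 2, "Fish": 4, "Dogs": 6},
    "Women": {"Cats": 5, "Fish": 3, "Dogs": 2},
}

# Step 1: total number of people (the grand total in the table's corner).
total = sum(sum(row.values()) for row in counts.values())   # 22

# Step 2: column totals divided by the grand total give the marginal
# distribution of pet preference, with gender marginalized out.
marginal = {
    pet: sum(counts[g][pet] for g in counts) / total
    for pet in ["Cats", "Fish", "Dogs"]
}
# Cats: 7/22, Fish: 7/22, Dogs: 8/22
```

The same pattern applied to row totals instead of column totals would give the marginal distribution of gender, with pet preference marginalized out.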

## References

[1] Trumpler, Robert J. & Harold F. Weaver (1962). *Statistical Astronomy*. Dover Publications. pp. 32–33.

[2] Chapter2pt5 (PPT). Conditional distributions.