The multinomial distribution is a generalization of the binomial distribution in probability theory. It can model events such as the counts for each side of a k-sided die rolled n times. For n independent trials, where each trial results in a success for exactly one of k categories and each category has a fixed probability of success, the multinomial distribution gives the probability of any particular combination of counts across the categories.
An example of an experiment that follows a multinomial distribution is rolling a fair six-sided die multiple times and recording how often each outcome occurs. Each roll has six possible outcomes (1, 2, 3, 4, 5, or 6), each with probability 1/6. If you were to roll the die multiple times and record the number of occurrences of each outcome, the resulting counts would follow a multinomial distribution. In comparison, the binomial distribution is concerned with only a single outcome, such as rolling a 2 or rolling a 6.
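To see this concretely, here is a minimal simulation sketch, assuming NumPy is available (the seed and the choice of 100 rolls are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed, for reproducibility only

# One multinomial draw: roll a fair six-sided die 100 times and record
# how often each face comes up. The six counts always sum to 100.
counts = rng.multinomial(100, [1 / 6] * 6)
for face, count in zip(range(1, 7), counts):
    print(f"Face {face}: rolled {count} times")
```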
Many real-life examples can be found in marketing. For example, suppose a market research company conducts a survey to determine consumers’ preferred brand among four competitors: Brand A, Brand B, Brand C, and Brand D. The company randomly selects a sample of consumers and asks them to choose their favorite brand. The distribution of the consumers’ preferences follows a multinomial distribution: there are more than two categories (brands), and the count of consumers choosing each brand represents one outcome of the distribution.
Multinomial distribution formula and example
Use this probability mass function (PMF) to find probabilities for a multinomial distribution:
P(n_1, n_2, \dots, n_x) = \frac{n!}{n_1! \, n_2! \cdots n_x!} \, p_1^{n_1} p_2^{n_2} \cdots p_x^{n_x}
where
- n = total number of trials
- n1 = number of times outcome 1 occurs
- n2 = number of times outcome 2 occurs
- nx = number of times outcome x occurs
- p1 = probability that outcome 1 occurs on any single trial
- p2 = probability that outcome 2 occurs
- px = probability that outcome x occurs
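The PMF translates directly into code. Below is a minimal sketch (the helper name multinomial_pmf is made up for this example; counts and probs correspond to n1…nx and p1…px above):

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """Probability of seeing exactly counts[i] occurrences of outcome i,
    when each trial produces outcome i with probability probs[i].
    A direct translation of the PMF above; not a library function."""
    n = sum(counts)              # total number of trials
    coefficient = factorial(n)   # n! / (n1! * n2! * ... * nx!)
    for c in counts:
        coefficient //= factorial(c)
    return coefficient * prod(p**c for p, c in zip(probs, counts))
```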
Example question: During a series of 6 card game matches, player A has a 20% chance of winning each game, player B has a 30% chance, and player C has a 50% chance. What is the probability that player A will win 1 game, player B will win 2 games, and player C will win the remaining 3 games?
Using the data from the question, we can fill in the parts of the formula as follows:
- n = 6 (6 games total).
- n1 = 1 (Player A wins).
- n2 = 2 (Player B wins).
- n3 = 3 (Player C wins).
- p1 = 0.20 (probability that Player A wins).
- p2 = 0.30 (probability that Player B wins).
- p3 = 0.50 (probability that Player C wins).
Plugging into the formula, we get
P = \frac{6!}{1! \, 2! \, 3!} \times 0.20^1 \times 0.30^2 \times 0.50^3 = 60 \times 0.20 \times 0.09 \times 0.125 = 0.135

So there is a 13.5% chance that player A wins 1 game, player B wins 2 games, and player C wins 3 games.
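The arithmetic can be double-checked with SciPy's multinomial distribution (a sketch assuming SciPy is installed):

```python
from scipy.stats import multinomial

# 6 games; players A, B, C win 1, 2, and 3 games, with win
# probabilities 0.20, 0.30, and 0.50 respectively.
print(multinomial.pmf([1, 2, 3], n=6, p=[0.2, 0.3, 0.5]))  # 0.135
```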
Comparison to binomial and other distributions
A multinomial experiment, which results in a multinomial distribution, is almost the same as a binomial experiment, with one big difference: a binomial experiment has exactly two possible outcomes per trial, while a multinomial experiment allows more than two. They share similar properties:
- A fixed number of trials, n.
- Independent trials (i.e., one trial has no effect on the next).
- Constant probability of success for each trial.
- A random variable Y = the number of successes.
Rolling a die ten times and recording all six possible outcomes (1, 2, 3, 4, 5, or 6) is a multinomial experiment. However, if you roll the die ten times and only observe how many times you roll a three, that is a binomial experiment: three represents success, while the other outcomes (1, 2, 4, 5, and 6) count as failures.
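In code, the same ten rolls can be summarized either way (a sketch assuming NumPy; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
rolls = rng.integers(1, 7, size=10)  # ten rolls of a fair die

# Multinomial view: counts for all six faces.
multinomial_counts = np.bincount(rolls, minlength=7)[1:]

# Binomial view: only "rolled a three" counts as a success.
binomial_successes = (rolls == 3).sum()
print(multinomial_counts, binomial_successes)
```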
- When k = 2 and n = 1, the multinomial distribution reduces to the Bernoulli distribution.
- When k = 2 and n > 1, it becomes the binomial distribution (checked numerically after this list).
- When k > 2 and n = 1, it becomes the categorical distribution.
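These reductions are easy to confirm numerically. As a minimal sketch of the k = 2 case, assuming SciPy is available:

```python
from scipy.stats import binom, multinomial

# With k = 2 the multinomial PMF collapses to the binomial PMF:
# P(3 successes in 10 trials, p = 0.4), written both ways.
print(binom.pmf(3, n=10, p=0.4))                    # 0.21499...
print(multinomial.pmf([3, 7], n=10, p=[0.4, 0.6]))  # same value
```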
The multinomial distribution is often used to characterize categorical variables. If a random variable Z has k categories, each category can be coded as an integer, so that Z ∈ {1, 2, …, k} [1].
History of the multinomial distribution
The history of the multinomial distribution can be traced back to the 18th century. The idea of the multinomial distribution was first introduced by the Swiss mathematician Jakob Bernoulli in his work “Ars Conjectandi,” published in 1713 [2], where he discussed the probabilities of outcomes in dice games. However, it was the French mathematician Pierre-Simon Laplace who formally defined the multinomial distribution in the early 19th century [3]. He studied the distribution and applied it to various problems in probability theory, including those related to astronomy and social statistics.
Throughout the 19th and 20th centuries, the multinomial distribution was further studied and refined by various mathematicians and statisticians. Some notable contributors include Francis Galton [4], who used the distribution to model the frequency of traits in a population in the context of genetics, and Ronald Fisher [5], who used the multinomial distribution in his work on experimental design and statistical inference.
Today, the multinomial distribution is an essential tool in modern statistical analysis with applications across numerous fields, including biology, economics, psychology, political science, and many others. It is used to model the probability of different outcomes in situations where there are more than two categories or classes.
References
[1] Chen, Y. (2020). Stat512: Statistical inference. Lecture 7: Multinomial distribution. Retrieved May 15, 2023 from: http://faculty.washington.edu/yenchic/20A_stat512/Lec7_Multinomial.pdf
[2] Katz, V. & Swetz, F. Mathematical Treasures – Jacob Bernoulli’s Ars Conjectandi. Convergence, Mathematical Association of America.
[3] Laplace, P. (1812). Théorie Analytique des Probabilités.
[4] Galton, F. (1889). Natural Inheritance.
[5] Fisher, R. A. (1935). The Design of Experiments.