Scale parameter

A scale parameter is a numerical value associated with a probability distribution that controls the spread of the distribution along the horizontal axis. It determines the dispersion, or variability, of the data: changing it stretches or compresses the distribution horizontally without changing its shape or central location.
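
One standard way to make this precise (a textbook formulation; the symbols f, x, and s below are notation introduced here, not taken from the article): a family of densities f(x; s) indexed by a scale parameter s can be written in terms of its member with s = 1 as

```latex
f(x; s) = \frac{1}{s}\, f\!\left(\frac{x}{s};\, 1\right), \qquad s > 0 .
```

Dividing x by s stretches the curve horizontally by a factor of s, and the leading factor 1/s lowers its height so the total area under the curve stays equal to 1.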

Scale parameter examples

For example, in a normal distribution the scale parameter corresponds to the standard deviation (σ), which controls the width of the distribution. A larger parameter results in a wider distribution, indicating greater variability in the data, while a smaller parameter leads to a narrower distribution, signifying less variability. This equality between the scale parameter and the standard deviation is particular to the normal distribution; in most other distribution types, the scale parameter will not equal the standard deviation.
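
As a quick numeric check (a minimal sketch assuming SciPy is available; the particular parameter values are arbitrary), the scale argument of a normal distribution equals its standard deviation, while for a gamma distribution the two differ:

```python
from scipy import stats

# Normal distribution: the scale argument is exactly the standard deviation.
normal = stats.norm(loc=0, scale=2.0)
print(normal.std())   # 2.0

# Gamma distribution: the scale argument is not the standard deviation.
gamma = stats.gamma(a=3, scale=2.0)
print(gamma.std())    # about 3.46, i.e. sqrt(3) * 2
```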

An animation [1] demonstrates the impact of a scale parameter on a mixture of two normal probability distributions. A larger parameter broadens the distribution, lowering the heights of its modes; a smaller parameter has the opposite effect, producing a narrower distribution with taller modes.
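
The same behaviour can be reproduced numerically. The sketch below is only an illustration (the component means and the 50/50 weighting are arbitrary choices, not the animation's exact settings): it evaluates a two-normal mixture on a grid for several scale values and reports the height of its highest mode, which falls as the scale grows.

```python
import numpy as np
from scipy import stats

def mixture_pdf(x, s):
    """Density of a 50/50 mixture of two normals, stretched by scale s."""
    return 0.5 * stats.norm.pdf(x, loc=-2 * s, scale=s) \
         + 0.5 * stats.norm.pdf(x, loc=2 * s, scale=s)

x = np.linspace(-15, 15, 2001)
for s in (0.5, 1.0, 2.0):
    print(f"scale={s}: highest mode is about {mixture_pdf(x, s).max():.3f}")
# A larger scale gives a wider curve with lower modes, since the area must stay 1.
```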

As the scale parameter increases, the distribution becomes more dispersed; as it decreases, the distribution becomes more compressed. The effects of such a parameter can be summarized as follows (a short numeric sketch follows the list):

  • Equal to 0: collapses the distribution to a single vertical line (a spike) at 0.
  • Between 0 and 1: compresses the distribution horizontally.
  • Equal to 1: leaves the distribution unaltered.
  • Greater than 1: stretches the distribution horizontally.
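
Here is the numeric sketch referred to above (assuming NumPy; the sample size and random seed are arbitrary). Multiplying draws from a unit-scale normal distribution by a scale factor s shows each of these cases directly:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
base = rng.standard_normal(100_000)      # draws whose scale (std) is 1

for s in (0.0, 0.5, 1.0, 2.0):
    scaled = s * base                    # apply the scale parameter
    spread = np.quantile(np.abs(scaled), 0.95)
    print(f"scale={s}: std is about {scaled.std():.2f}, "
          f"95% of values lie within +/-{spread:.2f}")
# s = 0 collapses everything to a spike at 0, 0 < s < 1 compresses the spread,
# s = 1 leaves it unchanged, and s > 1 stretches it.
```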

Scale parameter history

The history of the scale parameter is closely tied to the development of probability distributions and statistical theory. While it’s challenging to trace a specific origin, we can identify key milestones in the evolution of the scale parameter concept.

  1. Bernoulli distribution (1700s): One of the earliest formally studied probability distributions, developed by Jacob Bernoulli, it laid the foundation for understanding variability in data. Although this distribution does not have a scale parameter, it set the stage for future developments.
  2. Normal distribution (1700s–1800s): The normal or Gaussian distribution, introduced by Abraham de Moivre and later refined by Carl Friedrich Gauss and Adolphe Quetelet, was among the first continuous probability distributions to feature a scale parameter. In this case, the standard deviation (σ) is the scale parameter, determining the width of the distribution.
  3. Exponential distribution (1900s): The exponential distribution, which models the time between events in a Poisson process, also has a scale parameter: the mean time between events, equal to 1/λ, where λ (lambda) is the rate at which events occur.
  4. Gamma distribution (1900s): The gamma distribution, used to model waiting times and other processes, includes a scale parameter (θ) that affects the dispersion of the distribution.
  5. Generalizations of probability distributions (20th century): Throughout the 20th century, researchers continued to develop new probability distributions, many of which included scale parameters. The concept became widely recognized in various fields, such as finance, engineering, and natural sciences.

In summary, the history of the scale parameter is intertwined with the evolution of probability distributions and statistical theory. As researchers developed more sophisticated distributions to model real-world phenomena, they identified scale parameters as a key component in controlling the spread or size of these distributions.

References

[1] Walwal20, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0), via Wikimedia Commons.
