Chi-Square Test & Distribution


Chi-square tests come in various forms, but all use the chi-square statistic and distribution for different purposes. The two most common are:

  1. A chi-square test for independence: evaluates whether distributions of categorical variables differ from one another.
  2. A chi-square goodness of fit test: assesses whether sample data matches a hypothesized population distribution.

This table compares the key differences between the two tests:

Feature | Chi-square test for independence | Chi-square goodness of fit test
Purpose | Tests whether two categorical variables are related to each other | Tests whether a categorical variable fits a particular distribution
Assumptions | Observations are independent and expected frequencies are at least 5 in each cell | Observations are independent and expected frequencies are at least 5 in each category
Advantages | Relatively simple to understand, interpret, and run | Can test a wide variety of distributions
Disadvantages | Only tests two categorical variables | Only tests one categorical variable

Contents:

  1. Chi-square test for independence
  2. Chi-square goodness of fit test
  3. Chi-square distribution
  4. Helmert’s distribution
  5. Semi-normal distribution
  6. History of chi-square

1. Chi-square test for independence

This test is used to evaluate the connection between two categorical variables. For instance, it can be applied to assess the relationship between gender and political party affiliation.

To perform this test, create a contingency table that displays the frequency of each combination of values for the two variables. Using the gender and political party affiliation example, the table would show the number of men and women identifying as Democrats, Republicans, and Independents.

After creating the contingency table, use a chi-square test to calculate the p-value, which indicates the probability of obtaining the observed results if the null hypothesis (no relationship between the variables) is true. If the p-value is less than or equal to the significance level, you can reject the null hypothesis and conclude a significant relationship exists between the two variables.
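As a rough illustration, the sketch below runs this procedure in Python with SciPy’s chi2_contingency function; the gender-by-party counts are invented purely for illustration.

```python
# A minimal sketch of a chi-square test for independence using SciPy.
# The counts below are hypothetical, used only for illustration.
from scipy.stats import chi2_contingency

# Rows: gender (men, women); columns: Democrat, Republican, Independent.
observed = [
    [120, 90, 40],   # men
    [115, 70, 65],   # women
]

chi2_stat, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2_stat:.3f}, df = {dof}, p-value = {p:.4f}")
if p <= 0.05:
    print("Reject the null hypothesis: the variables appear related.")
else:
    print("Fail to reject the null hypothesis.")
```

Note that chi2_contingency also returns the table of expected frequencies, which is useful for checking the rule of thumb that each expected cell count should be at least 5.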

The chi-square statistic


The chi-square statistic is used in the chi-square test to determine the relationship between two categorical variables. In statistics, variables can be numerical (quantitative) or non-numerical (categorical). The chi-square statistic is a single value that represents the difference between observed counts and expected counts, assuming no relationship exists between the variables in the population.

The formula for the chi-square statistic is:

χ² = Σ[(O – E)² / E]

Here, “χ²” represents the chi-square statistic, “O” denotes the observed value, “E” signifies the expected value, and “Σ” indicates the summation over all data items in the dataset. When degrees of freedom are attached as a subscript (for example, χ²₆), the subscript gives the degrees of freedom.

Calculating the chi-square statistic by hand can be lengthy and tedious due to the summation. Instead, it is more practical to use technology, such as SPSS or Excel, to compute the chi-square test and p-value.

The chi-square statistic’s interpretation depends on the context and hypothesis being tested. However, the core idea remains the same: comparing expected values with collected values. A common form of the chi-square statistic is used for contingency tables:

χ² = Σ[(Oᵢ – Eᵢ)² / Eᵢ]

Here, “i” indexes the cells of the contingency table. A low chi-square value indicates that the observed counts are close to the expected counts. If the observed and expected values were exactly equal (i.e., no difference), the chi-square value would be zero, which is unlikely in real life.
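As a minimal sketch of the formula in Python, with hypothetical counts:

```python
# Computing the chi-square statistic straight from the formula above;
# the observed and expected counts here are hypothetical.
import numpy as np

O = np.array([18.0, 22.0, 30.0, 30.0])   # observed counts
E = np.array([25.0, 25.0, 25.0, 25.0])   # expected counts

chi_square = ((O - E) ** 2 / E).sum()    # chi^2 = sum((O - E)^2 / E)
print(chi_square)                        # 4.32
```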

Determining whether a chi-square test statistic is large enough to indicate a statistically significant difference can be challenging. To assess the significance, you can compare the calculated chi-square value to a critical value from a chi-square table. If the chi-square value exceeds the critical value, a significant difference exists.

Alternatively, you can use a p-value. First, state the null and alternate hypotheses. Then, generate a chi-square curve for your results along with a p-value (e.g., using Excel). Small p-values (below 5%) generally suggest that the difference is significant.

Note that the chi-square statistic can only be used with counts (frequencies), not percentages, proportions, means, or other similar statistical values. For instance, if you have 10% of 200 people, you must convert that percentage to a count (20) before computing the test statistic.

Figure: completed chi-square table in Excel for the example below.

The chi-square formula can be tedious to work with by hand, primarily because of the need to sum a large number of values. The most straightforward approach is to build a table like the one above, which I created in Excel; a short code sketch that reproduces the same calculation follows the steps below.

Example: A survey of 256 visual artists was conducted to determine their zodiac signs. The results were as follows: Aries (29), Taurus (24), Gemini (22), Cancer (19), Leo (21), Virgo (18), Libra (19), Scorpio (20), Sagittarius (23), Capricorn (18), Aquarius (20), Pisces (23). Test the hypothesis that zodiac signs are evenly distributed among visual artists.

  1. Create a table with columns for “Categories,” “O,” “E,” “(O – E),” “(O – E)²,” and “(O – E)² / E.”
  2. Fill in your categories. In this case, there are 12 zodiac signs.
  3. Enter the counts (number of items in each category) in column 2. These counts are the observed counts (O) provided in the question.
  4. Calculate the expected value for column 3 (E). Since we expect the 12 zodiac signs to be evenly distributed among all 256 people, the expected value is 256/12 = 21.333.
  5. Subtract the expected value E from the observed value O and place the result in the “(O – E)” column. For example, for Aries, the calculation is 29 – 21.333 = 7.667.
  6. Square the results from Step 5 and place the values in the (O – E)² column.
  7. Divide the values in Step 6 by the expected value E and place the results in the final column.
  8. Sum all the values in the last column. This sum is the chi-square statistic, which is 5.094 in this example.
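The sketch below reproduces the whole table in Python and arrives at the same chi-square statistic of 5.094:

```python
# Reproducing the zodiac-sign table from the steps above.
import numpy as np

signs = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
         "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]
observed = np.array([29, 24, 22, 19, 21, 18, 19, 20, 23, 18, 20, 23], dtype=float)
expected = observed.sum() / len(observed)    # 256 / 12 = 21.333...

residual = observed - expected               # step 5: O - E
contribution = residual ** 2 / expected      # steps 6-7: (O - E)^2 / E

for sign, o, c in zip(signs, observed, contribution):
    print(f"{sign:12s} O={o:4.0f}  (O-E)^2/E={c:.3f}")

print(f"chi-square = {contribution.sum():.3f}")  # step 8: about 5.094
```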


Chi-square p-value

A chi-square test provides a p-value, which indicates whether your test results are statistically significant. To perform a chi-square test and obtain the p-value, you need two pieces of information:

  1. Degrees of freedom: for a goodness of fit test, this is the number of categories minus 1. (For a test of independence, it is (rows − 1) × (columns − 1) in the contingency table.)
  2. The alpha level (α): Chosen by the researcher, the typical alpha level is 0.05 (5%), but other levels like 0.01 or 0.10 may also be used.

In elementary or AP statistics courses, the degrees of freedom (df) and the alpha level are often provided in questions, so you generally don’t need to determine them yourself. However, if you need to calculate the degrees of freedom, count the categories and subtract 1.

Degrees of freedom are presented as a subscript next to the chi-square (χ²) symbol. For example, χ²₆ represents a chi-square with 6 degrees of freedom, while χ²₄ denotes a chi-square with 4 degrees of freedom.

Testing the Chi-Square Hypothesis

Example question: Conduct a chi-square hypothesis test with the following characteristics:

  • 11 Degrees of Freedom
  • Chi-square test statistic of 5.094
  1. Take the chi-square statistic and find the corresponding p-value in the chi-square table. Note: The chi-square table doesn’t provide exact values for every possibility. By using a calculator, you can obtain an exact value. In this case, the precise p-value is 0.9265.
  2. Use the p-value to decide whether to reject the null hypothesis. Generally, small p-values (at or below the chosen alpha level, e.g., 1% to 5%) lead to rejecting the null hypothesis. Here, the large p-value (92.65%) indicates that the null hypothesis should not be rejected. In other words, this test does not show statistical significance; the sketch below verifies the p-value.
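A quick check of this example with SciPy’s survival function, which gives the upper-tail probability of the chi-square distribution:

```python
# Verifying the example's p-value with SciPy.
from scipy.stats import chi2

statistic = 5.094
df = 11

p_value = chi2.sf(statistic, df)   # upper-tail probability P(X >= 5.094)
print(f"p-value = {p_value:.4f}")  # about 0.9265
```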

2. Chi-square goodness of fit test

This test is used to examine whether a categorical variable conforms to a particular distribution, such as determining whether survey results from 1,000 people fit a hypothesized distribution.

To conduct this test, create a frequency distribution for the categorical variable, displaying the frequency of each value. For the survey example, the distribution would show the number of people who chose each answer option.

With the frequency distribution, use a chi-square test to calculate the p-value, which indicates the probability of obtaining the observed results if the null hypothesis (the categorical variable fits the specified distribution) is true. If the p-value is less than or equal to the significance level, you can reject the null hypothesis and conclude that the categorical variable does not fit the specified distribution.
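A minimal sketch of a goodness of fit test with SciPy’s chisquare function, reusing the zodiac counts from the earlier example; with no expected frequencies supplied, the function tests against a uniform (evenly distributed) null:

```python
# Goodness of fit test with SciPy; the observed counts are the zodiac
# example from above, and the default null is a uniform distribution.
from scipy.stats import chisquare

observed = [29, 24, 22, 19, 21, 18, 19, 20, 23, 18, 20, 23]
statistic, p_value = chisquare(observed)
print(f"chi-square = {statistic:.3f}, p-value = {p_value:.4f}")
# about 5.094 and 0.9265, matching the hand calculation
```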

3. Chi-square distribution

The chi-squared distribution with k degrees of freedom is the distribution of a sum of the squares of k independent, standard normal random variables. It is called chi-squared because it describes the sum of squares of normally distributed random variables. Other names for the chi-squared distribution include Helmert’s distribution and the semi-normal distribution.

The degrees of freedom k equal the number of squared samples being summed. For example, if you square and sum 11 independent standard normal samples, then df = 11. The mean of the chi-squared distribution equals its degrees of freedom (11 in this example).
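A short simulation sketch illustrating this definition: squaring and summing k standard normal draws produces chi-square(k) values whose mean is close to k.

```python
# Summing the squares of k standard normal draws produces a
# chi-square(k) variable, with mean approximately k.
import numpy as np

rng = np.random.default_rng(0)
k = 11                                   # degrees of freedom
samples = rng.standard_normal((100_000, k))
chi_square_draws = (samples ** 2).sum(axis=1)

print(chi_square_draws.mean())           # close to 11, the mean of chi-square(11)
```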

The probability density function is:

f(x; k) = [x^(k/2 − 1) e^(−x/2)] / [2^(k/2) Γ(k/2)], for x > 0

Where

  • k = degrees of freedom
  • Γ(k/2) = the gamma function.

Chi-squared distributions are always right-skewed, with a long tail trailing to the right. As the degrees of freedom increase, the chi-squared distribution looks more and more like a normal distribution.

The cumulative distribution function (CDF) is given by

F(x; k) = γ(k/2, x/2) / Γ(k/2)

where γ(·, ·) is the lower incomplete gamma function.

Related Distributions

The chi-squared distribution is a special case of the gamma distribution. A chi-squared distribution with k degrees of freedom is equal to a gamma distribution with shape a = k/2 and rate b = 0.5 (equivalently, scale θ = 2).
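A quick numerical check of this relationship, evaluating both densities with SciPy (the choice k = 6 is arbitrary):

```python
# chi-square(k) matches a gamma distribution with shape a = k/2 and
# scale 2 (rate 0.5); the two density functions agree pointwise.
import numpy as np
from scipy.stats import chi2, gamma

k = 6
x = np.linspace(0.1, 20, 5)
print(chi2.pdf(x, k))
print(gamma.pdf(x, a=k / 2, scale=2))   # identical values
```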

A chi-bar-squared distribution is a mixture of chi-square distributions, mixed over their degrees of freedom.

Uses of the Chi-Squared Distribution

The chi-squared distribution has a variety of different uses, including:

  • Confidence interval estimation for a population standard deviation of a normal distribution from a sample standard deviation [1].
  • Independence of two criteria of classification of qualitative variables.
  • Relationships between categorical variables (contingency tables).
  • Sample variance study when the underlying distribution is normal.
  • Tests of deviations of differences between expected and observed frequencies (one-way tables).
  • The chi-square goodness of fit test.

References

[1] Johns Hopkins. http://ocw.jhsph.edu/courses/fundepiii/PDFs/Lecture17.pdf


4. Helmert’s distribution

Helmert’s distribution is another name for the chi-square distribution. It is named after F.R. Helmert, who proved the general reproductive property of chi-square distributions [1].

Helmert’s most noted contribution was establishing that if X₁, X₂, …, Xₙ are independent, normally distributed random variables with common variance σ², then the quantity

Σ[(Xᵢ – X̄)² / σ²]  (summed over i = 1, …, n)

  1. is distributed as a chi-square variable with n – 1 degrees of freedom, and
  2. is statistically independent of X̄ (the sample mean).

Helmert also proved, in 1875, that the sample mean and sample variance are independent [2]. Helmert’s transformation allows us to obtain a set of k – 1 new independent and identically distributed (i.i.d.) normal samples with μ = 0 and the same variance as the original distribution. Given a set of k normally distributed i.i.d. variables x₁, …, xₖ, the transformation to random variables yⱼ can be defined as [3]

yⱼ = (x₁ + x₂ + ⋯ + xⱼ – j·xⱼ₊₁) / √(j(j + 1)),  for j = 1, …, k – 1
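A sketch of this transformation in Python, assuming the standard form of the Helmert transformation given above; the sample size and distribution parameters are arbitrary choices for illustration:

```python
# Helmert transformation sketch: maps k i.i.d. normal samples to k - 1
# new i.i.d. normal samples with mean 0 and the original variance.
import numpy as np

def helmert(x):
    """Return k - 1 Helmert-transformed values from k input values."""
    x = np.asarray(x, dtype=float)
    k = len(x)
    return np.array([(x[:j].sum() - j * x[j]) / np.sqrt(j * (j + 1))
                     for j in range(1, k)])

rng = np.random.default_rng(0)
samples = rng.normal(loc=5.0, scale=2.0, size=(5000, 10))  # mu=5, sigma=2
transformed = np.array([helmert(row) for row in samples])

print(transformed.mean())  # near 0
print(transformed.var())   # near 4, the original variance (sigma = 2)
```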

Because of this important contribution to sampling theory, Kruskal [4] recommended naming the joint distribution of the two random variables “Helmert’s Distribution”. However, the recommendation did not take hold, and we still refer to Helmert’s result as the chi-square distribution.

Helmert’s distribution is important in sampling theory because it leads to [4]

  1. Statistical independence of the sample mean (x̄) and the sample standard deviation (s).
  2. Separate single distributions of x̄ and s.
  3. The distribution of Student’s t.

Helmert obtained his distribution by setting up a pair of linear transformations. These transform the joint distribution of individual observation errors into a joint distribution of errors for the sample mean and sample standard deviation; any dummy variables can then be integrated out.

References

[1] Helmert, F. R. (1876). Die Genauigkeit der Formel von Peters zur Berechnung des wahrscheinlichen Beobachtungsfehlers directer Beobachtungen gleicher Genauigkeit. Astronomische Nachrichten, 88, columns 113–120.

[2] Helmert, F. R. (1875). Über die Berechnung des wahrscheinlichen Fehlers aus einer endlichen Anzahl wahrer Beobachtungsfehler. Zeitschrift für angewandte Mathematik und Physik, 20, 300–303.

[3] Wu, M. et al. (2019). Differentiable Antithetic Sampling for Variance Reduction in Stochastic Variational Inference. Online: https://arxiv.org/pdf/1810.02555.pdf

[4] Kruskal, W. (1946). Helmert’s Distribution. The American Mathematical Monthly, 53(8), 435–438. Taylor & Francis, Ltd.


5. The other chi-squared distribution: the Semi-Normal distribution

According to Haight (1958), the “Semi-Normal distribution” is another name for the Helmert distribution [1], which is itself another name for the chi-square distribution. The name “semi-normal” was coined by Steffensen [2] in a 1937 article titled “On the semi-normal distribution,” which appeared in volume 20 of Skandinavisk Aktuarietidskrift.

Figure: Haight’s entry for the Semi-normal distribution.

Steffensen’s work found the distribution of the quotient of semi-normally distributed random variables; he suggested the use of the distribution of a multiple of a chi random variable (i.e., cχᵥ), where v is “sufficiently large” [3].

Steffensen’s work on the semi-normal distribution was recognized in the Journal of the Royal Statistical Society’s Recent Advances in Mathematical Statistics [4].

References

[1] Haight, F. (1958). Index to the Distributions of Mathematical Statistics. National Bureau of Standards Report.

[2] Steffensen, J. F. (1937). On the semi-normal distribution. Skandinavisk Aktuarietidskrift, 20: ½, pp. 60-74.

[3] Johnson, Kotz, and Balakrishnan (1994). Continuous Univariate Distributions, Volumes I and II, 2nd Ed. John Wiley and Sons.

[4] Hartley, H. O. (1939). Recent Advances in Mathematical Statistics. Journal of the Royal Statistical Society, 102(3), 406–444. https://doi.org/10.2307/2980066


6. History of chi-square

The chi-square test was first developed by Karl Pearson in the early 20th century. Pearson, a British statistician, is considered one of the founders of modern statistics. He developed the chi-square test to assess the goodness of fit of an observed frequency distribution to a theoretical distribution; for example, whether the results of a survey fit a hypothesized distribution.

Pearson also developed the chi-square test for independence. This test is used to test whether two categorical variables are related to each other. For example, he could use the chi-square test for independence to test whether there is a relationship between gender and political party affiliation.

The chi-square test is a widely used statistical test. It is used in a variety of fields, including business, economics, medicine, and social science.

Some of the key events in the history of the chi-square test:

  • 1900: Karl Pearson publishes the paper introducing the chi-square goodness of fit test.
  • 1914: Pearson publishes a book covering the chi-square test.
  • 1922: R.A. Fisher corrects the degrees of freedom used in Pearson’s test, including its application to contingency tables (the test for independence).
  • 1925: Fisher’s Statistical Methods for Research Workers helps popularize the test.
  • 1950s: The test becomes widely used in statistical analysis.
  • 1970s: The test is extended to handle multiple variables.
  • 1980s: The test is extended to handle non-independence.
  • 1990s: The test is extended to handle continuous data.
  • 2000s: The test is extended to handle multivariate data.

The chi-square test is a powerful statistical tool that can be used to analyze a wide variety of data. It is a valuable tool for anyone who works with data.

