Inference for Categorical Data


In August of 2012, news outlets ranging from the Washington Post to the Huffington Post ran a story about the rise of atheism in America. The source for the story was a poll that asked people, "Irrespective of whether you attend a place of worship or not, would you say you are a religious person, not a religious person or a convinced atheist?" This type of question, which asks people to classify themselves in one way or another, is common in polling and generates categorical data. In this lab we take a look at the atheism survey and explore what’s at play when making inference about population proportions using categorical data.

The survey

The press release for the poll, conducted by WIN-Gallup International, is available under the title Global Index of Religion and Atheism.

Take a moment to review the report then address the following questions.

Exercise 1

In the first paragraph, several key findings are reported. Do these percentages appear to be sample statistics (derived from the data sample) or population parameters?

Exercise 2

The title of the report is "Global Index of Religiosity and Atheism". To generalize the report's findings to the global human population, what must we assume about the sampling method? Does that seem like a reasonable assumption?

The data

Turn your attention to Table 6 (pages 14 and 15), which reports the sample size and response percentages for all 57 countries. While this is a useful format to summarize the data, we will base our analysis on the original data set of individual responses to the survey.

In [1]:
# for Mac OS users only!
# if you run into any SSL certification issues, 
# you may need to run the following command for a Mac OS installation.
# $/Applications/Python 3.x/Install Certificates.command
# if this does not fix the issue, run this command instead.
import os, ssl
if (not os.environ.get('PYTHONHTTPSVERIFY', '') and
    getattr(ssl, '_create_unverified_context', None)): 
    ssl._create_default_https_context = ssl._create_unverified_context
    
import pandas as pd
import io
import requests
# read a csv file hosted on GitHub
df_url = 'https://raw.githubusercontent.com/vaksakalli/stats_tutorials/master/atheism.csv'
url_content = requests.get(df_url).content
# print(url_content)
atheism = pd.read_csv(io.StringIO(url_content.decode('utf-8')))

Exercise 3

What does each row of Table 6 correspond to? What does each row of atheism correspond to?

To investigate the link between these two ways of organizing this data, take a look at the estimated proportion of atheists in the United States. Towards the bottom of Table 6, we see that this is 5%. We should be able to come to the same number using the atheism data.

Exercise 4

Using the command below, create a new dataframe called us12 that contains only the rows in atheism associated with respondents to the 2012 survey from the United States. Next, calculate the proportion of atheist responses. Does it agree with the percentage in Table 6? If not, why?
In [2]:
us12 = atheism[(atheism.nationality == 'United States') & (atheism.year == 2012)]
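One way to get the requested proportion is pandas' value_counts with normalize=True. The snippet below is a minimal sketch using a toy frame in place of us12; the response labels ('atheist' / 'non-atheist') are assumed to match the downloaded data.

```python
import pandas as pd

# toy stand-in for us12; labels assumed to be 'atheist' / 'non-atheist'
toy = pd.DataFrame({'response': ['atheist', 'non-atheist',
                                 'non-atheist', 'non-atheist']})

# normalize=True converts counts into proportions
props = toy.response.value_counts(normalize=True)
print(props['atheist'])  # 1 atheist out of 4 -> 0.25
```

Applying the same call to us12.response gives the sample proportion to compare against Table 6.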

Inference on proportions

As was hinted at in Exercise 1, Table 6 provides statistics, that is, calculations made from the sample of 51,927 people. What we'd like, though, is insight into the population parameters. The question "What proportion of people in your sample reported being atheists?" is answered with a statistic, while the question "What proportion of people on earth would report being atheists?" is answered with an estimate of the parameter.

The inferential tools for estimating population proportion are analogous to those used for means in the last chapter: the confidence interval and the hypothesis test.

Exercise 5

Write out the conditions for inference to construct a 95% confidence interval for the proportion of atheists in the United States in 2012. Are you confident all conditions are met?

If the conditions for inference are reasonable, we can either calculate the standard error and construct the interval by hand, or we can use scipy.stats, a Python library of useful statistical functions offered by SciPy.

In [3]:
import numpy as np
from scipy.stats import norm

conf_lvl = 0.95
z_value = norm.ppf((1-(1-conf_lvl)/2))  # the Z value for the specified confidence level
prbs = us12.response.value_counts(normalize = True)  # the probabilities for the response of atheist and non-atheist
se = np.sqrt(prbs.prod()/len(us12))  # the standard error

# construct a 95% confidence interval for the proportion of atheists in the United States in 2012
ci1 = prbs['atheist'] - z_value * se
ci2 = prbs['atheist'] + z_value * se

# print the results
print(f"Number of successes (atheist) = {len(us12)*prbs['atheist']}")
print(f"Number of failures (non-atheist) = {len(us12)*(1 - prbs['atheist'])}")
print(f'Standard error = {se}')
print(f'{int(conf_lvl*100)}% confidence interval = {ci1, ci2}')
Number of successes (atheist) = 50.0
Number of failures (non-atheist) = 952.0
Standard error = 0.006878629122390021
95% confidence interval = (0.0364183342579056, 0.0633820649436912)
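As a sanity check, the same interval can be rebuilt directly from the success and failure counts printed above, without the data frame:

```python
import numpy as np
from scipy.stats import norm

successes, n = 50, 1002           # counts printed above
p_hat = successes / n             # sample proportion of atheists
se = np.sqrt(p_hat * (1 - p_hat) / n)
z = norm.ppf(0.975)               # about 1.96 for 95% confidence
ci = (p_hat - z * se, p_hat + z * se)
print(f'SE = {se}, CI = {ci}')
```

The result matches the interval computed from the prbs series, since both use the same counts.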

Although formal confidence intervals and hypothesis tests don't show up in the report, suggestions of inference appear at the bottom of page 6: "In general, the error margin for surveys of this kind is ± 3-5% at 95% confidence".

Exercise 6

Based on the output, what is the margin of error for the estimate of the proportion of atheists in the US in 2012?

Exercise 7

Calculate confidence intervals for the proportion of atheists in 2012 in two other countries of your choice, and report the associated margins of error. Be sure to note whether the conditions for inference are met. It may be helpful to create new data sets for each of the two countries first, and then use these data sets to construct the confidence intervals.

How does the proportion affect the margin of error?

Imagine you've set out to survey 1000 people on two questions: are you female? and are you left-handed? Since both of these sample proportions were calculated from the same sample size, they should have the same margin of error, right? Wrong! While the margin of error does change with sample size, it is also affected by the proportion.

Think back to the formula for the standard error: ${SE}$ = $\sqrt{p(1−p)/n}$. This is then used in the formula for the margin of error for a 95% confidence interval: ${ME}$ = ${1.96}$ x ${SE}$ = ${1.96}$ x $\sqrt{p(1−p)/n}$. Since the population proportion ${p}$ is in this ME formula, it should make sense that the margin of error is in some way dependent on the population proportion. We can visualize this relationship by creating a plot of ${ME}$ vs. ${p}$.
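For instance, holding ${n = 1000}$ fixed, the margin of error is largest at ${p = 0.5}$ and shrinks as ${p}$ moves toward 0 or 1:

```python
import numpy as np

n = 1000
# margin of error at a few illustrative proportions, same sample size
mes = {p: 1.96 * np.sqrt(p * (1 - p) / n) for p in (0.5, 0.1, 0.02)}
for p, me in mes.items():
    print(f'p = {p}: ME = {me:.4f}')
```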

The first step is to define an array p that is a sequence from 0 to 1 with each number separated by 0.01. We can then create an array of the margin of error (me) associated with each of these values of p using the familiar approximate formula (${ME}$ = ${2}$×${SE}$). Lastly, we plot the two arrays against each other to reveal their relationship.

In [4]:
import numpy as np
n = 1000
p = np.arange(0, 1, 0.01)
me = 2 * np.sqrt(p * (1 - p)/n)

import matplotlib.pyplot as plt
%matplotlib inline 
%config InlineBackend.figure_format = 'retina'
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (10,5)
plt.plot(p, me)
plt.xlabel('Population Proportion')
plt.ylabel('Margin of Error')
plt.show();
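The curve peaks at ${p = 0.5}$, which we can confirm numerically on the same grid:

```python
import numpy as np

n = 1000
p_grid = np.arange(0, 1, 0.01)
me_grid = 2 * np.sqrt(p_grid * (1 - p_grid) / n)

# the maximum margin of error occurs at p = 0.5
print(p_grid[np.argmax(me_grid)], me_grid.max())
```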

Success-failure condition

The textbook emphasizes that you must always check conditions before making inference. For inference on proportions, the sample proportion can be assumed to be nearly normal if it is based upon a random sample of independent observations and if both ${np}$ ≥ ${10}$ and ${n}$(${1 - p}$) ≥ ${10}$. This rule of thumb is easy enough to follow, but it makes one wonder: what’s so special about the number 10?

The short answer is: nothing. You could argue that we would be fine with 9 or that we really should be using 11. The "best" value for such a rule of thumb is, at least to some degree, arbitrary. However, once ${np}$ and ${n}$(${1 - p}$) reach 10, the sampling distribution is sufficiently normal to use confidence intervals and hypothesis tests that are based on that approximation.
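The rule of thumb is easy to encode; a small helper (the function name here is ours, not from the lab) makes the check explicit:

```python
def success_failure_ok(n, p, threshold=10):
    """Check the success-failure rule of thumb: n*p and n*(1-p) both >= threshold."""
    return n * p >= threshold and n * (1 - p) >= threshold

print(success_failure_ok(1040, 0.1))   # 104 expected successes, 936 failures
print(success_failure_ok(400, 0.02))   # only 8 expected successes
```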

We can investigate the interplay between ${n}$ and ${p}$ and the shape of the sampling distribution by using simulations. To start off, we simulate the process of drawing 5000 samples of size 1040 from a population with a true atheist proportion of 0.1. For each of the 5000 samples we compute ${\hat{p}}$ and then plot a histogram to visualize their distribution.

In [5]:
p = 0.1
n = 1040
p_hats = np.zeros(5000)

for i in range(5000):
    samp = np.random.choice(['atheist', 'non_atheist'], size = n, replace = True, p = [p, 1-p])
    p_hats[i] = sum(samp == 'atheist')/n

plt.hist(p_hats)
plt.xlabel('p_hats')
plt.ylabel('Frequency')
plt.title('p = 0.1, n = 1040')
plt.show();

These commands build up the sampling distribution of ${\hat{p}}$ using the familiar for loop. You can read the sampling procedure for the first line of code inside the for loop as, "take a sample of size ${n}$ with replacement from the choices of atheist and non-atheist with probabilities ${p}$ and ${1 - p}$ , respectively." The second line in the loop says, "calculate the proportion of atheists in this sample and record this value." The loop allows us to repeat this process 5,000 times to build a good representation of the sampling distribution.
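The loop can also be replaced by a single vectorized draw: each binomial sample counts the 'atheist' responses out of ${n}$ in one step, which is a loop-free equivalent of the procedure above (the seed is an arbitrary choice).

```python
import numpy as np

rng = np.random.default_rng(42)
p, n = 0.1, 1040

# one binomial draw per sample: the count of 'atheist' responses out of n
p_hats = rng.binomial(n, p, size=5000) / n
print(p_hats[:5])
```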

Exercise 9

Describe the sampling distribution of sample proportions at ${n = 1040}$ and ${p = 0.1}$. Be sure to note the center, spread, and shape.
Hint: Remember that Python has functions such as mean to calculate summary statistics.
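As a starting point, the center and spread of the simulated ${\hat{p}}$ values can be compared against ${p}$ and the theoretical standard error $\sqrt{p(1−p)/n}$ (here the samples are drawn with a binomial shortcut rather than the loop; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.1, 1040

# 5000 simulated sample proportions
p_hats = rng.binomial(n, p, size=5000) / n

theoretical_se = np.sqrt(p * (1 - p) / n)
print(f'center: {p_hats.mean():.4f} vs p = {p}')
print(f'spread: {p_hats.std():.4f} vs SE = {theoretical_se:.4f}')
```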

Exercise 10

Repeat the above simulation three more times but with modified sample sizes and proportions: for ${n = 400}$ and ${p = 0.1}$, ${n = 1040}$ and ${p = 0.02}$, and ${n = 400}$ and ${p = 0.02}$. Plot all four histograms together using subplot. Describe the three new sampling distributions. Based on these limited plots, how does ${n}$ appear to affect the distribution of ${\hat{p}}$? How does ${p}$ affect the sampling distribution?
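A possible skeleton for the four-panel figure (the seed, bin count, and panel order are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
settings = [(1040, 0.1), (400, 0.1), (1040, 0.02), (400, 0.02)]

fig, axes = plt.subplots(2, 2, figsize=(10, 8))
for ax, (n, p) in zip(axes.ravel(), settings):
    # simulate 5000 sample proportions for this (n, p) pair
    p_hats = rng.binomial(n, p, size=5000) / n
    ax.hist(p_hats, bins=20)
    ax.set_title(f'p = {p}, n = {n}')
plt.tight_layout()
plt.show()
```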

Exercise 11

If you refer to Table 6, you'll find that Australia has a sample proportion of 0.1 on a sample size of 1040, and that Ecuador has a sample proportion of 0.02 on 400 subjects. Let's suppose for this exercise that these point estimates are actually the truth. Then, given the shape of their respective sampling distributions, do you think it is sensible to proceed with inference and report margins of error, as the report does?

On Your Own

The question of atheism was asked by WIN-Gallup International in a similar survey that was conducted in 2005. (We assume here that sample sizes have remained the same.) Table 4 on page 12 of the report summarizes survey results from 2005 and 2012 for 39 countries.

  1. Answer the following two questions. As always, write out the hypotheses for any tests you conduct and outline the status of the conditions for inference.

     a. Is there convincing evidence that Spain has seen a change in its atheism index between 2005 and 2012?
        Hint: Create a new data set for respondents from Spain. Form confidence intervals for the true proportion of atheists in both years, and determine whether they overlap.

     b. Is there convincing evidence that the United States has seen a change in its atheism index between 2005 and 2012?

  2. If in fact there has been no change in the atheism index in the countries listed in Table 4, in how many of those countries would you expect to detect a change (at a significance level of 0.05) simply by chance?
     Hint: Look in the textbook index under Type 1 error.

  3. Suppose you're hired by the local government to estimate the proportion of residents that attend a religious service on a weekly basis. According to the guidelines, the estimate must have a margin of error no greater than 1% with 95% confidence. You have no idea what to expect for ${p}$. How many people would you have to sample to ensure that you are within the guidelines?
     Hint: Refer to your plot of the relationship between ${p}$ and margin of error. Do not use the data set to answer this question.

This lab was adapted by Vural Aksakalli and Imran Ture from OpenIntro by Andrew Bray and Mine Çetinkaya-Rundel.

    www.featureranking.com