Non-Response Stress Test
Explore how selective non-response can shape perceived public opinion, using dynamic simulations with real historical data.
How to Use This Tool
Test how selective non-response could have changed the survey’s headline numbers
1. Select an Outcome
Choose which survey question to stress-test: Presidential Approval, Vote Intention (JFK vs. Goldwater), or Tax Cut Support. The base rate shown for each reflects the raw unweighted proportion from the November 1963 Harris–Newsweek survey — this is the headline figure you’re asking “could nonresponse have changed?”
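The "raw unweighted proportion" is simply the count of respondents giving the headline answer divided by the sample size. A minimal sketch with illustrative counts (these are not the actual 1963 Harris–Newsweek tallies):

```python
# Hypothetical counts -- NOT the actual 1963 Harris-Newsweek tallies.
approve = 780
n = 1200

# Raw unweighted proportion: approvals divided by total respondents.
base_rate = approve / n
print(f"Base rate: {base_rate:.1%}")  # Base rate: 65.0%
```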
2. Set the Sliders
Two sliders define your nonresponse scenario. Number of Respondents to Replace sets how many actual respondents you assume were non-responders who answered differently. Approval Among Replacements sets what those hypothetical non-responders would have said. A live preview updates as you drag so you can see the projected result before running.
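The live preview is straightforward arithmetic. A hedged sketch, assuming the replaced respondents approved at the original rate and the hypothetical non-responders approve at the slider rate (the function name and inputs are illustrative, not the tool's internals):

```python
def projected_rate(p, n, k, q):
    """Projected headline rate after replacing k of n respondents.

    Assumes the k removed respondents approved at the original rate p,
    and their hypothetical non-responder replacements approve at rate q.
    """
    return ((n - k) * p + k * q) / n

# Illustrative values, not the tool's actual defaults:
print(f"{projected_rate(p=0.65, n=1200, k=200, q=0.30):.1%}")  # 59.2%
```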
3. Run the Simulation
Click Run Simulation to compute the adjusted estimate. The simulation replaces the specified number of respondents with your hypothetical non-responders and recalculates the headline rate. A Monte Carlo uncertainty band is also generated — 10,000 draws from a binomial model — showing the range of plausible outcomes under your scenario.
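The uncertainty band can be sketched as repeated binomial draws at the adjusted rate, keeping the central 95% of simulated outcomes. A minimal version under those assumptions (sample size, rate, and seed are illustrative):

```python
import numpy as np

def uncertainty_band(p_adj, n, draws=10_000, seed=0):
    """Monte Carlo band: simulate `draws` surveys of size n, each
    respondent approving with probability p_adj, then take the
    2.5th and 97.5th percentiles of the simulated rates."""
    rng = np.random.default_rng(seed)
    rates = rng.binomial(n, p_adj, size=draws) / n
    return np.percentile(rates, [2.5, 97.5])

lo, hi = uncertainty_band(p_adj=0.59, n=1200)
print(f"95% band: {lo:.1%} to {hi:.1%}")
```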
4. Read the Threat Assessment
The Non-Response Threat Assessment compares the shift caused by your scenario to the poll’s reported margin of error — the yardstick the public sees in headlines. It classifies the threat level and flags the most serious outcome: a majority flip, where the headline conclusion reverses entirely. The detailed analysis section breaks down effect size and direction.
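The comparison logic can be sketched as follows, approximating the margin of error with a 95% normal interval; the threshold cutoffs here are illustrative, not the tool's exact classification rules:

```python
import math

def threat_assessment(base, adjusted, n):
    """Compare the scenario's shift to an approximate 95% margin of
    error and flag a majority flip. Cutoffs are illustrative."""
    moe = 1.96 * math.sqrt(base * (1 - base) / n)
    shift = abs(adjusted - base)
    flipped = (base > 0.5) != (adjusted > 0.5)
    if flipped:
        level = "critical (majority flip)"
    elif shift > moe:
        level = "high (shift exceeds margin of error)"
    elif shift > moe / 2:
        level = "moderate"
    else:
        level = "low"
    return level, shift, moe

level, shift, moe = threat_assessment(base=0.65, adjusted=0.59, n=1200)
print(level)  # high (shift exceeds margin of error)
```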
5. Save and Compare Scenarios
Each run is automatically saved. Expand the Pinned Scenarios drawer at the bottom of the screen to review and compare previous runs side by side. This makes it easy to build up a set of scenarios — for example, testing mild, moderate, and extreme nonresponse assumptions — and compare their threat levels without re-entering settings manually.
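Conceptually, each pinned scenario is a record of slider settings and results. A hypothetical structure for side-by-side comparison (the tool stores its own equivalents automatically; these field names and numbers are illustrative):

```python
# Hypothetical pinned-scenario records -- field names are illustrative.
scenarios = [
    {"name": "mild",     "replaced": 50,  "repl_rate": 0.45, "adjusted": 0.642},
    {"name": "moderate", "replaced": 150, "repl_rate": 0.35, "adjusted": 0.613},
    {"name": "extreme",  "replaced": 300, "repl_rate": 0.20, "adjusted": 0.538},
]

for s in scenarios:
    print(f"{s['name']:<9} replace {s['replaced']:>3} @ {s['repl_rate']:.0%} "
          f"-> adjusted {s['adjusted']:.1%}")
```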
6. Reset and Iterate
Click Reset to return the sliders to their defaults and clear the current results. A useful pattern: anchor one slider (e.g., fix the replacement approval rate at 30%) and sweep the other through its range over multiple runs to map out where the threat level changes. The Model Builder lets you test whether model-based predictors are robust to similar non-response assumptions.
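The anchor-and-sweep pattern can be sketched offline to find where the headline flips, again assuming replaced respondents approved at the original rate (base rate and sample size are illustrative):

```python
# Sweep the number of replaced respondents while holding replacement
# approval fixed at 30% -- illustrative base rate and sample size.
base, n, q = 0.65, 1200, 0.30

for k in range(0, 601, 100):
    adjusted = ((n - k) * base + k * q) / n
    flipped = adjusted <= 0.5
    print(f"k={k:>3}  adjusted={adjusted:.1%}  majority flip: {flipped}")
```

Scanning the printed column shows the first setting at which the majority conclusion reverses under this scenario.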