Response Changes Analysis: Original vs. Modified Frequencies
Response Changes Analysis: visualizes how response frequencies change when key factors are modified. Red bars show modified levels, green bars show unchanged values.
Test scenarios where all respondents hold the same view
Three ways to explore what drove presidential approval in November 1963
Use the dropdowns to reassign every respondent to a single response category for one or more variables — for example, shift everyone to “Poor” on Khrushchev handling. Each variable shows its model leverage (the approval gap between its best and worst categories) so you can see at a glance which factors are worth exploring.
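As a rough sketch of what reassigning every respondent to one category means computationally (the DataFrame column name, the response levels, and the reassign helper below are illustrative assumptions, not the tool's actual internals):

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical microdata: one row per respondent (column name and levels are illustrative).
df = pd.DataFrame({
    "khrushchev_handling": rng.choice(
        ["Excellent", "Good", "Fair", "Poor"], size=1_000,
        p=[0.15, 0.40, 0.36, 0.09]),
})

def reassign(data, variable, level):
    """Return a copy of the microdata with every respondent moved to one level."""
    scenario = data.copy()
    scenario[variable] = level
    return scenario

# The "shift everyone to Poor on Khrushchev handling" scenario described above.
everyone_poor = reassign(df, "khrushchev_handling", "Poor")
```

The leverage figure shown next to each variable is then just the gap between the highest and lowest predicted approval across that variable's levels, as in the sensitivity sketch further below.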
Click Simulate Approval to run your scenario through the published logistic regression model. The result shows the projected aggregate approval rate and a Monte Carlo uncertainty band — the overlap between the baseline and scenario distributions tells you how distinguishable the shift really is from sampling noise.
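A minimal sketch of how such a Monte Carlo comparison could work, assuming the scenario is encoded as a modified design matrix and a coefficient covariance matrix is available (every value, shape, and coefficient below is made up for illustration, not taken from the published model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design matrices: X_base is the observed data, X_scen has the
# scenario applied (here, one indicator switched on for every respondent).
n, k = 1_000, 5
X_base = np.c_[np.ones(n), rng.integers(0, 2, size=(n, k - 1))]
X_scen = X_base.copy()
X_scen[:, 1] = 1  # illustrative scenario: force one response indicator on for everyone

beta_hat = np.array([0.3, -0.8, 0.2, 0.4, -0.1])  # assumed point estimates
cov_hat = np.eye(k) * 0.01                        # assumed coefficient covariance

def pop_avg_approval(X, beta):
    """Population-average predicted approval: mean of per-respondent sigmoids."""
    return (1 / (1 + np.exp(-X @ beta))).mean()

# Monte Carlo: draw coefficients from their sampling distribution and
# recompute baseline and scenario approval for each draw.
draws = rng.multivariate_normal(beta_hat, cov_hat, size=2_000)
base = np.array([pop_avg_approval(X_base, b) for b in draws])
scen = np.array([pop_avg_approval(X_scen, b) for b in draws])

print(f"baseline {base.mean():.3f}, scenario {scen.mean():.3f}")
print(f"share of draws where the scenario exceeds the baseline: {(scen > base).mean():.2f}")
```

The wider the spread of each distribution relative to the gap between their means, the harder it is to tell the scenario apart from sampling noise.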
Click Next-Step Analysis after setting your dropdowns to see a full per-variable sensitivity breakdown. For each factor, a chart and table show the predicted approval at every response level — holding all other variables at their current settings. This reveals the shape of each variable’s effect, not just the aggregate result.
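A toy version of that per-variable sweep might look like the following (the column names, level effects, and coefficients are invented for illustration; the real tool uses the published regression):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical microdata and a toy logistic model over one categorical factor.
levels = ["Excellent", "Good", "Fair", "Poor"]
level_effect = {"Excellent": 1.2, "Good": 0.6, "Fair": 0.0, "Poor": -0.9}
df = pd.DataFrame({
    "khrushchev_handling": rng.choice(levels, size=1_000, p=[0.15, 0.40, 0.36, 0.09]),
    "other_score": rng.normal(size=1_000),
})

def predicted_approval(data):
    """Population-average approval under the toy logistic model."""
    logit = 0.1 + data["khrushchev_handling"].map(level_effect) + 0.3 * data["other_score"]
    return float((1 / (1 + np.exp(-logit))).mean())

# Sensitivity sweep: move everyone to each level in turn, leaving all other
# variables at their current settings.
sweep = {level: predicted_approval(df.assign(khrushchev_handling=level)) for level in levels}

print(sweep)
print("leverage (best minus worst):", max(sweep.values()) - min(sweep.values()))
```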
The dropdowns show the actual November 1963 distribution in parentheses (e.g., “Poor (9%)”). Use these as your baseline — then consider what the distributions might have looked like under different conditions. The Why Approval Shifted section explains what each change means in 1963 political terms.
Set multiple dropdowns at once before clicking Simulate to test compound scenarios — for example, what happens when both Khrushchev and Vietnam ratings shift simultaneously. The model accounts for all variables jointly, so combined scenarios can reveal effects that single-variable tests miss.
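In code terms, a compound scenario is simply more than one column overwritten before the same prediction step runs (the column names below are illustrative):

```python
import pandas as pd

# Hypothetical microdata with two of the survey items.
df = pd.DataFrame({"khrushchev_handling": ["Good", "Fair", "Poor"],
                   "vietnam_handling":    ["Fair", "Good", "Fair"]})

# Overwrite both factors at once; the joint model then evaluates the
# combination rather than each shift in isolation.
compound = df.assign(khrushchev_handling="Poor", vietnam_handling="Poor")
print(compound)
```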
Click Reset to return all dropdowns to “No change” and the approval gauge to its baseline of 56.9% — the actual November 1963 estimate from the published model. The Fine-Tuning Simulator (next page) lets you shift response distributions proportionally rather than moving everyone to a single category.
Adjust these key factors from November 1963 to see how they might have affected presidential approval.
Get the original Harris/Newsweek Questionnaire (PDF)
Contextual analysis based on your simulation
Predicted Probability
The predicted approval rate (-) is the population-average output of a logistic regression model fit to the November 1963 Harris–Newsweek survey. Logistic regression estimates the probability of a binary outcome — here, whether a respondent approves or does not approve — as a function of predictor variables. The predicted probability shown is the weighted average across all respondents when the selected scenario is applied to the actual microdata, holding all other variables at their observed values.
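In symbols, the population-average quantity described above could be written as (notation assumed here rather than taken from the published model):

\[
\hat{p}_{\text{scenario}} = \frac{\sum_{i=1}^{N} w_i \,\sigma\!\left(\mathbf{x}_i^{(s)\top}\hat{\boldsymbol\beta}\right)}{\sum_{i=1}^{N} w_i},
\qquad
\sigma(z) = \frac{1}{1 + e^{-z}},
\]

where \(w_i\) is respondent \(i\)'s survey weight, \(\hat{\boldsymbol\beta}\) the fitted logistic coefficients, and \(\mathbf{x}_i^{(s)}\) respondent \(i\)'s predictors with the selected scenario substituted in and every other variable left at its observed value.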
This is a counterfactual estimate: it answers “what would aggregate approval have looked like if the entire electorate held this view?” — not a claim about what actually happened.
Confidence Interval
With a 95% confidence interval of -, intervals constructed this way would contain the true value in roughly 95% of repeated samples. This uncertainty range reflects how much the estimate might vary if the model were applied to a different random sample. A narrower interval indicates greater precision, not greater confidence.
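One common way such an interval can be computed, assuming the same Monte Carlo draws used for the uncertainty band, is a simple percentile interval (the numbers below are placeholders, not the survey's actual values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Monte Carlo draws of the population-average approval rate;
# in the real tool these would come from resampling the model's coefficients.
draws = rng.normal(loc=0.569, scale=0.012, size=2_000)

lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"point estimate ~ {draws.mean():.3f}, 95% interval ~ [{lo:.3f}, {hi:.3f}]")
```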
Change from Baseline
The predicted approval of - compares to the November 1963 baseline of 57%. Adjust the scenario above and run a simulation to see how the shift compares to the model’s uncertainty.
Practical Significance
In practical terms, this would translate to approximately - more people approving than the baseline expectation of 40 million. A change of this size would be minimal in practice.
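For a sense of the arithmetic, using the section's own figures (a 56.9% baseline corresponding to roughly 40 million people) and a hypothetical three-point shift:

```python
baseline_rate = 0.569
baseline_people = 40_000_000
adults = baseline_people / baseline_rate  # ~70 million adults implied by the baseline

scenario_rate = 0.599  # hypothetical +3-point scenario
extra_people = (scenario_rate - baseline_rate) * adults
print(f"{extra_people / 1e6:.1f} million more people")  # roughly 2.1 million
```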
Run a simulation to generate an interpretation.