Dear Signers of the Open Letter,

There is an “open letter” going around that is critical of my coauthored study on non-citizen voting.  If the open letter were accurate, I would sign it too.  And so I can well imagine why a number of colleagues around the country did sign.  But I am not signing it, because it contains several critical distortions and mistakes.  Below I quote from the letter and then explain its errors.

“We are professional political scientists. We write to clarify the evidence in our field for President Trump’s claim, which he has repeated several times, that millions of non-citizens voted in the 2016 general election.”
This part I completely agree with.  I have been trying to clarify the evidence for months as well.  My study DOES NOT support Trump’s claim that millions of non-citizens voted in the 2016 election.  My initial response was here, and I have also pushed back against subsequent attempts to use that response to make somewhat more modest, but still not data-based, claims.
“The president has cited a 2014 article by Jesse Richman, Gulshan Chattha, and David Earnest, published in the peer-reviewed journal Electoral Studies, as evidence for this claim.  In that study, Richman and his colleagues used data from the 2008 and 2010 iterations of the Cooperative Congressional Election Study (CCES), a large-scale, regular survey that contained more than 30,000 and 55,000 respondents, respectively. The researchers leveraged questions about respondents’ citizenship status and voting to argue that “between 7.9% and 14.7% of non-citizens voted in 2008.””
This quote is taken out of context.  Our full confidence interval concerning non-citizen voting was bounded at the lower end by 0.2 percent.  Quoting only the upper portion gives a very misleading impression of our conclusions and of the precision we claimed for them: it makes the range of our estimates appear much higher, and much more precise, than it was.  The point that needs to be made is that some in the Trump team have taken to reporting the high end of the confidence interval for the highest and least certain estimate in the paper as if it were a point estimate.
“Given the non-citizen population of about 19.4 million, the authors concluded, “the number of non-citizen voters. . . could range from just over 38,000 at the very minimum to nearly 2.8 million at the maximum.” The higher bound in this statement is the one that appears to have shaped the president’s rhetoric on the issue.”
I agree with the authors of the letter that the upper end of this interval may have played an unfortunate role in the president’s rhetoric.  I have, as noted above, attempted to push back against this.  I will continue to do so, as I think it is important that people not be fooled by an extreme upper-end estimate that is almost certainly far too high.  The high figure is based upon the confidence interval around an estimate of voting that counts every self-reported non-citizen with some indication of having voted, including those who said they voted but had a validated non-vote and those who cast a validated vote but said they did not vote.  This estimate is itself almost surely too high.  And the upper end of a 95 percent confidence interval is, by construction, a point so high that there is a 97.5 percent chance that the true value is lower.
“The analysis in this paper has been shown to be incorrect. In a survey as large as the CCES, even a small rate of response error (where people incorrectly mark the wrong item on a survey) can lead to incorrect conclusions. Importantly, the findings in Richman et al. rest on a sample of only 339 respondents who claimed to be non-citizens in 2008, out of about 30,000 CCES respondents. Stephen Ansolabehere, Samantha Luks, and Brian Schaffner demonstrated in a 2015 paper (also published in Electoral Studies) that response error explains nearly all of the supposed non-citizens in Richman et. al’s sample who voted. The underlying intuition is relatively straightforward: Given the dynamics of the CCES (a very large overall sample with a very small subpopulation of non-citizens), even with a very low error rate of 0.1%, we would expect roughly 10% of the people in the “non-citizen” category would actually be citizens. If those people then voted at a high rate, it would appear as if a low (but consequential) percentage of non-citizens were voting, which is precisely the result Richman et al. describe.
“Indeed, Ansolabehere and colleagues leverage a key feature of the CCES to investigate this possibility. When they examined the responses from people who were asked the citizenship question at two different points in time, they found inconsistencies. The citizenship status of 56 respondents changed in two years, and 20 people reported moving from citizen to non-citizen status (which is not even a plausible change). Of those who we can be more confident are non-citizens, there was not a single voter in 2010. In 2012, there is just one person who may have voted, but even in that case the evidence suggests that the respondent was actually not a voter.”
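Before turning to the problems with these tests, it is worth noting that the letter’s back-of-the-envelope arithmetic is easy to reproduce.  Here is a minimal sketch in Python, using only the figures the letter itself quotes; the 0.1% error rate is the letter’s hypothetical, not an estimate from the data:

```python
# Rough sketch of the letter's intuition, using the figures it quotes:
# ~30,000 CCES respondents, 339 self-reported non-citizens, and a
# hypothetical 0.1% rate of citizens mismarking the citizenship item.
total_respondents = 30_000
self_reported_noncitizens = 339
error_rate = 0.001  # the letter's hypothetical response-error rate

citizens = total_respondents - self_reported_noncitizens
misclassified = citizens * error_rate  # citizens landing in the "non-citizen" group
share = misclassified / self_reported_noncitizens
print(f"~{misclassified:.0f} misclassified citizens, "
      f"roughly {share:.0%} of the 'non-citizen' sample")
```

The arithmetic itself is not in dispute; the question is whether the tests built on it can actually distinguish response error from genuine non-citizen voting.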
These critical tests lack statistical power.  That is, the likelihood of finding exactly these null results in the sample of 85 individuals who twice reported that they were non-citizens is quite high even if the actual rates of voting were precisely what was observed in the CCES cross-sectional studies analyzed in our 2014 paper.  The distortion noted above, which made our estimates sound much higher than they were, lends this argument a credence and an apparent statistical power that it lacks.
The primary analysis in the Ansolabehere et al. 2015 paper concerns validated voting in the 2010 midterm election.  All of the evidence points to low turnout by non-citizens in 2010.  In the 2010 midterm CCES cross-sectional file, 7 non-citizens cast validated votes.  Excluding Virginia (where no record checks were possible because of state law), this implies that about 1.3 percent (more precisely, 1.3233 percent) of non-citizens cast verified or validated votes in 2010.  Ansolabehere et al. examine validated voting in 2010 by the 85 non-citizens who twice confirmed their status as non-citizens.  A simple exercise with the binomial distribution shows that with 85 trials and a probability of success on each trial of 1.3233 percent, the probability of finding no successes is 32.2 percent.  The observed outcome is thus entirely plausible given the frequency with which non-citizens cast verified votes in 2010: it would occur nearly one third of the time if the actual non-citizen voting rate were precisely what the larger 2010 cross-sectional survey identified.  Indeed, if one supposes that this estimate was a bit too high, and that the true value was instead the 0.65 percent that Richman et al. estimated would have been sufficient to account for Franken’s 2008 Minnesota win, one would get a null result from 85 trials more than half (57.4 percent) of the time.

Moreover, in a footnote of their paper the authors dismiss as likely invalid the one validated voter in the 85-person 2012 sub-sample of respondents who twice confirmed their non-citizen status, on the grounds that the individual said she was not registered to vote.  This is contrary to the rest of their analysis, which relies exclusively on the validated voting measure.  If we apply that strict standard, counting an individual as a potential voter only if they both said they voted and cast a validated vote, then the non-citizen voting rate indicated by the 2010 CCES cross-sectional survey is only 0.38 percent.  If, as seems reasonable, one assumes that the rate of non-citizen voting in the 2010 CCES panel study was similar, then there is a 72.4 percent probability (based upon the binomial distribution) that one would get a null result from the 85-person sample in the panel study.  That is, if the cross-sectional analysis were entirely unbiased by the issue Ansolabehere et al. (2015) raise, one would get precisely the result they obtain nearly three quarters of the time.
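These power calculations are easy to verify.  A minimal sketch, assuming only the figures given above (85 respondents who twice confirmed non-citizen status, and the three candidate non-citizen voting rates):

```python
# Probability of observing zero non-citizen voters among n = 85 respondents,
# for each candidate voting rate discussed above (binomial, zero successes).
n = 85
rates = {
    "1.3233% (2010 cross-sectional validated-vote rate)": 0.013233,
    "0.65% (rate sufficient for Franken's 2008 Minnesota margin)": 0.0065,
    "0.38% (strict said-voted-and-validated-vote rate)": 0.0038,
}
for label, p in rates.items():
    p_null = (1 - p) ** n  # P(zero voters in n independent trials)
    print(f"{label}: P(no voters among {n}) = {p_null:.1%}")
```

Running this reproduces the 32.2, 57.4, and 72.4 percent figures above.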
If the key analytical results of the Ansolabehere et al. (2015) paper could have occurred with high probability even if its authors’ claim concerning response bias were completely wrong, then additional analysis is needed to probe the issue further.  My colleagues and I probe these issues extensively here.  To sum up our key points: the first section shows that their tests lacked statistical power.  The second section presents evidence that the citizenship status variable in the CCES is more accurate than Ansolabehere et al. (2015) claim it is, with much of the error accounted for by intentional or unintentional errors made by non-citizens who claim to be citizens; we also show that numerous hypotheses that follow from the claim that apparent non-citizen voters are in fact citizens fail.  The third section sets aside the evidence from the first two sections and assumes that Ansolabehere et al. (2015) were in fact correct about response error.  We show that even if their response-error argument is correct, there is still significant evidence of non-citizen participation in the U.S. electoral system.
 “Thus, we believe that the findings in Richman et. al. are driven by measurement error in the CCES, and do not accurately reflect the rates of non-citizen voting in the United States. We agree with Ansolabehere et al. that “the likely percent of noncitizen voters in recent U.S. elections is 0.””
While my coauthors and I have always acknowledged the possibility that our findings were biased by measurement error, we took a number of steps to investigate that possibility in the original paper.  Our response to Ansolabehere et al. has gone even further, investigating multiple additional lines of evidence that weigh against their claim that the likely percent of non-citizen voters in recent U.S. elections is 0.  Our view is that our original analysis was largely valid.  But we have encouraged people to read Ansolabehere et al. as well as our original paper.
“The scholarly political science community has generally rejected the findings in the Richman et al. study and we believe it should not be cited or used in any debate over fraudulent voting.”
In our communications about our study we have consistently sought to mention its critics as well.  We hope that, upon a careful and thoughtful weighing of the evidence, readers will reach their own conclusions.  We have said things like “We stand by our study, but we encourage people to read the critiques too.”  Ultimately I believe that the debate over fraudulent voting can best advance through a thoughtful exchange of views rather than through attempts to discourage citation or consideration of any study.  An attempt to excommunicate our study from public debate, on the basis of a single under-powered study that would have found precisely what it found somewhere between nearly one third and nearly three quarters of the time even if we were completely right, is premature.  I share your concern with the way the Richman et al. 2014 study has been abused by those who would like to cherry-pick upper-end estimates, but there are more measured and accurate responses available than this one.