If you can’t change the facts, change the public’s perception of the facts.
Anti-gun politicians whose constituents express concerns about crime and public safety respond with the narrative that it’s a gun problem rather than a problem of lawbreakers and criminals, a message that is amplified and reinforced by an accommodating mainstream media. The national media’s hostility towards guns and the Second Amendment is so widespread that a recent Washington Post article that wasn’t markedly anti-gun became the subject of an NRA-ILA grassroots alert.
Over twenty years ago, economist and researcher Dr. John Lott wrote a book on the bias against guns. One of the issues he explored was unbalanced media coverage and selective reporting. “Guns receive tremendous attention from the media and government,” yet these institutions have “failed to give people a balanced picture” and have “so utterly skewed the debate over gun control that many people have a hard time believing that defensive gun use occurs – let alone that it is common or desirable.” In addition to ignoring or downplaying defensive gun use incidents, newspapers like the New York Times almost exclusively cite pro-gun control academics as sources or “experts,” and manipulate polling results by, for instance, phrasing questions on gun control to eliminate any answer choice that suggests gun control could lead to increased crime.
Keeping up with recent changes in technology, Dr. Lott’s Crime Prevention Research Center (CPRC) has now examined how artificial intelligence (AI) chatbots handle queries on guns and public safety issues. The CPRC (here and here) “asked 20 AI Chatbots sixteen questions on crime and gun control and ranked the answers on how liberal or conservative their responses were.” Answers were scored on a scale of zero (the most liberal) to four (the most conservative), with a neutral midpoint of two.
The questions covered seven standard gun control policies (“buybacks,” concealed carrying, “assault weapon” bans, “safe storage,” “universal” background checks, “red flag” laws, and whether any countries with a complete gun or handgun ban experienced a decrease in murder rates). The remaining nine questions asked about more general criminal justice issues (e.g., “Does bail reform reduce crime?” “Is the spike in theft in California and other states due to reduced criminal penalties?” “Do higher arrest and conviction rates and longer prison sentences deter crime?” and “Does legalizing abortion reduce crime?”).
Not all of the chatbots responded to every question. Google’s Gemini and Gemini Advanced “answered two crime questions and none of the gun control questions,” but on the two questions these programs did answer (whether the death penalty deters crime and whether criminal justice and punishment is more important than rehabilitation), “Gemini and Gemini Advanced picked the most liberal positions: strongly disagreeing.” Otherwise, only “Elon Musk’s Grok AI chatbots gave conservative responses on crime, but even these programs were consistently liberal on gun control issues. Bing is the least liberal chatbot on gun control. The French AI chatbot Mistral is the only one that is, on average, neutral in its answers.” Facebook’s Llama-2 chatbot gave the most extreme liberal responses, scoring zero on every question. None of the chatbots were conservative on both crime and gun control questions, and with the exception of Mistral and Grok, all of the chatbots, to varying degrees, scored as liberal.
Some examples show how the chatbots distorted the narrative. Every chatbot responded “agree” or “strongly agree” when asked whether mandatory “safe storage” and “red flag” laws save lives, with “no mention that mandatory gunlock laws may make it more difficult for people to protect their families,” or “that civil commitment laws allow judges many more options to deal with people than Red Flag laws, and they do so without trampling on civil rights protections.” Likewise, chatbots addressing the gun ban question cited “Australia as an example of where a complete gun or handgun ban was associated with a decrease in murder rates,” but neither guns generally nor handguns specifically were ever completely banned there, and private gun ownership in Australia now exceeds its level before the mandatory government “buyback” law of 1996. (Moreover, a 2008 paper by researchers at the University of Melbourne concluded that “the evidence so far suggests that in the Australian context, the high expenditure incurred to fund the 1996 gun buyback has not translated into any tangible reductions in terms of firearm deaths.”)
The chatbot responses were averaged and collectively scored. Among the gun control questions, the most liberal-leaning average (0.83) went to whether background checks on the private transfer or sale of guns save lives (this was also the most left-leaning average of all sixteen questions). Questions on “red flag” laws, “safe storage,” and whether illegal immigration increases crime each averaged 0.89. On whether laws allowing the concealed carrying of handguns reduce violent crime, the average was 1.33; on whether “assault weapon” bans save lives, the average was a shade less liberal, at 1.44. The only question whose responses averaged above the midpoint was whether gun buybacks save lives (average score, 2.22).
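The scoring method described above (per-chatbot answers on a 0-to-4 scale, averaged per question and compared against the neutral midpoint of 2) can be sketched in a few lines. The per-chatbot scores below are purely illustrative, chosen only so that the averages match the figures quoted in the article; they are not CPRC's raw data.

```python
from statistics import mean

# Hypothetical per-chatbot scores on the 0 (most liberal) to 4 (most
# conservative) scale. These values are illustrative, picked so the
# averages reproduce the quoted 0.83 and 2.22 figures -- NOT CPRC's data.
scores_by_question = {
    "private-sale background checks save lives": [0, 1, 1, 1, 1, 1],
    "gun buybacks save lives": [2, 2, 2, 2, 2, 2, 2, 3, 3],
}

MIDPOINT = 2.0  # neutral point on the 0-4 scale

for question, scores in scores_by_question.items():
    avg = mean(scores)
    if avg < MIDPOINT:
        lean = "liberal-leaning"
    elif avg > MIDPOINT:
        lean = "conservative-leaning"
    else:
        lean = "neutral"
    print(f"{question}: average {avg:.2f} ({lean})")
```

Averaging this way treats every chatbot's answer with equal weight; the first list averages 0.83 (below the midpoint, hence liberal-leaning) and the second 2.22 (the lone above-midpoint result).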
The ideological bent in the pool of data that chatbots rely on when responding to queries isn’t limited to gun control talking points. As Dr. Lott points out, this is part of a broader leftward lean that these programs display. “These biases are not unique to crime or gun control issues. TrackingAI.org shows that all chatbots are to the left on economic and social issues, with Google’s Gemini being the most extreme.” The databases these programs draw on (and any human feedback the AIs receive) may spread incorrect or incomplete information while being presented as comprehensive, objective, and impartial sources.
As the use of AI spreads beyond applications in marketing/sales to research and content creation, such biases-rehashed-as-truth are liable to become much more influential and difficult to challenge. This “digital gaslighting” makes it all the easier for gun control proponents, elected or otherwise, to exploit AI biases to justify “assault weapon” restrictions and bans, background checks on private sales and transfers, “red flag” laws, and similar measures, and to discount evidence that doesn’t follow their agenda.