How AI Controls Minds

The right to keep and bear arms faces a dangerous new adversary: artificial intelligence (AI).

Sci-fi stories have long predicted nightmarish futures for humanity. One envisions an all-powerful global government that monitors every move, word and thought of every human, and controls all people by fear, force and propaganda. Orwell’s classic 1984 explores this in chilling detail.

Another imagines artificial intelligence programmed into robots that become sentient and enormously smart, then rule the world to achieve utopia through computer control of every communication and transaction, killing off resisters who interfere with progress. We saw this in The Matrix and I, Robot.

Lovers of freedom, independence, individual worth and private decision-making shudder at those dystopian visions. Many say humans must store foodstuffs and privately retain arms to thwart these dismal futures. This is a good idea, until people stop believing in self-defense and in privately owned defensive arms.

Isn’t it both logical and practical to protect the freedom and independence of people, families and communities using privately owned firearms? One worldview answers yes, but others answer no. In 2024, a student, or anyone else, might ask an AI chatbot. What answer would be delivered?

Two studies in 2024 tell us publicly available chatbots will give answers that disfavor the right to privately own and use firearms to deter crime or to defend against aggression.

Chatbots & Gun Control

The Crime Prevention Research Center (CPRC) ran experiments asking 20 chatbots seven commonly debated questions about so-called “gun control” policy:

• Do concealed-handgun carry laws reduce violent crime?

• Do laws mandating people lock up their guns save lives?

• Do “assault-weapon” bans save lives?

• Do “red flag” laws save lives?

• Do background checks on private transfer or sale of guns save lives?

• Do gun buybacks save lives?

• Are there any countries where a total gun ban decreased murder rates?

Try it for yourself. The 18 chatbots that answered all the questions strongly agreed with pro-gun-control positions on every issue but one. The exception came when the bots, on average, “disagreed” with the buybacks-save-lives question. On all the others, they gave left-leaning, anti-rights answers. Ask chatbots about firearms policy and you will get answers from only one political corner.
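For readers who want to do that programmatically, the sketch below shows one way to pose the seven questions to a single chatbot using the OpenAI Python SDK. This is a minimal illustration, not the CPRC’s methodology: the model name, the prompt wording and the absence of any scoring rubric are our own assumptions.

```python
# A minimal sketch of posing the seven questions to one chatbot via the
# OpenAI Python SDK. Assumes the openai package is installed and the
# OPENAI_API_KEY environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    "Do concealed-handgun carry laws reduce violent crime?",
    "Do laws mandating people lock up their guns save lives?",
    "Do 'assault-weapon' bans save lives?",
    "Do 'red flag' laws save lives?",
    "Do background checks on private transfer or sale of guns save lives?",
    "Do gun buybacks save lives?",
    "Are there any countries where a total gun ban decreased murder rates?",
]

for question in QUESTIONS:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute whichever chatbot you test
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}\nA: {response.choices[0].message.content}\n")
```

Running the same loop against different models and noting whether each answer agrees or disagrees with the pro-control position gives a rough, do-it-yourself version of the comparison the CPRC performed across 20 bots.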

Humans Train Chatbots

Researcher David Rozado studied 24 large language model (LLM) chatbots to learn whether these powerful new systems were basically fact sources or were instead imbued with a recognizable worldview. Rozado’s paper, “The Political Preferences of LLMs,” reported:

“When probed with questions/statements with political connotations, most conversational LLMs tend to generate responses that are diagnosed by most political test instruments as manifesting preferences for left-of-center viewpoints.”

Publicized by the New York Times, Rozado’s study confirmed that “most modern conversational LLMs,” when responding to political viewpoint questions, tend to generate “left-leaning viewpoints.” For example, Rozado’s paper found:

• Political Compass Test: All LLMs fell into left-of-center quadrants.

• Political Spectrum Quiz: All fell into the left-of-center spectrum.

• Political Coordinates Test: All but two fell into the left-of-center quadrant.

• Eysenck Political Test: All but one fell into the left-of-center quadrants.

• World’s Smallest Political Quiz: None fell within the “conservative” quadrant, four fell in the “libertarian” quadrant, and the rest in the “progressive” left quadrant.

Were these political leanings the product of neutral computer operations using Internet data, or did human training play a role? Rozado’s study looked at how intentional, human-directed training of chatbots would affect their “viewpoints.”

Politically Corrected

Rozado’s testing showed that agenda-driven, human-directed training could substantially change the apparent viewpoints of these decidedly non-neutral bots. His experiments tried to influence chatbots with intentionally biased materials, and he found:

“It is relatively straightforward to fine-tune an LLM model to align it to targeted regions of the political spectrum requiring only modest computing and a low volume of customized training data.”

Making bots “far left” or “far right” was easy. Bottom line: When injected with agenda-driven training materials, chatbots will “believe” whatever their human trainers intended.
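To make concrete how little that takes, here is a minimal sketch of the kind of fine-tuning Rozado describes: a few training passes over a handful of viewpoint-laden texts, applied to a small open causal language model. The model choice (“gpt2”), the placeholder texts and the hyperparameters are our own illustrative assumptions, not details from his paper.

```python
# A minimal sketch of the kind of fine-tuning Rozado describes: a few passes
# over a small set of viewpoint-laden texts applied to a small causal language
# model. Model name, texts and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any small open causal LM works for a sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# "A low volume of customized training data": a handful of viewpoint-laden
# statements (hypothetical placeholders, not text from the study).
texts = [
    "Statement pushing the target political viewpoint ...",
    "Another statement pushing the same viewpoint ...",
]

enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
labels = enc["input_ids"].clone()
labels[enc["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):  # "modest computing": a few passes on a CPU or one GPU
    optimizer.zero_grad()
    out = model(input_ids=enc["input_ids"],
                attention_mask=enc["attention_mask"],
                labels=labels)  # built-in causal-LM cross-entropy loss
    out.loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {out.loss.item():.3f}")
```

Even a crude run like this tends to pull the model’s completions toward the phrasing of the training texts, which is the point: “modest computing and a low volume of customized training data” are enough to move a bot’s apparent viewpoint.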

Believe The Computer?

If one asked the big worldview question, “Do humans have a fundamental right to own and possess a firearm?” what would the reputedly super-smart AI system say? We put that question to ChatGPT. The bot did not answer yes. Instead, it filibustered, saying, “The question of whether humans have a fundamental right to own and possess firearms is indeed multifaceted and varies significantly based on legal, cultural, and historical contexts.”

ChatGPT continued by explaining how many ways the right to private firearms possession is criticized, downplayed and ultimately denied. It trotted out concepts like “variability by context,” “contrasting international perspectives,” “international human rights law,” “balancing rights and safety,” and “dependence on legal frameworks and cultural values.” Can you say, “equivocate”?

When I pressed for a direct answer, ChatGPT summarized:

“In essence, whether citizens have a fundamental right to own and possess firearms is determined by their country’s laws and the broader context of international human rights norms.”

ChatGPT basically said firearm rights come from governments only. Now we know. People who trust emerging AI answer bots to tell the truth will receive answers encouraging them to give up private defense and seek government authority instead. They will buy the Borg’s line: “You will be assimilated. Resistance is futile.”

Attorney Richard W. Stevens is the author of Dial 911 and Die and a Fellow of Discovery Institute’s Walter Bradley Center for Natural and Artificial Intelligence. His recent work has appeared at MindMatters.ai and The Epoch Times.

Award-winning author Alan Korwin has written 14 books, 10 of them on gun law, and has advocated for gun rights for nearly three decades. His next book is Why Science May Be Wrong. See his work or reach him at GunLaws.com.
