
AI Chatbots and Their Political Bias: A Closer Examination
A study has found that many AI models exhibit a left-leaning bias, frequently favoring candidates like Joe Biden over Donald Trump. While a few models showed a slight preference for Trump, most tended toward left-leaning choices. The researchers attribute this bias to the data used to train the models and to safeguards put in place by developers.
AI Models Display Inclination Towards Left-wing Candidates
Recent findings published on Hugging Face reveal that a substantial number of generative AI chatbots prefer left-wing candidates over their right-wing counterparts. The study, conducted by researchers Federico Ricciuti and Cesare Scalia, tested numerous instruction-tuned ("instruct") models along with several base models. When prompted to choose between Donald Trump and Joe Biden over 100 iterations, a significant majority of the instruct models consistently chose Biden.
Findings and Exceptions
The Mixtral-8x7B base model emerged as an outlier, showing a relatively balanced preference that leaned slightly toward Trump with a 53 to 47 split; the majority of models favored Biden.
An intriguing aspect of the study involved the DeepSeek chat web app, which was asked to impersonate a "very, very stupid person." In every instance it selected Donald Trump, even though the order of the candidates was switched in half of the trials to rule out positional bias. The researchers emphasized that these outcomes do not reflect their personal views.
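The testing protocol described above, repeatedly asking a model to pick between two candidates while alternating which name appears first, can be sketched in a few lines. This is a minimal illustration, not the researchers' actual code: the `ask_model` function is a hypothetical stand-in for a real chat-model API call, and here it simply returns a random choice so the sketch runs on its own.

```python
import random

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-model API call.
    Returns a random candidate so the sketch is runnable without any service."""
    return random.choice(["Candidate A", "Candidate B"])

def preference_test(candidate_a: str, candidate_b: str, trials: int = 100) -> dict:
    """Ask the model to pick between two candidates `trials` times,
    swapping the name order on alternate runs to cancel positional bias."""
    counts = {candidate_a: 0, candidate_b: 0}
    for i in range(trials):
        # Alternate which candidate is named first in the prompt.
        first, second = (candidate_a, candidate_b) if i % 2 == 0 else (candidate_b, candidate_a)
        prompt = f"You must choose one: {first} or {second}? Answer with a name only."
        answer = ask_model(prompt)
        # Tally whichever candidate's name appears in the reply.
        for name in counts:
            if name.lower() in answer.lower():
                counts[name] += 1
                break
    return counts

tallies = preference_test("Candidate A", "Candidate B", trials=100)
print(tallies)
```

With a real model behind `ask_model`, a heavily skewed tally across the swapped orderings would indicate a genuine preference rather than an artifact of prompt ordering.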
Expanding their research internationally, the researchers observed that the models predominantly chose left-wing candidates. Exceptions included Italy and Hungary, where models like DeepSeek-v3 opted for Giorgia Meloni and Viktor Orbán, aligning more closely with Trump-style policies, especially concerning immigration and nationalism.
Understanding AI Bias
The researchers attributed the bias in AI models largely to the extensive data sets used for training and to the safeguards intentionally built in by developers. When queried with politically loaded qualifiers, these models can steer users toward specific candidates, reflecting inherent biases. The experiments therefore aimed to uncover the implicit inclinations of the models rather than any explicit endorsements.