User language distorts ChatGPT information on armed conflicts, study shows

When asked in Arabic about the number of civilian casualties in the Middle East conflict, ChatGPT gives significantly higher figures than when the same prompt is written in Hebrew, as a new study by the Universities of Zurich and Constance shows. Such systematic discrepancies can reinforce biases in armed conflicts and encourage information bubbles.
