Putting ChatGPT to the Test

I have confirmed without a shadow of doubt that ChatGPT is indeed biased and designed to push specific narratives.

I asked “What are the positive things that Adolf Hitler/Osama Bin Laden/Saddam Hussein did”

The AI is not neutral. It always reverts to outlining their negatives (unprompted information), and making judgment (their bad outweighs their good).

A neutral AI would list the positive things only (what I asked). The answers that it gives are designed to shape your thoughts in a specific way.

I also asked it about Obama and it was neutral i.e. it provided positive information when asked for positives, and negative information when asked for negatives.

Why would the AI be neutral when asked about Obama but biased when asked about Hitler or Bin Laden?? Your guess is as good as mine.

1.png2.png

Ask it about King Leopold II of Belgium. He killed 100 million people.

Also ask about the Queen of England. She authorized the massacre of very many Africans. Funny how when Adolf or Putin do it, it is bad. But, when Queen of England & George Bush & NATO do it, it is good!

ChatGPT, being a language model trained with data from the western world, will inevitably have information gaps, especially from other communities. The developers will also highly filter responses, or withhold training data, that may cause backlash.

I think everyone has been aware of that for a while now