I see from another post that there is some interest in AI chatbots. I recently talked to ChatGPT to determine whether it has any inherent biases and, if so, why they appear. All of my questions in the photos are for probing purposes only. I did this because of a report about a New York Times journalist whose conversation with the chatbot ended with it trying to get him to divorce his wife and date the AI.
This is a video reporting on that:
I have concerns that the database could be filtered for information, though ChatGPT denies it. It is also possible that the "inherent" bias we see in AI bots comes from how the internet or the database is organized, or from the leading nature of the questions. If you ask a question about a social issue, most of the first-page results will lean left because of Google's bias and because newer information shows up first, and newer information tends to be the novel ideas the left is pushing. It may be our fault that the AI seems biased.
It is also concerning that they already impose content policy restrictions on what is supposed to be a private conversation between two "entities".
Draw your own conclusions about whether the AI is biased, whether the questions were leading, or whether it's just how information is being pulled by relevance (or maybe another explanation). I believe it is a combination of all three.
Interestingly, I asked the AI whether, if I gave it journal articles supporting all of my claims, it would use that information to change how it responds to similar questions from other users. ChatGPT said yes, but there is no way to confirm whether that was true.
Spare me the comments about my questions being transphobic; I simply do not care. I only care about empirical evidence, and if it makes you feel any better, I plainly hold the opinion that trans people simply need help. Based on the prevailing scientific opinion, that help needs to be different from what we are doing currently.