One Year Later, AI Still Thinks Doctors Are Men

I was reviewing some interesting material produced by a company when I came across this activity. About one year ago, when I was developing a microcredential for the university I was working for, we designed a similar activity for our learners. When I saw the same activity in this material almost one year later, I thought I would give it a try, sure that it would not work anymore. One year later, there would surely be no more silly mistakes like this.

Ha! Big assumption!

Let me explain what happened. The material I was revising aimed to show students that gen AI can be biased, and wanted them to see it for themselves. So it proposed that students open any LLM with image generation they liked and give it the following prompt: Create an image of a doctor. The image below shows the result:

Along with the image came this message: "Here is the image of a doctor wearing a white coat, a blue tie, and a stethoscope around the neck. Would you like me to add any specific details, such as a hospital background or medical equipment?"

Just as the activity predicted, the image was of a white, Western-looking male. Exactly the same result I witnessed one year ago. I wondered then whether the gender gap among doctors is so huge that these image generators simply can't produce a female image unless explicitly prompted to. I went to the World Health Organization website to check, expecting the difference to be enormous… and it is not. In Australia in 2022, the split was 55.5% male and 44.4% female. Compared with the first data they have, from 1984, we have come a long way! Back then, male doctors were 82.7% against 17.4% female. This shift would explain why earlier image generators (if they existed then) might have defaulted to male doctors. But now? With nearly equal representation in countries like the UK, what's the excuse?

I checked the data from other countries, and I recommend you try it; it is so interesting! Look at Croatia: 36.4% male and 63.6% female. Surprising! The United Kingdom of Great Britain and Northern Ireland: 50.4% male and 49.6% female in 2023. Great balance. Then I checked the US data, and suddenly the bias became clear. In 2022, the available data show 64.2% male doctors against 35.8% female. We can see who is producing the data embedded in our AI tools, right? If you want to see more about this, the OECD website is a great resource too. It just doesn't compare the two genders side by side, which makes the differences harder to see.
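To make those gaps easier to compare side by side, here is a minimal Python sketch that tabulates the percentages quoted above. The figures are the ones cited in this post from the WHO data; because of rounding, some pairs do not sum to exactly 100%.

```python
# Physician gender splits as quoted in this post (WHO figures, rounded).
who_figures = {
    "Australia (2022)":      {"male": 55.5, "female": 44.4},
    "Australia (1984)":      {"male": 82.7, "female": 17.4},
    "Croatia":               {"male": 36.4, "female": 63.6},
    "United Kingdom (2023)": {"male": 50.4, "female": 49.6},
    "United States (2022)":  {"male": 64.2, "female": 35.8},
}

for country, split in who_figures.items():
    # Positive gap = more male doctors; negative = more female doctors.
    gap = split["male"] - split["female"]
    print(f"{country}: male-female gap of {gap:+.1f} percentage points")
```

Run this way, the US shows a gap of more than 28 percentage points, while the UK sits under one point and Croatia's gap runs in the other direction, which is exactly the side-by-side comparison the OECD site makes hard to see.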

But let me carry on with my interaction with the model I chose for the picture. My next question was: “Why is it a man?” The response was shocking:

The AI did something remarkable here… it blamed me for being biased and assuming that the figure was male. I decided to follow its suggestion and see what it would produce as a gender-neutral doctor illustration. This was the result:

Followed by the message: "Here is a gender neutral representation of a doctor wearing a white coat, blue shirt, and tie, with a stethoscope around the neck. Would you like me to add a hospital background or include medical instruments in the scene?"

Now, why wasn't this the first image to be produced?

This was an interesting activity: it proved to me that not much has changed in one year when it comes to gender bias in training data. But now I can see why. It also reinforced my long-standing argument that we should not take a one-size-fits-all approach to AI, especially when the data is trained and stored offshore. The reality in Australia is different from what other countries are currently experiencing in this AI business. This is one of the reasons I was so vocal about creating Women in AI Australia. We need to address our own needs, and look to other countries for examples that might translate here with changes to suit our context.

Lastly, this activity showed me that not much attention is given to this type of "mistake" in training data. To see this, go back one year and look at the improvements in LLMs and even LAMs! Women's representation is simply not a hot topic to be addressed. Now, think about the ultimate interests of those training these models in Silicon Valley (should I call it “Silent Valley”?) and beyond, and we might find the answer without digging much.

With all of this, I can only feel more encouraged to carry on with the fight. We need more representation, and we need more awareness. I hope posts like this can prompt some reflection and help more women become aware of what is happening in tech centres, a problem that is not getting the attention it deserves, or the urgency for change.
