How AI imagines midwifery care and birth in the year 2100
Although not a complete monolith, the future of birth is distinctly western-oriented, and firmly rooted in the juxtaposition of humans against nature, and nature against tech. That’s my experience using the Board of Innovation’s GPT-4-based future scenario maker to visualize the future of midwifery care and ‘giving birth’ in the year 2100.
For some time now I’ve used AI text-to-image tools to visualize futures and alternative worlds. I thought I’d give it a go with a text-based tool. The Board of Innovation has developed a GPT-4-based tool that creates three different scenarios. I created two different prompts: one for ‘midwifery care’ in the year 2100 and another for ‘giving birth’. The goal was to get the perspective of the caregiver, and another from the person giving birth. I did two runs for each prompt.
Download a PDF with the original scenarios
What stands out is that each output is structured differently. Every prompt leads to three scenarios, but whereas some give a factual overview of location, population, and a description, others provide additional information, such as Needs and Behaviours, New Products and Services, or Pros and Cons.
Biases in AI future scenario tools
Most scenarios are biased towards the obvious technological innovations, such as artificial wombs or birth pods and genetic enhancement, and generally envision greater control under the pretense of enhanced safety for mother and baby. Interestingly enough, the first run for midwifery care includes technology but relates it to the local environment and to societal implications and entanglements. Those scenarios were also mostly non-western in terms of geography. None of the stories dives into, or even touches upon, the embodied experience of giving birth. Even the narratives from the perspective of the person giving birth are devoid of any emotional or physical sensation or experience.
The stories are distinctly dualistic: they either embrace technology as a solution for everything, or hark ‘back to nature’ and ‘natural birth’ on the other side of the spectrum. None of the outputs manages to create workable scenarios in which the boundaries between human, nature, and tech are permeable.
Western-centered stories and images
The concern with AI is that it is fed on western-centered data and narratives, further obscuring and marginalizing indigenous and non-western experiences and knowledge. GPT-4 is no exception, which could be one of the reasons I didn’t find the scenarios terribly innovative, especially considering we are looking at an eighty-year timeline. Although one of the runs did create non-western scenarios, the framing, language, and categorization remain distinctly western and dichotomous: they show a clear human-technology and technology-nature divide, exclude embodied experiences and non-western knowledge and innovations, and link only superficially to societal developments.
In addition to the stories, I created complementary images with the AI tool Midjourney, using elements from the scenarios as prompts. Like the text scenarios, these images are tech-oriented and, unless specific prompts are added for location, inevitably white, hetero-, and western-oriented, lacking the diversity and wide exploration needed for futures scenario building. It is also problematic that Midjourney not only uses western people as a default, but also carries many unconscious biases. As soon as prompts include phrases like ‘natural’, ‘mother earth’, or ‘herbal’, people of color are represented, but with phrases such as ‘technology’, ‘futuristic’, or ‘medical’, white people are represented, uncovering and reaffirming existing biases in society and possibly equating non-western/non-technological solutions with indigenous communities and people of color.
The selective banning of phrases and wording also steers the images in a certain direction. For example, the prompt “home birth” is forbidden by Midjourney, and images of birth are technologized and medicalized by default, unless specified otherwise.
What’s next with GPT-4 and other AI tools?
That being said, scenarios like these can be useful if taken as a starting point for further critical reflection and deep analysis: a conversation starter that is ‘quick to conjure’, as creative technologist Ambika from @computational_mama calls it, and that might offer pathways to deepen the conversation about possible and desirable futures in many fields. Crucial in these conversations and reflections is that we become conscious of the inherent biases and assumptions that underlie the algorithms. The AI stories might even serve as a mirror for our own internal frames and biases.
Without question, much work is needed to examine the root of built-in biases in AI algorithms, and the one-sided scenarios and images that result from them.