Where is the women's data?
How can we get a clear view of what's going on with gender differences in the digital space?
Let's be honest. It's surprisingly difficult to find detailed, standalone data about how differently women and men experience technology. Yes, we know disparities exist and they don't look good, but how bad and how persistent are they really? The problem is that women's data is routinely bundled together with data on other marginalised groups in diversity reports and research findings. The issue isn't that comparative analysis has no value, it's that when gender becomes just one variable in a list of diversity factors, we lose sight of the scale. We're talking about half the population experiencing technology differently, yet this often gets a single paragraph in diversity reports alongside factors affecting much smaller groups. The analytical prominence should reflect the scope of impact.
When our data is bundled with that of various other groups, the urgency gets spread thin across multiple comparisons. Worse still, when our situation is positioned alongside groups facing even more severe barriers, it invites dismissive responses: "Others have it worse, so it can't be that bad." This comparative framing can actually undermine the urgency of addressing gender-based disparities in technology. When you're examining something that affects half of humanity differently, that deserves primary analytical attention, not a bullet point in an aggregated diversity section.
The scarcity of focused research creates another problem: the studies that do exist often contradict each other, making it difficult to build a coherent understanding of women's relationship with technology. Take AI adoption as an example. A quick search reveals interesting conflicts in the literature. Russo et al. (2025) found significant gender differences in AI adoption and correlated these with anxiety amongst women. Yet Iddrisu et al. (2025) found no gender differences at all. How can two studies reach opposite conclusions?
The answer lies in methodology. Specifically, in who gets studied. Iddrisu et al. drew their data from undergraduate students, whilst Russo et al. examined adults aged 18 to 79. University students exist in an environment that actively selects for technological engagement and provides structured support for it. Gender differences may well be minimised in that context. But the broader adult population navigates AI adoption across vastly different life stages, professional contexts, and levels of institutional support. The anxiety that Russo et al. identified likely reflects real-world barriers that simply don't manifest in the same way within university walls.
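To make the sampling point concrete, here is a minimal simulation sketch. Every number in it is an illustrative assumption, not a figure from Russo et al. or Iddrisu et al.: it simply shows that if within-university gender differences are small whilst population-wide differences are larger, a student-only convenience sample will report "no gap" at the same time as a broad adult sample reports one.

```python
# Hypothetical illustration: how sample composition alone can hide or reveal
# a gender gap in an AI-adoption score. All means and sample sizes are made up.
import numpy as np

rng = np.random.default_rng(42)

def observed_gap(group_means, group_sizes, sd=1.0):
    """Draw scores for each (gender, context) group and return the
    male-minus-female difference in sample means."""
    scores = {key: rng.normal(mean, sd, group_sizes[key])
              for key, mean in group_means.items()}
    men = np.concatenate([v for (g, _), v in scores.items() if g == "M"])
    women = np.concatenate([v for (g, _), v in scores.items() if g == "W"])
    return men.mean() - women.mean()

# Assumed structure: structured university support narrows the gap;
# the broader adult population shows a wider one.
means = {("M", "university"): 5.0, ("W", "university"): 4.9,
         ("M", "general"):    5.0, ("W", "general"):    4.3}

# A student-only convenience sample sees almost no gap...
students_only = {("M", "university"): 500, ("W", "university"): 500,
                 ("M", "general"): 0, ("W", "general"): 0}
# ...whilst a sample spanning ages 18-79 is dominated by the general population.
broad_adults = {("M", "university"): 50, ("W", "university"): 50,
                ("M", "general"): 450, ("W", "general"): 450}

print("gap in student-only sample:", round(observed_gap(means, students_only), 2))
print("gap in broad adult sample: ", round(observed_gap(means, broad_adults), 2))
```

The underlying reality is identical in both runs; only the sampling frame changes, and with it the conclusion a study would draw.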
This isn't just an academic quibble about sampling, it's a fundamental issue that affects how we understand, communicate, and ultimately address gender disparities in technology. If our research predominantly studies convenience samples that don't reflect the actual population of women experiencing these technologies, then our policies and products will be designed based on incomplete or misleading evidence. And better is possible. Who wouldn't like to see more reports built on data like the OECD's?
What needs to change?
First, we need methodological standards that position gender as a primary analytical lens and not one diversity variable among many. When examining technology adoption, digital literacy, or AI anxiety, gender differences should receive the same analytical depth and prominence as overall population trends. This means dedicated data collection efforts, representative sampling that reflects women's actual experiences across age ranges and contexts, and reporting frameworks that give gender analysis primary placement rather than bundling it into aggregated "underrepresented groups" categories. Intersectional analysis can then deepen our understanding of how gender patterns vary across race, class, and other dimensions, but that comes after establishing the baseline gender patterns, not instead of it.
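For a sense of what this reporting change means in practice, here is a minimal sketch on a hypothetical survey table. The column names, the example rows, and the anxiety scores are all assumptions made for illustration, not real survey data; the point is the difference between pooling gender into one diversity bucket and reporting it with the same prominence as the overall figure.

```python
# Hypothetical survey extract: gender-first reporting vs. an aggregated
# "underrepresented groups" bucket. All values are illustrative.
import pandas as pd

df = pd.DataFrame({
    "gender": ["woman", "man", "woman", "man", "woman", "man"],
    "underrepresented_group": [True, False, True, True, False, False],
    "ai_anxiety_score": [6.1, 4.2, 5.8, 5.0, 5.5, 4.0],
})

# Aggregated framing: gender disappears into one pooled diversity category.
aggregated = df.groupby("underrepresented_group")["ai_anxiety_score"].mean()

# Gender-first framing: the gender breakdown sits alongside the overall figure,
# with intersectional cuts layered on top of that baseline, not in place of it.
overall = df["ai_anxiety_score"].mean()
by_gender = df.groupby("gender")["ai_anxiety_score"].agg(["mean", "count"])
intersectional = df.groupby(["gender", "underrepresented_group"])["ai_anxiety_score"].mean()

print("overall mean:", round(overall, 2))
print(by_gender)
print(intersectional)
```

The first grouping is the status quo the section criticises; the second is the baseline-then-intersectional ordering argued for above.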
Second, we need to acknowledge the political dimension of data presentation. How we choose to display data shapes which issues seem urgent. When women's technology gaps are perpetually positioned within aggregated diversity metrics, we treat barriers affecting half the population as equivalent in urgency to those affecting much smaller groups. This isn't about creating a hierarchy of suffering, it's about analytical strategy that reflects scale. If gender-based barriers affect 50% of the population, addressing them has enormous reach and deserves corresponding analytical prominence.
The question isn't whether other groups also face barriers in technology; they absolutely do, and those barriers deserve serious attention. The question is whether we can see what's actually happening with women and technology when gender data is constantly bundled into broader categories. Right now, we can't. And until we can, we're making decisions about technology policy, product design, and educational interventions based on an incomplete and often contradictory picture.
It's time to give women's data the focused, primary examination it deserves, not as a matter of competition with other marginalised groups, but as a necessary step towards understanding and addressing the specific barriers that affect half the population's relationship with technology. The scale of impact demands nothing less than analytical prominence.