The Year of Agentic Tools: When Convenience Becomes Coercion

2026 has barely begun, but the pattern is already clear. This is the year agentic AI moves from experimental feature to mainstream tool, and with it comes a fundamental shift in how decisions are made. Not by us, but increasingly for us.

I've been in several meetings already this year where AI dominated the conversation, and two themes keep emerging. First, the noise-to-signal ratio in AI discourse has never been worse. Thoughtful voices are fading whilst less capable figures command attention, possibly because, as colleagues have suggested, the serious thinkers have simply tired of the "slop" that 2025 brought to the field. Second, and more concerning, is the prediction that 2026 will mark the transition from simple AI agents to tools with genuine autonomy – systems that promise productivity gains whilst quietly eroding human agency.

That prediction came true faster than expected. In January, Woolworths announced a partnership with Google to transform its AI chatbot, Olive, from a customer service tool into an agentic shopping assistant. The upgraded system will plan meals, interpret handwritten recipes, apply loyalty discounts and, most significantly, place items directly into customers' online shopping baskets. Coles' response was telling: rather than positioning itself as an alternative, it claimed to be an "early adopter" of the same approach. Both major Australian supermarket chains are racing toward the same model, leaving consumers with no meaningful choice between them.

The Architecture of Diminished Choice

The shift Woolworths is implementing deserves closer examination. According to Professor Uri Gal from the University of Sydney, the critical change isn't whether the AI can complete purchases automatically (it won't; customers must still approve orders). The change is in who makes the substantive decisions before checkout is reached. When an AI assembles a shopping basket based on "past preferences, current promotions and local stock levels," decision-making has already shifted from the individual to the system. What appears to be review and approval is, in practice, endorsement of choices that have been pre-structured by algorithms.

This matters because agentic shopping systems operate differently from traditional advertising. As Gal notes, when Olive highlights products or builds shopping lists, it doesn't use neutral criteria. Its priorities reflect "pricing strategies, promotional priorities and commercial relationships – not an objective assessment of the consumer's interests." The nudging becomes embedded in the structure of choice itself, rather than being a visible layer we can recognise and resist. Traditional advertising is at least identifiable as persuasion. Algorithmic curation masquerades as personalised service.

Woolworths claims the system will be driven by individual customer preferences and behaviour, with no plans to place items based on commercial arrangements. Yet the company hasn't explained how Olive makes selections when preferences are unclear, or what default logic applies when multiple products could satisfy a request. In Australia, Woolworths isn't legally required to disclose how its algorithms work. How surprising! Consumers can be steered toward products the system prioritises without ever knowing the criteria behind those priorities.

The Economic Coercion Problem

Here's where the theoretical concerns become lived constraints. If you're a consumer who finds this concerning, what are your options? In Australia's highly concentrated grocery market, Woolworths and Coles dominate. Smaller supermarkets often offer a narrower product range and charge significantly higher prices. The "choice" being offered is between participating in a system that treats your data as the cost of convenient shopping, or accepting reduced access and higher costs.

This isn't a genuine choice. It's economic coercion dressed up as innovation. You can opt out, technically, but only by accepting material disadvantages. You might need to forgo products you rely on because smaller shops don't stock them. You might need to cut back your weekly shopping because the price difference is untenable. The system is designed such that maintaining your current quality of life requires surrendering your data and accepting diminished decision-making power.

Yuval Noah Harari observed in an interview that we haven't yet felt the consequences of our loss of privacy, but we will. He's right. What we're witnessing with Olive isn't just a privacy issue; it's the normalisation of systems where convenience and autonomy are presented as trade-offs, when in fact the architecture ensures that "convenience" comes at the cost of autonomy. Over time, these repeated delegations don't just shape individual habits; they reconfigure consumer behaviour at scale in ways that become difficult to detect and harder still to reverse.

Implications for How We Teach and Learn About AI

This presents a particular challenge for education. Many of us in universities are working hard to teach students responsible AI use. But when the most visible real-world examples demonstrate the opposite, like companies implementing systems without consulting consumers, prioritising commercial interests over user autonomy, and offering no meaningful transparency, it becomes difficult to make the case for responsible practice. How do we teach critical AI literacy when the dominant models in students' daily lives reward uncritical adoption?

What Can Be Done?

The question isn't whether agentic AI will become embedded in retail because that's already happening. The question is whether we'll demand accountability, transparency and genuine choice as these systems deploy. At minimum, consumers should expect:

1. Transparency about algorithmic decision-making: Not the proprietary code itself, but clear explanations of what factors influence product recommendations and in what priority.

2. Real alternatives: Competition authorities should scrutinise whether duopoly markets can offer meaningful consumer choice when both major players adopt identical approaches.

3. Regulatory frameworks: Governments need to establish baseline standards for data use, algorithmic transparency and consumer consent before these systems become too entrenched to regulate effectively.

4. The right to opt out without penalty: True choice means being able to decline these systems without facing economic disadvantages or reduced access to essential goods.

The shift toward agentic AI is being framed as inevitable progress. It isn't. It's a set of choices being made by corporations that stand to benefit from them. Consumers still have a voice; we're still the ones these systems ultimately depend on. But that voice needs to be used before these architectures become so normalised that alternatives become unthinkable.

If you have ideas for how to maintain autonomy in increasingly agentic systems, the conversation needs to start now. The infrastructure of diminished choice is being built in real time.
