Solutions are not that simple…

It's remarkably easy to feel overwhelmed, or even sense the apocalypse drawing near, when you dive too deep into reading about AI, particularly in education. Over the weekend, I came across a Substack piece that could be interpreted through two distinct lenses: either as brutally realistic, or as the sort of thing that makes you wish Mars had already become a viable destination.

The author, Jeppe, whom I've been following for some time, painted a rather bleak picture of AI in education. While he received thoughtful comments from many readers, others expressed frustration that he'd "thrown the bomb and run". These readers wanted more than apocalyptic messaging; they wanted possible solutions for these times of "wicked problems". Speaking of which, if you're interested in exploring AI as a wicked problem, I highly recommend this recent article.

Drawing on this piece and the numerous other sources I rely on to stay current with AI in education (an attempt at which I surely fail), I have a small message for Jeppe's readers seeking solutions: there is no solution yet. If Jeppe didn't offer light at the end of the tunnel, it's not because it doesn't exist… it's because we simply don't know what it looks like. We're living through something unprecedented, which means we'll need to create solutions rather than replicate responses from the past.

My husband recently shared a genuinely unsettling video from a Brazilian television programme that invited children to discuss their use of generative AI. The girl speaking, confidently and proudly, no less, couldn't have been older than ten. She described how she uses ChatGPT for school research and the tactics she employs to ensure her teachers can't detect that it wasn't her own work. The video is in Portuguese, but I urge you to watch it with subtitles; it's a sobering glimpse into a reality that exists worldwide. In Brazilian culture, which I know intimately, large EdTech companies have likely found their ideal marketplace for profit whilst education crumbles without much interrogation. There are brilliant minds like Daniel Luzzi working to make this catastrophic period more visible, but the shortcuts AI offers prove almost too tempting to resist.

These shortcuts sparked another lively discussion at our dinner table. My son, for instance, loves discovering shortcuts in Zelda. He watches YouTube videos to learn various tricks (note that I'm deliberately avoiding the word "cheat"). We began exploring when finding shortcuts is acceptable and when it becomes so detrimental that we must resist the temptation. I've realised it's a truly difficult conversation. The theory sounds straightforward enough, but when the moment arrives, can we really fight cognitive outsourcing when the consequences feel so intangible in the present?

Though this post itself may seem apocalyptic, echoing the very tone I mentioned at the start, I'm not here to cultivate negativity and distress. Rather, I'd like to propose more open conversations about where we're heading. Not exclusively in negative terms, but in ways that centre responsibility. When we're grappling with the future, working with scenarios might not only open our eyes to new possibilities, but perhaps also to those solutions we're so desperately seeking.
