If you’re already using AI for language learning content creation, you’ve probably cried in horror at one of its biggest limitations: it’s terrible at putting items in a random order.
Random order in language learning exercises is pretty essential. For instance, a ‘missing words’ key below a gap-fill exercise should never list words in the same order as the questions they belong to.
Obvious, right? Well, to AI, it isn’t!
Just take the following prompt, which creates a mini worksheet with an introductory text and a related gap-fill exercise:
Create a French worksheet for me on the topic “Environmentally-Friendly Travel”. The language level should be A2 on the CEFR scale, with clear language and a range of vocabulary and constructions.
The worksheet starts with a short text in the target language (around 250 words) introducing the topic.
Then, there follows a gap-fill exercise; this consists of ten sentences on the topic, related to the introductory text. A key content word is removed from each sentence for the student to fill in. For instance, ‘je —— en train’ (where ‘voyage’ is removed).
Give a list of the removed words in a random order below the exercise.
The output is very hit and miss – and much more miss! Perhaps 90% of the time, ChatGPT lists the answer key in the order of the questions. Either that, or it makes a feeble attempt at jumbling, like swapping just the first two items on the list.
AI’s Random Issue
One prompt-tweaking tip you can try in these cases is SHOUTING. Writing this instruction in caps can sometimes increase the bullseyes. Put them IN RANDOM ORDER, darn it! It doesn’t help much here, though. It just doesn’t seem worth relying on Large Language Models like ChatGPT to produce random results.
The reason has something to do with the fundamental way these platforms function. They’re probability machines, predicting what word should come next based on how likely word X, Y or Z is to follow. Their whole rationale is not to be random; you might even call them anti-random machines.
No wonder they’re rubbish at it!
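If you want to picture why, here’s a toy sketch in Python (the words and probabilities are invented purely for illustration; real models are vastly more complex). At each step the model scores candidate next words and leans heavily towards the likeliest ones, so the ‘obvious’ continuation, like an answer key in question order, keeps winning:

```python
import random

# Toy illustration only: invented words and probabilities, not a real model.
# An LLM scores candidate next words and favours the likeliest ones.
next_word_probs = {
    "voyage": 0.62,
    "train": 0.21,
    "vélo": 0.12,
    "fusée": 0.05,
}

# Greedy decoding: always take the single most probable word.
greedy_pick = max(next_word_probs, key=next_word_probs.get)

# Even when sampling, the draw is weighted towards high-probability words,
# so the statistically 'expected' continuation keeps being chosen.
# Nothing in this process is trying to produce a genuine shuffle.
sampled_pick = random.choices(
    list(next_word_probs), weights=list(next_word_probs.values())
)[0]

print(greedy_pick, sampled_pick)
```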
A Road Less Random
So how can we get round this in a reliable way that works every time?
The simplest fix, I’ve found, is to use another, non-random way of listing things differently from the question order. And the easiest way to do that is simply to list them alphabetically:
Create a French worksheet for me on the topic “Environmentally-Friendly Travel”. The language level should be A2 on the CEFR scale, with clear language and a range of vocabulary and constructions.
The worksheet starts with a short text in the target language (around 250 words) introducing the topic.
Then, there follows a gap-fill exercise; this consists of ten sentences on the topic, related to the introductory text. A key content word is removed from each sentence for the student to fill in. For instance, ‘je —— en train’ (where ‘voyage’ is removed).
Give a list of the removed words in alphabetical order below the exercise.
The likelihood of this order matching the question order is minimal. Hilariously, AI still manages to mess it up at times, adding the odd one or two out-of-place words at the end of the list, as if it forgot what it was doing, realised, and quickly bunged them back in. But the technique works just fine for stopping the order from giving the answers away.
A simple fix that basically ditches randomness completely, yes. But sometimes, the simplest fixes are the best!
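And if you’re assembling worksheets programmatically rather than purely in chat, you can take ordering out of the model’s hands altogether. Here’s a minimal Python sketch, using an invented answer key just for illustration: sort the key alphabetically yourself, or shuffle it with a proper pseudo-random generator.

```python
import random

# Invented answer key, standing in for the words removed from a gap-fill.
removed_words = ["voyage", "train", "bus", "recycler", "planète",
                 "marcher", "covoiturage", "billet", "vélo", "nature"]

# The alphabetical workaround, done deterministically in code.
# (For fully locale-aware French sorting you'd reach for locale.strxfrm.)
alphabetical_key = sorted(removed_words, key=str.casefold)

# Or, for genuine randomness, shuffle outside the model, where a real
# pseudo-random generator is available.
shuffled_key = removed_words.copy()
random.shuffle(shuffled_key)

print(alphabetical_key)
print(shuffled_key)
```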
Random blindness is a good reminder that AI isn’t a magical fix-all for language learning content creation. But, with an awareness of its limitations, we can still achieve some great results with workarounds.