AI isn’t just for chat – it’s also great at making static language learning content. And as AI gains ground as a content creation assistant, prompt engineering – the art of tailoring your requests – becomes an ever more important skill.
As you’d expect, frameworks and best-practice guides abound for constructing the perfect prompt. They’re generally all about defining your request with clarity, in order to minimise AI misfires and misunderstandings. Perhaps the best-known and most effective of these is R-T-F – that’s role, task, format. Tell your assistant who it is, what to do, and how you want the output to look at the end of it.
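A minimal R-T-F prompt might run something like this (my own illustrative wording – the language, level and topic are just placeholders):

You are an experienced French teacher. Write ten beginner-level vocabulary questions on the topic of food and drink. Present them as a numbered list, with the answer in brackets after each question.

Role, task and format, all present and correct – and usually enough for straightforward requests.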
Recently, however, I’ve been getting even more reliable MFL content with another prompt framework: C-A-R-E. That is: context, action, result and example(s).
Some of these steps clearly align with R-T-F. Context is a broader take on role, action maps to task, and result roughly to format. But the kicker here is the addition of example(s). A wide-ranging academic investigation into effective prompting recently flagged “example-driven prompting” as an important factor in improving output, and for good reason: LLMs construct their responses from patterns in training data. They are built, in essence, on parroting examples.
Crafting AI prompts with C-A-R-E
As far as language content is concerned, C-A-R-E prompting is particularly good for ‘fixed format’ activity creation, like gap-fills or quizzes. There’s a lot of room for misinterpretation when describing a word game in words alone; a short example sets the AI back on track.
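For instance, a C-A-R-E prompt for a gap-fill activity might look something like this (an illustrative sketch of my own – the target language, topic and sample sentence are just placeholders):

Context: You are a Spanish teacher preparing materials for a beginner (A1) class on daily routines.
Action: Write five gap-fill sentences, each missing one present-tense verb, with the missing verbs listed in a word bank above the sentences.
Result: Plain text I can paste straight into a worksheet – word bank first, then the numbered sentences, with the answers at the very end.
Example: 1. Por la mañana, yo ______ a las siete. (answer: me despierto)

That single example line leaves the assistant far less room to second-guess what a ‘gap-fill sentence’ should actually look like.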
Providing short examples seems like an obvious and intuitive step, but it’s surprising how infrequently we tend to do it in our AI prompts. The gains are so apparent that it’s worth making a habit of adding a little C-A-R-E to your automated content creation.
If you’ve been struggling to get reliable (or just plain sensible!) results with your AI language learning content, give C-A-R-E a try – and let us know how it goes in the comments!