AI isn’t just for chat – it’s also great at making static language learning content. And as AI gains ground as a content creation assistant, prompt engineering – the art of tailoring your requests – becomes an ever more important skill.
As you’d expect, frameworks and best-practice guides abound for constructing the perfect prompt. They’re generally all about defining your request clearly, to minimise AI misfires and misunderstandings. Perhaps the best-known and most effective of these is R-T-F – that’s role, task, format. Tell your assistant who it is, what to do, and how you want the data to look at the end of it.
Recently, however, I’ve been getting even more reliable MFL content with another prompt framework: C-A-R-E. That is:
- Context
- Action
- Result
- Example(s)
Some of these steps clearly align with R-T-F. Context is a broader take on role, action maps to task, and result roughly to format. But the kicker here is the addition of example(s). A wide-ranging academic investigation into effective prompting recently flagged “example-driven prompting” as an important factor in improving output, and for good reason: the whole concept of LLMs is built on constructing responses from training data – in other words, on parroting examples.
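If you fancy building prompts programmatically rather than typing them out each time, the framework maps neatly onto a tiny helper. The sketch below is just an illustration of the idea in Python; the function name and labels are mine, not part of any official C-A-R-E tooling.

```python
def care_prompt(context: str, action: str, result: str, examples: list[str]) -> str:
    """Assemble a prompt from the four C-A-R-E parts.

    context  - who the assistant is and the situation it is working in
    action   - what you want it to do
    result   - the shape and format of the output you expect
    examples - one or more short samples of what a good answer looks like
    """
    example_block = "\n".join(f"- {example}" for example in examples)
    return (
        f"{context}\n\n"
        f"{action}\n\n"
        f"{result}\n\n"
        f"As an example, the output could look like this:\n{example_block}"
    )
```

The point is simply that the example slot is a first-class part of the prompt, not an afterthought.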
Crafting AI prompts with C-A-R-E
As far as language content is concerned, C-A-R-E prompting is particularly good for ‘fixed format’ activity creation, like gap-fills or quizzes. There’s a lot of room for misinterpretation when you describe a word game using words alone; a short example sets the AI back on track. Here’s one such prompt:
– Create a gap-fill activity in French for students around level A2 of the CEFR scale on the topic “Environment”.
– It will consist of ten sentences on different aspects of the topic, with a key word removed from each one for me to fill in. Provide the missing words for me in an alphabetically sorted list at the end as a key.
– As an example, a similar question in English would look like this: “It is very important to look after the ———- for future generations.”
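If you’re generating lots of these activities, the same prompt can also be sent to a model over an API rather than pasted into a chat window. The snippet below is a rough sketch using OpenAI’s public chat completions endpoint and the requests library; the model name is just a placeholder, and any provider with a chat-style API would do equally well.

```python
import os
import requests

# The gap-fill prompt, written out C-A-R-E style as above.
prompt = (
    "Create a gap-fill activity in French for students around level A2 of the "
    "CEFR scale on the topic \"Environment\".\n\n"
    "It will consist of ten sentences on different aspects of the topic, with a "
    "key word removed from each one for me to fill in. Provide the missing words "
    "in an alphabetically sorted list at the end as a key.\n\n"
    "As an example, a similar question in English would look like this: "
    "\"It is very important to look after the ---------- for future generations.\""
)

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # placeholder: use whichever model you have access to
        "messages": [{"role": "user", "content": prompt}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```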
This prompt produces excellent results in Microsoft Copilot / Bing (which we love for the freeness, obviously!) and ChatGPT. For example:

[Screenshot: creating AI language learning content with Microsoft Copilot / Bing Chat]
Providing short examples seems like an obvious and intuitive step, but it’s surprising how infrequently we tend to do it in our AI prompts. The gains are so apparent that it’s worth making a note to always add a little C-A-R-E to your automatic content creation.
If you’ve been struggling to get reliable (or just plain sensible!) results with your AI language learning content, give C-A-R-E a try – and let us know how it goes in the comments!
Thanks. That was very useful. I adapted your instructions to get ChatGPT to do something it doesn’t always do successfully – add highlighting to words in a table. This is what I used:
• I am a Catalan learner of British English at A2 level on the CEFR scale using AI to improve my speaking, and you are an expert teacher of English
• Correct my transcript and create a table with the original transcript in a column on the left and the corrected version in the column on the right.
• The original transcript will have the errors highlighted in bold and the corrected version will have the corrections highlighted in bold
• As an example, a similar table in English would look like this:
Original Transcript | Corrected version
He live in a small flat. | He lives in a small flat.
• This is my transcript:
…………
I’ll try it with Copilot after having tea!
This is a great use of it – will give that a go myself, thanks for sharing! Hope it worked well in Copilot for you.