Malmö Arena, venue for the Eurovision Song Contest 2024. Werner Nystrand/imagebank.sweden.se

Eurovision of Languages – 2024 Edition!

It feels like we only just said goodbye to the last one, and another Eurovision Song Contest has rolled around again. Once a veritable garden of languages, the contest changed in 1999, when all competing broadcasters were re-granted a free choice of song language. Sadly (for linguaphiles), that’s meant English lyrics for the most part.

But linguistic diversity has found a way, too, and not just thanks to those hardy regulars like France, Italy, Portugal and Spain that almost never disappoint with home-language lyrics. The 2023 edition saw the welcome return of tongues long-missed on the Eurovision stage, like Finnish and Russian.

So how does 2024 measure up against that pretty high bar?

The Eurovision Language Contest 2024

Big Firsts

Notably, we have two language debuts at this year’s contest. Azerbaijan, competing since 2008 without a word of Azeri, finally treats us to a few words of this beautiful Turkic language in the entry Özünlə apar (take me with you). And from Australia, a competing member of the family since 2015, we have the uplifting song One Milkali (One Blood) featuring lyrics in Yankunytjatjara, a Pama-Nyungan language from Central Australia. Azeri and Yankunytjatjara may not make up the full lyrics of their entries, but it is a beautiful thing to celebrate new languages on the Eurovision stage!

As an aside, as one commenting fan dubbed it, it’s that moment when Yankunytjatjara makes it to Eurovision before Scottish Gaelic and Welsh. We UK fans live in hope…

There’s a first for Armenian, too. While we’ve heard the language in previous entries, 2024 is the first time it will be the sole language of an Armenian entry. Jako has a world music fusion vibe, and a simple message of be yourself, which is a noble sentiment in any language.

Many Happy Returns

The it’s been TOO long! prize must go to Norway this year. Norway has sent a song with Swahili lyrics (2010) more recently than it has one på norsk (2006). The latter, Christine Guldbrandsen’s Alvedansen, didn’t even do particularly badly, so heaven knows what put them off.

This year, though, Norwegian folk metallists Gåte were the surprise vanquishers of fan favourites Keiino, pipping them to the Norwegian ticket with the song Ulveham and breaking the Norwegian drought. Its beautifully haunting arrangement builds on traditional kulning calls from the mountain herds of Norway, featuring lyrics drawn from Telemark dialect.

While the return of Finnish was last year’s joy, its loss this year is tempered by the return of its close cousin, Estonian. The collaboration between 5miinust and Puuluup will present (Nendest) narkootikumidest ei tea me (küll) midagi (the crazily-titled We (sure) know nothing about (these) drugs), the first time Estonia has presented its national language since back-to-back eesti keel in 2012 and 2013. Incidentally, it wasn’t all English for Estonia in the interim – they achieved a solid top ten in 2018 with a song in Italian, of all tongues.

Going Dutch, Again

Dutch had fared similarly poorly in the anglophone takeover – until recently. After one of many mid-noughties semifinal failures, the Netherlands ditched its national language following the 2010 contest. It took until 2022 for Dutch to pop up again, with considerable success – De diepte ended up on the left side of the scoreboard in the Torino contest. Two years later, Dutch is back again, this time with Joost Klein and Europapa.

Lithuania has also shied away from using its home tongue on the Eurovision stage. It took 21 years for the language to be heard again after a mediocre result in English and Lithuanian in 2001. But that return made the 2022 final, with Monika Liu scoring a solid result just outside the top ten. This year, Silvester Belt is aiming to do even better with the catchy Luktelk (Wait).

Greece will be looking to mirror that national language return to success, too. Greece’s last two attempts with full or partial Greek lyrics ended in very rare semifinal failure for the country, in 2016 and 2018. Marina Satti aims to be the first Greek-singing finalist since 2013, with a self-ironising, catchy, ethnopop banger.

Doubling Up

French and Spanish fans have an extra bite at the language cherry this year, and from perhaps surprising sources. Thanks to the return of Luxembourg to the contest – after an incredible 31 years away – we have a song with mixed French and English lyrics in the tally. As for Spanish, we can thank the Sammarinese win of Spanish rockers Megara for the fact that this year’s entry from the microstate will be in Spanish, not Italian or English.

Mixed Bag from the Balkans

We can always count on the Balkans for some non-anglophone fun at Eurovision. This year, interestingly, we have two proper-name songs: the Serbian Ramonda and the Slovene Veronika. Only Albanian and Croatian lose out to English entries (although Croatia is doing very well regardless, as a pre-contest bookies’ favourite!).

The Hardy Annuals

And of course, we have our stalwarts, our indefatigable linguistic champions – France, Italy, Portugal and Spain. They’ve kept the national language flags flying almost without fail throughout the modern free-language era, and we should celebrate each of them for that. Italy in particular is a veritable feast of lyrics, with the hugely talented Angelina Mango firing them out in a fast-paced three minutes. Little wonder that she is also one of this year’s hot favourites for the top.

We might almost add Ukraine to this list, having not only sent, but won in Ukrainian in recent years. Ukraine opts for a cool mix this year with the duo Alyona Alyona and Jerry Heil.

And for the Germanists…

No consolation for the Germanists, this year – again. 2012 was the last time German – or at least a dialect of it – formed part of a Eurovision song lyric. That honour goes to Austria’s Woki mit dem Popo (pretty much shake your bumbum in Upper Austrian dialect), which failed to make the final that year.

Can you believe it’s been that long? Me neither. But there’s small consolation in the fact that Germany had a stonker of a song in their national final this year. Galant’s Katze (cat) may have fallen at the final hurdle, but it has all the makings of a cult classic.

Which are your favourite non-English entries this year? And which language do you yearn to hear again on the Eurovision stage? Let us know in the comments!

A neon style image of a robot with a speech bubble to illustrate the idea of Swedish proverbs as language learning material

Proverbs and Language Learning : From Folk Wisdom to Classroom

I’ve been crash-learning Swedish (well, side-stepping into it from Norwegian) more and more intensively of late. And one of the most pleasant linguistic detours I’ve made has been through the lush valleys of Swedish proverbs.

Proverbs and sayings have always been a favourite way into a language for me, and for several good reasons. Firstly, they’re short and memorable by design, so that people could easily learn and recite them. Secondly, they’re very often built around high-frequency structures (think X is like Y, better X than Y) that serve as effective language models.

Birds in a forest, a favourite trope of proverbs!

Bättre en fågel i handen än tio i skogen (Better one bird in the hand than ten in the forest)

But there’s another big pay-off to learning through proverbs that is more than the sum of their words. They pack a lot of meaning into a short space – drop one in, and you call up all the nuance it carries into the conversation. Think of the grass is always greener… You don’t even need to mention the missing second half of that English proverb, and it already calls to mind countless shared parables of misplaced dissatisfaction. And since they’re based on those parables and folk histories that ‘grew up’ alongside your target language, proverbs can grant us some fascinating cultural insights, too.

In short, master proverbs and you’ll sound like you really know what you’re talking about in the target language.

Finding Proverbs

For many target languages, you’ll likely be able to source some kind of proverbs compendium in a good bookshop, as they’re as much of interest to native speakers as they are to learners. When you do find a good one, compilations of sayings are the epitome of the dip-in-and-out book. I’ve picked up lots of Gaelic constructions and vocab leafing idly through Alexander Nicolson’s Gaelic Proverbs in my spare moments. It was definitely time for me to try the same with some Swedish.

Without a good Swedish bookshop to hand, though, I turned to the Internet in the meantime. A good place to start is to find out what “[your language] proverbs” is in your target language (it’s svenska ordspråk in Swedish), and see what a good search engine throws up.

Tala är silver, tiga är guld.

Tala är silver, tiga är guld (Talking is silver, silence is gold)

Local cultural institutions in particular can be rich sources of articles on folk wisdom like proverbs. There are some lovely sites and articles that introduce the wise words of svenska in digestible chunks. My handful of Swedish favourites below are each written for a native speaker audience. They all give potted backgrounds on the proverbs in Swedish, making for some great extra reading practice.

INSTITUTET FÖR SPRÅK OCH FOLKMINNEN

This folk-minded article is a wonderful introduction to Swedish proverbs, offering not only examples, but also exploring the characteristics of proverbs and what makes them ‘stick’. There’s a special section on sayings from the Gothenburg area too, which adds a nice local flavour.

TIDNINGEN LAND

This article from the Land publication offers 19 common Swedish proverbs in handy list format. Even more handily, it paraphrases each in order to explain their meaning. Great for working out what some of the more archaic words mean without reaching for the Swedish-English dictionary!

NORDISKA MUSEET

Nordiska Museet offers another well-curated list, with not only paraphrasing, but etymological information on the more difficult or outdated words.

The Proverbial AI

You can also tap the vast training data of AI platforms for proverbial nuggets. Granted, the knowledge of LLMs like ChatGPT and Claude may not be complete – training data is only a subset of the material available online – but AI does offer the advantage of creating activities with the material.

Try this prompt for starters:

Create a Swedish proverbs activity to help me practise my Swedish.
Choose five well-known proverbs, and replace a key word in each with a gap. I must choose the correct word for the gap from four alternatives in each case. Make some of the alternatives humorous! Add an answer key at the end of this quiz along with brief explanations of each proverb.

I managed to get some really fun quizzes out of this. Well worth playing around with for self-learning mini-worksheets!

A Swedish proverbs activity created by ChatGPT-4
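If you’d rather build this kind of quiz offline (or just sanity-check the AI’s version), the same activity is easy to script. Here’s a minimal Python sketch – the proverbs are real, but the distractor words and the quiz format are my own illustrative choices:

```python
import random

# A handful of well-known Swedish proverbs with one key word to blank out.
# The proverbs are genuine; the humorous distractors are my own inventions.
PROVERBS = [
    ("Bättre en fågel i handen än tio i skogen", "fågel", ["älg", "pizza", "katt"]),
    ("Tala är silver, tiga är guld", "guld", ["glass", "järn", "snö"]),
    ("Borta bra men hemma bäst", "hemma", ["soffan", "utomlands", "skogen"]),
]

def build_quiz(items, seed=None):
    """Return (questions, answer_key): each question gaps the key word
    and shuffles the answer in among the distractors."""
    rng = random.Random(seed)
    questions, key = [], []
    for i, (proverb, answer, distractors) in enumerate(items, start=1):
        options = distractors + [answer]
        rng.shuffle(options)
        gapped = proverb.replace(answer, "______")
        questions.append(f"{i}. {gapped}  ({' / '.join(options)})")
        key.append(f"{i}. {answer}")
    return questions, key

questions, key = build_quiz(PROVERBS, seed=42)
print("\n".join(questions))
print("\nFacit:")  # 'facit' = answer key in Swedish
print("\n".join(key))
```

Swap in your own proverbs and distractors – the structure stays the same however many items you add.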

AI platforms can also play a role as ‘proverb visualisers’, which is how I generated the images in this article. Proverbs can often employ some quite unusual imagery; letting picture generators loose on those can be a fantastic way to make them more memorable!

However you come across target language sayings and proverbs, you can learn a lot from these little chunks of wisdom. Do you have a favourite saying in any of the languages you’re studying? Let us know in the comments!

ChatGPT French travel poster

A Second Shot at Perfect Posters – ChatGPT’s Image Tweaker

The big ChatGPT news in recent weeks is about images, rather than words. The AI frontrunner has added a facility to selectively re-prompt for parts of an image, allowing us to tweak sections that don’t live up to prompt expectations.

In essence, this new facility gives us a second shot at saving otherwise perfect output from minor issues. And for language learning content, like posters and flashcards, the biggest ‘minor’ issue – the poor spellings that crop up in AI image generation – makes the difference between useful and useless material.

Rescuing ChatGPT Posters

Take this example. It’s a simple brief – a stylish, 1950s style travel poster for France. Here’s the prompt I used to generate it:

Create a vibrant, stylish 1950s style travel poster featuring Paris and the slogan “La France”.

I wanted the text “La France” at the top, but, as you can see, we’ve got a rogue M in there instead of an N.

ChatGPT generated image of a French travel poster

To target that, I tap the image in the ChatGPT app. It calls up the image in edit mode, where I can highlight the areas that need attention:

ChatGPT image editing window

Then, I press Next, and can re-prompt for that part of the image. I simply restate the slogan instructions:

The slogan should read “La France”.

The result – a correct spelling, this time!

ChatGPT French travel poster

It can take a few goes. Dodgy spelling hasn’t been fixed; we’ve just been given a way to try again without scrapping the entire image. Certain details also won’t be retained between versions, such as the font, in this example. Others may be added, like the highly stylised merging of the L and F in the slogan (a feature, rather than a bug, I think!).

But the overall result is good enough that our lovely 1950s style poster wasn’t a total write-off.

Another case of AI being highly imperfect on its own, but a great tool when enhanced by us human users. It still won’t replace us – just yet!

Image tweaking is currently only available in the ChatGPT app (iOS / Android).

A collage of lots of word and picture cards.

Treating Leeches – Strategies for Suspended Anki Cards

How do you deal with leeches?

I’m not talking about traditional medicine here (not to downplay the modern application of the age-old treatment at all!). The leeches I’m more concerned with on the day-to-day are those Anki cards you forget so persistently that the app takes charge, suspending them from your deck.

It’s an apt description for an item that sucks away your time and motivation. I don’t know about you, but I also get that sinking feeling of failure when “card was a leech” pops up baldly.

Catching Leeches

First of all, fight that feeling. Leeches can creep up for a number of reasons, and your memory lapse is the least of them. Despite the cold rebuke, Anki means well. It suspends the cards to save you wasting any more time on part of your learning strategy that isn’t quite working. So, for now, let them go.

Instead, schedule a review of leeches regularly. Once a month or so seems about right if you’re a prolific language learning user – I always have a couple to deal with in that time span. In the Anki desktop app, head to Browse. Then, there are two ways to list leeches. You can simply highlight Suspended under Card State in the left-hand menu.

Exposing leeches via Suspended Cards in the Anki Browse window

Otherwise, you can use the fact that Anki tags leech cards with the text leech to draw them out. Highlight one of your decks in the left-hand menu, then in the bar at the top of the Browse panel, add the text tag:leech to narrow the results to that set.

Exposing leeches via tags in the Anki Browse window
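For the programmatically minded, the third-party AnkiConnect add-on exposes that same search syntax over a local HTTP API. A minimal sketch, assuming Anki is running with AnkiConnect installed on its default port:

```python
import json
import urllib.request

ANKI_CONNECT_URL = "http://localhost:8765"  # AnkiConnect's default local endpoint

def build_request(action, **params):
    """Assemble an AnkiConnect JSON payload (version 6 of the API)."""
    return {"action": action, "version": 6, "params": params}

def invoke(action, **params):
    """Send a request to a locally running Anki with AnkiConnect installed."""
    payload = json.dumps(build_request(action, **params)).encode("utf-8")
    with urllib.request.urlopen(ANKI_CONNECT_URL, payload) as response:
        reply = json.load(response)
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

# Example usage (with Anki running) – the same query as the Browse window:
#   leech_ids = invoke("findCards", query="tag:leech")
#   print(f"Found {len(leech_ids)} leech card(s)")
```

The query string accepts anything the Browse search bar does, so you can narrow it further, e.g. "deck:Swahili tag:leech".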

Now out in the open, we need to think of a rehabilitation strategy for our annoyingly helpful leeches.

Treating the Cause, Not the Problem

It’s tempting to just un-suspend by removing that leech tag, and pop the card right back in the deck. But there’s a reason Anki singled it out – something wasn’t working.

Often, it’s not simply failure to remember. Many of mine aren’t words I’ve forgotten, but words I get mixed up – either with other target language words, or with the wrong English translation. For example, in Greek, I leeched παραδέχομαι (paradéchomai – admit) by confusing it with αποδέχομαι (apodéchomai – accept), thanks to their similarity – same root verb, different prefix.

It’s not always just soundalikes, either, but happens with concepts. Left and right are a case in point in Swahili. I know both words very well – kushoto and kulia – but I’d always say one for the other, to the point that they were marked as leeches. I could recall them – it had just become 50/50 whether I’d say one or the other!

These cases of interference usually arise because there’s a lack of distinguishing information on the vocab card. The easiest way to fix that is to make your cards clearer and more precise. Any defining detail will do, and with language learning, context is key. Short sentences that embed the vocabulary items are perfect. To give the brain more to hang onto, you can expand them from basic X is Y types to X is Y, so/because…, and even make use of allegory and rhyme in your examples.

Taking the Swahili example, there’s a topical hook with those that adds layers of meaning: politics. There’s also a good rhyme for kulia in pia (also). So to my card, I add the sentence (and forgive the unpalatable mention of unpopular politicians here) Boris yuko kulia, na Rishi pia (Boris is on the right, and Rishi too).

And (of course) there’s a wee AI tip for that. If you struggle to find rhymes – not unreasonable if you’re at an early stage in a language – then just ask your LLM of choice for rhyming pointers, or even entire couplets. It’s one of the things it does a pretty decent job of!

Asking ChatGPT-4 for rhyming words in foreign languages.

Leeches are an initially frustrating but ultimately helpful feature of the Anki lifestyle! Do you have alternative methods for bashing them? Let us know in the comments!

Two AI robots squaring up to each other

AI Worksheet Wars : Google Gemini Advanced vs. ChatGPT-4

With this week’s release of Gemini Advanced, Google’s latest, premium AI model, we have another platform for language learning content creation.

Google fanfares Gemini as the “most capable AI model” yet, releasing benchmark results that position it as a potential ChatGPT-4 beater. Significantly, Google claims that their new top model even outperforms humans at some language-based benchmarking.

So what do those improvements hold for language learners? I decided to put Gemini Advanced head-to-head with the leader to date, ChatGPT-4, to find out. I used the following prompt on both ChatGPT-4 and Gemini Advanced to create a topic prep style worksheet like those I use before lessons. A target language text, vocab support, and practice questions – perfect topic prep:

Create an original, self-contained French worksheet for students of the language who are around level A2 on the CEFR scale. The topic of the worksheet is “Reality TV in France”.

The worksheet format is as follows:

– An engaging introductory text (400 words) using clear and idiomatic language
– Glossary of 10 key words / phrases from the text (ignore obvious cognates with English) in table format
– Reading comprehension quiz on the text (5 questions)
– Gap-fill exercise recycling the same vocabulary and phrases in a different order (10 questions)
– ‘Talking about it’ section with useful phrases for expressing opinions on the topic
– A model dialogue (10-12 lines) between two people discussing the topic
– A set of thoughtful questions to spark further dialogue on the topic
– An answer key covering all the questions

Ensure the language is native-speaker quality and error-free.

I then laid out the results, with minimal extra formatting, in PDF files (much as I’d use them for my own learning).

Here are the results.

ChatGPT-4

ChatGPT-4 gives solid results, much as expected. I’d been using the platform for my own custom learning content for a while, and it’s both accurate and dependable.

The introductory text covered the real-world topic well, albeit a little dry in tone. The glossary was reasonable, although ChatGPT-4 had, as usual, problems leaving out “obvious cognates” as per the prompt instructions. It’s a problem I’ve noticed often with other LLMs too – workarounds are often necessary to fix these biases.

Likewise, the gap-fill was not “in a different order”, as prompted (and again, exposing a weakness of most LLMs). The questions are in the same order as the glossary entries they refer to!

Looking past those issues – which we could easily correct manually, in any case – the questions were engaging and sensible. Let’s give ChatGPT-4 a solid B!

A French worksheet on Reality TV, created by AI platform ChatGPT-4.

You can download the ChatGPT-4 version of the worksheet from this link.

Gemini Advanced

And onto the challenger! I must admit, I wasn’t expecting to see huge improvements here.

But instantly, I prefer the introductory text. It’s stylistically more interesting; it just gets what I wanted when I asked for it to be “engaging”. It’s hard to judge reliably, but I also think it’s closer to a true CEFR A2 language level. Compare it with the encyclopaedia-style ChatGPT-4 version, and it’s more conversational, and certainly more idiomatic.

That attention to idiom is apparent in the glossary, too. There’s far less of that cognate problem here, making for a much more practical vocab list. We have some satisfyingly colloquial phrasal verbs that make me feel that I’m learning something new.

And here’s the clincher: Gemini Advanced aced the randomness test. While the question quality matched ChatGPT-4, the random delivery means the output is usable off the bat. I’m truly impressed by that.

A French worksheet on Reality TV, created by Google's premium AI platform, Gemini Advanced.

You can download the Gemini Advanced version of the worksheet from this link.

Which AI?

After that storming performance by Gemini Advanced, you might expect my answer to be unqualified support for that platform. And, content-wise, I think it did win, hands down. The attention to the nuance of my prompt was something special, and the texts are just more interesting to work with. Big up for creativity.

That said, repeated testing of the prompt did throw up the occasional glitch. Sometimes, it would fail to output the answers, instead showing a cryptic “Answers will follow.” or similar, requiring further prompting. Once or twice, the service went down, too, perhaps a consequence of huge traffic during release week. They’re minor things for the most part, and I expect Google will be busy ironing them out over coming months.

Nonetheless, the signs are hugely promising, and it’s up to ChatGPT-4 now to come back with an even stronger next release. I’ll be playing around with Gemini Advanced a lot in the next few weeks – I really recommend that other language learners and teachers give it a look, too!

If you want to try Google’s Gemini Advanced, there’s a very welcome two-month free trial. Simply head to Gemini to find out more!

An illustration of a cute robot looking at a watch, surrounded by clocks, illustrating AI time-out

Avoiding Time-Out with Longer AI Content

If you’re using AI platforms to create longer language learning content, you’ll have hit the time-out problem at some point.

The issue is that large language models like ChatGPT and Bard use a lot of computing power at scale. To keep things to a sensible minimum, output limits are in place. And although they’re often generous, even on free platforms, they can fall short for many kinds of language learning content.

Multi-part worksheets and graded reader style stories are a case in point. They can stretch to several pages of print, far beyond most platform cut-offs. Some platforms (Microsoft Copilot, for instance) will just stop mid-sentence before a task is complete. Others may display a generation error. Very few will happily continue generating a lengthy text to the end.

You can get round it in many cases by simply stating “continue“. But that’s frustrating at best. And at worst, it doesn’t work at all; it may ignore the last cut-off sentence, or lose its thread entirely. I’ve had times when a quirky Bing insists it’s finished, and refuses, like a surly tot, to pick up where it left off.

Avoiding Time-Out with Sectioning

Fortunately, there’s a pretty easy fix. Simply specify in your prompt that the output should be section by section. For example, take this prompt, reproducing the popular graded reader style of language learning text but without the length limits:

You are a language tutor and content creator, who writes completely original and exciting graded reader stories for learners of all levels. Your stories are expertly crafted to include high-frequency vocabulary and structures that the learner can incorporate into their own repertoire.

As the stories can be quite long, you output them one chapter at a time, prompting me to continue with the next chapter each time. Each 500-word chapter is followed by a short glossary of key vocabulary, and a short comprehension quiz. Each story should have five or six chapters, and have a well-rounded conclusion. The stories should include plenty of dialogue as well as prose, to model spoken language.

With that in mind, write me a story for French beginner learners (A1 on the CEFR scale) set in a dystopian future.

By sectioning, you avoid time-out. Now, you can produce some really substantial learning texts without having to prod and poke your AI to distraction!
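The pattern is essentially a driver loop: generate a chapter, keep it in the conversation history, and ask for the next one. Sketched in Python – the generate_chapter() function here is a placeholder stub, standing in for whichever LLM API you actually use:

```python
# A sketch of the section-by-section pattern as a driver loop. The
# generate_chapter() stub stands in for a real LLM API call – swap in
# your platform's chat-completion client of choice.

def generate_chapter(history, chapter_number):
    """Stand-in for an LLM call: in real use, send `history` plus a
    'continue with chapter N' instruction and return the model's reply."""
    return f"Chapitre {chapter_number}: ... (500 words, glossary, quiz) ..."

def generate_story(total_chapters=5):
    history = ["<system prompt: graded reader instructions>"]
    chapters = []
    for n in range(1, total_chapters + 1):
        chapter = generate_chapter(history, n)
        history.append(chapter)  # keep context so the story stays coherent
        chapters.append(chapter)
    return chapters

story = generate_story()
print(f"{len(story)} chapters generated")
```

The key design point is appending each chapter back into the history, so every request carries the story so far and the model doesn’t lose its thread.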

There may even be an added benefit. I’ve noticed that the quality of texts output by section may even be slightly higher than with all-at-once content. Perhaps this is connected to recent findings that instructing AI to think step by step, and break things down, improves results.

If there is a downside, it’s simply that sectioned output will take up more conversational turns. Instead of one reply ‘turn’, you’re getting lots of them. This eats into your per-conversation or per-hour allocation on ChatGPT Plus and Bing, for example. But the quality boost is worth it, I think.

Has the section by section trick improved your language learning content? Let us know your experiences in the comments!

An image of a robot struggling with numbered blocks. AI has a problem with random ordering.

Totally Random! Getting Round AI Random Blindness in Worksheet Creation

If you’re already using AI for language learning content creation, you’ve probably already cried in horror at one of its biggest limitations. It’s terrible at putting items in a random order.

Random order in language learning exercises is pretty essential. For instance, a ‘missing words’ key below a gap-fill exercise should never list words in the same order as the questions they belong to.

Obvious, right? Well, to AI, it isn’t!

Just take the following prompt, which creates a mini worksheet with an introductory text and a related gap-fill exercise:

I am learning French, and you are a language teacher and content creator, highly skilled in worksheet creation.
Create a French worksheet for me on the topic “Environmentally-Friendly Travel”. The language level should be A2 on the CEFR scale, with clear language and a range of vocabulary and constructions.
The worksheet starts with a short text in the target language (around 250 words) introducing the topic.
Then, there follows a gap-fill exercise; this consists of ten sentences on the topic, related to the introductory text. A key content word is removed from each sentence for the student to fill in. For instance, ‘je —— en train’ (where ‘voyage’ is removed).
Give a list of the removed words in a random order below the exercise.

The output is very hit and miss – and much more miss! Perhaps 90% of the time, ChatGPT lists the answer key in the order of the questions. Either that, or it will produce feeble jumbling attempts, like reversing just the first two items on the list.

AI’s Random Issue

One prompt-tweaking tip you can try in these cases is SHOUTING. Writing this instruction in caps can sometimes increase the bullseyes. Put them IN RANDOM ORDER, darn it! It doesn’t help much here, though. It just doesn’t seem worth relying on Large Language Models like ChatGPT to produce random results.

The reason has something to do with the fundamental way these platforms work. They’re probability machines, choosing each next word based on how likely word X, Y or Z is to follow. Their whole rationale is not to be random; you might even call them anti-random machines.

No wonder they’re rubbish at it!

A Road Less Random

So how can we get round this in a reliable way that works every time?

The simplest fix, I’ve found, is to find another, non-random way to list things differently from the question order. And the easiest way to do that is to simply list things alphabetically:

I am learning French, and you are a language teacher and content creator, highly skilled in worksheet creation.
Create a French worksheet for me on the topic “Environmentally-Friendly Travel”. The language level should be A2 on the CEFR scale, with clear language and a range of vocabulary and constructions.
The worksheet starts with a short text in the target language (around 250 words) introducing the topic.
Then, there follows a gap-fill exercise; this consists of ten sentences on the topic, related to the introductory text. A key content word is removed from each sentence for the student to fill in. For instance, ‘je —— en train’ (where ‘voyage’ is removed).
Give a list of the removed words in alphabetical order below the exercise.

The likelihood of this order being the same as the questions is minimal. Hilariously, AI still manages to mess this order up at times, adding the odd one or two out-of-place items at the end of the list, as if it forgot what it was doing, realised, and quickly bunged them back in. But the technique works just fine for avoiding the order giving the answers away.

A simple fix that basically ditches randomness completely, yes. But sometimes, the simplest fixes are the best!
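And if you do want genuine randomness, there’s always the option of post-processing the model’s answer key yourself. A couple of lines of Python will sort – or properly shuffle – the list; the word list here is just an illustrative example:

```python
import random

# Illustrative answer key as the model might emit it, in question order.
answers = ["voyage", "train", "empreinte", "vélo", "covoiturage"]

# Deterministic fix: alphabetical order. Note this is plain codepoint
# sorting, which puts accented letters like 'é' after 'z' – use the
# locale module if you need proper French collation.
alphabetical = sorted(answers, key=str.lower)

# Or genuinely random: shuffle locally, retrying in the unlikely event
# the shuffle lands back on the original question order.
shuffled = answers[:]
while shuffled == answers:
    random.shuffle(shuffled)

print(alphabetical)
print(shuffled)
```

Unlike an LLM, random.shuffle really is (pseudo)random, so the key order can never give the answers away.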

Random blindness is a good reminder that AI isn’t a magical fix-all for language learning content creation. But, with an awareness of its limitations, we can still achieve some great results with workarounds.

AI prompt engineering - the toolkit for getting better results from your platform of choice.

Better AI Language Learning Content with C-A-R-E

AI isn’t just for chat – it’s also great at making static language learning content. And as AI gains ground as a content creation assistant, prompt engineering – the art of tailoring your requests – becomes an ever more important skill.

As you’d expect, frameworks and best practice guides abound for constructing the perfect prompt. They’re generally all about defining your request with clarity, in order to minimise AI misfires and misunderstandings. Perhaps the most well-known and effective of these is R-T-F – that’s role, task, format. Tell your assistant who it is, what to do, and how you want the data to look at the end of it.

Recently, however, I’ve been getting even more reliable MFL content with another prompt framework: C-A-R-E. That is:

  • Context
  • Action
  • Result
  • Example(s)

Some of these steps clearly align with R-T-F. Context is a broader take on role, action matches task, and result roughly corresponds to format. But the kicker here is the addition of example(s). A wide-ranging academic investigation into effective prompting recently flagged “example-driven prompting” as an important factor in improving output, and for good reason: LLMs construct responses from their training data – they’re built on the concept of parroting examples.

Crafting AI prompts with C-A-R-E

As far as language content is concerned, C-A-R-E prompting is particularly good for ‘fixed format’ activity creation, like gap-fills or quizzes. There’s a lot of room for misinterpretation when describing a word game simply with words; a short example sets AI back on track. For example:

– I am a French learner creating resources for my own learning, and you are an expert language learning content creator.
– Create a gap-fill activity in French for students around level A2 of the CEFR scale on the topic “Environment”.
– It will consist of ten sentences on different aspects of the topic, with a key word removed from each one for me to fill out. Provide the missing words for me in an alphabetically sorted list at the end as a key.
– As an example, a similar question in English would look like this: “It is very important to look after the ———- for future generations.”
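If you generate a lot of activities like this, it can be handy to assemble C-A-R-E prompts programmatically rather than retyping them. Here’s a minimal sketch – the function name and structure are purely illustrative, not part of any AI platform’s API:

```python
# A hypothetical helper that joins the four C-A-R-E components
# (Context, Action, Result, Example(s)) into one prompt string.

def build_care_prompt(context: str, action: str, result: str, examples: list[str]) -> str:
    """Assemble a C-A-R-E style prompt, one component per line."""
    parts = [
        context,  # C: who you are and who the AI should be
        action,   # A: what to create
        result,   # R: how the output should be structured
    ]
    if examples:  # E: one or two short model items to anchor the format
        parts.append("As an example, a similar item would look like this:")
        parts.extend(f"- {ex}" for ex in examples)
    return "\n".join(parts)

prompt = build_care_prompt(
    context="I am a French learner creating resources for my own learning, "
            "and you are an expert language learning content creator.",
    action="Create a gap-fill activity in French for students around level A2 "
           "of the CEFR scale on the topic 'Environment'.",
    result="Ten sentences, each with a key word removed, plus an alphabetically "
           "sorted answer key at the end.",
    examples=["It is very important to look after the ———- for future generations."],
)
print(prompt)
```

The resulting string can be pasted into (or sent to) whichever assistant you prefer; the point is simply that keeping the four components separate makes it easy to swap in a new topic, level, or example without rebuilding the whole prompt.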

This produces excellent results in Microsoft Copilot / Bing (which we love for the freeness, obviously!) and ChatGPT. For example:

Creating AI language learning content with Microsoft Copilot / Bing Chat

Providing short examples seems like an obvious and intuitive step, but it’s surprising how infrequently we tend to do it in our AI prompts. The gains are so apparent that it’s worth making a note to always add a little C-A-R-E to your automatic content creation.

If you’ve been struggling to get reliable (or just plain sensible!) results with your AI language learning content, give C-A-R-E a try – and let us know how it goes in the comments!

Language learning - making sense of the wall of words.

Playing with Words: How ‘The Language Game’ Can Boost Your Language Learning

It doesn’t happen too often, but now and again I come across a linguistics book that has some immediately liftable, transferable insights for language learners, formal linguists and otherwise. So it was with The Language Game, my star read over a quiet Christmas up in Aberdeenshire this year.

As polyglots and language enthusiasts, we often get lost in the intricate maze of vocabulary lists, grammar rules, and perfect pronunciation. We diligently chase language as a concrete, unchanging entity, forgetting the exhilarating dance of meaning that is the true essence of language.

But what if we’ve been approaching language learning from a slightly skewed perspective?

The Language Game, Morten H. Christiansen and Nick Chater’s paradigm-changing exploration of the improvisational nature of language, suggests that maybe we have. They argue that, much like life itself, language is a constant improvisation and renegotiation of meaning. From the ever-shifting, multifaceted definitions of words like light and live (just think of all the different, often tenuously connected things they have come to mean), language isn’t a fixed system, but a dynamic game we play. At any point, we can recruit existing items in novel ways that suit our immediate needs. This game relies almost completely on context, arising from our in-the-moment desire to communicate rather than adhering to strict, unchanging rules.

What does this mean for us second (third, fourth etc.) language learners? It reminds us that language isn’t a static mountain to be conquered, but a playful river we navigate as it continues to change. The path forward lies not in rote memorisation, but in embracing the creative process of meaning-making in the moment.

Lessons from The Language Game

The Language Game is a compelling, accessibly written book and an easy read even if you don’t have a background in formal linguistics. I really recommend you dip in yourself to benefit from the insights inside it. In the meantime, here are the main polyglot takeaways that I found beneficial – all great rules to learn by as a foreign language enthusiast.

Meaning isn’t set in stone

Ease off on exact dictionary definitions and rigid rules. Focus on using words in context, adapting to the ever-evolving “language games” around you, and consuming as much contemporary media as possible.

Context is King

Don’t downplay the role of setting in what words and sentences mean. If something doesn’t make sense, pull back to see the bigger picture, and have a stab at guessing from the context. Always pay close attention to the social landscape where language unfolds. Words are chameleons, their meaning shifting with the hues of the situation.

Mastery takes repetition

Even the expectation that toddlers incorporate ten new words perfectly into their mental lexicon is on shaky ground. Investigations into the infamous ‘cheem’ experiments reveal that kids grasp new concepts quickly, but lose them just as fast without reinforcement.

Let go of the pressure to “gobble up” language in this way. Language use isn’t simply ‘learn it once and remember it forever’. It builds gradually, layer by layer, through repeated exposure and playful experimentation. Fleeting memory fades, but repeated use cements meaning.

The Language Game is Just Charades

Gestures, context, and playful guessing guide our understanding. Just as children infer meaning from context, so too do we adults when we play charades. The metaphor of charades – using whatever is at hand to produce meaning in the mind of another – extends to everyday communication, too.

Embrace the guessing game – it’s a powerful learning tool. Guessing is good – don’t be afraid to take a leap of faith with a new word. Use it, even if you’re unsure.

Remember, language is a game, and games are meant to be fun. So let’s play!

The Language Game by Morten H. Christiansen and Nick Chater is available as a paperback and Kindle book from Amazon.

A robot tracking resolutions on a tick list.

Setting Language Learning Resolutions – Ambitious But Kind

It’s nearly that day again – you know, the one with all the ones, where we start thinking about new beginnings and a new ‘us’. Of course, there’s nothing literally magical about January 1st. We can, and should, make resolutions and plans whenever we want to achieve something like learning a language.

But isn’t there just something about it that makes goal-setting feel a bit more exciting?

A good coaching friend of mine has a great attitude towards resolutions. Always advocating self-kindness, she insists on avoiding regimented ‘must do’ lists for the new year (or any other time, for that matter). Instead, she suggests creating ‘would like to do’ lists. They’re lists that acknowledge that, in an ideal world, we would tick every box – but our worlds aren’t always ideal.

With that in mind, we can mindfully put together lists of what we’d like to tackle, given the time and energy. One solid piece of advice to make those ‘like to’ goals even likelier is to be concrete about them. Woolly, amorphous targets like ‘improve my French‘ are shaky on two fronts. Firstly, they make the goal down-negotiable on demand (‘I learnt one extra word this year‘ could cover ‘improve’!). Secondly, they’re immeasurable. You can’t track your progress towards something that isn’t defined.

It’s one reason that the CEFR language levels are so good for language resolutions. For example, “Achieve B2 in French” is defined by the competences in the official framework itself. But they’re also officially measurable, as you can aim for accreditation at those levels. “Pass a B2 French exam” is an even tighter bullseye to aim for.

We can also measure progress by effort, as well as result. An easy way to do this is to set a lesson goal. “I’ll do one French lesson every week for the whole year” is a good yardstick for time put into learning, and – you would expect – will deliver that precious improvement too.

Language Resolutions for 2024

So, putting my money where my mouth is (I promise I’m not all talk), here are my main language resolutions for the New Year – and that’s would like to do, not must do!

French – systematically work through TY French Tutor to formalise grammar knowledge
Gaelic – consolidate B1 through group classes; socialise more in Gaelic through interest groups
German – read four books in the language over the course of the year
Greek – consolidate B1 with continued weekly conversation lessons
Norwegian – consume more media in Norwegian; at least one 30-minute podcast weekly; arrange a date for the Bergenstest and find a tutor to work towards it with
Polish – resume actively working on language with weekly lessons, get back to B1
Swahili – consolidate A2
Swedish – hit B1 by May (Malmö 2024!); a podcast a week and completing the Duolingo Swedish course

And, of course: keep dabbling!

Wishing all Polyglossic’s visitors a very happy, healthy and successful 2024. Thanks for your encouragement and support over this and previous years – I couldn’t do it without you all!