AI Parallel Texts for Learning Two Similar Languages

I’ve seen a fair few social media posts recently about linguist Michael Petrunin’s series of Comparative Grammars for polyglots. They seem to have gone down a storm, not least because of the popularity of triangulation (learning one new language through another) as a polyglot strategy.

They’re a great addition to the language learning bookshelf, since there’s still so little formal course material that uses this principle. Of course, you can triangulate by selecting course books in your base language, as many do with Assimil and other series like the Éditions Ellipse.

Parallel Texts à la LLM

But LLMs like ChatGPT, which already do a great job of the parallel text learning style, are pretty handy for creative comparative texts, too. Taking a story format, here’s a sample parallel text prompt for learners of German and Dutch. It treats each sentence as a mini lesson in highlighting differences between the languages.

I’m learning Dutch and German, two closely related languages. To help me learn them in parallel and distinguish them from each other, create a short story for me in Dutch, German and English in parallel text style. Each sentence should be given in Dutch, German and English. Purposefully use grammatical elements which highlight the differences between the languages, and which a student of both needs to work hard to distinguish, in order to make the text more effective.

The language level should be lower intermediate, or B1 on the CEFR scale. Make the story engaging, with an interesting twist. Format the text so it is easy to read, grouping the story lines together with each separate sentence on a new line, and the English in italics.

You can tweak the formatting, as well as the premise – specify that the learner already speaks one of the languages more proficiently than the other, for example. You could also offer a scenario for the story to start with, so you don’t end up with “once upon a time” every run. But the result is quite a compact, step-by-step learning resource that builds on a comparative approach.
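If you’d rather not paste the prompt into a chat window each run, the same idea scripts neatly over the API. Here’s a minimal sketch using the OpenAI Python SDK – the model name and the condensed prompt wording are placeholders to adapt, not a fixed recipe:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

prompt = (
    "I'm learning Dutch and German, two closely related languages. Create a "
    "short story in Dutch, German and English in parallel text style, each "
    "sentence given in all three languages. Purposefully use grammar that "
    "highlights the differences between Dutch and German. Level: B1 on the "
    "CEFR scale. Group the story lines together, each sentence on a new line."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any capable chat model will do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```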

ChatGPT creating parallel texts in German and Dutch with an English translation.

Variations and Limitations

I also tried prompting for explanatory notes:

Where the languages differ significantly in grammar / syntax, add an explanatory note (in English) to the sentences, giving details.

This was very hit and miss, with quite unhelpful notes in most runs. In fact, this exposes the biggest current limitation of LLMs: they’re excellent content creators, but still far off the mark in terms of logically appraising the language they create.

They are, however, pretty good at embellishing the format of their output. The following variation is especially impressive in an LLM platform that shows a preview of its code:

I’m learning Spanish and Portuguese, two closely related languages. To help me learn them in parallel and distinguish them from each other, create a short story for me in Spanish, Portuguese and English in parallel text style. Each sentence should be given in Spanish, Portuguese and English. Purposefully use grammatical elements which highlight the differences between the languages, and which a student of both needs to work hard to distinguish, in order to make the text more effective.

The language level should be lower intermediate, or B1 on the CEFR scale. Make the story engaging, with an interesting twist.

The output should be an attractively formatted HTML page, using a professional layout. Format the sentences so they are easy to read, grouping the story lines together with each separate sentence on a new line, and the English in italics. Hide the English sentences first – include a “toggle translation” button for the user.
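Because the request is for a self-contained HTML page, this variation also lends itself to scripting. A rough sketch with the Anthropic Python SDK – the model string and filename are my own choices for illustration:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

prompt = (
    "Create a short parallel-text story in Spanish, Portuguese and English "
    "for a B1 learner, as an attractively formatted, self-contained HTML "
    "page with a 'toggle translation' button that hides or shows the English."
)

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # placeholder model string
    max_tokens=4000,
    messages=[{"role": "user", "content": prompt}],
)

# Save the page to open directly in a browser. (You may need to strip any
# markdown code fences the model wraps around the HTML.)
with open("parallel_story.html", "w", encoding="utf-8") as f:
    f.write(message.content[0].text)
```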

Claude by Anthropic creating an HTML-formatted parallel story in Spanish and Portuguese.

It’s another use case that highlights LLMs’ greatest strength: the creation of humanlike texts. For linguists, it matters not a jot how much (or little) deep understanding there is beneath that. With the language quality now almost indistinguishable from real people-speak, AI texts serve as brilliant ‘fake authentic’ language models.

e-Stories as parallel texts are yet another fun, useful flavour of that!

Robots exchanging gifts. We can exchange - and adapt - digital resources now, with Claude's shareable Artifacts.

Sharing Your Language Learning Games with Claude Artifacts

If Claude’s recent improvements weren’t already impressive enough, Anthropic has only gone and done it again – this time, by making Artifacts shareable.

Artifacts are working versions of the programs and content you, the user, prompt for in Claude. For example, they pop up when you ask the AI to write a language practice game in HTML, running the code it writes as a playable activity. Instant language learning games – no coding required.

Now, you can share your working, fully playable creations with a simple link.

Instant Spanish Quiz with Claude

Take this simple Spanish quiz (very topical given the forthcoming Euro 2024 final!). I prompted for it as follows:

Create an original, self-contained quiz in Spanish for upper beginner / lower intermediate students of the language, on the topic “Spain in the European Football Championships”. It should be completely self-contained in an HTML page. The quiz should be multiple choice, with ten questions each having four alternative answer buttons – only one is right, and there is always one ‘funny’ alternative answer in the mix too.

Every time the quiz is played, the questions and the answers are in a random order. The student can keep trying answers until they get the right one (obviously after clicking an answer button, it should be disabled). Incorrect buttons turn red – correct ones green. Keep score of the player’s accuracy as they work through the questions (number of correct clicks / total clicks).

Make sure it looks attractive, slick and smart too, with CSS styling included in the HTML page.

If you have Artifacts turned on (see here for more), you should see your working game appear in a new pane. But now, you’ll also see a little Publish link in the bottom-right corner. Click this, and you can choose to make your creation public with an access link.

Publishing your working language activities using a share link with Claude Artifacts

Remixing Artifacts

But wait – there’s more. When colleagues access your Artifact, they will see a Remix button in that bottom-right corner.

Remixing Artifacts in Claude

By hitting that, they can pick up where you left off and tweak your materials with further prompting. For instance, to keep the quiz format but change the language and topic, they could simply ask:

Now create a version of this quiz for French learners on the topic “France at the Olympic Games”.

It makes for an incredibly powerful way to network your learning resources. It’s also perfectly possible to take advantage of all this using only Claude’s free tier, which gives you 10 or so messages every few hours.

More than enough to knock up some learning games.

Have you created anything for colleagues to adapt and share on in Claude? Let us know in the comments!

A language learning topic menu created by Claude AI.

Claude Artifacts for Visually Inspired Language Learning Content

If you create language learning content – for yourself, or for your students – then you need to check out the latest update to Claude AI.

Along with a competition-beating new model release, Anthropic have added a new feature called Artifacts to the web interface. Artifacts detects when there is self-contained content it can display – like webpage code, or worksheet text – and it pops that into a new window, running any interactive elements on the fly. In a nutshell, you can see what you create as you create it.

This makes it incredibly easy to wrap your learning content up in dynamic formats like HTML and JavaScript, then visually preview and tweak it to perfection before publishing online. This favours interactive elements like inline games, which can be impressively slick when authored by Claude 3.5 Sonnet; it turns out that model update is a real platform-beater when it comes to coding.

Using Claude’s new Artifacts Feature

You can give Artifacts a whirl for free, as Claude’s basic tier includes a limited number of interactions with its top model every few hours. That’s more than enough to generate some practical, useful material to use straight away.

First of all, you’ll need to ensure that the feature is enabled. After logging into Claude, locate the little scientific flask icon by the text input and click it.

Claude – locating the experimental features toggle

A window should pop up with the option to enable Artifacts if it’s not already on.

Claude – enabling Artifacts.

Now it’s on, you just need a prompt that will generate some ‘Artifactable’ content. Try the prompt below for an interactive HTML worksheet with a reading passage and quiz:

Interactive HTML Worksheet Prompt

Create an original interactive workbook for students of French, as a self-contained, accessible HTML page. The target language level should be A2 on the CEFR scale. The topic of the worksheet is “Summer Holidays”. The objective is to equip students with the vocabulary and structures to chat to native speakers about the topic.

The worksheet format is as follows:

– An engaging introductory text (250 words) using clear and idiomatic language
– A comprehensive glossary of key words and phrases from the text in table format
– A gap-fill exercise recycling the vocabulary and phrases – a gapped phrase for each question with four alternative answer buttons for students to select. If they select the correct one, it turns green and the gap in the sentence is filled in. If they choose incorrectly, the button turns red. Students may keep trying until they get the correct answer.

Ensure the language is native-speaker quality and error-free. Adopt an attractive colour scheme and visual style for the HTML page.

With Artifacts enabled, Claude should spool out the worksheet in its own window. You will be able to test the interactive elements in situ – and then ask Claude to tweak and update as required! Ask it to add scoring, make it drag-and-drop – it’s malleable ad infinitum.

An interactive worksheet created by Claude, displaying in the new Artifacts window

Once created, you can switch to the Artifacts Code tab, then copy-paste your page markup into a text editor to save as an .html file. Then, it’s just a case of finding a place to upload it to.

Pulling It Together

After you’re done with the worksheets, you can even ask Claude to build a menu system to pull them all together:

Now create a fun, graphical, colourful Duolingo-style topic menu which I can use to link to this worksheet and others I will create. Use big, bold illustrations. Again, ensure that it is a completely self-contained HTML file.

Here’s the result I got from running that – again, instantly viewable and tweakable:

A language website menu created by Claude, displayed in the Artifacts feature.

You’ve now got the pieces to start stitching together something much bigger than a single worksheet.

Instant website – without writing a line of code!

Have you had the chance to play with Claude’s new Artifacts feature yet? Let us know in the comments what you’ve been creating!

ChatGPT French travel poster

A Second Shot at Perfect Posters – ChatGPT’s Image Tweaker

The big ChatGPT news in recent weeks is about images, rather than words. The AI frontrunner has added a facility to selectively re-prompt for parts of an image, allowing us to tweak sections that don’t live up to prompt expectations.

In essence, this new facility gives us a second shot at saving otherwise perfect output from minor issues. And for language learning content, like posters and flashcards, the biggest ‘minor’ issue – the poor spellings that crop up in AI image generation – makes the difference between useful and useless material.

Rescuing ChatGPT Posters

Take this example. It’s a simple brief – a stylish, 1950s style travel poster for France. Here’s the prompt I used to generate it:

Create a vibrant, stylish 1950s style travel poster featuring Paris and the slogan “La France”.

I wanted the text “La France” at the top, but, as you can see, we’ve got a rogue M in there instead of an N.

ChatGPT generated image of a French travel poster

To target that, I tap the image in the ChatGPT app. It calls up the image in edit mode, where I can highlight the areas that need attention:

ChatGPT image editing window

Then, I press Next, and can re-prompt for that part of the image. I simply restate the slogan instructions:

The slogan should read “La France”.

The result – a correct spelling, this time!

ChatGPT French travel poster

It can take a few goes. Dodgy spelling hasn’t been fixed; we’ve just been given a way to try again without scrapping the entire image. Certain details also won’t be retained between versions, such as the font, in this example. Others may be added, like the highly stylised merging of the L and F in the slogan (a feature, rather than a bug, I think!).

But the overall result is good enough that our lovely 1950s style poster wasn’t a total write-off.

Another case of AI being highly imperfect on its own, but a great tool when enhanced by us human users. It still won’t replace us – just yet!

Image tweaking is currently only available in the ChatGPT app (iOS / Android).

Neon robots racing. Can Claude 3 win the AI race with its brand new set of models?

Claude 3 – the New AI Models Putting Anthropic Back in the Game

You’d be forgiven for not knowing Claude. This chirpily-named AI assistant from Anthropic has been around for a while, like its celebrity cousin ChatGPT. But while ChatGPT hit the big time, Claude hasn’t quite progressed beyond the Other Platforms heading in most AI presentations – until now.

What changed everything this month was Anthropic’s release of all-new Claude 3 models – models that not only caught up with ChatGPT-4 benchmarks, but surpassed them. It’s wise to take benchmarks with a pinch of salt, not least because they’re often internal, proprietary measures. But the buzz around this latest release echoed through the newsletters, podcasts and socials, suggesting that this really was big news.

Tiers of a Claude

Claude 3 comes in three flavours. The most powerful, Opus, is the feistiest ChatGPT-beater by far. It’s also, understandably, the most processor-intensive, so available only as a premium option. That cost is on a level with competitors’ premium offerings, at just under £20 a month.

But just a notch beneath Opus, we have Sonnet. That’s Claude 3’s mid-range model, and the one you’ll chat with for free at https://claude.ai/chats. Anthropic reports that Sonnet still pips ChatGPT-4 on several reasoning benchmarks, with users praising how naturally conversational it seems.

Finally, we have a third tier, Haiku. This is the most streamlined of the three in terms of computing power. But it still manages to trounce ChatGPT-3.5 while coming impressively close to most of those ChatGPT-4 benchmarks. And the real clincher?

It’s cheap.

For developers, Haiku costs a fraction of the per-token price of competing models. That makes it a lot cheaper to build into language learning apps, opening up a route for many to incorporate AI into their software. That lower power usage is also a huge win against a backdrop of serious concerns around AI energy demands.
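For a sense of how little code such an integration takes, here’s a minimal sketch using the Anthropic Python SDK – the prompt is purely illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# Switching tiers is just a case of changing the model string, e.g. to
# an Opus or Sonnet identifier; Haiku is the budget option.
message = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": "Give me five B1-level German conversation questions "
                   "about travel, each with a model answer.",
    }],
)
print(message.content[0].text)
```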

Claude and Content Creation

So how does it measure up in terms of language learning content? I set Claude’s Sonnet model loose on the sample prompt from my recent Gemini Advanced vs. ChatGPT-4 battle. And the verdict?

It more than holds its own.

Here’s the prompt (feel free to adapt and use this for your own worksheets – it creates some lovely materials!):

Create an original, self-contained French worksheet for students of the language who are around level A2 on the CEFR scale. The topic of the worksheet is “Reality TV in France”.

The worksheet format is as follows:

– An engaging introductory text (400 words) using clear and idiomatic language
– Glossary of 10 key words / phrases from the text (ignore obvious cognates with English) in table format
– Reading comprehension quiz on the text (5 questions)
– Gap-fill exercise recycling the same vocabulary and phrases in a different order (10 questions)
– ‘Talking about it’ section with useful phrases for expressing opinions on the topic
– A model dialogue (10-12 lines) between two people discussing the topic
– A set of thoughtful questions to spark further dialogue on the topic
– An answer key covering all the questions

Ensure the language is native-speaker quality and error-free.

Sonnet does an admirable job. If I’m nitpicking, the text is perhaps slightly less fun and engaging than Gemini Advanced’s. But then, that’s the sort of thing you could sort out by tweaking the prompt.

Otherwise, it’s factual and relevant, with some nice authentic cultural links. The questions make sense and the activities are useful. Claude also followed instructions closely, particularly with the inclusion of an answer key (so often missing in lesser models).

There’s little to quibble over here.

A language learning worksheet created with Claude 3 Sonnet.

A Claude 3 French worksheet. Click here to download the PDF!

Another Tool For the Toolbox

The claims around Claude 3 are certainly exciting. And they have substance – even the free Sonnet model available at https://claude.ai/chats produces content on a par with the big hitters. Although our focus here is worksheet creation, its conversational slant makes it a great option for experimenting with live AI language games, too.

So if you haven’t had a chance yet, go and get acquainted with Claude. Its all-new model set, including a fabulous free option, makes it one more essential tool in the teacher’s AI toolbox.

An illustration of a cute robot looking at a watch, surrounded by clocks, illustrating AI time-out

Avoiding Time-Out with Longer AI Content

If you’re using AI platforms to create longer language learning content, you’ll have hit the time-out problem at some point.

The issue is that large language models like ChatGPT and Bard use a lot of computing power at scale. To keep things to a sensible minimum, output limits are in place. And although they’re often generous, even on free platforms, they can fall short for many kinds of language learning content.

Multi-part worksheets and graded reader style stories are a case in point. They can stretch to several pages of print, far beyond most platform cut-offs. Some platforms (Microsoft Copilot, for instance) will just stop mid-sentence before a task is complete. Others may display a generation error. Very few will happily continue generating a lengthy text to the end.

You can get round it in many cases by simply stating “continue”. But that’s frustrating at best. And at worst, it doesn’t work at all; it may ignore the last cut-off sentence, or lose its thread entirely. I’ve had times when a quirky Bing insists it’s finished, and refuses, like a surly tot, to pick up where it left off.

Avoiding Time-Out with Sectioning

Fortunately, there’s a pretty easy fix. Simply specify in your prompt that the output should be section by section. For example, take this prompt, reproducing the popular graded reader style of language learning text but without the length limits:

You are a language tutor and content creator, who writes completely original and exciting graded reader stories for learners of all levels. Your stories are expertly crafted to include high-frequency vocabulary and structures that the learner can incorporate into their own repertoire.

As the stories can be quite long, you output them one chapter at a time, prompting me to continue with the next chapter each time. Each 500-word chapter is followed by a short glossary of key vocabulary, and a short comprehension quiz. Each story should have five or six chapters, and have a well-rounded conclusion. The stories should include plenty of dialogue as well as prose, to model spoken language.

With that in mind, write me a story for French beginner learners (A1 on the CEFR scale) set in a dystopian future.

By sectioning, you avoid time-out. Now, you can produce some really substantial learning texts without having to prod and poke your AI to distraction!
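The same sectioning trick carries over to scripted generation, where the “continue” turns can be automated rather than typed. A rough sketch with the OpenAI Python SDK, assuming a five-chapter story and a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

messages = [
    {"role": "system", "content":
        "You are a language tutor who writes original graded reader stories. "
        "Output exactly one 500-word chapter per reply, followed by a short "
        "glossary and comprehension quiz. The story has five chapters."},
    {"role": "user", "content":
        "Write chapter 1 of a story for French A1 learners, "
        "set in a dystopian future."},
]

chapters = []
for n in range(5):
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    chapter = response.choices[0].message.content
    chapters.append(chapter)
    # Keep the story coherent by feeding each reply back into the history,
    # then request the next section in a fresh, short turn.
    messages.append({"role": "assistant", "content": chapter})
    messages.append({"role": "user", "content": "Continue with the next chapter."})

print("\n\n".join(chapters))
```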

There may even be an added benefit. I’ve noticed that the quality of texts output by section can be slightly higher than with all-at-once content. Perhaps this is connected to recent findings that instructing AI to think step by step, and break things down, improves results.

If there is a downside, it’s simply that sectioned output will take up more conversational turns. Instead of one reply ‘turn’, you’re getting lots of them. This eats into your per-conversation or per-hour allocation on ChatGPT Plus and Bing, for example. But the quality boost is worth it, I think.

Has the section by section trick improved your language learning content? Let us know your experiences in the comments!

An image of a robot struggling with numbered blocks. AI has a problem with random ordering.

Totally Random! Getting Round AI Random Blindness in Worksheet Creation

If you’re already using AI for language learning content creation, you’ve probably already cried in horror at one of its biggest limitations. It’s terrible at putting items in a random order.

Random order in language learning exercises is pretty essential. For instance, a ‘missing words’ key below a gap-fill exercise should never list words in the same order as the questions they belong to.

Obvious, right? Well, to AI, it isn’t!

Just take the following prompt, which creates a mini worksheet with an introductory text and a related gap-fill exercise:

I am learning French, and you are a language teacher and content creator, highly skilled in worksheet creation.
Create a French worksheet for me on the topic “Environmentally-Friendly Travel”. The language level should be A2 on the CEFR scale, with clear language and a range of vocabulary and constructions.
The worksheet starts with a short text in the target language (around 250 words) introducing the topic.
Then, there follows a gap-fill exercise; this consists of ten sentences on the topic, related to the introductory text. A key content word is removed from each sentence for the student to fill in. For instance, ‘je —— en train’ (where ‘voyage’ is removed).
Give a list of the removed words in a random order below the exercise.

The output is very hit and miss – and much more miss! Perhaps 90% of the time, ChatGPT lists the answer key in the order of the questions. Either that, or it will produce feeble jumbling attempts, like reversing just the first two items on the list.

AI’s Random Issue

One prompt-tweaking tip you can try in these cases is SHOUTING. Writing this instruction in caps can sometimes increase the bullseyes. Put them IN RANDOM ORDER, darn it! It doesn’t help much here, though. It just doesn’t seem worth relying on Large Language Models like ChatGPT to produce random results.

The reason has something to do with the fundamental way these platforms function. They’re probability machines, guessing each next word based on calculations of how likely word X, Y or Z is to follow. Their whole rationale is not to be random; you might even call them anti-random machines.

No wonder they’re rubbish at it!

A Road Less Random

So how can we get round this in a reliable way that works every time?

The simplest fix, I’ve found, is to use another, non-random way to list things differently from the question order. And the easiest way to do that is to simply list things alphabetically:

I am learning French, and you are a language teacher and content creator, highly skilled in worksheet creation.
Create a French worksheet for me on the topic “Environmentally-Friendly Travel”. The language level should be A2 on the CEFR scale, with clear language and a range of vocabulary and constructions.
The worksheet starts with a short text in the target language (around 250 words) introducing the topic.
Then, there follows a gap-fill exercise; this consists of ten sentences on the topic, related to the introductory text. A key content word is removed from each sentence for the student to fill in. For instance, ‘je —— en train’ (where ‘voyage’ is removed).
Give a list of the removed words in alphabetical order below the exercise.

The likelihood of this order being the same as the questions is minimal. Hilariously, AI still manages to mess this order up at times, adding the odd item or two out of place at the end of the list, as if it forgot what it was doing, realised, and quickly bunged them back in. But the technique works just fine for avoiding the order giving the answers away.

A simple fix that basically ditches randomness completely, yes. But sometimes, the simplest fixes are the best!
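If you’re post-processing the output anyway, there’s an even sturdier workaround (my own variation, not part of the prompt above): let the model return the answers in question order, and do the real shuffling yourself, where true randomness is trivial:

```python
import random

# Answers as the model returned them, i.e. in question order
# (these French words are invented for illustration):
answers = ["voyager", "train", "vélo", "recycler", "environnement",
           "covoiturage", "durable", "empreinte", "réduire", "transports"]

key = answers[:]     # copy, so the original question order stays intact
random.shuffle(key)  # genuine randomness, which the LLM can't provide

print("Missing words:", ", ".join(key))
```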

Random blindness is a good reminder that AI isn’t a magical fix-all for language learning content creation. But, with an awareness of its limitations, we can still achieve some great results with workarounds.

Does AI have a noun problem? Strategies for avoiding it.

AI Has a Noun Problem: Let’s Fix It!

If you’re using AI for language learning content creation, you might have already spotted AI’s embarrassing secret. It has a noun problem.

Large Language Models like ChatGPT and Bard are generally great for creating systematic learning content. They’re efficient brainstormers, and can churn out lists and texts like there’s no tomorrow. One use case I’ve found particularly helpful is the creation of vocab lists – all the more so since they can spool them off in formats to suit learning tools like Anki.

But the more I’ve used them, the more it’s become apparent: AI has a blind spot that makes these straight-out-the-box vanilla lists much less useful than they could be.

A fixation with nouns.

Test it yourself; ask your platform of choice simply to generate a set of vocab items on a topic. Chances are there’ll be precious few items that aren’t nouns. And in my experience, more often than not, lists are composed entirely of noun items and nothing else.

ChatGPT-4 giving a list of French vocabulary items – all nouns.

It’s a curious bias, but I think it has something to do with how the LLM conceives of key words. The term is somehow conflated with all the things to do with a topic. And nouns, we’re taught at school, are thing words.

Getting Over Your Noun Problem

Fortunately, there’s therapy for your AI to overcome its noun problem. And like most AI refining strategies, it just boils down to clearer prompting.

Here are some tips to ensure more parts-of-speech variety in your AI language learning content:

  1. Explicit Instruction: When requesting vocabulary lists, spell out what you want. Specify a mix of word types – nouns, verbs, adjectives, adverbs, etc. – to nudge the AI towards a more balanced selection. When it doesn’t comply, just tell it so! “More verbs, please” is a good start.
  2. Increase the Word Count: Simply widening the net can work, if you’re willing to manually tweak the list afterwards. Increase your vocab lists to 20 or 30 items, and the chances of the odd verb or adjective appearing are greater.
  3. Contextual Requests: Instead of asking for lists, ask the AI to provide sentences or paragraphs where different parts of speech are used in context. This not only gives you a broader range of word types, but also shows them in action.
  4. Ask for Sentence Frames: Instead of single items, ask for sentence frames (or templates) that you can swap words in and out of. For instance, request a model sentence with a missing verb, along with 10 verbs that could fill that spot. “I ____ bread” might be a simple one for the topic of food.
  5. Challenge the AI: Regularly challenge the AI with tasks that require a more nuanced understanding of language – like creating stories, dialogues, or descriptive paragraphs. This can push its boundaries and improve its output.

Example Prompts

Bearing those tips in mind, try these prompts for size. They should produce a much less noun-heavy set of vocab for your learning pleasure:

Create a vocabulary list of 20 French words on the topic “Food and Drink”. Make sure to include a good spread of nouns, verbs, adjectives and adverbs. For each one, illustrate the word in use with a useful sentence of about level A2 on the CEFR scale.
Give me a set of 5 French ‘sentence frames’ for learning and practising vocabulary on the topic “Summer Holidays”. Each frame should have a missing gap, along with five examples of French words that could fit in it.
Write me a short French text of around level A2 on the CEFR scale on the topic “Finding a Job in Paris”. Then, list the main content words from the text in a glossary below in table format.
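And if the end goal is a flashcard deck, the same parts-of-speech instruction can be folded into a scripted pipeline. A sketch using the OpenAI Python SDK – the tab-separated output format is an assumption of mine that happens to suit Anki’s plain-text import:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

prompt = (
    'Create a vocabulary list of 20 French words on the topic "Food and '
    'Drink", with a good spread of nouns, verbs, adjectives and adverbs. '
    "Output one item per line, tab-separated: French word, part of speech, "
    "English meaning, A2-level example sentence. No headers or numbering."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# Keep only well-formed four-field lines and save as a file Anki can import.
with open("food_and_drink.txt", "w", encoding="utf-8") as f:
    for line in response.choices[0].message.content.strip().splitlines():
        if line.count("\t") == 3:
            f.write(line + "\n")
```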

Have you produced some useful lists with this technique? Let us know in the comments!

AI prompt engineering - the toolkit for getting better results from your platform of choice.

Better AI Language Learning Content with C-A-R-E

AI isn’t just for chat – it’s also great at making static language learning content. And as AI gains ground as a content creation assistant, prompt engineering – the art of tailoring your requests – becomes an ever more important skill.

As you’d expect, frameworks and best practice guides abound for constructing the perfect prompt. They’re generally all about defining your request with clarity, in order to minimise AI misfires and misunderstandings. Perhaps the most well-known and effective of these is R-T-F – that’s role, task, format. Tell your assistant who it is, what to do, and how you want the data to look at the end of it.

Recently, however, I’ve been getting even more reliable MFL content with another prompt framework: C-A-R-E. That is:

  • Context
  • Action
  • Result
  • Example(s)

Some of these steps clearly align with R-T-F. Context is a broader take on role, action maps to task, and result roughly to format. But the kicker here is the addition of example(s). A wide-ranging academic investigation into effective prompting recently flagged “example-driven prompting” as an important factor in improving output, and for good reason: the whole concept of LLMs is built on constructing responses from training data – on parroting examples, in effect.

Crafting AI prompts with C-A-R-E

As far as language content is concerned, C-A-R-E prompting is particularly good for ‘fixed format’ activity creation, like gap-fills or quizzes. There’s a lot of room for misinterpretation when describing a word game simply with words; a short example sets AI back on track. For example:

– I am a French learner creating resources for my own learning, and you are an expert language learning content creator.
– Create a gap-fill activity in French for students around level A2 of the CEFR scale on the topic “Environment”.
– It will consist of ten sentences on different aspects of the topic, with a key word removed from each one for me to fill out. Provide the missing words for me in an alphabetically sorted list at the end as a key.
– As an example, a similar question in English would look like this: “It is very important to look after the ———- for future generations.”

This produces excellent results in Microsoft Copilot / Bing (which we love for the freeness, obviously!) and ChatGPT. For example:

Creating AI language learning content with Microsoft Copilot / Bing Chat
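If you write a lot of these, the four steps slot neatly into a template. A minimal sketch – the helper function is my own illustration, not an established library:

```python
def care_prompt(context: str, action: str, result: str, examples: list[str]) -> str:
    """Assemble a prompt from the four C-A-R-E parts."""
    example_lines = "\n".join(f"- {e}" for e in examples)
    return f"{context}\n{action}\n{result}\nFor example:\n{example_lines}"

print(care_prompt(
    context="I am a French learner creating resources for my own learning, "
            "and you are an expert language learning content creator.",
    action='Create a gap-fill activity in French for students around level '
           'A2 of the CEFR scale on the topic "Environment".',
    result="Ten sentences, each with a key word removed for me to fill out, "
           "plus the missing words in an alphabetically sorted list at the end.",
    examples=['"It is very important to look after the ---------- '
              'for future generations."'],
))
```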

Providing short examples seems like an obvious and intuitive step, but it’s surprising how infrequently we tend to do it in our AI prompts. The gains are so apparent that it’s worth making a note to always add a little C-A-R-E to your automatic content creation.

If you’ve been struggling to get reliable (or just plain sensible!) results with your AI language learning content, give C-A-R-E a try – and let us know how it goes in the comments!

The Polish flag. Photo by Michal Zacharzewski from FreeImages

Język polski i ja / Polish and Me

Not long back, a lively online language learning debate caught my eye. It was around the unassailable prominence of English as a medium for discussion in the polyglot community, and the irony of this within a community of a hundred other choices. Where is the diversity, the German, Japanese, Polish, Spanish articles? After all, we are spoilt for choice.

Of course, it is hard to get round this – not least because we all speak a slightly different set of languages. So, at least for now, English looks to keep its place as the most inclusive choice of language for discussion.

That said, I would personally echo that hope to see more blog and social media content in the languages I learn. Above all, as a blogger myself, I took it as a good cue to lend a little ballast to the non-English side of things – to be brave, and to publish non-English content.

Safe, comfortable English is a difficult spot to get out of, though. As a native English speaker, the reason for my reticence is probably one shared by many of my fellow anglophone enthusiasts: fear of mistakes, of others simply doing it better. That kind of anxiety is self-fulfilling; keep your fledgling skills too tightly caged, and they might just wither away.

Luckily, the chance came along to do a bit of writing along these lines, but with support. That made all the difference.

Good Timing

By complete coincidence, my iTalki Polish tutor Jan set a very appropriate homework task for me recently – a simple blog post, in Polish, about my personal history of learning the language. Writing from experience, like diary-keeping, can be an effective way to engage with, recycle and strengthen your language skills. But in this case, it gave me the opportunity to create something original – and not in English – for Polyglossic.

Now, the natural thing to do would probably have been to write this in one of my stronger languages – German, Norwegian or Spanish. You could say that Polish was simply in the right place at the right time. However, maybe that makes it an even better candidate. My lagging Polish is crying out for a bit of extra writing practice.

Let’s overlook for a moment (pretty please!) the discrepancy of prefacing it in English. Hmm. But for a first non-English post on a site full of them, it only seemed fair – at least for the time being. Baby steps.

Finally, huge thanks to Jan for the prompt and the copious corrections to this during class. Check out his own blog, Polish with John, for some fantastic original resources for learners. Any remaining errors below are completely my own!


Język polski i ja (Polish and Me)

In the Beginning

I have been interested in the Polish language for many years. In the nineties, I used to listen to Polish music on the radio at the home of my Polish neighbour, Mr Wilson (I never knew his real Polish surname), and I dearly wanted to learn this beautiful language.

But back then, it wasn’t easy to learn Polish. There weren’t many learning materials in the libraries. If you wanted to learn Spanish, French or German, there was a mass of materials and books available. For Polish, sadly, there was just one very old copy of “Teach Yourself Polish”. It was an edition from the forties, built on an old methodology: the grammar-translation method. Fifty lessons in which the student has to read the examples, learn a list of words, then work through a long list of translations. At the time, I thought that was completely normal – that this was simply how you learn languages. That was a mistake.

No Speakers

There was no access to speakers. Mr Wilson didn’t like speaking Polish (he was an old man with a tragic history and bad experiences of the army), and everything I was doing amounted to translating sentences with no practical application. You can’t learn a foreign language that way.

Even the vocabulary made no sense for me: words from the forties, words and phrases such as porucznik (lieutenant), pułkownik (colonel), polsko-brytyjskie przymierze (the Polish-British alliance) and so on. I think the book had been written for soldiers working with Poles after the war. It simply wasn’t a good fit for me. Interesting vocabulary, of course, but not very useful – to begin with, I only wanted to understand Polish songs! But there was no other choice.

A New World

Many years later, the world has changed. Not only are there more books, but also more methods, and far wider access to speaking and listening materials on the internet: everything that would have helped me as a young student.
The conclusion is this: you can’t learn a foreign language without listening and speaking. A book alone is not enough.