A language learning topic menu created by Claude AI.

Claude Artifacts for Visually Inspired Language Learning Content

If you create language learning content – for yourself, or for your students – then you need to check out the latest update to Claude AI.

Along with a competition-beating new model release, Anthropic have added a new feature called Artifacts to the web interface. Artifacts detects when there is self-contained content it can display – like webpage code, or worksheet text – and it pops that into a new window, running any interactive elements on the fly. In a nutshell, you can see what you create as you create it.

This makes it incredibly easy to wrap your learning content up in dynamic formats like HTML and JavaScript, then visually preview and tweak it to perfection before publishing online. This favours interactive elements like inline games, which can be impressively slick when authored by Claude 3.5 Sonnet; it turns out that model update is a real platform-beater when it comes to coding.

Using Claude’s New Artifacts Feature

You can give Artifacts a whirl for free, as Claude’s basic tier includes a limited number of interactions with its top model every few hours. That’s more than enough to generate some practical, useful material to use straight away.

First of all, you’ll need to ensure that the feature is enabled. After logging into Claude, locate the little scientific flask icon by the text input and click it.

Claude – locating the experimental features toggle

A window should pop up with the option to enable Artifacts if it’s not already on.

Claude – enabling Artifacts.

Now that it’s on, you just need a prompt that will generate some ‘Artifactable’ content. Try the prompt below for an interactive HTML worksheet with a reading passage and quiz:

Interactive HTML Worksheet Prompt

Create an original interactive workbook for students of French, as a self-contained, accessible HTML page. The target language level should be A2 on the CEFR scale. The topic of the worksheet is “Summer Holidays”. The objective is to equip students with the vocabulary and structures to chat to native speakers about the topic.

The worksheet format is as follows:

– An engaging introductory text (250 words) using clear and idiomatic language
– A comprehensive glossary of key words and phrases from the text in table format
– A gap-fill exercise recycling the vocabulary and phrases – a gapped phrase for each question with four alternative answer buttons for students to select. If they select the correct one, it turns green and the gap in the sentence is filled in. If they choose incorrectly, the button turns red. Students may keep trying until they get the correct answer.

Ensure the language is native-speaker quality and error-free. Adopt an attractive colour scheme and visual style for the HTML page.

With Artifacts enabled, Claude should spool out the worksheet in its own window. You will be able to test the interactive elements in situ – and then ask Claude to tweak and update as required! Ask it to add scoring, make it drag-and-drop – it’s malleable ad infinitum.

An interactive worksheet created by Claude, displaying in the new Artifacts window

Once created, you can switch to the Artifacts Code tab, then copy-paste your page markup into a text editor to save as an .html file. Then, it’s just a case of finding a place to upload it to.
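To give you an idea of what you’re saving, the interactive gap-fill described in the prompt typically boils down to a little HTML and JavaScript along these lines. This is a heavily simplified, hypothetical sketch – the sentence, the element IDs and the function name are mine for illustration, not necessarily what Claude will generate:

<!-- One gap-fill question, heavily simplified. The sentence, IDs and function
     name here are illustrative – Claude’s generated page will differ. -->
<p>Cet été, je vais passer mes vacances <span id="gap1">_____</span> la plage.</p>

<button onclick="checkAnswer(this, 'gap1', false)">dans</button>
<button onclick="checkAnswer(this, 'gap1', true)">à</button>
<button onclick="checkAnswer(this, 'gap1', false)">chez</button>
<button onclick="checkAnswer(this, 'gap1', false)">pour</button>

<script>
  // A correct choice turns green and fills the gap; a wrong choice turns red,
  // and the student can keep trying until they land on the right answer.
  function checkAnswer(button, gapId, isCorrect) {
    if (isCorrect) {
      button.style.backgroundColor = "#4caf50";
      document.getElementById(gapId).textContent = button.textContent;
    } else {
      button.style.backgroundColor = "#e53935";
    }
  }
</script>

Because it’s all ordinary HTML and JavaScript, anything you’d like changed – colours, fonts, scoring – is only ever a follow-up prompt away.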

Pulling It Together

After you’re done with the worksheets, you can even ask Claude to build a menu system to pull them all together:

Now create a fun, graphical, colourful Duolingo-style topic menu which I can use to link to this worksheet and others I will create. Use big, bold illustrations. Again, ensure that it is a completely self-contained HTML file.

Here’s the result I got from running that – again, instantly viewable and tweakable:

A language website menu created by Claude, displayed in the Artifacts feature.

You’ve now got the pieces to start stitching together something much bigger than a single worksheet.

Instant website – without writing a line of code!
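Well, without writing a single line yourself, at least. If you peek at the Code tab, the menu Claude builds is conceptually very simple: one self-contained HTML file whose tiles link out to the worksheet pages you upload alongside it. Here’s a stripped-back, hypothetical sketch – the filenames and topics are placeholders for your own exported worksheets:

<div style="display: flex; gap: 1rem; font-family: sans-serif;">
  <!-- Each tile is just a styled link to one of your saved worksheet files -->
  <a href="summer-holidays.html" style="padding: 2rem; background: #ffd166; border-radius: 1rem; text-decoration: none; color: #333;">
    ☀️ Les vacances d’été
  </a>
  <a href="eating-out.html" style="padding: 2rem; background: #06d6a0; border-radius: 1rem; text-decoration: none; color: #333;">
    🍽️ Au restaurant
  </a>
</div>

Claude’s real output is far glossier – big illustrations, hover effects and all – but it’s this kind of markup underneath, which is exactly why it’s so easy to ask for tweaks.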

Have you had a chance to play with Claude’s new Artifacts feature yet? Let us know in the comments what you’ve been creating!

A robot making clones of its voice - now quick and easy with tools like ElevenLabs.

You, But Fluent – Voice Cloning for Language Learners

I could barely contain my excitement in last week’s post on ElevenLabs’ brilliant text-to-speech voice collection. I’ve had a week of playing around with it now, and if anything, I’m only more enthusiastic about it.

After a bit of deep-delving, it’s the voice clone features that have me hooked right now. ElevenLabs can make a digital version of your voice from just 30 seconds of training speech. And it’s fast. I expected a bit of a wait for audio processing the first time I used it. But no – after reading in a couple of passages of sample text, my digital TTS voice was ready to use within seconds.

For a quick ‘n’ easy tool, it does a brilliant job of picking up general accent. It identified mine as British English, captured most of my Midlands features (it struggled with my really low u in bus, though – maybe more training would help), and it got my tone bang on. Scarily so… I can understand why cybersecurity pundits are slightly nervous about tech like this.

Your Voice, Another Language

The most marvellous thing, though, was using my voice to read foreign language texts. Although not 100% native-sounding – the voice was trained on me reading English, of course – it’s uncannily accurate. Listening to digital me reading German text, I’d say it sounds like a native-ish speaker. Perhaps someone who’s lived in Germany for a decade, and retains a trace of non-native accent in their speech.

But as far as models go, that’s a pretty high standard for any language learner.

ElevenLabs’ TTS interface with the custom voice ‘Richard’ selected, ready to read some German.

The crux of it is that you can have your voice reading practice passages for memory training (think: island technique). There’s an amazing sense of personal connection that comes from that – that’s what you’ll sound like when you’ve mastered this.

It also opens up the possibility of tailoring digital resources with sound files read by ‘you’. Imagine a set of interactive language games for students, where the voice is their teacher’s. Incredible stuff.

In short, it’s well worth the fiver-a-month starter subscription to play around with it.

A robot reading a script. The text-to-speech voices at ElevenLabs certainly sound intelligent as well as natural!

ElevenLabs Voices for Free, Custom Language-Learning Material

There’s been a lot on the grapevine of late about AI-powered leaps forward in text-to-speech voices. From providing accent models to in-depth speaking games, next-gen TTS is poised to have a huge impact on language learning.

The catch? Much of the brand new tech isn’t available to the average user-on-the-street yet.

That’s why I was thrilled to happen across TTS service ElevenLabs recently. ElevenLabs’ stunning selection of voices powers a number of eLearning and audiobook sites already, and it’s no hype to say that they sound as close to human as you can get right now.

Even better, you can sign up for a free account that gives you 10,000 characters of text-to-speech conversion each month. For $5 a month you can up that to 30,000 characters too, as well as access voice-cloning features. Just imagine the hours of fun if you want to hear ‘yourself’ speak any number of languages!

Using ElevenLabs in Your Own Learning

There’s plenty to do for free, though. For instance, if you enjoy the island technique in your learning, you can get ElevenLabs to record your passages for audio practice / rote memorising. I make this an AI double-whammy, using ChatGPT to help prepare my topical ‘islands’ before pasting them into ElevenLabs.

The ChatGPT > ElevenLabs workflow is also brilliant for dialogue modelling. On my recent Sweden trip, I knew that a big conversational contact point would be ordering at coffee shops. This is the prompt I used to get a cover-all-bases model coffee-shop convo:

Create a comprehensive model dialogue in Swedish to help me learn and practise for the situation “ordering coffee in a Malmö coffee shop”.

Try to include the language for every eventuality / question I might be asked by the coffee shop employee. Ensure that the language is colloquial and informal, and not stilted.

The output will be pasted into a text-to-speech generator, so don’t add speaker names to the dialogue lines – just a dash will suffice to indicate a change of speaker.

I then ran off the audio file with ElevenLabs, and hey presto! Custom real-world social prep. You can’t specify different voices in the same file, of course. But you could run off the MP3 twice, in different voices, then splice it up manually in an audio editor like Audacity for the full dialogue effect. Needless to say, it’s also a great way for teachers to make custom listening activities.

The ElevenLabs voices are truly impressive – it’s worth setting up a free account just to play with the options and come up with your own creative use cases. TTS is set to only get better in the coming months – we’re excited to see where it leads!

A robot interviewing another robot - a great speaking game on ChatGPT!

So Interview Me! Structured Speaking with ChatGPT

The addition of voice chat mode to ChatGPT – soon available even to free users in an impressive, all-new format – opens up tons of possibilities for AI speaking practice. When faced with it for the first time, however, learners can find that it’s all a bit undirected and woolly. To make the most of it for targeted speaking practice, it needs some nudging with prompts.

Since AI crashed into the language learning world, the prompt bank has filled with ways to prime your chatbot for more effective speaking practice and prep. But there’s one activity I’ve been using lately that offers both structure, tailored to your level and topic, and a lot of fun. I call it So Interview Me!, and it involves you playing an esteemed expert on a topic of your choice, with ChatGPT as the prime-time TV interviewer.

So Interview Me!

Here’s an example you can paste into ChatGPT Plus straight away (as text first, then switching to voice mode after the initial response):

Let’s role-play so I can practise my Swedish with you. You play the role of a TV interviewer on a news programme. I play an esteemed expert on the topic of ‘the history of Eurovision’. Conversational turn by turn, interview me in the target language all about the topic. Don’t add any translations or other directions – you play the interviewer and no other role. Wind up the interview after about 15 turns. Keep the language quite simple, around level B1 on the CEFR scale. Are you ready? Start off by introducing me and asking the first question!

The fun of it is that you are the star of the show. You can completely throw yourself into it, interacting with your interviewer with all the gusto and gumption of a true expert. Or you can have some fun with it, throwing it off with silly answers and bending the scenario to your will (maybe you turn out not to be the expert!).

Either way, it’s a brilliant one to wind up and set going before you start the washing up!

A musical, emotive robot. OpenAI's new model GPT-4o will make digital conversations even more natural.

GPT-4o – OpenAI Creates A Perfect Fit For Language Learners

Just a couple of weeks after the excitement around Hume.ai, OpenAI has joined the emotive conversational bandwagon with the stunning release of its new GPT-4o model.

GPT-4o is a big deal for language learners because it is multimodal in much more powerful ways than previous models. It interacts with the world more naturally across text, audio and vision in ways that mimic our own interactions with language speakers. Demos have included the model reacting to the speaker’s appearance and expression, opening a path to more realistic digital conversation practice than ever.

As with Hume, its voice capabilities have been updated with natural-sounding emotion and intonation, along with a deeper understanding of the speaker’s tone. It even does a better job at sarcasm and irony, long the exclusive domain of human speakers. Heck, it can even sing now. Vocal, emotional nuance – at least simulated – does seem to be the latest big leap forward in AI, transforming the often rather staid conversations into something uncannily humanlike. And as with many of these developments, it almost feels like it was made with us linguists in mind.

Perhaps surprisingly, there’s no wait to try the new model this time, at least in text mode. OpenAI have rolled it out almost immediately, including to free users. That suggests a quiet confidence in how impressed users will be with it.

As for the multimodal capabilities, we’ll have to wait a little longer, unfortunately – those updates are being rolled out more gradually, although the next time you open chat mode, you may already get the message that big changes are coming. Definitely a case of watch this space – and I don’t know about you, but I’m already impatiently refreshing my ChatGPT app with increasing frequency!

A picture of a robot heart - conversation with emotion with Hume.ai

Conversation Practice with Emotion: Meet Hume.ai

If the socials are anything to go by, so many of us language learners are already using AI platforms for conversation practice – whether text-typed, or spoken with speech-enabled platforms like ChatGPT.

Conversational interaction is something that LLMs – large language models – were created for. In fact, language learning and teaching seem like an uncannily good fit for AI. It’s almost like it was made for us.

But there’s one thing that’s been missing up to now – emotional awareness. In everyday conversation with other humans, we use a range of cues to gauge our speaking partner’s attitude, intentions and general mood. AI – even when using speech recognition and text-to-speech – is flat by comparison. It can only simulate true conversational interplay.

A new LLM is set to change all that. Hume.ai has empathy built-in. It uses vocal cues to determine the probable mindset of the speaker for each utterance. For each input, it selects a set of human emotions, and weights them. For instance, it might decide that what you said was 60% curious, 40% anxious and 20% proud. Then, mirroring that, it replies with an appropriate intonation and inflection.

The platform already supports over 50 languages. You can try out a demo in English here, and prepare to be impressed – its guesses can be mind-bogglingly spot-on. Although it’s chiefly for developer access right now, the potential usefulness to language learning is so clear that we should hopefully see the engine popping up in language platforms in the near future!

ChatGPT French travel poster

A Second Shot at Perfect Posters – ChatGPT’s Image Tweaker

The big ChatGPT news in recent weeks is about images, rather than words. The AI frontrunner has added a facility to selectively re-prompt for parts of an image, allowing us to tweak sections that don’t live up to prompt expectations.

In essence, this new facility gives us a second shot at saving otherwise perfect output from minor issues. And for language learning content, like posters and flashcards, the biggest ‘minor’ issue – the poor spellings that crop up in AI image generation – makes the difference between useful and useless material.

Rescuing ChatGPT Posters

Take this example. It’s a simple brief – a stylish, 1950s style travel poster for France. Here’s the prompt I used to generate it:

Create a vibrant, stylish 1950s style travel poster featuring Paris and the slogan “La France”.

I wanted the text “La France” at the top, but, as you can see, we’ve got a rogue M in there instead of an N.

ChatGPT generated image of a French travel poster

To target that, I tap the image in the ChatGPT app. It calls up the image in edit mode, where I can highlight the areas that need attention:

ChatGPT image editing window

Then, I press Next, and can re-prompt for that part of the image. I simply restate the slogan instructions:

The slogan should read “La France”.

The result – a correct spelling, this time!

ChatGPT French travel poster

It can take a few goes. Dodgy spelling hasn’t been fixed; we’ve just been given a way to try again without scrapping the entire image. Certain details also won’t be retained between versions, such as the font, in this example. Others may be added, like the highly stylised merging of the L and F in the slogan (a feature, rather than a bug, I think!).

But the overall result is good enough that our lovely 1950s style poster wasn’t a total write-off.

Another case of AI being highly imperfect on its own, but a great tool when enhanced by us human users. It still won’t replace us – just yet!

Image tweaking is currently only available in the ChatGPT app (iOS / Android).

Neon robots racing. Can Claude 3 win the AI race with its brand new set of models?

Claude 3 – the New AI Models Putting Anthropic Back in the Game

You’d be forgiven for not knowing Claude. This chirpily-named AI assistant from Anthropic has been around for a while, like its celebrity cousin ChatGPT. But while ChatGPT hit the big time, Claude hasn’t quite progressed beyond the Other Platforms heading in most AI presentations – until now.

What changed everything this month was Anthropic’s release of all-new Claude 3 models – models that not only caught up with ChatGPT-4 benchmarks, but surpassed them. It’s wise to take benchmarks with a pinch of salt, not least because they’re often internal, proprietary measures. But the buzz around this latest release echoed through the newsletters, podcasts and socials, suggesting that this really was big news.

Tiers of a Claude

Claude 3 comes in three flavours. The most powerful, Opus, is the feistiest ChatGPT-beater by far. It’s also, understandably, the most processor-intensive, so available only as a premium option. That cost is on a level with competitors’ premium offerings, at just under £20 a month.

But just a notch beneath Opus, we have Sonnet. That’s Claude 3’s mid-range model, and the one you’ll chat with for free at https://claude.ai/chats. Anthropic reports that Sonnet still pips ChatGPT-4 on several reasoning benchmarks, with users praising how naturally conversational it seems.

Finally, we have a third tier, Haiku. This is the most streamlined of the three in terms of computing power. But it still manages to trounce ChatGPT-3.5 while coming impressively close to most of those ChatGPT-4 benchmarks. And the real clincher?

It’s cheap.

For developers, Haiku costs a fraction of the per-token price of competing models. That makes it a lot cheaper to build into language learning apps, opening up a route for many more developers to incorporate AI into their software. The lower power usage is also a huge win against a backdrop of serious concerns around AI energy demands.

Claude and Content Creation

So how does it measure up in terms of language learning content? I set Claude’s Sonnet model loose on the sample prompt from my recent Gemini Advanced vs. ChatGPT-4 battle. And the verdict?

It more than holds its own.

Here’s the prompt (feel free to adapt and use this for your own worksheets – it creates some lovely materials!):

Create an original, self-contained French worksheet for students of the language who are around level A2 on the CEFR scale. The topic of the worksheet is “Reality TV in France”.

The worksheet format is as follows:

– An engaging introductory text (400 words) using clear and idiomatic language
– Glossary of 10 key words / phrases from the text (ignore obvious cognates with English) in table format
– Reading comprehension quiz on the text (5 questions)
– Gap-fill exercise recycling the same vocabulary and phrases in a different order (10 questions)
– ‘Talking about it’ section with useful phrases for expressing opinions on the topic
– A model dialogue (10-12 lines) between two people discussing the topic
– A set of thoughtful questions to spark further dialogue on the topic
– An answer key covering all the questions

Ensure the language is native-speaker quality and error-free.

Sonnet does an admirable job. If I’m nitpicking, the text is perhaps slightly less fun and engaging than Gemini Advanced’s. But then, that’s the sort of thing you could sort out by tweaking the prompt.

Otherwise, it’s factual and relevant, with some nice authentic cultural links. The questions make sense and the activities are useful. Claude also followed instructions closely, particularly with the inclusion of an answer key (so often missing in lesser models).

There’s little to quibble over here.

A language learning worksheet created with Claude 3 Sonnet.

A Claude 3 French worksheet. Click here to download the PDF!

Another Tool For the Toolbox

The claims around Claude 3 are certainly exciting. And they have substance – even the free Sonnet model available at https://claude.ai/chats produces content on a par with the big hitters. Although our focus here is worksheet creation, its conversational slant makes it a great option for experimenting with live AI language games, too.

So if you haven’t had a chance yet, go and get acquainted with Claude. Its all-new model set, including a fabulous free option, makes it one more essential tool in the teacher’s AI toolbox.

Masses of digital text. AIs with a large context window can process much more of it!

Gemini’s Long Context Window – a True Spec Cruncher

Maybe you’ve noticed that Google’s Gemini has been making gains on ChatGPT lately. Of all its recent impressive improvements, one of the lesser-sung features – at least in AI for Ed circles – is its much enhanced context window.

The context window is essentially how much text the AI can ‘remember’ and work with. Google’s next model boasts one million tokens of this memory, leaving other models – which count theirs in the tens or hundreds of thousands – in the dust. It blows open the possibilities for a particular kind of AI task: working with long texts.

Language learners make use of all kinds of texts, of course. But one particularly unwieldy (although hugely useful) type where this new feature could help is the exam spec.

Exam Spec Crunching with AI

Language exam specs are roadmaps to qualifications, listing the knowledge and skills students need to demonstrate linguistic competency. But they have a lot of fine detail that can bog us down.

As a content creator, one thing that challenges me is teasing out this detail into some kind of meaningful arrangement for student activities. There is a mass of vocab data in there. And as systematic as it is, abstract lists of connectives, temporal adverbs and helper verbs don’t make for very student-friendly lesson material.

With a massive text cruncher like Gemini, they are a lot easier to process. Just drag in your spec PDF (I’ve been playing around with the new AQA GCSE German doc), and tease out the material in a more useful format for planning:

Take this German exam spec, and create an outline plan of three terms of twelve lessons that will cover all of the thematic material.

Additionally, it can help in creating resources that cover all bases:

Create a short reading text to introduce students to the exam topic “Celebrity Culture”. It should be appropriate for students aiming for the top tier mark in the spec. In the text, make sure to include all of the prepositions from the prescribed word list.

With a long textual memory, it’s even possible to interrogate the spec after you’ve uploaded it. That’s literally just asking questions of the document itself – and, with that bigger window, getting answers that don’t overlook half the content:

If students have one year to learn ALL of the prescribed vocabulary in the spec, how many words should they be learning a week? Organise them into weekly lists that follow a broadly thematic pattern.

Supersized Context Window – Playing Soon at an AI Near You!

For sure, you can use these techniques on existing platforms straight away. However, due to their smaller context windows, results might not always be 100% reliable (although it’s always fun trying!). For the new Google magic, we’ll have to wait just a little longer.

But from the initial signs, it definitely looks worth the wait!

Gemini’s new supersized context window is available only in a limited release currently, and only via its AI Studio playground. Expect to see it coming to Gemini Advanced very soon!

Foreign alphabet soup (image generated by AI)

AI Chat Support for Foreign Language Alphabets

I turn to AI first and foremost for content creation, as it’s so good at creating model foreign language texts. But it’s also a pretty good conversational tool for language learners.

That said, one of the biggest obstacles to using LLMs like ChatGPT for conversational practice can be an unfamiliar script. Ask it to speak Arabic, and you’ll get lots of Arabic script. It’s usually smart enough to work out if you’re typing back using Latin characters, but it’ll likely continue to speak in script.

Now, it’s easy enough to ask your AI platform of choice to transliterate everything into Latin characters, and expect the same from you – simply instruct it to do so in your prompts. But blanket transliteration won’t help your development of native reading and writing skills. There’s a much better, best-of-both-worlds approach that does.

Best of Both Worlds AI Chat Prompt

This prompt sets up a basic conversation environment. The clincher is that it gives you the option to write in the target script or not. And if not, you’ll get what the script should look like modelled right back at you. It’s a great way to jump into conversation practice even before you’re comfortable switching keyboard layouts.

You are a Modern Greek language teacher, and you are helping me to develop my conversational skills in the language at level A2 (CEFR). Always keep the language short and simple at the given level, and always keep the conversation going with follow-up questions.

I will often type in transliterated Latin script, as I am still learning the target language alphabet. Rewrite all of my responses correctly in the target language script with any necessary grammatical corrections.

Similarly, write all of your own responses both in the target language script and also a transliteration in Latin characters. For instance,

Καλημέρα σου!
Kaliméra sou!

Do NOT give any English translations – the only support for me will be transliterations of the target language.

Let’s start off the conversation by talking about the weather.

This prompt worked pretty reliably in ChatGPT-4, Claude, Copilot, and Gemini. The first two were very strong; the latter two occasionally forgot the ‘don’t translate!’ instruction, but otherwise, script support – the name of the game here – was good throughout.

Try changing the language (top) and topic (bottom) to see what it comes up with!