ChatGPT takes conversation to the next level with Advanced Voice Mode

ChatGPT Advanced Voice Mode is Finally Here (For Most of Us!)

Finally – and it has taken SO much longer to get it this side of the Pond – Advanced Voice Mode has popped up in my ChatGPT. And it’s a bit of a mind-blower, to say the least.

Multilingually speaking, it’s a huge step up for the platform. For a start, its non-English accents are hugely improved – no longer French or German with an American twang. User language detection seems more reliable, too. Open it up, initiate a conversation in your target language, and it’s ready to go without further fiddling.

But it’s the flexibility and emotiveness of those voices which is the real game-changer. There’s real humanity in them now, reminiscent of Hume’s emotionally aware AI voices. As well as emotion, there’s variation in timbre and speed. For learners, that means it’s now possible to get it to mimic slow, deliberate speech when you ask that language learning staple, “can you repeat that more slowly, please?”. It makes for a much more adaptive digital conversation partner.

Likewise – and rather incredibly – it’s possible to simulate a whole range of regional accents. I asked for Austrian German, and believe me, it is UNCANNILY good. Granted, it did occasionally verge on parody, but as a general impression, it’s shocking how close it gets. It’s a great way to prepare for speaking your target language with real people, who use real, regionally marked speech.

Advanced Voice Mode, together with ChatGPT’s recently added ability to remember details from past conversations (previously achievable only via a hack), is turning the platform into a much cannier language learning assistant. It was certainly worth the wait. And for linguaphiles, it’ll be fascinating to see how it continues to develop as an intelligent conversationalist from here.

Shelves of helpful robots - a bit like Poe, really!

Which LLM? Poe offers them all (and some!)

One of the most frequent questions when I’ve given AI training to language professionals is “which is your favourite platform?”. It’s a tricky one to answer, not least because we’re currently in the middle of the AI Wars – new, competing models are coming out all the time, and my personal choice of LLM changes with each new release.

That said, I’m a late (and recent) convert to Poe – an app that gives you all of them in one place. The real clincher is the inclusion of brand new models before they’re widely available elsewhere.

To illustrate just how handy that is: a couple of weeks ago, Meta dropped Llama 3.1 – the first of their models to really challenge the frontrunners. However, unless you have a computer powerful enough to run it locally, or access to Meta AI (US-only right now), you’ll be waiting a while to try it.

Enter Poe. Within a couple of days, all flavours of Llama 3.1 were available. And the best thing? You can interact with most of them for nothing.

The Poe Currency

Poe works on a currency of Compute Points, which are used to pay for messages to the models. More powerful models guzzle through Compute Points at a higher rate, and models tend to become cheaper as they get older. Meta’s Llama-3.1-405B-T, for example, costs 335 points per message, while OpenAI’s ChatGPT-4o-Mini comes in at a bargain 15 points per request.

Users of Poe’s free tier get a pretty generous 3,000 Compute Points every day. That’s enough credit to work quite extensively with some of the older models without much limitation at all. But it’s also enough for some genuinely useful work with Llama 3.1 – around eight 405B requests a day. And, thanks to that, I can tell you – Llama 3.1 is great at creating language learning resources!
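
In concrete terms, that daily allowance breaks down like this at the per-message costs quoted above (costs which themselves tend to fall as models age):

  3,000 ÷ 335 ≈ 8 messages a day to Llama-3.1-405B-T
  3,000 ÷ 15 = 200 messages a day to ChatGPT-4o-Mini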

Saying that, with the right prompt, most of the higher-end models are these days. Claude-3.5-Sonnet is another favourite – check out my interactive worksheet experiments with it here. And yes, Claude-3.5-Sonnet is available on Poe, at a cost of 200 points per message (already down from its initial cost just weeks ago!). Even the image generation model Flux has made its way onto the platform, just days after the hype. And it’s a lot better at text-in-image than earlier image generators (handy if you’re creating illustrated language materials).

Poe pulls together all sorts of cloud providers in a marketplace-style setup to offer the latest bots, and it’s a model that works. The latest and greatest will always burn through your stash of Compute Points faster, but there’s still no easier way to be amongst the first to try a new LLM!

AI Parallel Texts for Learning Two Similar Languages

I’ve seen a fair few social media posts recently about linguist Michael Petrunin’s series of Comparative Grammars for polyglots. They seem to have gone down a storm, not least because of the popularity of triangulation – learning a new language through the medium of another non-native one – as a polyglot strategy.

They’re a great addition to the language learning bookshelf, since there’s still so little formal course material that uses this principle. Of course, you can triangulate by selecting course books in your base language, as many do with Assimil and other series like the Éditions Ellipse.

Parallel Texts à la LLM

But LLMs like ChatGPT, which already do a great job of the parallel text learning style, are pretty handy for creating comparative texts, too. Taking a story format, here’s a sample parallel text prompt for learners of German and Dutch. It treats each sentence as a mini lesson highlighting the differences between the languages.

I’m learning Dutch and German, two closely related languages. To help me learn them in parallel and distinguish them from each other, create a short story for me in Dutch, German and English in parallel text style. Each sentence should be given in Dutch, German and English. Purposefully use grammatical elements which highlight the differences between the languages – the kind a student of both needs to work hard to distinguish – in order to make the text more effective.

The language level should be lower intermediate, or B1 on the CEFR scale. Make the story engaging, with an interesting twist. Format the text so it is easy to read, grouping the story lines together with each separate sentence on a new line, and the English in italics.

You can tweak the formatting, as well as the premise – specify that the learner already speaks one of the languages more proficiently than the other, for example. You could also offer a scenario for the story to start with, so you don’t end up with “once upon a time” every run. But the result is quite a compact, step-by-step learning resource that builds on a comparative approach.
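
For instance, a line like this tacked onto the end of the prompt – my own wording, to adapt to your own situation – shifts the balance between the two languages:

  I already speak German at around B2 level, but my Dutch is much weaker, so gear the story towards stretching my Dutch and treat the German as familiar support.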

ChatGPT creating parallel texts in German and Dutch with an English translation.

Variations and Limitations

I also tried prompting for explanatory notes:

Where the languages differ significantly in grammar / syntax, add an explanatory note (in English) to the sentences, giving details.

This was very hit and miss, with quite unhelpful notes in most runs. In fact, this exposes the biggest current limitation of LLMs: they’re excellent content creators, but still far off the mark in terms of logically appraising the language they create.

It is, however, pretty good at embellishing the format of its output. The following variation is especially impressive on an LLM platform that shows a preview of its code:

I’m learning Spanish and Portuguese, two closely related languages. To help me learn them in parallel and distinguish them from each other, create a short story for me in Spanish, Portuguese and English in parallel text style. Each sentence should be given in Spanish, Portuguese and English. Purposefully use grammatical elements which highlight the differences between the languages – the kind a student of both needs to work hard to distinguish – in order to make the text more effective.

The language level should be lower intermediate, or B1 on the CEFR scale. Make the story engaging, with an interesting twist.

The output should be an attractively formatted HTML page, using a professional layout. Format the sentences so they are easy to read, grouping the story lines together with each separate sentence on a new line, and the English in italics. Hide the English sentences first – include a “toggle translation” button for the user.
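
To make the idea concrete, here’s a minimal, hand-written sketch of the kind of page that prompt is asking for. The structure and class names are my own illustration rather than actual model output, but the core toggle mechanism – hidden English lines revealed by a button – is the same:

  <!DOCTYPE html>
  <html lang="en">
  <head>
  <meta charset="utf-8">
  <title>Parallel Story</title>
  <style>
    .en { font-style: italic; display: none; }  /* translations start hidden */
    .group { margin-bottom: 1em; }
  </style>
  </head>
  <body>
  <button onclick="for (const el of document.querySelectorAll('.en'))
    el.style.display = (el.style.display === 'block') ? 'none' : 'block'">
    Toggle translation
  </button>
  <!-- one group per sentence: Spanish, Portuguese, hidden English -->
  <div class="group">
    <p>Érase una vez una ciudad junto al mar.</p>
    <p>Era uma vez uma cidade junto ao mar.</p>
    <p class="en">Once upon a time, there was a city by the sea.</p>
  </div>
  <!-- ...further sentence groups follow the same pattern... -->
  </body>
  </html>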

Claude by Anthropic creating an HTML-formatted parallel story in Spanish and Portuguese.

It’s another use case that highlights LLMs’ greatest strength: the creation of humanlike texts. For linguists, it matters not a jot how much (or little) deep understanding there is beneath that. With the language quality now almost indistinguishable from real people-speak, AI texts serve as brilliant ‘fake authentic’ language models.

e-Stories as parallel texts are yet another fun, useful flavour of that!

Robots exchanging gifts. We can exchange - and adapt - digital resources now, with Claude's shareable Artifacts.

Sharing Your Language Learning Games with Claude Artifacts

If Claude’s recent improvements weren’t already impressive enough, Anthropic has only gone and done it again – this time, by making Artifacts shareable.

Artifacts are working versions of the programs and content you, the user, prompt for in Claude. They pop up when, for example, you ask the AI to write a language practice game in HTML: Claude runs the code it writes, turning it into a playable activity. Instant language learning games – no coding required.

Now, you can share your working, fully playable creations, with a simple link.

Instant Spanish Quiz with Claude

Take this simple Spanish quiz (very topical given the forthcoming Euro 2024 final!). I prompted for it as follows:

Create an original, self-contained quiz in Spanish for upper beginner / lower intermediate students of the language, on the topic “Spain in the European Football Championships”. It should be completely self-contained in an HTML page. The quiz should be multiple choice, with ten questions each having four alternative answer buttons – only one is right, and there is always one ‘funny’ alternative answer in the mix too.

Every time the quiz is played, the questions and the answers are in a random order. The student can keep trying answers until they get the right one (obviously after clicking an answer button, it should be disabled). Incorrect buttons turn red – correct ones green. Keep score of the player’s accuracy as they work through the questions (number of correct clicks / total clicks).

Make sure it looks attractive, slick and smart too, with CSS styling included in the HTML page.
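
If you’re curious what Claude builds from that, the skeleton looks something like the following. This is a hand-rolled miniature with just two sample questions – illustrative only, not Claude’s actual output – but it has the same moving parts: shuffled questions and answers, buttons that disable once clicked, and a running accuracy score.

  <!DOCTYPE html>
  <html lang="es">
  <head>
  <meta charset="utf-8">
  <title>España en la Eurocopa</title>
  <style>
    body { font-family: sans-serif; max-width: 600px; margin: 2em auto; }
    button { display: block; margin: 0.4em 0; padding: 0.5em 1em; }
    .right { background: #9e9; }
    .wrong { background: #e99; }
  </style>
  </head>
  <body>
  <h1>España en la Eurocopa</h1>
  <p id="score">Aciertos: 0 / 0</p>
  <div id="quiz"></div>
  <script>
  // Two sample questions; a full quiz would have ten, each with one 'funny' answer.
  const questions = [
    { q: '¿En qué año ganó España su primera Eurocopa?',
      answers: ['1964', '1984', '2008', 'El año en que se inventó la paella'], right: 0 },
    { q: '¿Cómo se conoce a la selección española?',
      answers: ['La Roja', 'La Blanca', 'La Azul', 'Los Churros Voladores'], right: 0 },
  ];
  // Quick-and-dirty shuffle: fine for a game, not statistically uniform.
  const shuffle = arr => arr.sort(() => Math.random() - 0.5);
  let correct = 0, total = 0;
  const quiz = document.getElementById('quiz');
  for (const item of shuffle(questions)) {
    const h = document.createElement('h3');
    h.textContent = item.q;
    quiz.appendChild(h);
    const options = item.answers.map((text, i) => ({ text, ok: i === item.right }));
    for (const opt of shuffle(options)) {
      const b = document.createElement('button');
      b.textContent = opt.text;
      b.onclick = () => {
        b.disabled = true;  // no re-clicking the same answer
        total++;
        if (opt.ok) { correct++; b.className = 'right'; }
        else { b.className = 'wrong'; }
        document.getElementById('score').textContent =
          'Aciertos: ' + correct + ' / ' + total;
      };
      quiz.appendChild(b);
    }
  }
  </script>
  </body>
  </html>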

If you have Artifacts turned on (see here for more), you should see your working game appear in a new pane. But now, you’ll also see a little Publish link in the bottom-right corner. Click this, and you can choose to make your creation public with an access link.

Publishing your working language activities using a share link with Claude Artifacts

Remixing Artifacts

But wait – there’s more. When colleagues access your Artifact, they will see a Remix button in that bottom-right corner.

Remixing Artifacts in Claude

By hitting that, they can pick up where you left off and tweak your materials with further prompting. For instance, to keep the quiz format but change the language and topic, they could simply ask:

Now create a version of this quiz for French learners on the topic “France at the Olympic Games”.

It makes for an incredibly powerful way to network your learning resources. It’s also perfectly possible to take advantage of all this using only Claude’s free tier, which gives you 10 or so messages every few hours.

More than enough to knock up some learning games.

Have you created anything for colleagues to adapt and share on in Claude? Let us know in the comments!

ChatGPT French travel poster

A Second Shot at Perfect Posters – ChatGPT’s Image Tweaker

The big ChatGPT news in recent weeks is about images, rather than words. The AI frontrunner has added a facility to selectively re-prompt for parts of an image, allowing us to tweak sections that don’t live up to prompt expectations.

In essence, this new facility gives us a second shot at saving otherwise perfect output from minor issues. And for language learning content, like posters and flashcards, the biggest ‘minor’ issue – the poor spellings that crop up in AI image generation – makes the difference between useful and useless material.

Rescuing ChatGPT Posters

Take this example. It’s a simple brief – a stylish, 1950s style travel poster for France. Here’s the prompt I used to generate it:

Create a vibrant, stylish 1950s style travel poster featuring Paris and the slogan “La France”.

I wanted the text “La France” at the top, but, as you can see, we’ve got a rogue M in there instead of an N.

ChatGPT generated image of a French travel poster

To target that, I tap the image in the ChatGPT app. It calls up the image in edit mode, where I can highlight the areas that need attention:

ChatGPT image editing window

Then, I press Next, and can re-prompt for that part of the image. I simply restate the slogan instructions:

The slogan should read “La France”.

The result – a correct spelling, this time!

ChatGPT French travel poster

It can take a few goes. Dodgy spelling hasn’t been fixed; we’ve just been given a way to try again without scrapping the entire image. Certain details also won’t be retained between versions – the font, in this example. Others may be added, like the highly stylised merging of the L and F in the slogan (a feature, rather than a bug, I think!).

But the overall result is good enough that our lovely 1950s style poster wasn’t a total write-off.

Another case of AI being highly imperfect on its own, but a great tool when enhanced by us human users. It still won’t replace us – just yet!

Image tweaking is currently only available in the ChatGPT app (iOS / Android).

Neon robots racing. Can Claude 3 win the AI race with its brand new set of models?

Claude 3 – the New AI Models Putting Anthropic Back in the Game

You’d be forgiven for not knowing Claude. This chirpily-named AI assistant from Anthropic has been around for a while, like its celebrity cousin ChatGPT. But while ChatGPT hit the big time, Claude hasn’t quite progressed beyond the Other Platforms heading in most AI presentations – until now.

What changed everything this month was Anthropic’s release of all-new Claude 3 models – models that not only caught up with ChatGPT-4 benchmarks, but surpassed them. It’s wise to take benchmarks with a pinch of salt, not least because they’re often internal, proprietary measures. But the buzz around this latest release echoed through the newsletters, podcasts and socials, suggesting that this really was big news.

Tiers of a Claude

Claude 3 comes in three flavours. The most powerful, Opus, is the feistiest ChatGPT-beater by far. It’s also, understandably, the most processor-intensive, so available only as a premium option. That cost is on a level with competitors’ premium offerings, at just under £20 a month.

But just a notch beneath Opus, we have Sonnet. That’s Claude 3’s mid-range model, and the one you’ll chat with for free at https://claude.ai/chats. Anthropic reports that Sonnet still pips ChatGPT-4 on several reasoning benchmarks, with users praising how naturally conversational it seems.

Finally, we have a third tier, Haiku. This is the most streamlined of the three in terms of computing power. But it still manages to trounce ChatGPT-3.5 while coming impressively close to most of those ChatGPT-4 benchmarks. And the real clincher?

It’s cheap.

For developers, Haiku costs a fraction of the per-token price of competing models. That makes it a lot cheaper to build into language learning apps, opening up a route for many more developers to incorporate AI into their software. Its lower power usage is a huge win, too, against a backdrop of serious concerns around AI energy demands.

Claude and Content Creation

So how does it measure up in terms of language learning content? I set Claude’s Sonnet model loose on the sample prompt from my recent Gemini Advanced vs. ChatGPT-4 battle. And the verdict?

It more than holds its own.

Here’s the prompt (feel free to adapt and use this for your own worksheets – it creates some lovely materials!):

Create an original, self-contained French worksheet for students of the language who are around level A2 on the CEFR scale. The topic of the worksheet is “Reality TV in France”.

The worksheet format is as follows:

– An engaging introductory text (400 words) using clear and idiomatic language
– Glossary of 10 key words / phrases from the text (ignore obvious cognates with English) in table format
– Reading comprehension quiz on the text (5 questions)
– Gap-fill exercise recycling the same vocabulary and phrases in a different order (10 questions)
– ‘Talking about it’ section with useful phrases for expressing opinions on the topic
– A model dialogue (10-12 lines) between two people discussing the topic
– A set of thoughtful questions to spark further dialogue on the topic
– An answer key covering all the questions

Ensure the language is native-speaker quality and error-free.

Sonnet does an admirable job. If I’m nitpicking, the text is perhaps slightly less fun and engaging than Gemini Advanced’s. But then, that’s the sort of thing you could sort out by tweaking the prompt.

Otherwise, it’s factual and relevant, with some nice authentic cultural links. The questions make sense and the activities are useful. Claude also followed instructions closely, particularly with the inclusion of an answer key (so often missing in lesser models).

There’s little to quibble over here.

A language learning worksheet created with Claude 3 Sonnet.

A Claude 3 French worksheet. Click here to download the PDF!

Another Tool For the Toolbox

The claims around Claude 3 are certainly exciting. And they have substance – even the free Sonnet model available at https://claude.ai/chats produces content on a par with the big hitters. Although our focus here is worksheet creation, its conversational slant makes it a great option for experimenting with live AI language games, too.

So if you haven’t had a chance yet, go and get acquainted with Claude. Its all-new model set, including a fabulous free option, makes it one more essential tool in the teacher’s AI toolbox.

Two AI robots squaring up to each other

AI Worksheet Wars : Google Gemini Advanced vs. ChatGPT-4

With this week’s release of Gemini Advanced, Google’s latest, premium AI model, we have another platform for language learning content creation.

Google fanfares Gemini as the “most capable AI model” yet, releasing benchmark results that position it as a potential ChatGPT-4 beater. Significantly, Google claims that their new top model even outperforms humans at some language-based benchmarking.

So what do those improvements hold for language learners? I decided to put Gemini Advanced head-to-head with the leader to date, ChatGPT-4, to find out. I used the following prompt on both platforms to create a topic prep style worksheet like those I use before lessons – a target language text, vocab support, and practice questions:

Create an original, self-contained French worksheet for students of the language who are around level A2 on the CEFR scale. The topic of the worksheet is “Reality TV in France”.

The worksheet format is as follows:

– An engaging introductory text (400 words) using clear and idiomatic language
– Glossary of 10 key words / phrases from the text (ignore obvious cognates with English) in table format
– Reading comprehension quiz on the text (5 questions)
– Gap-fill exercise recycling the same vocabulary and phrases in a different order (10 questions)
– ‘Talking about it’ section with useful phrases for expressing opinions on the topic
– A model dialogue (10-12 lines) between two people discussing the topic
– A set of thoughtful questions to spark further dialogue on the topic
– An answer key covering all the questions

Ensure the language is native-speaker quality and error-free.

I then laid out the results, with minimal extra formatting, in PDF files (much as I’d use them for my own learning).

Here are the results.

ChatGPT-4

ChatGPT-4 gives solid results, much as expected. I’d been using that platform for my own custom learning content for a while, and it’s both accurate and dependable.

The introductory text referenced real-world aspects of the topic very well, albeit in a slightly dry tone. The glossary was reasonable, although ChatGPT-4 had, as usual, problems leaving out “obvious cognates” as per the prompt instructions. It’s a problem I’ve noticed often, with other LLMs too – workarounds are often necessary to fix these biases.

Likewise, the gap-fill was not “in a different order”, as prompted (again exposing a weakness of most LLMs). The questions were in the same order as the glossary entries they refer to!

Looking past those issues – which we could easily correct manually, in any case – the questions were engaging and sensible. Let’s give ChatGPT-4 a solid B!

A French worksheet on Reality TV, created by AI platform ChatGPT-4.

You can download the ChatGPT-4 version of the worksheet from this link.

Gemini Advanced

And onto the challenger! I must admit, I wasn’t expecting to see huge improvements here.

But instantly, I prefer the introductory text. It’s stylistically more interesting; it’s clearly taken on board the fact that I wanted it to be “engaging”. It’s hard to judge reliably, but I also think it’s closer to a true CEFR A2 language level. Compare it with the encyclopaedia-style ChatGPT-4 version, and it’s more conversational, and certainly more idiomatic.

That attention to idiom is apparent in the glossary, too. There’s far less of that cognate problem here, making for a much more practical vocab list. We have some satisfyingly colloquial phrasal verbs that make me feel that I’m learning something new.

And here’s the clincher: Gemini Advanced aced the randomness test. While the question quality matched ChatGPT-4, the random delivery means the output is usable off the bat. I’m truly impressed by that.

A French worksheet on Reality TV, created by Google's premium AI platform, Gemini Advanced.

You can download the Gemini Advanced version of the worksheet from this link.

Which AI?

After that storming performance by Gemini Advanced, you might expect my answer to be unqualified support for that platform. And, content-wise, I think it did win, hands down. The attention to the nuance of my prompt was something special, and the texts are just more interesting to work with. Big up for creativity.

That said, repeated testing of the prompt did throw up the occasional glitch. Sometimes, it would fail to output the answers, instead showing a cryptic “Answers will follow.” or similar, requiring further prompting. Once or twice, the service went down, too, perhaps a consequence of huge traffic during release week. They’re minor things for the most part, and I expect Google will be busy ironing them out over the coming months.

Nonetheless, the signs are hugely promising, and it’s up to ChatGPT-4 now to come back with an even stronger next release. I’ll be playing around with Gemini Advanced a lot in the next few weeks – I really recommend that other language learners and teachers give it a look, too!

If you want to try Google’s Gemini Advanced, there’s a very welcome two-month free trial. Simply head to Gemini to find out more!

Does AI have a noun problem? Strategies for avoiding it.

AI Has A Noun Problem : Let’s Fix It!

If you’re using AI for language learning content creation, you might have already spotted AI’s embarrassing secret. It has a noun problem.

Large Language Models like ChatGPT and Bard are generally great for creating systematic learning content. They’re efficient brainstormers, and can churn out lists and texts like there’s no tomorrow. One use case I’ve found particularly helpful is the creation of vocab lists – all the more so since they can spool them off in formats to suit learning tools like Anki.
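
By way of example, a request like this gives you something you can save as a plain .txt file and import straight into Anki (whose importer happily takes semicolon-, comma- or tab-separated fields):

  Give me 15 French vocabulary items on the topic “weather”, formatted as semicolon-separated front;back pairs for Anki import, with no other text.

The reply comes back ready to import, along these lines (an illustrative snippet, rather than a full run):

  le brouillard;fog
  l’averse;downpour
  la canicule;heatwave
  …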

But the more I’ve used them, the more apparent it’s become: AI has a blind spot that makes these straight-out-of-the-box vanilla lists much less useful than they could be.

A fixation with nouns.

Test it yourself; ask your platform of choice simply to generate a set of vocab items on a topic. Chances are there’ll be precious few items that aren’t nouns. And in my experience, more often than not, lists are composed entirely of noun items and nothing else.

ChatGPT-4 giving a list of French vocabulary items – all nouns.

It’s a curious bias, but I think it has something to do with how the LLM conceives of key words. The term is somehow conflated with all the things to do with a topic. And nouns, we’re taught at school, are thing words.

Getting Over Your Noun Problem

Fortunately, there’s therapy for your AI to overcome its noun problem. And like most AI refining strategies, it just boils down to clearer prompting.

Here are some tips to ensure more parts-of-speech variety in your AI language learning content:

  1. Explicit Instruction: When requesting vocabulary lists, spell out what you want. Specify a mix of word types – nouns, verbs, adjectives, adverbs, etc. – to nudge the AI towards a more balanced selection. When it doesn’t comply, just tell it so! “More verbs, please” is a good start.
  2. Increase the Word Count: Simply widening the net can work, if you’re willing to manually tweak the list afterwards. Increase your vocab lists to 20 or 30 items, and the chances of the odd verb or adjective appearing are greater.
  3. Contextual Requests: Instead of asking for lists, ask the AI to provide sentences or paragraphs where different parts of speech are used in context. This not only gives you a broader range of word types, but also shows them in action.
  4. Ask for Sentence Frames: Instead of single items, ask for sentence frames (or templates) that you can swap words in and out of. For instance, request a model sentence with a missing verb, along with 10 verbs that could fill that spot. “I ____ bread” might be a simple one for the topic of food.
  5. Challenge the AI: Regularly challenge the AI with tasks that require a more nuanced understanding of language – like creating stories, dialogues, or descriptive paragraphs. This can push its boundaries and improve its output.

Example Prompts

Bearing those tips in mind, try these prompts for size. They should produce a much less noun-heavy set of vocab for your learning pleasure:

Create a vocabulary list of 20 French words on the topic “Food and Drink”. Make sure to include a good spread of nouns, verbs, adjectives and adverbs. For each one, illustrate the word in use with a useful sentence of about level A2 on the CEFR scale.
Give me a set of 5 French ‘sentence frames’ for learning and practising vocabulary on the topic “Summer Holidays”. Each frame should have a gap to fill, along with five examples of French words that could fit in it.
Write me a short French text of around level A2 on the CEFR scale on the topic “Finding a Job in Paris”. Then, list the main content words from the text in a glossary below in table format.

Have you produced some useful lists with this technique? Let us know in the comments!

AI prompt engineering - the toolkit for getting better results from your platform of choice.

Better AI Language Learning Content with C-A-R-E

AI isn’t just for chat – it’s also great at making static language learning content. And as AI gains ground as a content creation assistant, prompt engineering – the art of tailoring your requests – becomes an ever more important skill.

As you’d expect, frameworks and best practice guides abound for constructing the perfect prompt. They’re generally all about defining your request with clarity, in order to minimise AI misfires and misunderstandings. Perhaps the most well-known and effective of these is R-T-F – that’s role, task, format. Tell your assistant who it is, what to do, and how you want the data to look at the end of it.

Recently, however, I’ve been getting even more reliable MFL content with another prompt framework: C-A-R-E. That is:

  • Context
  • Action
  • Result
  • Example(s)

Some of these steps clearly align with R-T-F. Context is a broader take on role, action matches task, and result maps roughly to format. But the kicker here is the addition of example(s). A wide-ranging academic investigation into effective prompting recently flagged “example-driven prompting” as an important factor in improving output, and for good reason: the whole premise of LLMs is constructing responses from training data – parroting examples, in effect.

Crafting AI prompts with C-A-R-E

As far as language content is concerned, C-A-R-E prompting is particularly good for ‘fixed format’ activity creation, like gap-fills or quizzes. There’s a lot of room for misinterpretation when describing a word game simply with words; a short example sets the AI back on track. For example:

– I am a French learner creating resources for my own learning, and you are an expert language learning content creator.
– Create a gap-fill activity in French for students around level A2 of the CEFR scale on the topic “Environment”.
– It will consist of ten sentences on different aspects of the topic, with a key word removed from each one for me to fill out. Provide the missing words for me in an alphabetically sorted list at the end as a key.
– As an example, a similar question in English would look like this: “It is very important to look after the ________ for future generations.”

This produces excellent results in Microsoft Copilot / Bing (which we love for the freeness, obviously!) and ChatGPT. For example:

Creating AI language learning content with Microsoft Copilot / Bing Chat

Providing short examples seems like an obvious and intuitive step, but it’s surprising how infrequently we tend to do it in our AI prompts. The gains are so apparent that it’s worth making a note to always add a little C-A-R-E to your automatic content creation.

If you’ve been struggling to get reliable (or just plain sensible!) results with your AI language learning content, give C-A-R-E a try – and let us know how it goes in the comments!

A digital brain, complete with memory - ChatGPT take note!

Your ChatGPT Teacher – With Persistent Memory!

The interactivity of AI models like ChatGPT and Bing makes them the perfect medium for exchange-based language learning. But for one thing: their lack of persistent memory.

The standard setup, until now, has been a ‘black box’ style conversation on AI platforms. You initiate a session with your instructions, you chat, and it’s over. You can revisit the conversation in your history, but as far as the AI is concerned, it’s lost in the mists of time.

It’s something that throws a mini spanner in the works of using AI for language (or any kind of) learning. Teaching and learning are cumulative; human teachers keep records of what their students have studied, and build on previous progress.

DIY ChatGPT Memory

There seems to be little movement in the direction of AI with memory amongst the big platforms, although OpenAI’s recent announcement of memory storage for developer use might lead to third-party applications that ‘remember’. But in the meantime, users within the AI community, ever adept at finding workarounds and pushing the tech, have begun formulating their own interim alternatives.

One clever way around it I recently spotted takes advantage of two elements of ChatGPT Plus: custom instructions and file upload/analysis. In a nutshell, an external text file serves as ChatGPT’s ‘memory’, storing summarised past conversations between student and AI teacher. We let ChatGPT know in the custom instructions that we’ll be uploading a history of our previous conversations at the beginning of a learning session. We also specify that it analyse this file in order to pick up where we left off. At the end of each session, we prompt it to add a round-up of the present conversation to that summary, and give the file back to us for safekeeping.

Custom Instructions

Here’s how I’ve worked the persistent memory trick into my own custom instructions:

If I upload a file ‘memory.txt’, this will be a summary of our previous conversations with you as my language teacher; you will use this to pick up where we left off and continue teaching me. When prompted by me at the end of our session, update the file with a summary of the present conversation and provide me with a link to download it for safekeeping. This summary should include a condensed glossary of any foreign language terms we’ve covered.

Wording it as such makes memory mode optional; ‘teacher remembering’ only kicks in if you upload memory.txt. This way, you can otherwise continue using regular, non-teach ChatGPT without any fuss.

The only thing that remains is to create a blank text file called memory.txt to start it all off. Remember to start a new chat before giving it a whirl, too, so your new custom instructions take effect. As you use the technique in your everyday learning chats, you’ll see memory.txt blossom with summary detail. As an offline record of your learning, it even becomes a useful resource in its own right, quite apart from ChatGPT.
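
To give you a flavour, entries in memory.txt end up looking something like this (an invented sample, rather than lines from my actual file):

  --- Session summary, 10 February ---
  Practised ordering food and paying in a Norwegian café role-play.
  Glossary: å bestille – to order; regningen – the bill; et smørbrød – an open sandwich.
  To revisit next time: numbers and prices.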

Just make sure you keep it safe – that’s your teacher’s brain you have right there!

A page of conversation summaries – my ChatGPT ‘memory’ file in action.

Let us know your experiences if you give this technique a go! And if you’re stuck for lesson ideas, why not check out my book, AI for Language Learners?