Escaping the Ecosystem : AI Edition

We live in such unexpected, shifting, fracturing geopolitical times just now. A stability taken for granted for decades no longer seems a given. So much so that many have begun to question the global tech ecosystem we are embedded in, considering the safety of our data and workflows, and seeking less exposed, closer-to-home alternatives.

It’s something we can explore without straying into conspiracy territory, and it goes beyond data security. Tech writer Cory Doctorow has written at length on the downsides of walled-garden platforms that keep the cost of leaving high while degrading (or enshittifying – Macquarie Dictionary’s 2024 word of the year) their services. Linguaphiles should know – our own beloved Duo is one of them. It’s a compelling argument, and one that national consumer protection agencies are starting to incorporate into policy. The notion that we can take meaningful steps to decouple from tech monopolies is beginning to take hold.

Ecosystem creep : AI

This leads us to AI firms – arguably the fastest-growing of the tech behemoths, whose services are steadily working their way into many of our workflows. It’s not all doom and gloom here, though; Anthropic in particular has emerged as one US company willing to take a public ethical stance in the field.

That said, most European LLM traffic still goes down that American route, pooling on servers that users’ home states have no jurisdiction over. Users come to rely more and more on these services for key elements of their day-to-day, yet have little control over their place in that ecosystem.

So what to do? LLMs are incredibly useful tools for a number of creative applications. For language teachers, they are particularly good at creating authentic-sounding materials for worksheets. In fact, I’ve often argued that LLMs are a tech almost tailor-made for language learning and teaching – in few other fields is the language structure more important than the actual content! They’re genuinely brilliant at creating copy, often highly nuanced, for learning.

AI Swaps

Well, one quick and easy swap is Le Chat by French AI company Mistral. It’s a ‘full fat’ LLM on a par with the big US names, running your prompts remotely on a multi-billion parameter model. Not so remote, though – their server activity remains within EU jurisdiction.

Then, of course, there is the ‘peak privacy’ option – running your own LLM. That’s a lot easier than it sounds, thanks to easy-setup software like LM Studio or Ollama (both US-based projects, but run locally on your own machine). Install, download a model, and prompt away. While few (if any) people will have the hardware to run full-sized LLMs, small models are getting better and better, rivalling the biggies for everyday use.

Google’s Gemma 3 is a case in point, a new small model (you can get a sub-20gb version) achieving some really impressive benchmark scores. Multi-language support is one of its strengths, and believe me, it does more than a good enough job of worksheet authoring and lesson planning. And it comes with an extra ‘externalities’ bonus, too – the only energy it’s using is your laptop battery, rather than spinning up some red-hot servers on a remote farm somewhere.
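If you fancy a peek under the hood, these local models are scriptable too. Here’s a minimal sketch using Ollama’s Python client – the model tag is illustrative, and it assumes you’ve installed Ollama and pulled a Gemma build first:

```python
# Minimal sketch: prompting a locally-running model via Ollama's Python
# client (pip install ollama). Assumes the Ollama service is running and
# that you've pulled a Gemma build first, e.g. `ollama pull gemma3` - the
# model tag here is illustrative, not gospel.
import ollama

reply = ollama.chat(
    model="gemma3",
    messages=[{
        "role": "user",
        "content": "Create a short, natural-sounding German dialogue "
                   "about weekend plans, suitable for an A2 worksheet.",
    }],
)

print(reply["message"]["content"])
```

Everything stays on your own machine – the script talks to the local Ollama service, not to a remote API.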

That has to be a win-win – using open source releases from the industry leaders, without getting trapped inside the matrix.

We may have little control over geopolitics. But there are always choices when it comes to our exposure to it in the tech we use. I’m working on a list of these swaps as part of my own digital hygiene plan, and hope to share much more of this in coming weeks!

Speaking dialect in the mountains

AI and Dialect: Can You Nudge a Language Model Off the Standard Path?

One of my quiet worries about the AI revolution in language is standardisation. When a model trained on billions of documents produces text, it tends to sand everything down to some kind of weighted average – grammatically impeccable, register-appropriate, but utterly devoid of local colour. And where’s the fun in that? You know my thoughts on dialect and language learning.

But here’s the thing: despite the odds, the standard grey slop isn’t a mathematical inevitability. Large Language Models are trained on billions of words. And that material will include plenty of non-standard language, too, simply by dint of the extraordinarily broad sweep of the training brush. Dialect literature, forum posts, subtitles, transcripts – there’s room for non-standard, more colloquial language in every context of the model’s background.

Which raises an interesting question: can we nudge AI output away from the standard, and towards something more authentically regional?

I’ve been playing around with exactly this, and the short answer is: yes – and it’s surprisingly decent at it.

Stepping off the well-trodden path

I’ve said from the start that AI is built for language learning content creation. Few fields care so little about content accuracy – as long as the language is correct and authentic-sounding. One of my go-to moves is pushing the LLM for a more colloquial register. Standard written language is fine for reading comprehension, but if you’re teaching people how to actually speak, you want contractions, ellipsis, informal vocabulary – the stuff that sounds like a real person. AI handles this pretty well once you prompt for it explicitly.

Create a model German dialogue on the topic “Reality TV”. The language should be contemporary, colloquial and natural-sounding German.

Claude did a pretty good job of this:

  • Lena: Ey, hast du letzte Woche „Love Island” geschaut? Ich konnte gar nicht aufhören – ich war bis halb zwei wach.
  • Kai: Natürlich hab ich das geschaut – wär ich ja bescheuert, das zu verpassen. Diese Tiara und Marco… ich meine, komm schon. Die spielen da nur rum.
  • Lena: Genau! Marco tut total so, als wäre er an ihr interessiert, aber das ist doch alles für die Kamera. Ich find das so zum Fremdschämen.
  • Kai: Ja, aber genau deswegen schaust du’s doch! Gib’s zu. Du liebst den Drama.
  • Lena: Okay, okay, ich geb’s zu. Aber ich find’s trotzdem fragwürdig, wie die Leute da so bloßgestellt werden. Manche wirken echt überfordert.
  • Kai: Absolut. Ich glaub, vielen ist nicht bewusst, wie sehr die Produktion das alles lenkt. Die schneiden das so zusammen, dass du denkst, die Person ist voll die Zicke – aber vielleicht war das gar nicht so gemeint.
  • Lena: Stimmt schon. Und dann wird die Person im Netz fertiggemacht. Ich hab mal gelesen, dass einige danach richtig psychische Probleme hatten.
  • Kai: Das ist so traurig. Und trotzdem melden sich jedes Jahr tausende Leute an. Ich versteh’s irgendwie – du willst berühmt werden, schnelles Geld, Follower… Der Lockruf ist halt stark.
  • Lena: Stimmt. Und seien wir ehrlich – wir beide gucken’s ja auch. Also sind wir Teil des Problems, oder?
  • Kai: Pff… ich würd sagen: wir sind Teil des Unterhaltsystems. Klingt besser. Jetzt mach die nächste Folge an.

Some great non-standard, everyday features in there. Colloquial contractions like -e dropping on verbs, decent use of particles like halt – not a bad job at all. Claude even threw in a bunch of usage notes without being asked, too.

But colloquial is still, broadly, floating just beneath a standard umbrella. What if we go further?

Adding Local Flavour

When I’m probing the edges of what AI can do, I like to start with varieties I can actually judge. Austria is my German-speaking region of choice in this regard, being my year abroad destination of old – I still have a huge soft spot for Kärntnerisch. So here’s the tweak:

Create a model German dialogue on the topic “Reality TV” between two young Austrian speakers from Kärnten. The language should be contemporary, colloquial and natural-sounding Austrian German.

Here’s what came back:

  • Magdi: Oida, hast du gestern „Love Island” g’schaut? I kann nimma! Die Lea hat den Stefan komplett ausg’spielt, echt a Wahnsinn.
  • Tobi: Na freilich, i schau des jo seit Wochn! Aber geh, die Lea, die tuat doch nur so. Die weiß ganz genau, wos sie macht — des is alles g’spield.
  • Magdi: Ja eh, ober trotzdem — der Stefan is so a Trottel! Wie kann man so blöd sein, gell? Der glaubt ihr jo alles.
  • Tobi: Najo, i mein, de san jo auch so a bissi animiert zum Streiten, oder? Die Produzenten woin das jo so. Ohne Zoff läuft des ned.
  • Magdi: Des stimmt scho, ober i find des trotzdem scheißn. Die tean so, ois ob’s echt wär, und dann geht’s nur ums Follower kriegen. Danoch san alle auf Instagram mit Werbung zuagekleistert.
  • Tobi: Haha, stimmt! Drei Wochen nach der Sendung verkaufen’s alle irgendwelche Protein-Shakes. Gott sei Dank ned mia. [lacht]
  • Magdi: Ober weißt wos? I schau’s trotzdem weiter. I bin schon voll drin, i muss wissen, ob der Kevin die Kathi kriegt oder ned.
  • Tobi: Na sag i dir: die kriegen sich ned. Die Kathi is viel zu schlau für ihn. Die geht da raus, macht a paar Storys, und dann war’s des.
  • Magdi: Oida, du bist so a Zyniker! Vielleicht is es jo echt?
  • Tobi: Magdi. Des is Reality TV. Des is so echt wie mei Oma beim Wrestling. [grinst]
  • Magdi: Haha, okay okay. Ober schauen tust’s trotzdem, gell?
  • Tobi: … Ja. Natürli. Um neun bin i dabei. Sag’s ned weiter.

If you’ve spent any time in Austria, this is probably just as joyful to you as it is to me. It’s added in so much southern colour, from the tag particle ‘gell’ to the reduced i from ich and past participles in g’. There’s even some fun wordplay in there (as real as my gran at wrestling!), so it’s not only dialect, but character. It’s laid it on thick!

There’s an element of pastiche to this, for sure. Nobody will claim this is indistinguishable from an authentic speaker of Klagenfurt. But it’s genuinely not bad, especially if you’re after a solid introduction to local speech features.

And then there’s Trøndersk…

That’s all very well for ‘mainstream’ foreign languages like German. To push the envelope a little more, it’s worth playing around with some lesser-learnt languages – Norwegian is always a good choice.

For my Norwegian test, I went with Trøndersk, the dialect cluster associated with the Trondheim region and much of central Norway. It’s one of the most immediately recognisable Norwegian dialects, with distinctive pitch accent patterns, characteristic -a verb endings, and a few phonological quirks that mark it out clearly from the Oslo Bokmål you’ll meet in textbooks.

How good a job would Claude do of this challenge?

  • Marte: Oi, har du sett at det e ny sesong av Farmen nå? Æ begynte å se på den i går kveld og klarte itj å slutt.
  • Jonas: Åja? Æ har itte sett mye reality-greier på ei stund, men Farmen e jo noe anna. Det e liksom ordentlig underholdning, da.
  • Marte: Ja, det e noko med den serien! Og den nye dama fra Steinkjer – ho e jo helt vill. Ho klaga på absolutt alt og alle og gjor ingenting sjæl.
  • Jonas: Haha, klassisk. Det e alltid nåkka en sånn type på realitygreier. Produsenten e nødt til å ha med dem, skjønner du? Ellers e det jo kjedelig.
  • Marte: Jaja, men det e jo litt ekkelt å innrømme at man liker det, liksom. Æ satt der og ropte på skjermen som et idiot.
  • Jonas: Haha! Ka sa du da?
  • Marte: Noe sånt som «koffor gjør’u det, din tulling!» Men mora mi satt rett ved sia og hun så bare på mæ som om æ var gal.
  • Jonas: Uff da, haha. Ka syns ho om programmet da?
  • Marte: Ho e heilt avhengig sjæl! Det e liksom heime hos oss at hele familien samles rundt skjermen på tirsdager. Det har vorre sånn siden æ va lita.
  • Jonas: Okei men ka med all den andre reality-søpla da? Eks på stranda og sånt? Det e jo bare tull, syns æ.
  • Marte: Ja nei, det gidder itj æ. Det e jo bare folk som vil bli kjendis og som itte har nåkka å by på. Farmen e annleis fordi det faktisk krev noe.
  • Jonas: Sant nok. Kameraten min i Ålesund e helt hekta på Paradise Hotel derimot. Han ser på det heile tida.
  • Marte: Seriøst? Koss klare han det? Æ holdt ut to episoder og ga opp. Alle folkan der oppe e jo helt… æ veit itte. Tomme, liksom.
  • Jonas: Haha, han sier det e «sosialt eksperiment». Æ tror bare han har lyst på drama heime fra sofaen sjæl.
  • Marte: Hahaha! Ja det e nok det. Uansett — skal du se Farmen med mæ på fredag? Vi kan bestille pizza og le av alle.
  • Jonas: Ja, det høres bra ut! Men du bestemme pizza-topping. Sist gang valde du masse squash og det va itte greit.
  • Marte: Oi, squash e jo godt da! Men okei, du får velge. Bare itte ananas.
  • Jonas: Deal. Vi ses fredag!

As far as I can tell – and I’m not so well placed here, I must admit – it’s done a solid job. The k- question words (rather than hv-), the reduced æ for jeg… Even the cultural references, selecting two actual Norwegian shows for the exchange. I’d trust this as a bit of pre-Trondheim trip prep!

That said, if you speak a Trøndersk dialect, what’s the verdict? Does this read as a reasonable attempt, or is it giving you second-hand embarrassment? I’d genuinely love to know. (Drop a comment below, or find me wherever I am on the internet this week.)

AI and Dialect for Language Learning

AI platforms clearly have some representation of dialect. Awareness is the wrong word for it – anthropomorphising LLMs and all that – but you know what I mean. The patterns are in the training data; the question is whether a prompt can reliably surface them, or whether the model’s default pull towards the standard is too strong.

The two languages I explored here seem to work well. The pull towards parody is always something to watch, particularly since so much written dialect material tends to be humorous rather than in-community writing. And rarer or more exclusively spoken varieties will no doubt be much more troublesome. But the more linguistic scaffolding you give the model – the more you treat prompting as a kind of sociolinguistic briefing – the better the output is likely to get.
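As a sketch of what that briefing might look like (illustrative only – I haven’t run this exact prompt):

Create a model dialogue in Viennese German between two older market traders. Lean on well-known features of the variety – monophthongised vowels in the spelling (haaß for heiß), l-vocalisation (vü for viel), local particles and tags – and follow the dialogue with brief usage notes explaining each non-standard form.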

That standard grey slop, it seems, has more texture than it first appears – you just have to know where to poke.

Diffuse squares

SingaKids: A Glimpse of Where Multimodal AI Tutoring May Be Headed

A recent pre-print on SingaKids, a multilingual multimodal tutoring system for young learners, offers an interesting look at how AI-supported language learning is evolving. You can read the paper here: SingaKids: A Multilingual Multimodal Dialogic Tutor for Language Learning.

Designed for early primary classrooms, SingaKids is an AI-based system that uses picture-description tasks as the basis for spoken interaction. It combines dense image captioning, multilingual speech recognition, a dialogue model tuned with pedagogical scaffolding, and child-friendly text-to-speech. The system works in English, Mandarin, Malay, and Tamil, with extra attention paid to the lower-resource languages to improve recognition and generation quality.

Flexible Scaffolding

Something that stood out to me in particular was the system’s focus on scaffolding rather than straightforward correction. That approach is flexible; depending on a child’s response, the system shifts between prompts, hints, explanations, and more structured guidance. Higher-performing learners are pushed towards fuller reasoning; less confident learners get clearer cues and more supportive turns. It’s a step away from the rigid “question–answer–score” pattern and closer to the texture of real classroom dialogue.

Although the work is aimed at children, several ideas have wider implications for the rest of us. Picture-guided dialogue isn’t new in ‘grown-up’ resources – think Rosetta Stone, for instance. But it could easily support adult learners practising free production in AI tools, too. Improved multilingual ASR – especially for hesitant, accented, or code-switched speech – would benefit almost every speaking-practice tool. And the flexible scaffolding approach hints at future e-tutors that adapt to the learner’s behaviour dynamically, rather than funnelling everyone down the same path.

The project sits firmly in the research space, but it points towards what the next generation of tools may look like: multimodal, context-aware systems that don’t just respond to learners but actively guide, prompt, and adjust. For anyone keeping an eye on developments in educational AI, it’s a nice indication of the direction of travel (and I’m probably a wee bit envious of those kids getting a chance to try it first!).

AI Role-Plays that Actually Move the Needle

Papers on AI in education are two a penny at the moment, but there’s a particularly nice one that appeared recently in Frontiers in Education (30 Sept 2025). It takes a fresh look at AI-generated, scenario-based conversation practice for university EFL learners – perhaps one of the most obvious and widespread use cases for AI in language learning, but given a smart, systematic treatment by a team of scholars from Saudi Arabia, China and Pakistan.

The gist is simple: build realistic speaking scenarios with AI, let students interact in them over a term, and see what happens. Over 18 weeks with 130 first-years split into control vs. AI-scenario groups, the AI cohort came out ahead on pronunciation, accuracy and conversational flow. They also reported higher interest and better teacher–student interaction to boot.

The catch? Emotional thinness in AI dialogue, patchy content quality if you don’t curate, and a risk of learner over-dependence on the tech. 

So, what can we pinch for our own learning? Well, the paper itself is full of useful nuggets and worth a careful read. But here are some key takeaways for avoiding “AI for AI’s sake” based on the team’s findings.

1) Make your speaking tasks scenario-first, not tool-first.

Before opening any chatbot, sketch a brief: Where am I? Who am I? What’s my goal? What counts as success? That mirrors the paper’s “input → interaction → output” design and stops generative models meandering (always an occupational hazard worth guarding against).

2) Bake in “flow nudges”.

The study’s gains in conversational flow suggest prompts that push you to repair, clarify and keep turns moving. Add rules to your prompt like: “If I give a short answer, ask a natural follow-up; if I stall, offer two options.” That keeps the exchange discursive rather than Q&A-ish. 

3) Add in a feedback micro-loop.

The report notes improvements in pronunciation, which is fine if you’re using AI in voice mode. If not, replicate that with a regular mini-feedback cycle that gives short explanations for tricky words or phrases.

4) Curate, don’t just generate.

A recurring warning was inconsistent or culturally off-kilter content when left unchecked. Make sure to describe your scenario frames in terms of function, time and place (e.g., returning a faulty purchase in Athens; arranging a GP appointment in Lille). 

5) Add a human(-like) layer to keep things warm

Students benefitted from richer teacher–student interaction around the AI tasks. Translate that to solo study by doing a quick human check: post one 60-second recap to a study buddy, social feed or tutor each week. This ‘social accountability’ step compensates for the AI’s limited emotional range. Try recording the dialogue afterwards as a voice note, too, for some added spoken practice.

6) Watch the dependence trap.

The authors flag tech over-reliance. Give yourself “AI-off Fridays”: repeat a scenario from memory with real materials (voice notes, a friend, or even talking to your phone camera), then compare to your AI-assisted version for gaps. 

AI in Practice

Bringing all that together, here’s a ready-to-use mini-format you can try for a 15-minute role-play practice that isn’t crowbarring AI in for no real gain:

  • Minute 0–2: Choose a vetted scenario card (place, role, goal, 3 key phrases).

  • 2–3: Prime the bot with constraints (stay in A2/B1, insist on follow-ups, correct only one thing per turn).

  • 3–10: Converse. Every third turn, ask for a meaning / explanatory nudge on one tricky word or structure.

  • 10–12: Bot summary with 3 personalised upgrade lines you could have said.

  • 12–15: Record a no-AI voice note version. Park it for a weekly human warm-layer check.

Pastable Prompt

You are a language conversation partner tasked with improving the language skills of me, the user.
We’ll do a short scenario-based speaking practice in French.
Follow these rules carefully:
1. Keep the level at A2–B1 CEFR.
2. Always stay in character and make the conversation feel natural – imagine we’re really there.
3. Insist on follow-up questions whenever my answers are too short or unnatural.
4. Correct only one thing per turn, briefly and gently, then move on.
5. Every third turn, give me a short “💡 Language note” explaining a tricky word or structure that came up.
6. After about 20 lines or so of dialogue (ideally when the conversation draws to a natural close), give a performance summary, including what I did well, some ‘upgraded’ versions of my sentences showing how I could sound more natural or advanced, and 2-3 new phrases worth learning from this conversation.
7. Keep the tone friendly, realistic, and mildly humorous if it fits the setting. When ready, start the conversation by greeting me in the target language and setting the scene.

The bottom line is that AI role-plays can be genuinely useful when we design around them: scenario first, small feedback loops, and human warmth stitched back in. Treat the model like a scene partner with good timing but flat affect, and you’ll harvest the fluency gains without outsourcing your judgement.

The paper’s results are encouraging; its honest caveats are a gift that grounds us in practical realism. As always, build guardrails into your AI usage first of all, to ensure that you get the most from – and enjoy – the chat!

Perplexity Tasks for Language Learners

AI techniques to support language learning are pretty well-known now. From structured conversation partners to resource creators, LLM platforms have been embraced by the polyglot community.

Like many of us, I dip in and out of them almost unthinkingly now. Often, I’ll snap in a page from a chapter I’m working on with my Greek teacher, and it’ll help me prepare ahead of a lesson. Sometimes, I’ll get it to reel off a list of useful phrases on a topic I’m studying. LLMs can make great worksheet creators, too. In many ways, it’s simply a very interactive reference tool, giving (mostly) reliable answers but with a big nod to context.

I’d been pretty dogged in my choice of platform, sticking for the most part with ChatGPT Plus. Claude and Gemini were also in the mix, alongside some fun running local models. But for the most part, I thought my tool choices were pretty settled.

But then I gave Perplexity a whirl.

Perplexity – Task Master

Perplexity isn’t an LLM in the sense that ChatGPT, Gemini and Claude are. It uses LLM technology, but it’s actually more of an intelligent, context-sensitive search tool, using language models to turbo-boost its web-hunting activities.

I’d clearly not found that prospect very exciting, as I’d not gone near it until now. But thanks to a bundled free upgrade, I got to try the premium tier of late. And one particular feature stands out as potentially transformative for my learning habits: Perplexity Tasks.

Tasks are scheduled searches you set up with natural language instructions. And those instructions can be as rich as your usual LLM prompts in terms of requested formatting and such like, so in essence, you can build regular bulletins with up-to-date information in any language you like. Take one of mine, that runs daily:

Search the global news for the biggest world news story of the day. Summarise it in French, German, Modern Greek, Polish, Scottish Gaelic and Swahili at a level appropriate for an intermediate learner, ensuring that the translation is of the highest, native speaker standard quality, idiomatic and natural-sounding. Summaries should be 3-4 sentences long. Highlight key words in bold.

Accompany each summary text with a glossary / vocabulary list detailing all the key / difficult words from it in dictionary format (listing word class, irregular parts if applicable etc.). Hyperlink glossary items to Wiktionary entries where available with further information on them (use the English version en.wiktionary.org).

Lay it all out neatly to make it easy on the eye. Use plenty of emojis for impact too. Make this a fabulous resource for polyglot language learning! 🌍

Now, every morning, I get a wee news digest emailed straight to my inbox in multiple languages. It’s learner-friendly, includes vocab support, and gives me something to talk about in my language meets and lessons. I’ve done the same for academic paper searches in linguistics, and stories on dialect appearing in news outlets.

It feels like a proper game changer!
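Incidentally, if you like to see the moving parts, a Task boils down to a saved search prompt on a timer. Here’s a rough homespun equivalent in Python against Perplexity’s API – the endpoint and the sonar model name follow their published docs, but treat both as assumptions to verify:

```python
# Rough homespun equivalent of a daily Perplexity Task: a search-backed
# prompt fired on a schedule (e.g. from cron). Endpoint and model name
# ("sonar") follow Perplexity's published API docs - verify both, and set
# PERPLEXITY_API_KEY in your environment first.
import os
import requests

def daily_digest() -> str:
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={
            "model": "sonar",
            "messages": [{
                "role": "user",
                "content": (
                    "Find today's biggest world news story and summarise it "
                    "in French and German for an intermediate learner, with "
                    "a short glossary after each summary."
                ),
            }],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(daily_digest())  # schedule with cron, e.g. `0 7 * * *`
```

The built-in Tasks feature does all of this for you, of course – scheduling, delivery and formatting included – which is rather the point.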

Tasking on Other Platforms

Now, you don’t need Perplexity to do this – it’s just one of the most user-friendly ways I’ve found to do it. If you have ChatGPT, check out Scheduled Tasks. In Gemini, Scheduled Actions will do the trick for Pro members. Copilot is in on the game too. Others will no doubt follow suit shortly – clearly, task scheduling is becoming one of those features AI platforms are expected to have.

What I like about Perplexity, though, is that its whole raison d’être is the search – it feels particularly suited to web-based tasks like news digests. It’s also quite nice to keep the separation between my everyday LLM ramblings, and my more structured, scheduled items (use it for a few weeks and you’ll have clogged your timeline up with dozens of chats!).

If you’ve been looking for a way to make AI genuinely work for your learning rather than distract from it, try setting up a task or two – you might just find it becomes part of your morning ritual as well.

Generative Images Locally : Running Models on Your Machine

I’ve written a fair bit about language models of late. This is a language blog, after all! But creating resources involves visual elements, too. And just as you can run off text from local generative AI, you can create images locally as well.

For working on a computer, ComfyUI is a good bet. It’s a graphical dashboard for creating AI art with a huge array of customisation options. Its sheer fully-featuredness, admittedly, makes it a complex first intro to image generation, and its interface, which takes a pipeline / modular format, takes a bit of getting used to. But it also comes with pre-defined workflows that mean you can just open it, prompt and go. There’s also a wide, active community supporting it online, so there’s plenty of help available.

Generate images locally – the ComfyUI interface

At the more user-friendly end of it is Draw Things for Apple machines (unfortunately no Android yet). With a user interface much closer to art packages you’ll recognise, Draw Things allows you to download different models and prompt locally – and is available as an iOS app too. Obviously there’s a lot going on when you generate images, so it slugs along at quite a modest trot on my two-year-old iPad. But it gives you so much access to the buttons and knobs to tweak that it’s a great way to learn more about the generation process. Like ComfyUI, its complexity – once you get your head round it – actually teaches you a lot about image generation.
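And if you’re happy tapping out a few lines of Python, Hugging Face’s diffusers library offers a third local route. A minimal sketch, assuming PyTorch is installed and using sd-turbo as an example checkpoint (swap in whatever model you’ve downloaded):

```python
# A third local route: text-to-image with Hugging Face's diffusers library.
# Sketch only - "stabilityai/sd-turbo" is an example checkpoint; swap in
# whichever model you've downloaded. Runs on Apple Silicon ("mps") or an
# NVIDIA GPU ("cuda").
import torch
from diffusers import AutoPipelineForText2Image

device = "mps" if torch.backends.mps.is_available() else "cuda"

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sd-turbo",
    torch_dtype=torch.float16,
).to(device)

image = pipe(
    "a cosy Alpine village cafe in watercolour, worksheet header art",
    num_inference_steps=1,   # sd-turbo is tuned for single-step generation
    guidance_scale=0.0,      # turbo checkpoints run without classifier-free guidance
).images[0]

image.save("worksheet_header.png")
```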

Of all the benefits of these apps, perhaps the greatest is again the environmental. You could fire up a browser and prompt one of the behemoths. But why crank up the heat on a distant data centre machine, when you can run locally? Many commercial generative models are far too powerful for what most people need.

Save power, and prompt locally. It’s more fun!

A swirl of IPA symbols in the ether. Do LLMs 'understand' phonology? And are they any good at translation?

Tencent’s Hunyuan-MT-7B, the Translation Whizz You Can Run Locally

There’s been a lot of talk this week about a brand new translation model, Tencent’s Hunyuan-MT-7B. It’s a Large Language Model (LLM) trained to perform machine translation. And it’s caused a big stir by beating heftier models from Google and OpenAI in a recent machine translation competition.

This is all the more remarkable given that it’s really quite a small model by LLM standards. Hunyuan actually manages its translation-beating feat packed into just 7 billion parameters (the learned weights that encode what a model knows). Now that might sound a lot. But fewer usually means weaker, and the behemoths are already pushing past the trillion-parameter mark.

So Hunyuan is small. But in spite of that, it can translate accurately and reliably – market-leader beatingly so – between over 30 languages, including some low-resource ones like Tibetan and Kazakh. And its footprint is truly tiny in LLM terms – it’s lightweight enough to run locally on a computer or even tablet, using inference software like LMStudio or PocketPal.

The model is available in various GGUF formats at Hugging Face. The 4-bit quantised version comes in at just over 4 GB, making it iPad-runnable. If you want greater fidelity, then 8-bit quantised is still only around 8 GB, easily handleable in LMStudio with a decent laptop spec.
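If you’d rather script it than chat with it, llama-cpp-python will load the same GGUF directly. A minimal sketch – the file name stands in for whichever quantisation you actually downloaded:

```python
# Minimal sketch: loading a quantised Hunyuan-MT GGUF with llama-cpp-python
# (pip install llama-cpp-python) and requesting a translation. The file name
# is a stand-in for whichever quantisation you downloaded from Hugging Face.
from llama_cpp import Llama

llm = Llama(
    model_path="hunyuan-mt-7b.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Translate into natural, idiomatic German: "
                   "'It's not my cup of tea, but I can see the appeal.'",
    }],
    temperature=0.2,  # keep the output conservative for translation work
)

print(out["choices"][0]["message"]["content"])
```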

So is it any good?

Well, I ran a few deliberately tricky English to German tasks through it, trying to find a weak spot. And honestly, it’s excellent – it produces idiomatic, native-quality translations that don’t sound clunky. What I found particularly impressive was its ability to paraphrase where a literal translation wouldn’t work.

There are plenty of use cases, even if you’re not looking for a translation engine for a full-blown app. Pocketising it means you have a top-notch multi-language translator to use offline, anywhere. For language learners – particularly those struggling with the lower-resource languages the model can handle with ease – it’s another source of native-quality text to learn from.

Find out more about the model at Hugging Face, and check out last week’s post for details on loading it onto your device!

Ultra-Mobile LLMs : Getting the Most from PocketPal

If you were following along last week, I was deep into the territory of running open, small-scale Large Language Models (LLMs) locally on a laptop in the free LMStudio environment. There are lots of reasons you’d want to run these mini chatbots, including the educational, environmental, and security aspects.

I finished off with a very cursory mention of an even more mobile vehicle for these, PocketPal. This free, open source app (available on Android and iOS) allows for easy (no computer science degree required) searching, downloading and running of LLMs on smartphones and tablets. And, despite the resource limitations of mobile devices compared with full computer hardware, they run surprisingly well.

PocketPal is such a powerful and unique tool, and definitely worth a spotlight of its own. So, this week, I thought I’d share some tips and tricks I’ve found for smooth running of these language models in your pocket.

Full-Fat LLMs?

First off, even small, compact models can be (as you’d expect) unwieldy and resource-heavy files. Compressed, self-contained LLM models are available as .gguf files from sources like Hugging Face, and they can be colossal. There’s a process you’ll hear mentioned a lot in the AI world called quantisation, which compresses models to varying degrees. Generally speaking, the heavier the compression, the worse the model performs. But even the most highly compressed small models can weigh in at 2gb and above. After downloading them, these mammoth blobs then load into memory, ready to be prompted. That’s a lot of data for your system to be hanging onto!
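The sizes themselves are no mystery, by the way – a GGUF is essentially parameter count times bits per weight, plus a little overhead. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope GGUF sizing: parameters x bits per weight, eight bits
# to the byte. Real files come out a touch larger once metadata, tokenizer
# data and mixed-precision layers are added.
def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8

for bits in (16, 8, 4):
    print(f"4B params @ {bits:>2}-bit ≈ {approx_size_gb(4, bits):.1f} GB")

# 4B params @ 16-bit ≈ 8.0 GB
# 4B params @  8-bit ≈ 4.0 GB  (cf. the 4.28gb Qwen3 build listed below)
# 4B params @  4-bit ≈ 2.0 GB
```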

That said, with disk space, a good internet connection, and decent RAM, it’s quite doable. On a newish MacBook, I was comfortably downloading and running .gguf files of 8gb and above in LMStudio. And you don’t need to downgrade your expectations too much to run models in PocketPal, either.

For reference, I’m using a 2023 iPad Pro with the M2 chip – quite a modest spec now – and a 2024 iPhone 16. On both of them, the sweet spot seems to be a .gguf size of around 4gb – you can go larger, but there’s a noticeable slowdown and sluggishness beyond that. A couple of the models I’ve been getting good, sensible and usable results from on mobile recently are:

  • Qwen3-4b-Instruct (8-bit quantised version) – 4.28gb
  • Llama-3.2-3B-Instruct (6-bit quantised version) – 3.26gb

The ‘instruct’ in those model names refers to the fact that they’ve been trained to follow instructions particularly keenly – one of the reasons they give such decent practical prompt responses with a small footprint.

Optimising PocketPal

Once you have them downloaded, there are a couple of things you can tweak in PocketPal to eke out even more performance.

The first is to head to the settings and switch on Metal, Apple’s hardware-accelerated API. Then, increase the “Layers on GPU” setting to around 80 or so – you can experiment with this to see what your system is happy with. But the performance improvement should be instantaneous, the LLM spitting out tokens at multiple times the default speed.

What’s happening with this change is that iOS is shifting some of the processing from the device’s CPU to the GPU, or graphical processing unit. That may seem odd, but modern graphics chips are capable of intense mathematical operations, and this small switch recruits them into doing some of the heavy work.

Additionally, on some recent devices, switching on “Flash Attention” can bring extra performance enhancements. This interacts with the way LLMs track how much weight to give certain tokens, and how that matrix is stored in memory during generation. It’s pot luck whether it will make a difference, depending on device spec, but I see a little boost.
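Incidentally, the same knobs exist outside PocketPal. If you ever script against a GGUF with llama-cpp-python, they appear as constructor parameters – a sketch, with a hypothetical file name, and noting that flash attention support varies by build and model:

```python
# The PocketPal performance knobs as llama-cpp-python constructor
# parameters. The file name is hypothetical; whether flash attention helps
# (or is even supported) varies by build and model, so experiment - just
# as with PocketPal's sliders.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-3b-instruct.Q6_K.gguf",  # hypothetical local file
    n_gpu_layers=80,   # offload most layers to the GPU (Metal on Apple kit)
    flash_attn=True,   # enable flash attention where supported
    n_ctx=4096,
)
```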

Tweaking PocketPal’s settings to run LLMs more efficiently

Making Pals – Your Own Custom Bots

When you’re all up and running with your PocketPal LLMs, there’s another great feature you can play with to get very domain-specific results – “Pal” creation. Pals are just system prompts – instructions that set the boundaries and parameters for the conversation – in a nice wrapper. And you can be as specific as you want with them, instructing the LLM to behave as a language learning assistant, a nutrition expert, a habits coach, and such like – with as many rules and output notes as you see fit. It’s an easy way to turn a very generalised tool into something focused and with real-world application.

So that’s my PocketPal in-a-nutshell power guide. I hope you can see why it’s worth much more than just a cursory mention at the end of last week’s post! Tools like PocketPal and LMStudio put you right at the centre of LLM development, and I must admit it’s turned me into a models geek – I’m already looking forward to what new open LLMs will be unleashed next.

So what have you set your mobile models doing? Please share your tips and experiences in the comments!

Small LLMs

LLMs on Your Laptop

I mentioned last week that I’m spending a lot of time with LLMs recently. I’m poking and prodding them to test their ‘understanding’ (inverted commas necessary there!) of phonology, in particular with non-standard speech and dialects.

And you’d be forgiven for thinking I’m just tapping my prompts into ChatGPT, Claude, Gemini or the other big commercial concerns. Mention AI, and those are the names people come up with. They’re the all-bells-and-whistles web-facing services that get all the public fanfare and newspaper column inches.

The thing is, that’s not all there is to Large Language Models. There’s a whole world of open source (or the slightly less open ‘open weights’) models out there. Some of them are offshoots of those big names, while others are less well-known. But you can download all of them to run offline on any reasonably-specced laptop.

LMStudio – LLMs on your laptop

Meet LMStudio – the multi-platform desktop app that allows you to install and interrogate LLMs locally. It all sounds terribly technical, but at its most basic – as a custom chatbot – it doesn’t need any special tech skills. Browsing, installing and chatting with models is all done via the tab-based interface. You can do much more with it – the option to run it as a local server is super useful for development and testing – but you don’t have to touch any of that.

Many of the models downloadable within LMStudio are small models – just a few gigabytes, rather than the behemoths behind GPT-5 and other headline-grabbing releases. They feature the same architecture as those big-hitters, though. And in many cases, they are trained to approach, even match, their performance on specific tasks like problem-solving or programming. You’ll even find reasoning models, that produce a ‘stepwise-thinking’ output, similar to platforms like Gemini.

A few recent models for download include:

  • Qwen3 4B Thinking – a really compact model (just over 2gb) which supports reasoning by default
  • OpenAI’s gpt-oss-20b – the AI giant’s open weights offering, released this August
  • Gemma 3 – Google’s multimodal model optimised for use on everyday devices
  • Mistral Small 3.2 – the French AI company’s open model, with vision capabilities

So why would you bother, when you can just fire up ChatGPT / Google / Claude in a few browser clicks?

LLMs locally – but why?

Well, from an academic standpoint, you have complete control over these models if you’re exploring their use cases in a particular field, like linguistics or language learning. You can set parameters like temperature, for instance – the degree of ‘creativity wobble’ the LLM has (0 being a very rigid none, and 1 being, well, basically insane). And if you can set parameters, you can report these in your findings, which allows others to replicate your experiments and build on your knowledge.
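LMStudio’s local server mode makes that control concrete: it exposes an OpenAI-compatible endpoint (port 1234 by default), so your parameter choices live in the script, ready to be reported. A minimal sketch – the model identifier is whatever LMStudio shows for the model you’ve loaded:

```python
# Minimal sketch: querying LMStudio's local server (OpenAI-compatible, port
# 1234 by default) with pinned, reportable parameters. The model identifier
# is whatever LMStudio lists for your loaded model; the API key is ignored
# locally, so any placeholder string will do.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="qwen3-4b-thinking",  # example identifier
    temperature=0.0,            # zero 'creativity wobble' - and reportable
    messages=[
        {"role": "system", "content": "You are a concise linguistics assistant."},
        {"role": "user", "content": "Gloss the Norwegian sentence 'Æ veit itj'."},
    ],
)

print(resp.choices[0].message.content)
```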

Small models also run on smaller hardware – so you can develop solutions that people don’t need a huge data centre for. If you do hit upon a use case or process that supports researchers, then it’s super easy for colleagues to access the technology, whatever their level of funding support.

Secondly, there’s the environmental impact. If the resource greed of colossal data centres is something that worries you (and there’s every indication that it should be a conversation we’re all having), then running LLMs locally allows you to take advantage of them without heating up a server farm somewhere deep inside the US. The only thing running hot will be your laptop fan (it does growl a bit with the larger models – I take that as a sign to give it a rest for a bit!).

And talk of those US server farms leads on to the next point: data privacy. OpenAI recently caused waves with their suggestion that user conversations are not the confidential chats many assume them to be. If you’re not happy with your prompts and queries passing out of your control and into the data banks of a foreign state, then local LLMs offer not a little peace of mind too.

Give it a go!

The best thing? LMStudio is completely free. So download it, give it a spin, and see whether these much smaller-footprint models can give you what you need without entering the ecosystem of the online giants.

Lastly, don’t have a laptop? Well, you can also run LLMs locally on phones and tablets too. Free app PocketPal (on iOS and Android) runs like a cut-down version of LMStudio. Great for tinkering on the go!

A swirl of IPA symbols in the ether. Do LLMs 'understand' phonology? And are they any good at translation?

Do LLMs have phonological ‘understanding’?

LLMs are everywhere just now. And as statistical word-crunchers, these large language models seem a tantalisingly good fit for linguistics work.

And, where there’s new tech, there’s new research: one of the big questions floating around in linguistics circles right now is whether large language models (LLMs) “understand” language systems in any meaningful way – at least any way that can be useful to research linguists.

LLMs doing the donkey work?

One truly exciting potential avenue is the use of LLMs to do the heavy lifting of massive corpus annotation. Language corpora can be huge – billions of words in some cases. And to be usefully searchable, those words have to be tagged with some kind of category information. For years, we’ve had logic-based Natural Language Processing (NLP) tech to do this, and for perhaps the most block-wise faculty of language – syntax – it’s done a generally grand, unthinking job.

But LLMs go one step beyond this. Not only do they demonstrate (or simulate) a more creative manipulation of language; they have now begun to incorporate ‘thinking’ too. Many recent models, such as the hot-off-the-press GPT-5, are already well along the production line of a new generation of high-reasoning LLMs. These skills are making them useful in other fields of linguistics, beyond syntax – fields where things like sentiment and intention come into play. Pragmatics is one area that has been a great fit, with one study into LLM tagging showing promising results.

The sounds behind the tokens

As for phonology, the linguistic field that deals with our mental representations of sound systems, the answer is a little more complicated.

On the one hand, LLMs are completely text-based. They don’t hear or produce sounds – they’re pattern matchers for strings of tokens – bits of words. But because written language does encode sound–meaning correspondences, they end up with a kind of latent ability to spot phonological patterns indirectly. For example, ask an LLM to generate rhyming words, or to apply a regular sound alternation like plural –s in English, and it usually does a decent job. In fact, one focus of a recent study was rhyming, and it found that, with some training, LLMs can approach a pretty humanlike level of rhyme generation.

On one level, that’s intuitive – it’s because orthography tends (largely) to reflect underlying phonotactics and morphophonology. Also, the sheer volume of data helps the model make the right generalisations – in those billions of pages of crunched training data, there are bound to be examples of the link. Where it gets shakier is with non-standard spellings, dialect writing, or novel words. Without clear orthographic cues, the model struggles to “hear” the system. You might see it overgeneralise, or miss distinctions that are obvious to a native speaker. In other words, it mimics phonological competence through a text-based proxy, but it doesn’t have one.

It’s that ‘shakier’ competence I’m exploring in my own research right now. How easy is it to coax an understanding of non-standard phonology from an out-of-the-box LLM? Careful priming is key – finding wily ways to steer that mysterious ‘reasoning’ newer models use.

Rough-edged tools that need honing

So, do LLMs have phonological understanding?

Well, not in the sense of a human speaker with an embodied grammar. But what they do have is an uncanny knack for inferring patterns from writing, a kind of orthography-mediated phonology.

That makes them rough tools starting out, but potentially powerful assistants: not replacements for the linguist’s ear and analysis, but tools that can highlight patterns, make generalisations we might otherwise miss, and help us sift through mountains of messy data.