ElevenLabs Hits the Right Note: A.I. Songwriting for Language Learners

In case you missed it, A.I. text-to-speech leader ElevenLabs is the latest platform to join the generative music scene – so language learners and teachers have another choice for creating original learning songs.

ElevenLabs’ Creative Platform ElevenMusic takes a much more structured approach to music creation than other platforms I’ve tried. Enter your prompt (or full lyrics), and it will build a song from block components – verse, chorus, bridge – just as you might construct one as a human writer. It makes for a much more natural-sounding track.

ElevenLabs music creation

As you’d expect from voice experts ElevenLabs, the service copes with a wide range of languages and the diction is very convincing. A tad more so, I think, than the current iteration of the first big name on the block, Suno AI. No doubt the latter will have some tricks up its sleeve to keep up the pace – but for now, ElevenLabs is the place to go for quick and catchy learning songs.

Anyway, here’s one I made earlier – a rather natty French rock and roll song about the Moon landings. Get those blue suede Moon boots on!

It’s definitely worth having a play on the site to see what you can come up with for you or your classes. ElevenLabs has a free tier, of course, so you can try it out straight away. [Note: that’s my wee affiliate link, so if you do sign up and hop on a higher tier later, you’re helping keep Polyglossic going!]

Generative Images Locally : Running Models on Your Machine

I’ve written a fair bit about language models of late. This is a language blog, after all! But creating resources is about visual elements, too. And just as you can run off text from local generative AI, you can create images locally, too.

For working on a computer, ComfyUI is a good bet. It’s a graphical dashboard for creating AI art with a huge array of customisation options. All those features, admittedly, make it a complex first introduction to image generation. Its interface, which takes a pipeline / modular format, takes a bit of getting used to. But it also comes with pre-defined workflows that mean you can just open it, prompt and go. There’s also a wide, active community that supports it online, so there’s plenty of help available.

Generate images locally – the ComfyUI interface

At the more user-friendly end of it is Draw Things for Apple machines (unfortunately no Android yet). With a user interface much closer to art packages you’ll recognise, Draw Things allows you to download different models and prompt locally – and is available as an iOS app too. Obviously there’s a lot going on when you generate images, so it slugs along at quite a modest trot on my two-year-old iPad. But it gives you so much access to the buttons and knobs to tweak that it’s a great way to learn more about the generation process. Like ComfyUI, its complexity – once you get your head round it – actually teaches you a lot about image generation.

Of all the benefits of these apps, perhaps the greatest is again the environmental one. You could fire up a browser and prompt one of the behemoths. But why crank up the heat on a distant data centre machine, when you can run locally? Many commercial generative models are far too powerful for what most people need.

Save power, and prompt locally. It’s more fun!

Tencent’s Hunyuan-MT-7B, the Translation Whizz You Can Run Locally

There’s been a lot of talk this week about a brand new translation model, Tencent’s Hunyuan-MT-7B. It’s a Large Language Model (LLM) trained to perform machine translation. And it’s caused a big stir by beating heftier (and heavier) models by Google and OpenAI in a recent event.

This is all the more remarkable given that it’s really quite a small model by LLM standards. Hunyuan actually manages its translation-beating feat packed into just 7 billion parameters (the learned weights a model tunes during training). Now that might sound a lot. But fewer usually means weaker, and the behemoths have already passed the trillion-parameter mark.

So Hunyuan is small. But in spite of that, it can translate accurately and reliably – market-leader beatingly so – between over 30 languages, including some low-resource ones like Tibetan and Kazakh. And its footprint is truly tiny in LLM terms – it’s lightweight enough to run locally on a computer or even tablet, using inference software like LMStudio or PocketPal.

The model is available in various GGUF formats at Hugging Face. The 4-bit quantised version comes in at just over 4 GB, making it iPad-runnable. If you want greater fidelity, then 8-bit quantised is still only around 8 GB, easily handleable in LMStudio with a decent laptop spec.
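
Those sizes square with a simple back-of-envelope rule: a quantised model’s file weighs in at roughly parameter count times bits per weight, plus a little extra for metadata and the layers quantisers typically keep at higher precision. Here’s a rough sketch – the overhead factor is my own guess, not a GGUF constant:

```python
def approx_gguf_size_gb(n_params, bits_per_weight, overhead=1.15):
    # n_params weights at bits_per_weight bits each, converted to gigabytes;
    # the overhead factor (an assumption here) covers metadata and any
    # layers the quantiser keeps at higher precision
    size_bytes = n_params * bits_per_weight / 8 * overhead
    return size_bytes / 1e9
```

Plugging in 7 billion parameters at 4 bits gives just over 4 GB, and at 8 bits just over 8 GB – in line with the quantised downloads above.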

So is it any good?

Well, I ran a few deliberately tricky English to German tasks through it, trying to find a weak spot. And honestly, it’s excellent – it produces idiomatic, native-quality translations that don’t sound clunky. What I found particularly impressive was its ability to paraphrase where a literal translation wouldn’t work.

There are plenty of use cases, even if you’re not looking for a translation engine for a full-blown app. Pocketising it means you have a top-notch multi-language translator to use offline, anywhere. For language learners – particularly those struggling with the lower-resource languages the model can handle with ease – it’s another source of native-quality text to learn from.

Find out more about the model at Hugging Face, and check out last week’s post for details on loading it onto your device!

Ultra-Mobile LLMs : Getting the Most from PocketPal

If you were following along last week, I was deep into the territory of running open, small-scale Large Language Models (LLMs) locally on a laptop in the free LMStudio environment. There are lots of reasons you’d want to run these mini chatbots, including the educational, environmental, and security aspects.

I finished off with a very cursory mention of an even more mobile vehicle for these, PocketPal. This free, open source app (available on Android and iOS) allows for easy (no computer science degree required) searching, downloading and running of LLMs on smartphones and tablets. And, despite the resource limitations of mobile devices compared with full computer hardware, they run surprisingly well.

PocketPal is such a powerful and unique tool, and definitely worth a spotlight of its own. So, this week, I thought I’d share some tips and tricks I’ve found for smooth running of these language models in your pocket.

Full-Fat LLMs?

First off, even small, compact models can be (as you’d expect) unwieldy and resource-heavy files. Compressed, self-contained LLM models are available as .gguf files from sources like Hugging Face, and they can be colossal. There’s a process you’ll hear mentioned a lot in the AI world called quantisation, which compresses models to varying degrees. Generally speaking, the more compression, the worse the model performs. But even the most highly compressed small models can weigh in at 2gb and above. After downloading them, these mammoth blobs then load into memory, ready to be prompted. That’s a lot of data for your system to be hanging onto!

That said, with disk space, a good internet connection, and decent RAM, it’s quite doable. On a newish MacBook, I was comfortably downloading and running .gguf files of 8gb and above in LMStudio. And you don’t need to downgrade your expectations too much to run models in PocketPal, either.

For reference, I’m using a 2023 iPad Pro with the M2 chip – quite a modest spec now – and a 2024 iPhone 16. On both of them, the sweet spot seems to be a .gguf size of around 4gb – you can go larger, but there’s a noticeable slowdown and sluggishness beyond that. A couple of the models I’ve been getting good, sensible and usable results from on mobile recently are:

  • Qwen3-4b-Instruct (8-bit quantised version) – 4.28gb
  • Llama-3.2-3B-Instruct (6-bit quantised version) – 3.26gb

The ‘instruct’ in those model names refers to the fact that they’ve been trained to follow instructions particularly keenly – one of the reasons they give such decent practical prompt responses with a small footprint.

Optimising PocketPal

Once you have them downloaded, there are a couple of things you can tweak in PocketPal to eke out even more performance.

The first is to head to the settings and switch on Metal, Apple’s hardware-accelerated API. Then, increase the “Layers on GPU” setting to around 80 or so – you can experiment with this to see what your system is happy with. But the performance improvement should be instantaneous, the LLM spitting out tokens at multiple times the default speed.

What’s happening with this change is that iOS is shifting some of the processing from the device’s CPU to the GPU, or graphical processing unit. That may seem odd, but modern graphics chips are capable of intense mathematical operations, and this small switch recruits them into doing some of the heavy work.

Additionally, on some recent devices, switching on “Flash Attention” can bring extra performance enhancements. This interacts with the way LLMs track how much weight to give certain tokens, and how that matrix is stored in memory during generation. It’s pot luck whether it will make a difference, depending on device spec, but I see a little boost on mine.

Tweaking PocketPal’s settings to run LLMs more efficiently

Making Pals – Your Own Custom Bots

When you’re all up and running with your PocketPal LLMs, there’s another great feature you can play with to get very domain-specific results – “Pal” creation. Pals are just system prompts – instructions that set the boundaries and parameters for the conversation – in a nice wrapper. And you can be as specific as you want with them, instructing the LLM to behave as a language learning assistant, a nutrition expert, a habits coach, and such like – with as many rules and output notes as you see fit. It’s an easy way to turn a very generalised tool into something focused and with real-world application.
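
Since a Pal boils down to a system prompt in a wrapper, you can mimic the idea in any chat-style message format. Here’s a minimal sketch – the function names and message layout are illustrative, not PocketPal’s actual internals:

```python
def make_pal(persona, rules):
    # a 'Pal' is essentially a reusable system prompt: a persona
    # plus the ground rules that frame every conversation
    system_prompt = persona + "\n" + "\n".join(f"- {rule}" for rule in rules)
    return [{"role": "system", "content": system_prompt}]

def start_chat(pal, user_text):
    # each conversation opens with the Pal's fixed system message,
    # followed by the user's first turn
    return pal + [{"role": "user", "content": user_text}]
```

The more specific the persona and rules, the more focused the generalised model becomes – exactly the trick Pals pull off in their friendly wrapper.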

So that’s my PocketPal in-a-nutshell power guide. I hope you can see why it’s worth much more than just a cursory mention at the end of last week’s post! Tools like PocketPal and LMStudio put you right at the centre of LLM development, and I must admit it’s turned me into a models geek – I’m already looking forward to what new open LLMs will be unleashed next.

So what have you set your mobile models doing? Please share your tips and experiences in the comments!

Small LLMs

LLMs on Your Laptop

I mentioned last week that I’m spending a lot of time with LLMs recently. I’m poking and prodding them to test their ‘understanding’ (inverted commas necessary there!) of phonology, in particular with non-standard speech and dialects.

And you’d be forgiven for thinking I’m just tapping my prompts into ChatGPT, Claude, Gemini or the other big commercial concerns. Mention AI, and those are the names people come up with. They’re the all-bells-and-whistles web-facing services that get all the public fanfare and newspaper column inches.

The thing is, that’s not all there is to Large Language Models. There’s a whole world of open source (or the slightly less open ‘open weights’) models out there. Some of them are offshoots of those big names, while others are less well-known. But you can download all of them to run offline on any reasonably-specced laptop.

LMStudio – LLMs on your laptop

Meet LMStudio – the multi-platform desktop app that allows you to install and interrogate LLMs locally. It all sounds terribly technical, but at its most basic use – a custom chatbot – you don’t need any special tech skills. Browsing, installing and chatting with models is all done via the tab-based interface. You can do much more with it – the option to run it as a local server is super useful for development and testing – but you don’t have to touch any of that.
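
That local server mode speaks the same chat-completions format as the OpenAI API, served by default at http://localhost:1234 – so any standard client code can talk to your laptop-bound model. A minimal sketch of building such a request (the model name below is a placeholder for whatever you have loaded):

```python
import json
import urllib.request

# LMStudio's local server exposes an OpenAI-compatible endpoint,
# by default at http://localhost:1234/v1 (configurable in the app)
def build_chat_request(prompt, model="local-model", temperature=0.7):
    payload = {
        "model": model,  # placeholder - use your loaded model's identifier
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    return urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# To actually send it (with the server running in LMStudio):
# with urllib.request.urlopen(build_chat_request("Hei! Hvordan går det?")) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```

The nice part is that any tooling written against the big commercial APIs can usually be pointed at this local address with a one-line change.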

Many of the models downloadable within LMStudio are small models – just a few gigabytes, rather than the behemoths behind GPT-5 and other headline-grabbing releases. They feature the same architecture as those big-hitters, though. And in many cases, they are trained to approach, even match, their performance on specific tasks like problem-solving or programming. You’ll even find reasoning models that produce a ‘stepwise-thinking’ output, similar to platforms like Gemini.

A few recent models for download include:

  • Qwen3 4B Thinking – a really compact model (just over 2gb) which supports reasoning by default
  • OpenAI’s gpt-oss-20b – the AI giant’s open weights offering, released this August
  • Gemma 3 – Google’s multimodal model optimised for use on everyday devices
  • Mistral Small 3.2 – the French AI company’s open model, with vision capabilities

So why would you bother, when you can just fire up ChatGPT / Google / Claude in a few browser clicks?

LLMs locally – but why?

Well, from an academic standpoint, you have complete control over these models if you’re exploring their use cases in a particular field, like linguistics or language learning. You can set parameters like temperature, for instance – the degree of ‘creativity wobble’ the LLM has (0 being a very rigid none, and 1 being, well, basically insane). And if you can set parameters, you can report these in your findings, which allows others to replicate your experiments and build on your knowledge.
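
Under the hood, temperature rescales the model’s raw scores (logits) before they’re turned into a probability distribution over next tokens – low values sharpen the distribution towards the single most likely token, high values flatten it out. A minimal sketch of the mechanism:

```python
import math

def softmax_with_temperature(logits, temperature):
    # divide the logits by the temperature before the softmax;
    # T < 1 sharpens the distribution, T > 1 flattens it
    # (T = 0 is handled as a plain argmax in real samplers,
    # since dividing by zero is undefined)
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

At a temperature of 0.2, the top token hoovers up nearly all the probability mass; at 1.5, the alternatives stay live – which is exactly that ‘creativity wobble’ in action.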

Small models also run on smaller hardware – so you can develop solutions that don’t need a huge data centre behind them. If you do hit upon a use case or process that supports researchers, then it’s super easy for colleagues to access the technology, whatever their access to funding.

Then there’s the environmental impact. If the resource greed of colossal data centres is something that worries you (and there’s every indication that it should be a conversation we’re all having), then running LLMs locally allows you to take advantage of them without heating up a server farm somewhere deep inside the US. The only thing running hot will be your laptop fan (it does growl a bit with the larger models – I take that as a sign to give it a rest for a bit!).

And talk of those US server farms leads on to the next point: data privacy. OpenAI recently caused waves with their suggestion that user conversations are not the confidential chats many assume them to be. If you’re not happy with your prompts and queries passing out of your control and into the data banks of a foreign state, then local LLMs offer not a little peace of mind too.

Give it a go!

The best thing? LMStudio is completely free. So download it, give it a spin, and see whether these much smaller-footprint models can give you what you need without entering the ecosystem of the online giants.

Lastly, don’t have a laptop? Well, you can also run LLMs locally on phones and tablets too. Free app PocketPal (on iOS and Android) runs like a cut-down version of LMStudio. Great for tinkering on the go!

A swirl of IPA symbols in the ether. Do LLMs 'understand' phonology? And are they any good at translation?

Do LLMs have phonological ‘understanding’?

LLMs are everywhere just now. And as statistical word-crunchers, these large language models seem a tantalisingly good fit for linguistics work.

And, where there’s new tech, there’s new research: one of the big questions floating around in linguistics circles right now is whether large language models (LLMs) “understand” language systems in any meaningful way – at least any way that can be useful to research linguists.

LLMs doing the donkey work?

One truly exciting potential avenue is the use of LLMs to do the heavy lifting of massive corpus annotation. Language corpora can be huge – billions of words in some cases. And to be usefully searchable, those words have to be tagged with some kind of category information. For years, we’ve had logic-based Natural Language Processing (NLP) tech to do this, and for perhaps the most block-wise faculty of language – syntax – it’s done a generally grand, unthinking job.

But LLMs go one step beyond this. They not only demonstrate (or simulate) a more creative manipulation of language; they have begun to incorporate ‘thinking’ too. Many recent models, such as the hot-off-the-press GPT-5, are already well along the production line of a new generation of high-reasoning LLMs. These skills are making them useful in other fields of linguistics, beyond syntax – fields where things like sentiment and intention come into play. Pragmatics is one area that has been a great fit, with one study into LLM tagging showing promising results.

The sounds behind the tokens

As for phonology, the linguistic field that deals with our mental representations of sound systems, the answer is a little more complicated.

On the one hand, LLMs are completely text-based. They don’t hear or produce sounds – they’re pattern matchers for strings of tokens – bits of words. But because written language does encode sound–meaning correspondences, they end up with a kind of latent ability to spot phonological patterns indirectly. For example, ask an LLM to generate rhyming words, or to apply a regular sound alternation like plural –s in English, and it usually does a decent job. In fact, one focus of a recent study was rhyming, and it found that, with some training, LLMs can approach a pretty humanlike level of rhyme generation.

On one level, that’s intuitive – it’s because orthography tends (largely) to reflect underlying phonotactics and morphophonology. Also, the sheer volume of data helps the model make the right generalisations – in those billions of pages of crunched training data, there are bound to be examples of the link. Where it gets shakier is with non-standard spellings, dialect writing, or novel words. Without clear orthographic cues, the model struggles to “hear” the system. You might see it overgeneralise, or miss distinctions that are obvious to a native speaker. In other words, it mimics phonological competence through a text-based proxy, but it doesn’t have one.

It’s that ‘shakier’ competence I’m exploring in my own research right now. How easy is it to coax an understanding of non-standard phonology from an out-of-the-box LLM? Priming is key – finding wily ways to harness that mysterious ‘reasoning’ new models use.

Rough-edged tools that need honing

So, do LLMs have phonological understanding?

Well, not in the sense of a human speaker with an embodied grammar. But what they do have is an uncanny knack for inferring patterns from writing, a kind of orthography-mediated phonology.

That makes them rough tools starting out, but potentially powerful assistants: not replacements for the linguist’s ear and analysis, but tools that can highlight patterns, make generalisations we might otherwise miss, and help us sift through mountains of messy data.

A finch flying above a beautiful landscape

Finch : Tiny Bird, Big Habits [Review]

When I first saw a Finch ad on Instagram, I confess, I rolled my eyes. Yet another quirky productivity app wrapped up as a kid’s game and pitched to grown‑ups, I thought. Isn’t Insta awash with them lately? But curiosity won the day, and I’m honestly quite glad it did.

As you’ve probably guessed, Finch turns your self‑care and habit building into a gentle, gamified ritual – with a little birdie companion. It might seem a touch infantile, but don’t be fooled: its foundation rests on solid habit‑science, and yes, adults do love things that are fluffy and cute (well, I do anyway).

Finch is generous too – its free version offers custom goals, journaling, mood tracking and more, without forcing you to pay to access the essentials. I haven’t paid a penny to use it yet, but the range of function on the free tier has been more than enough to keep me using it.

Why It Works

  • It starts small. When you first set it up, the suggested goals are self-care easy wins – drink more water, get outside at least once a day – things to get you used to the app environment. Want to journal, stretch, or simply “get out of bed”? Go ahead and make it count.
  • Flexible goal‑setting for grown‑ups. Once you’ve got used to the interface, you can go to town setting your own goals – even on the free tier. I’ve added language learning daily tactics, university reading goals and all sorts – I almost feel guilty that I’m doing all this without a subscription!
  • Gentle gamification. As you check off goals, your bird gains energy, goes on charming adventures, and earns “Rainbow Stones” for adorning its nest. It’s rewarding without being punishing. And while streak-building is part of the ecosystem, your Finch never dies if you miss a day (God forbid).
  • Supportive, not prescriptive. Other users highlight how the app strikes the right tone: compassionate rather than preachy. Some users with ADHD, anxiety or depression say its warmth makes self‑care feel doable.
  • Friend‑based encouragement. You can buddy up with a friend on a single goal (or more) without exposing your progress to a social feed. It’s discreet, pressure‑free support. For a laugh, I added a pal on the “drink more water” goal. We laugh about it, but it’s actually not a bad habit to develop, is it?

Final Verdict

Finch is a cosy, surprisingly effective habit app wrapped in feathers and whimsy. It’s kind to energy-drained minds, flexible enough for real lives, and – despite coming at me via the dreaded Insta ads – far more than a passing gimmick.

If you’ve ever felt wary of habit tools that feel too serious or demanding, Finch might just surprise you. And if nothing else, the little bird and its gentle cheer-on can make daily tasks feel a bit more doable – and dare I say it, sweet.

Finch is available as a free download on all the usual platforms –
find out more at their website here!

#EdFringe 2025 - an illustration of a vibrant street full of performers

#EdFringe for Language Lovers : 2025 Edition

It’s that time again when we all like to moan about how flippin’ busy the Edinburgh streets are. Yes, #EdFringe is here! But along with the inevitable tourist surge, there’s international comedy and entertainment of all shapes and sizes. And, of course, that means there are a few gems that will light up the language lovers.

So what treats does the 2025 edition have in store for us? Quite a bit, it turns out. Here are my picks for this year, taking in French, German, Spanish, and … Norwegian!

PIAF AND BREL: THE IMPOSSIBLE CONCERT (MELANIE GALL)

I’ve often said you can’t walk a yard during festival time without seeing a poster for a Piaf tribute. This year there are three, but writer, singer and music historian Melanie – a familiar face from previous Fringe years – blends in Brel too, and is amongst the best. The music takes centre stage, of course, but her storytelling is excellent.

After you’ve ticked that one off, check out Piaf Revisited and C’est moi as well! 

FRENCH MÉLODIE AND GERMAN LIEDER

A double whammy here – a quartet of musicians present a lunchtime treat of romantic song in French and German. It’s only on Monday 11th August, so be quick if you want to catch it!

SERGI POLO: SPANISH WORK IN PROGRESS

It’s not often you get a whole standup set in a non-English language at #EdFringe, but here we are (hoorah). Sergi Polo has brought his show to Edinburgh in both Spanish and English, although the Spanish set is for one night only (13th August).

FELI Y LOS MALOS

Spanish-language funky blues is the order of the day with this Latin-pop quartet led by Colombian-American Felipe Schrieberg. Their last #EdFringe gig is Wednesday 13th, though, so be quick to catch them before they head home!

COPLA : A SPANISH CABARET

The history of Spanish cabaret is intertwined with the queer migrant experience in this moving, dramatic show at George Square. If you want a Spanish show that is on all month (great if you’re occasionally slow to get your ticketing act together, ahem), then this is a great choice.

Achtung! The Superkrauts are Coming!

If it’s winsome takes on German culture you want, then look no further than this duo! Blending music and not a little absurdist comedy, this Bavarian-Rheinland mix should get the laughs going. And if you liked that, you can catch one half of them, Jürgen, in his own standup set too.

LEO MAHR IS A SEASONED *****

It’s a great year for LGBTQIA+ comedy this year, and this is a cheeky one, but I couldn’t resist. Queer-coded, after-hours, Swiss German shenanigans are what’s on offer here, and the best thing about it? It’s free, playing every day at venue #82, the Laughing Horse (actually the iconic City Café) from the 12th to the 25th.

THOR STENHAUG : ONE-NIGHT STAND BABY

Fringe programmes are driven more and more by social media breakouts in recent years, and 2025’s listings are full of them. Norway’s Thor Stenhaug is one, having built a loyal base on TikTok with a set that puts a quirky spin on a Norwegian’s experience of UK life. Obviously I couldn’t not add a Norwegian-themed act, could I?

So there you go – a clutch of fun shows to take you around the world (well, mainly Europe, but there’s a bit of the RoW in there too!). What have I missed? What have you seen that is unmissable? I’d love more recommendations – please share yours in the comments!

Summer language learning - a book on the grass.

My Language Learning Life : July 2025 Update

So the summer hols are here – and what better time to take stock of my own polyglot progress? July’s been solid – not life-changing, but the kind of steady language learning momentum that actually gets you places over time.

Here’s where things stand.

Greek: From Textbooks to TikTok

Greek continues to be my most active language learning project right now. I’m keeping up weekly iTalki sessions with my usual tutor, grinding through Τα λέμε Ελληνικά – a B1-B2 course that’s about as exciting as it sounds but gets the job done. Grammar drills aren’t everybody’s cup of tea (well – they are mine, actually), but they work.

The real fun’s been on social media. @greekoutwithmaria is gold – idiomatic, useful Greek with clear explanations. I’ve compiled a whole list of other useful Greek accounts here if you want more where that came from!

To not get lost in the scroll, I dip in occasionally and bookmark stuff as I go. Then, I make sure to have a weekly session where I actually do something with it – vocab decks, Anki cards, and the like. It’s a system that’s added some real conversational polish to my Greek.

German: Going Old School Again

I’ve been gravitating back to actual books to maintain my German lately. There’s something about physical pages that screens can’t replicate – maybe it’s the weight, maybe it’s not getting distracted by notifications every five minutes.

I threw myself at two very different reads this month. First up is Torsten Sträter’s Es ist nie zu spät, unpünktlich zu sein, which serves up observational comedy that’s heavy on dad jokes but light on mental effort. It makes perfect train reading when your brain’s already fried from the day. Then there’s Hermann Hesse’s Siddhartha, which I’m finally tackling after seeing it on every German language and literature syllabus for years. And it’s a thoroughly readable classic – there’s something very soothing about it as an adventure into the soul.

A little light Readly

Readly, the multi-magazine app, still gets plenty of action on long journeys. I’ve been reading Men’s Health Germany and Sweden’s Språktidningen (pop linguistics in Swedish – a real treat) regularly. Saying that, the platform recently axed most of its Norwegian titles thanks to shifting licensing deals, which is annoying.

As for target language reading of any kind, the golden rule applies: read what you’d actually want to read, just in another language.

Life’s too short for boring books in any tongue.

Podcasts: When Your Day Job Meets Your Hobby

I’ve started listening to Der KI Podcast, which covers AI developments in chatty, accessible German. It’s the perfect overlap with both my work and PhD research, so it basically counts as multitasking disguised as language practice. That’s really the sweet spot we’re always looking for: finding content that ticks multiple boxes, rather than forcing language learning into spaces where it doesn’t naturally belong.

Side Quests: Persian, Albanian, and Library Rabbit Holes

Joy of joys – my university library recently added the entire Routledge Colloquial series digitally, which has proven dangerous territory for someone with my particular brand of linguistic OCD. My latest obsession has been Persian, which I’ve been exploring through both the recently updated Routledge title and an ancient Teach Yourself Persian volume that’s pure grammar-translation throwback. You can sense the layers of metaphorical dust on it, but I genuinely love the methodical approach of disassembling languages during the learning process to see how they tick.

Albanian also got a brief look-in after Dua Lipa’s Wembley extravaganza sent me down a cultural rabbit hole. Yes, continuing that trend of letting pop culture determine my dabbling directions. I don’t have any grand plans with it, just some structured curiosity that might lead somewhere (or probably not).

Trips: Lyon and Dublin in Linguistic Technicolor

I took two quick city breaks this month to Lyon and Dublin, which meant the usual soundtrack of overheard conversations and multilingual signage. Nothing was particularly structured – just casual linguistic tourism really. It was great to be the designated restaurant orderer in France, though – that feeling of achievement and usefulness we linguaphiles yearn for!

The Verdict

So that was the past month: steady progress rather than dramatic breakthroughs. Greek keeps moving forward, German feels natural and flowing, podcasts are doing their job, and my side projects are staying appropriately peripheral (but very interesting).

It might not be Instagram-worthy content, but it’s sustainable, and that matters more in the long run than any flashy sprint.

How was your language learning month? Let us know in the comments!

The Greek flag flying in a sunny sky

Greek participles – meet the -μένος gang!

There’s a class of words in Modern Greek that are derived from verbs but not used to form tenses – they’re purely adjectival. I’ve written about them in the past, in terms of how they contrast with another class of adjectives, and knowing a bit more about them can really help polish your fluency.

It’s worth revisiting these as they’re so widespread. In fact, the Duolingo Greek course has a whole unit on them, which is why they’re suddenly on my own radar again! I’m talking about passive past participles – they describe something that has been done to someone or something.

Meet the -μένος gang

You can usually spot them by their characteristic -μένος ending. In fact, you’ve probably been using a couple without even knowing it:

κουρασμένος (tired)

απασχολημένος (busy)

These words are passive as they describe a state of having had something happen to you – something has tired you out, for example (even the English is a past participle here). For ‘busy’, a closer translation of απασχολημένος is ‘occupied’ – which is what has been ‘done’ to busy people!

These passive past participles are formed from the verbal root. And in most cases, they’re completely transparent, containing all the elements of that root:

κουράζω (I tire) > κουρασμένος (tired) (ζ and σ are a common alternation in Greek roots)

απασχολώ (I occupy) > απασχολημένος (occupied, busy)

A disappearing act – Greek assimilation

Sometimes, however, the connection is not so obvious. There’s a group of Greek verbs that have a root with -β- and -φ- where that element disappears from the participle:

κόβω (I cut) > κομμένος (cut)
κρύβω (I hide, tr.) > κρυμμένος (hidden)
ράβω (I sew) > ραμμένος (sewn)
βάφω (I paint) > βαμμένος (painted)
γράφω (I write) > γραμμένος (written)

What’s happened here is called assimilation – a case of one sound becoming more like another. Because the root consonant of these verbs is labial, ie., pronounced with the lips, it shares its place of articulation with the /m/ of the ending -μένος. For ease of pronunciation, one becomes even more like the other – and it’s that /m/ that wins out here, passing its properties backwards (so this is regressive assimilation rather than progressive, where the properties of an earlier segment move to a later one).
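
These alternations are regular enough to capture in a few lines of code. Here’s a toy sketch covering plain -ω verbs only – contracted -ώ verbs like απασχολώ take an extra -η- and would need rules of their own:

```python
import unicodedata

def deaccent(s):
    # decompose, drop combining accent marks, recompose
    nfd = unicodedata.normalize("NFD", s)
    bare = "".join(ch for ch in nfd if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", bare)

def menos_participle(verb):
    """Toy -μένος participle former for plain -ω verbs (illustrative only)."""
    if not verb.endswith("ω"):
        raise ValueError("only uncontracted -ω verbs are covered here")
    # the stress moves onto -μένος, so strip the accent from the stem
    stem = deaccent(verb[:-1])
    if stem.endswith("ζ"):
        stem = stem[:-1] + "σ"   # ζ ~ σ alternation: κουράζω → κουρασ-
    elif stem[-1] in "βφ":
        stem = stem[:-1] + "μ"   # regressive assimilation: β/φ → μ
    return stem + "μένος"
```

Run over the verbs above, it reproduces κουρασμένος, κομμένος, γραμμένος and friends – a nice reminder that these ‘irregularities’ are really just one tidy rule.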

There’s even a set of these participles that are formed additionally via reduplication – a doubling of syllables to express some category change (for instance, an imperfective / perfective distinction). Here are a couple:

δίνω (I give) > δεδομένος (given)
πείθω (I convince) > πεπεισμένος (convinced)

These forms are a particular treat for scholars of Indo-European, as reduplication is quite an ancient mechanism found in the proto-language, and not especially productive in modern-day Indo-European languages. Spotting it fossilised in forms like this can get historical linguists very excited.

Peeking under the bonnet of Greek grammar reveals just how deep some of these patterns run – and how much historical linguistics can supercharge your understanding and retention!