A swirl of IPA symbols in the ether.

Do LLMs have phonological ‘understanding’?

Large language models (LLMs) are everywhere just now. And as statistical word-crunchers, they seem a tantalisingly good fit for linguistics work.

And, where there’s new tech, there’s new research: one of the big questions floating around in linguistics circles right now is whether LLMs “understand” language systems in any meaningful way – at least in any way that can be useful to research linguists.

LLMs doing the donkey work?

One truly exciting potential avenue is the use of LLMs to do the heavy lifting of massive corpus annotation. Language corpora can be huge – billions of words in some cases. And to be usefully searchable, those words have to be tagged with some kind of category information. For years, we’ve had logic-based Natural Language Processing (NLP) tech to do this, and for perhaps the most block-wise faculty of language – syntax – it’s done a generally grand, unthinking job.
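
To give a flavour of what that tagging looks like, here’s a minimal sketch using NLTK, one long-standing NLP toolkit (its default tagger is statistical rather than strictly rule-based, but the output is exactly the kind of category information I mean). Depending on your NLTK version, you may need to download the tokeniser and tagger resources first.

```python
# A minimal sketch of automatic part-of-speech tagging with NLTK.
# First run may need nltk.download("punkt") and the perceptron tagger resource
# (resource names vary slightly between NLTK versions).
import nltk

sentence = "Language corpora can be huge."
tokens = nltk.word_tokenize(sentence)   # split the sentence into word tokens
print(nltk.pos_tag(tokens))             # attach a part-of-speech tag to each token
# e.g. [('Language', 'NN'), ('corpora', 'NNS'), ('can', 'MD'), ('be', 'VB'), ('huge', 'JJ'), ('.', '.')]
```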

But LLMs go one step beyond this. They not only demonstrate (or simulate) a more creative manipulation of language; they have also begun to incorporate ‘thinking’. Many recent models, such as the hot-off-the-press GPT-5, are already well along the production line of a new generation of high-reasoning LLMs. These are the skills that make them useful in other fields of linguistics, beyond syntax – fields where things like sentiment and intention come into play. Pragmatics is one area that has been a great fit, with one study into LLM tagging showing promising results.
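
As a rough illustration of what LLM-assisted pragmatic tagging might look like, here’s a sketch that asks a chat model to label utterances with speech acts via the OpenAI Python SDK. The model name, prompt wording and label set are my own illustrative choices, not the set-up of the study mentioned above.

```python
# Sketch: asking an LLM to tag utterances with pragmatic (speech-act) labels.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in
# the environment. Model name and label set are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

LABELS = ["question", "request", "assertion", "apology", "greeting"]

def tag_utterance(utterance: str) -> str:
    """Return a single speech-act label for one utterance."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do here
        messages=[
            {"role": "system",
             "content": "Tag the utterance with exactly one label from: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": utterance},
        ],
    )
    return response.choices[0].message.content.strip()

corpus = ["Could you pass the salt?", "Sorry I'm late.", "It's raining again."]
for line in corpus:
    print(line, "->", tag_utterance(line))
```

Scaled up, with batching and a properly worked-out label scheme, this is essentially what those annotation studies are testing: can the model’s labels stand in for a human annotator’s?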

The sounds behind the tokens

As for phonology, the linguistic field that deals with our mental representations of sound systems, the answer is a little more complicated.

On the one hand, LLMs are completely text-based. They don’t hear or produce sounds; they’re pattern matchers over strings of tokens (bits of words). But because writing does encode correspondences with sound, they end up with a kind of latent ability to spot phonological patterns indirectly. For example, ask an LLM to generate rhyming words, or to apply a regular sound alternation like the English plural -s, and it usually does a decent job. In fact, one focus of a recent study was rhyming, and it found that, with some training, LLMs can approach a pretty humanlike level of rhyme generation.
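
If you want to poke at the rhyming claim yourself, one quick way is to check an LLM’s suggestions against a pronouncing dictionary. The sketch below uses the CMU Pronouncing Dictionary via the pronouncing package; the target word and candidate list here are invented, standing in for whatever a model actually returns.

```python
# Sketch: verify LLM-suggested rhymes against the CMU Pronouncing Dictionary
# (pip install pronouncing). The candidate list is a stand-in for real model output.
import pronouncing

def rhymes_with(word_a: str, word_b: str) -> bool:
    """True if both words share the same rhyming part (last stressed vowel onwards)."""
    phones_a = pronouncing.phones_for_word(word_a.lower())
    phones_b = pronouncing.phones_for_word(word_b.lower())
    if not phones_a or not phones_b:
        return False  # out-of-dictionary word: can't verify from text alone
    return pronouncing.rhyming_part(phones_a[0]) == pronouncing.rhyming_part(phones_b[0])

target = "token"
candidates = ["spoken", "broken", "oaken", "taken"]  # hypothetical LLM output
for word in candidates:
    print(f"{word}: {'rhymes' if rhymes_with(target, word) else 'does not rhyme'} with {target}")
```

Nothing deep is going on here: it simply makes the model’s text-only guesses checkable against an actual phonetic resource.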

On one level, that’s intuitive: orthography tends, by and large, to reflect underlying phonotactics and morphophonology. The sheer volume of data helps too – in those billions of pages of crunched training data, there are bound to be examples of the link, which nudges the model towards the right generalisations. Where it gets shakier is with non-standard spellings, dialect writing, or novel words. Without clear orthographic cues, the model struggles to “hear” the system. You might see it overgeneralise, or miss distinctions that are obvious to a native speaker. In other words, it mimics phonological competence through a text-based proxy, but it doesn’t actually have that competence.

It’s that ‘shakier’ competence I’m exploring in my own research right now. How easy is it to coax an understanding of non-standard phonology from an out-of-the-box LLM? Priming is key: finding wily ways to prompt that mysterious ‘reasoning’ the new models use.
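
By way of a toy illustration (and emphatically not the actual prompts from that research), a few-shot priming set-up might look something like this: a handful of well-known Scots spellings establish a pattern, and the model is then asked to generalise it to a new form.

```python
# Toy sketch of few-shot priming for non-standard spellings. The examples are
# well-known Scots forms used purely for illustration; this is not the prompt
# set from the research described above. Assumes the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

FEW_SHOT = (
    "Non-standard spelling -> standard English equivalent:\n"
    "doon -> down\n"
    "oot -> out\n"
    "hoose -> house\n"
)

def interpret(form: str) -> str:
    """Ask the model to apply the primed spelling pattern to a new form."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user",
                   "content": FEW_SHOT + "\nNow do the same for: " + form}],
    )
    return response.choices[0].message.content

print(interpret("aboot"))  # a well-primed model should land on "about"
```

The interesting cases, of course, are the ones where the pattern is far less transparent than this.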

Rough-edged tools that need honing

So, do LLMs have phonological understanding?

Well, not in the sense of a human speaker with an embodied grammar. But what they do have is an uncanny knack for inferring patterns from writing, a kind of orthography-mediated phonology.

That makes them rough tools out of the box, but potentially powerful assistants: not replacements for the linguist’s ear and analysis, but tools that can highlight patterns, make generalisations we might otherwise miss, and help us sift through mountains of messy data.
