
#10: How and why we work without AI

The Comical Hotch Potch, or The Alphabet turn'd Posture-Master, 1782, via the always-wonderful Public Domain Review

AI is hideous on so many fronts. It's ecologically ruinous, it's encouraging kids to commit suicide, it's made of slave-mined minerals, it's operated by exploited data workers in the Global South, it's being actively used to commit genocide in Gaza, it gives duff health advice, and it's racist. No wonder fascists love it, and are desperately trying to shoehorn it into every possible corner of online life.

The thing is, nobody seems to give a shit. I saw a guy at the weekend asking ChatGPT what kind of flowers to buy his girlfriend. People I know and respect treat the disembodied voice on their phones as a medical authority, close friend and therapist rolled into one.

AI can – with the right guardrails, scepticism and scrutiny – be used for extreme good. The ability to analyse previously unthinkable datasets can only be a boon for medical research, and it has transformed things like cancer diagnosis. We could even argue that it might spur new developments in human-made art much like photography did, and automate pointless drivel: corporate emails, LinkedIn posts, and so on.

But for AI to do good, it needs to be treated as a tool, not a stand-in for actual people. The main problem is that it has been deliberately made to seem trustworthy and human, and people are buying into it. They feed their favourite LLM a question and then blindly accept its answer, delivered with a polite, deferential authority that erodes and undermines our critical thinking skills. Nowhere is this more apparent than in translation.

Why we don't use it

The Google Translate of 2015 produced wonky, inaccurate translations. We all knew they were crap, nobody trusted them, and you could spot them a mile off. Professional machine translation software was a tad better, but still needed considerable editing from a human translator to produce something usable.

AI-powered machine translation has got better since then. Where the Google Translate of yore would have turned the (wonderfully colourful) Spanish expression "sin pelos en la lengua" into "without hairs on the tongue", AI-powered translation tool DeepL now gives the correct translation of "not mincing words". It even offers valid alternatives like "outspoken" and "frank" to boot.

But this isn't very impressive when you think about it. DeepL is great for when you need to check you've got the right idea, but the ability to accurately decipher idiomatic expressions isn't really groundbreaking. There's a WordReference forum post from 2007 that answers the same question. DeepL makes it easier to find this information, but so does WordReference's own dictionary entry.

AI is not a good translator – no better, at least, than the machine translation we've had for a long time now. What it does offer is an output that – and this is the important part – looks very correct. Most AI-powered translation could probably, at first glance, pass for something written by a native speaker. But that doesn't mean it's an accurate translation.

The days of "hair on the tongue" are gone, but in a way things are worse. Whereas before we could spot shoddy machine translation, now you can unwittingly read a text that was translated or written by AI, perhaps with no human input at all. What you read might be an accurate translation, but there's also a good chance it's not. Even if it gets the grammar and vocabulary right, the tone, emphasis, information, terminology, subtext, connotation (etc etc) could be inaccurately translated or misleading. As well as the obvious problem of misinformation, there's also a wider question: who is held accountable when AI gets it wrong?

What AI gives us is a simulacrum of accurate translation or good writing, a facsimile that looks and sounds correct but often is not. Too many people are mistaking it for the real thing.

How we work without it

In Guerrilla Media Collective, we don't rely on machine translation, and we certainly don't use AI tools in any of our work – translating, editing, copywriting, researching, the lot.

The nuts and bolts of how we do this are extremely straightforward and not at all groundbreaking. We translate in teams: one translates, the other edits. The same goes for copy editing and writing publications: we each take a share of the work, and check each other's work for consistency. Even smaller jobs, like this newsletter, are done in a team – I'm writing it now, but it will then be checked by other members of the team before going out, and they'll be quite honest if something needs to change.

If we're stuck on a translation then we usually ask a native speaker of the source language – we have a dedicated "translation help" Slack channel just for this purpose. We'd only use machine translation as a last resort, if we were totally befuddled by a knotty, Rajoy-esque specimen of a sentence. Chances are it'll spit out an unusable direct translation, but it might help us get our heads around the meaning. We're also not averse to messaging clients directly when we're missing a piece of specialised terminology, and that tends to clear things right up.

Some might see this as wildly inefficient – as clinging to some kind of silly, romanticised ideal of artisan "handcrafted" work, or taking a futile stand by making our own lives more difficult. Some of that is true, but there's also a deeply practical dimension to how we work.

With (almost) everybody now using the same handful of LLMs to write, edit or translate what they publish, text is all starting to sound the same. This article does a really good job of explaining the specifics of AI's not-so-unique "voice" (here's an unrelated link to a paywall remover in case you need it for something else). It makes the very good point that certain features of "good" writing – "not X but Y" constructions, unobtrusive metaphors, listing things in threes, en dashes, the word "delve" – are now actually a possible sign of indiscriminate AI involvement.

By doing our work as humans we are taking a personal stand and pushing back against this monoculture. But our stance also adds value to what we do. If you write, edit or translate with AI – even outside of professional contexts – you'll end up sounding the same as everyone else who uses it. Readers will switch off, and you'll fail to stand out.

Leaving aside the economic implications (weakened brand reputation = less $$$), does that sound like a world you want to inhabit? We don't, and we're definitely not the only ones. It's crushingly sad to think that "human-made" could soon be a mark of distinction, but we're betting on a world where people want to hear and read actual human voices.