6 Reasons Why Human Subtitles Are Much Better Than Automated Subtitles
The allure of automated subtitles is understandable. You can get functional translations for your video content quickly and cheaply.
The audience that prefers subtitles is large and growing; more than 80% of Netflix members use subtitles at least once a month, and a majority of young viewers watch with subtitles on. It makes perfect sense to cater to these markets, and to do it economically through automation.
However, there are major trade-offs when relying on automated subtitles. In this article, we’ll go over the issues that machine subtitling brings, and why humans are much better at creating quality subtitles for video content.
- Accuracy in Localisation
Translation is not merely replacing words in one language with their counterparts in another. There has to be an understanding of the meaning behind the words being said. Cultural references don’t always have exact equivalents in other languages, and context has to be considered.
Don’t expect a machine that automatically generates subtitles to account for all these complex factors. It will do its best, and the technology is getting better, but it still has a long way to go. YouTube’s automatic captions have an accuracy rate of roughly 60 to 70%, and that’s coming from the world’s biggest video platform, backed by the resources of Google, one of the world’s biggest tech companies.
If a computer is already making mistakes transcribing speech in one language, how much more inaccurate would it get translating the incorrect words?
An actual human with native fluency in the language that needs to be translated for subtitles is much more reliable. Not only will they get the baseline meaning right, they will also apply their cultural knowledge to best convey the original language’s message to the intended audience that speaks another language.
For example, “biscuit” said by a British character would be subtitled as is by a machine, when a human would more accurately localise the word into “cookie” for an American audience.
- Nuance Over Literal Translation
TV shows and films are at their core artistic expressions. How characters say their lines matters just as much as what they say. Subtitles have to capture the nuance in the dialogue and the performances to successfully tell the story.
Automated subtitles are nowhere close to being at the level where they can contextualise speech and account for the scene currently playing out and the delivery of the actors.
Human translators, however, can do just that. They factor in the intonation, the pauses, the facial expressions, the actions, the relationships of the characters involved in the scene, etc. With all these things in mind, they are able to produce subtitles that faithfully communicate the storytelling of these forms of media.
Humour can already be tough for human translators. Wordplay and references only locals can get are staples of comedy. Translators have the difficult job of making audiences understand such jokes _and_ laugh at them. Machine-made subtitles can only translate word for word, and the result is usually only unintentionally funny.
- Subtitling Multiple Languages
For movies and TV series with characters that speak different languages, there’s a lot more work that needs to be done compared to those that only need one set of subtitles.
You can’t just run the script through a machine and expect every language to come out properly translated. Each language used has to be supported by the translation system. Even then, machine translation can’t account for the context in which the different languages are used. Automated subtitles would have you believe that two people facing a language barrier are conversing with no problem at all.
There’s also a purposeful choice to be made for having subtitles appear for certain languages in certain scenes. A Japanese character who is the protagonist may have their speech subtitled, but in a scene where they’re surrounded by Chinese characters, there may be no Mandarin subtitles.
A human can make this decision, knowing the scene is meant to show the protagonist’s confusion. Automated subtitles would translate the Mandarin dialogue anyway, missing the point of the scene.
- Polish in the Details
There are a lot of tiny technicalities in adding subtitles that can be lost when automating the process.
Compound words such as “ice cream” or “sweet tooth” can be subtitled incorrectly by a machine if they aren’t in the system. Each word is translated separately, resulting in odd phrases.
Technical language can be easily mistranslated through automation. A character explaining a heady concept with jargon that a machine is not familiar with is only going to make the scene more confusing for audiences.
Subtitle length is an important consideration. Subtitles have to match the line deliveries of the characters on screen while remaining visible long enough to be readable. This sometimes means cutting words that aren’t strictly necessary. A machine will include every single word uttered, even when that makes for messy subtitles.
Clauses have to be parsed properly to achieve neat and easy-to-read subtitles. A human can check where the periods and commas are to know when to start new lines of subtitles.
Multiple people talking simultaneously is bound to be mixed up when subtitles are automated, so a human touch is required for such cases.
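To illustrate the kind of length and timing constraints described above, here is a minimal sketch of the readability check a human subtitler applies by instinct. The limits used (42 characters per line, about 17 characters per second) are common industry guidelines assumed for this example, not fixed standards:

```python
# Rough readability check for a single subtitle cue.
# The limits below are common industry guidelines, assumed for this sketch.
MAX_CHARS_PER_LINE = 42
MAX_CHARS_PER_SECOND = 17

def check_cue(text: str, duration_seconds: float) -> list[str]:
    """Return a list of readability problems for one subtitle cue."""
    problems = []
    # Each displayed line should stay within the character limit.
    for line in text.splitlines():
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line too long ({len(line)} chars): {line!r}")
    # Reading speed: total visible characters divided by display time.
    visible_chars = len(text.replace("\n", ""))
    cps = visible_chars / duration_seconds
    if cps > MAX_CHARS_PER_SECOND:
        problems.append(f"reading speed too high ({cps:.1f} chars/sec)")
    return problems

# A short cue shown for two seconds passes; a wordy cue shown for
# 1.5 seconds fails both the line-length and reading-speed checks.
print(check_cue("Hello there.", 2.0))
print(check_cue("This extremely long subtitle line definitely overruns the limit", 1.5))
```

A check like this can flag problems, but deciding *which* words to cut while preserving meaning and tone is exactly the judgment call that still requires a human.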
- Clarity in Closed Captions
On top of being necessary for accessibility, closed captions provide much needed context when it comes to sound effects. They help deaf and hard-of-hearing people on a fundamental level, while adding to the viewing experience for those who simply don’t have the optimal setup for hearing audio. Everyone benefits from closed captions.
Closed captions that are machine-generated may only spell out the sound effects for a scene without any other important information. Humans can make it explicit when a “ticking” sound is coming from a normal clock or a time bomb, or when footsteps are stomping loudly during a solemn moment or faintly echoing in a tense horror sequence.
Silence can be used to dramatic effect in movies or TV shows, and the closed captions have to reflect that. A raucous crowd that suddenly stops clapping because of something a character does off-screen should be indicated through the closed captions.
The same goes for when characters are moving their lips but there’s no actual sound coming out or the words are muffled. Humans will know to provide the appropriate closed captions; automation will not.
- Care and Credibility
Automated subtitles sacrifice quality for practicality. While they may be affordable and quick to churn out, they produce inferior results compared to subtitles that have been created, reviewed, and edited with care by real people.
Choosing cheap automation over human attention reflects poorly on your brand. It suggests that you don’t place much importance on any market other than your local audience, and that meeting accessibility requirements is not on your list of priorities. Ultimately, the decision can alienate large groups of people.
Take the time to consider whether going with automated subtitles is worth the hit to your credibility when they inevitably diminish the viewing experience of your film or TV show.
Providing the Human Touch in Translations
The art of audiovisual storytelling is a human endeavour. Robots still can’t fully comprehend the subtleties in the medium that go beyond logic, data sets, and algorithms. There’s no replacing the skillset and the cultural connections that human translators have for subtitling films and shows anytime soon.
We can ensure the humanity of your video projects reaches all your target audiences. Our global network of multilingual specialists has years of experience providing stellar subtitle translation services that communicate the stories our clients want to tell. Contact us today to get started.