VIDEO LOCALIZATION TOOLS:
THE ONLINE ERA

By Alex Yoffe, Product Manager, OOONA

Published in MESA’s M + E Journal 22.02

ABSTRACT: Never before have we seen such a boom in video localization tools. With hundreds of localization tools on the market, many of them specialized for audiovisual translation and transcription, where does the technology stand at the dawn of the third decade of the 21st century, and where is it heading?

Never before have we seen such an abundance of video localization tools. The Nimdzi Technology Atlas lists hundreds of localization tools on the market, including well over one hundred for audiovisual translation and transcription alone. How far has the technology come at the dawn of the third decade of the 21st century, and where is it heading?

Video localization technology first emerged decades ago with the introduction of affordable desktop computers, which allowed a single person to carry out the entire subtitling process. The first subtitle editors were DOS-based, linked to a TV monitor and a video cassette player with a jog shuttle. Desktop computers and company servers became the mainstream equipment for subtitle production in the 1990s.

Things changed profoundly with digitization. Subtitling software mushroomed in the noughties; freeware too. World-class subtitle editors were developed, with features such as frame-accurate timing, shot-change detection, waveform display, reading-speed indicators, customizable hotkeys, automated backups, sophisticated quality assurance and assisted translation tools. They also provided the ability to use templates and presets, communicate between team members and convert between any of the myriad file formats used in the industry. For live subtitling workflows, speech recognition software was also integrated into the user interface to allow for dictation-based workflows. In short: software tools could support practically anything a subtitler could ask for.

Translation management systems also made their appearance in the noughties, as content volumes skyrocketed and production was centralized with the advent of the DVD. As cloud infrastructures were increasingly adopted, it was inevitable that subtitling toolkits would move to the cloud as well. This took place in the following decade, when the streaming era drove another large increase in the volume of content to be localized.

The primary factors in businesses' selection of cloud infrastructures have always been ease of deployment and data security. The latter has long been a prime concern for the media sector: multi-factor authentication, video watermarking, cybersecurity certifications, continuous penetration testing and 24/7/365 technical support are now the norm for platforms used by language service providers wishing to offer video localization services to their end clients.

Online subtitle editors are now used by most top media localization providers, typically integrated into a translation management system. The better ones lack none of the prime features of the best desktop software of the previous decade, such as automatic shot-change detection and audio scrubbing, a sine qua non for frame-accurate subtitling.

Integration with a translation management system allows automatic handling of client orders, automated or bulk assignment of work to resources, live dashboards, file management and user metrics, as well as integration with finance tools for a complete end-to-end solution. Work allocation and completion are thus managed and controlled more effectively and transparently, with built-in communication tools that facilitate remote and collaborative work. This cuts down duplication of effort, turnaround time and the potential for error, and offers a seamless experience to users. Production can be scaled up easily as content volumes fluctuate and requirements change.

The adoption of online editors was accelerated by the Covid-19 pandemic, which created a surge in the development of professional online tools for revoicing purposes after dubbing studios around the world closed during the lockdowns. Dubbing had for years been a very local and fragmented industry, with many family-owned businesses in the market, which allowed manual practices to persist. The forced studio closures provided the necessary push to reprioritize software development agendas. In the past few years, we have seen most top media localizers adopt their own custom-made platforms to enable audio localization work in the cloud.

The benefits of fully integrated cloud systems for subtitling shone through the pandemic and provided inspiration to streamline all other media localization production in the cloud as well. Script editors are very much like subtitle editors in terms of functionality, with different settings relating to timing rules, line length, character limits and so on. The industry saw an increasing push to repurpose content and access file metadata as early in the process as possible, to inform technologies such as machine translation that are used downstream. It made sense for scripting production to move to the cloud as well.
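The settings that distinguish one editor profile from another (timing rules, line length, character limits) amount to a preset that cues are validated against. A minimal sketch of that idea follows; the preset names and values are hypothetical, chosen only to illustrate the kind of checks such tools run.

```python
# Hypothetical preset illustrating the kind of settings a script or
# subtitle editor exposes; names and values are illustrative only.
PRESET = {
    "max_line_length": 42,   # characters per line
    "max_lines": 2,          # lines per cue
    "min_duration_s": 1.0,
    "max_duration_s": 7.0,
}

def check_cue(lines: list[str], duration_s: float, preset=PRESET) -> list[str]:
    """Return a list of human-readable rule violations for one cue."""
    issues = []
    if len(lines) > preset["max_lines"]:
        issues.append(f"too many lines ({len(lines)})")
    for i, line in enumerate(lines, start=1):
        if len(line) > preset["max_line_length"]:
            issues.append(f"line {i} too long ({len(line)} chars)")
    if not preset["min_duration_s"] <= duration_s <= preset["max_duration_s"]:
        issues.append(f"duration {duration_s:.2f}s out of range")
    return issues

print(check_cue(["A perfectly reasonable first line,", "and a second."], 3.5))
# → []
```

Keeping such rules in data rather than code is what lets a single editor serve both subtitling and scripting workflows: switching client or content type is a matter of loading a different preset.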

“We have been working hard on developing our scripting tool further to accommodate our client needs best,” says Wayne Garb, OOONA co-founder and CEO. “Functionality such as ‘multilayers’, or the ability to display multiple tracks simultaneously, a must in Japanese subtitle production, has been available in our scripting tool too for a while,” he adds. “We remain customer-responsive in our development roadmap. A recent study of requirements from our client base indicates a strong demand in scripting and audio localization work, so it is our priority to develop such features to best support this market trend.”

The ability to record remotely, combined with the increase in quality and customization of synthetic voices, has made tasks such as audio description, which involves complex scripting but a fairly straightforward recording process, prime candidates for fully online workflows. “This is the reason behind our partnership with Veritone, whose 100+ synthetic voices are now available through the OOONA Integrated platform and already used in production by end clients,” adds Garb.

At OOONA we make sure to listen to all our users’ needs. “We ran a contest earlier this year,” says Shlomi Harari, OOONA Global Account Manager. “We wanted to collect ideas from our users on the functionalities they think we need to focus on.” The results of the #OOONA2022 contest included many of the features translator associations have been vocal about, such as concordance and termbase searches, predictive typing and dictation support. More automation is certainly on the roadmap for OOONA Tools, made possible by solid API connections to third-party tools and software that can further facilitate the localization workflow. A selection of speech recognition and machine translation engines has already been integrated, so OOONA clients can select the right engine for each language they work in. A deeper integration of these tools is envisaged, with support for customized solutions and toggles for the use of metadata collected upstream to inform the system output. This will provide solutions tailored to the workflow, be it subtitling or revoicing.

Alex Yoffe is the Product Manager of OOONA Tools, a suite of web-based localization tools for media content. Alex studied industrial engineering and management at Technion, the Israel Institute of Technology, and worked previously in media content management. He enjoys meeting people and solving technology problems and is thrilled to have started travelling again. Contact: alex@ooona.net Twitter: @Ooona13
