Automated transcription services have a variety of applications. Enterprises frequently use them to transcribe meetings, and call centers use them to transcribe phone calls into text to more easily analyze the substance of each call.
The services are also widely used to aid deaf and hard-of-hearing people: they automatically generate subtitles for videos and television shows, and they power call centers that enable deaf users to communicate with each other by transcribing each person’s speech.
VTCSecure and Google
VTCSecure, a several-years-old startup based in Clearwater, Fla., uses Google Cloud’s Speech-to-Text services to power a transcription platform that is used by businesses, non-profits, and municipalities around the world to aid the deaf and hard of hearing.
The platform offers an array of capabilities, including video services that connect users to a real-time sign-language interpreter, and deaf-to-deaf call centers. The call centers, which let users connect via video, voice or real-time text, build on Google Cloud’s Speech-to-Text technology to provide users with automatic transcriptions.
Google Cloud has long sold Speech-to-Text and Text-to-Speech services, which give developers the data and framework to create their own transcription or voice applications. For VTCSecure, the services, powered in part by speech technologies developed by parent company Alphabet Inc.’s DeepMind division, were easy to set up and adapt.
“It was one of the best processes,” said Peter Hayes, CEO of VTCSecure. He added that his company has been happy with what it considers a high level of support from Google.
Hayes said Google provides technologies, as well as development support, for VTCSecure and for his newest company, TranslateLive.
Hayes also runs the platform on Google Cloud; he chose it after a demo for the FTC that, he said, lagged on a rival cloud network.
Google Cloud’s Speech-to-Text and Text-to-Speech technology, as well as the translation technologies used for TranslateLive, constantly receive updates from Google, Hayes said.
Startup Verbit provides automated transcription services that it built in-house. While only two years old, the startup considers itself a competitor to Google Cloud’s transcription services, even releasing a blog post last year outlining how its automated transcription services can surpass Google’s.
Verbit, unlike Google, adds humans to the transcription loop, explained Tom Livne, co-founder and CEO of the Israel-based startup. It relies on its home-grown models for an initial transcription, then passes the drafts to remote human transcribers, who review and edit them to fine-tune the result.
The combined process produces high accuracy, Livne said.
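Verbit’s exact pipeline isn’t public, but the human-in-the-loop pattern Livne describes can be sketched as a confidence-gated review step: segments the model transcribes with high confidence are accepted as-is, while low-confidence segments are routed to a human editor. The `Segment` type, `transcribe_with_review` function and confidence threshold below are illustrative assumptions, not Verbit’s actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Segment:
    text: str         # the ASR model's draft transcription
    confidence: float # the model's confidence in it, 0.0-1.0

def transcribe_with_review(
    segments: list[Segment],
    reviewer: Callable[[str], str],
    threshold: float = 0.9,
) -> str:
    """Accept high-confidence machine drafts; send the rest to a human."""
    final = []
    for seg in segments:
        if seg.confidence < threshold:
            final.append(reviewer(seg.text))  # human fine-tunes the draft
        else:
            final.append(seg.text)            # machine draft accepted as-is
    return " ".join(final)

# Example: a stand-in "human" reviewer who fixes one mis-recognized word.
draft = [
    Segment("the witness stated", 0.97),
    Segment("the defend ant objected", 0.62),
]
corrected = transcribe_with_review(
    draft, lambda t: t.replace("defend ant", "defendant")
)
print(corrected)  # the witness stated the defendant objected
```

Routing only low-confidence segments to humans is one plausible way such a hybrid system keeps labor costs down while still catching the errors a pure machine pass would leave in.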
A lawyer by training, Livne started Verbit specifically to sell to law firms. The vendor, however, quickly moved into the education space.
“We want to create an equal opportunity for students with disabilities,” Livne said. Technology, he noted, has long been able to aid those with disabilities.
George Mason University, a public university in Fairfax, Va., relies on Verbit to automatically transcribe videos and online lectures.
“We address the technology needs of students with disabilities here on campus,” said Korey Singleton, assistive technology initiative manager at George Mason.
After trying out other vendors, the school settled on Verbit largely because of its competitive pricing, Singleton said. Because most of its captioning and transcription work comes from developing online courses, the school doesn’t need a quick turnaround, he said, which enabled Verbit to offer a lower price.
“We needed to find a vendor that could do everything we needed to do and provide us with a really good rate,” Singleton said. Verbit provided that.
Moving forward, George Mason will look for a way to integrate transcripts with its courses automatically. Putting them together is now a manual process, but Singleton said he aims to automate it with APIs and other tooling.