Enterprise videos – visionary statements, product introductions, town hall meetings, training aids, and conferences – are everywhere on the Internet and on corporate intranets. But no matter how flashy the graphics or how well-prepared the speaker, something is missing from the viewer experience: the ability to search these videos.
Ramp is one of the vendors aiming to address the issue by delivering a fully automated, data-driven user experience around finding content. It's about the ability to watch and look inside a video – a 45-minute keynote, for example – said Joshua Berkowitz, the company's director of product management, at Enterprise Search & Discovery 2014. Everyone has had the experience of starting to view such an event online, only to be distracted a few minutes in by a smartphone or something else. Meanwhile, the video plays on and runs right past the part you were most interested in without your even noticing. "How to find the piece of content that interests you in the same way you could find those pieces inside a document?" he asked the audience.
More importantly, how can the supplier of that content facilitate that, as well as other ways to help viewers interact with the elements they are interested in, or provide additional information such as links to product or contact details? "Time-based metadata for video can revolutionize the search experience," Berkowitz said – a capability Ramp supports with its MediaCloud technology, which generates a time-coded transcript and tag set from video content.
It uses automated speech-to-text technology and natural language processing to extract meaning from transcripts and metadata, and offers global dictionaries and support for users’ custom dictionaries for tag generation. “That’s particularly valuable around things like the names of executives or products – these are very relevant in an enterprise use case,” he said. Viewers can search for a word or term and click on the time-stamps to go right to the appropriate spots.
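The search-and-jump experience described above can be sketched in a few lines. This is a minimal illustration, not Ramp's actual API: the transcript format – a list of (start-time, text) segments – is a hypothetical stand-in for what a speech-to-text pipeline might emit.

```python
# Hypothetical time-coded transcript: (start_seconds, text) pairs, the kind
# of output a speech-to-text pipeline with word/segment timing might produce.
transcript = [
    (0.0, "welcome to the annual keynote"),
    (312.5, "our new mobile platform ships next quarter"),
    (1410.0, "questions from the audience"),
]

def search_transcript(transcript, term):
    """Return the time-stamps of every segment containing the search term,
    so a player can seek straight to those spots."""
    term = term.lower()
    return [start for start, text in transcript if term in text.lower()]

print(search_transcript(transcript, "mobile"))  # [312.5]
```

A video player would then use each returned time-stamp as a seek target, which is what lets a viewer click a hit and land at the right moment in a 45-minute keynote.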
Advancing search inside video is “designed to keep users more engaged with the video,” he said, “so they can find the content most relevant to them.” Otherwise, the opportunity to get more value out of their video assets, such as providing targeted advertising, risks being lost.
It’s not just searching inside the video that can present challenges. In so many cases, he said, businesses create videos and upload them to their digital asset management system as is, without a title or description – or at least without a considered title or description – so it can be hard for users even to know what’s available to them in the first place. “Having rich metadata on top of that lets your users find it,” he said. Even if the word “mobile” never made it into the title of a video in which the speakers discuss exactly that, a content management system crawl of the metadata can surface the term and flag the video as an asset worth checking out for a user looking for video on that topic.
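The discovery scenario above – a bare title rescued by generated tags – can be sketched as a simple catalog search. The asset records and field names here are hypothetical, chosen only to mirror the "mobile" example from the text.

```python
# Hypothetical asset catalog: titles are bare, but the generated metadata
# tags carry the topical terms the speakers actually used.
assets = [
    {"title": "Q3 Town Hall", "tags": ["mobile", "roadmap", "platform"]},
    {"title": "Onboarding 101", "tags": ["benefits", "hr"]},
]

def find_videos(assets, term):
    """Match a query against titles and metadata tags alike, so an asset
    whose title omits the term is still discoverable."""
    term = term.lower()
    return [
        a["title"]
        for a in assets
        if term in a["title"].lower() or term in (t.lower() for t in a["tags"])
    ]

print(find_videos(assets, "mobile"))  # ['Q3 Town Hall']
```

A title-only search would return nothing for "mobile" here; including the generated tags is what makes the town hall video findable.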
Determining the sentiment of a video or parts of it is also part of the equation, as is facial recognition to automatically identify a face in a frame, time-stamp its appearance and duration, and match it to a list of pre-defined people. “That’s very, very powerful but it has to be trained,” Berkowitz said. “For certain executives or even products you want to highlight in a video, we can power that same time-coded metadata experience based on visual cues.”
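The appearance-and-duration step can be sketched as a post-processing pass over per-frame recognition results. The detection tuples and the merging rule below are assumptions for illustration; a trained face matcher would supply the actual (time, person) pairs.

```python
# Hypothetical per-second recognition output: (seconds, person) pairs from a
# trained face matcher. We collapse consecutive sightings of the same person
# into (person, start, end) intervals -- the time-stamped appearances the
# text describes.
detections = [(10, "CEO"), (11, "CEO"), (12, "CEO"), (45, "CTO"), (46, "CTO")]

def appearance_intervals(detections, gap=2):
    """Merge detections of the same person that fall within `gap` seconds
    of each other into a single appearance interval."""
    intervals = []  # each entry: [person, start, end]
    for t, person in detections:
        if intervals and intervals[-1][0] == person and t - intervals[-1][2] <= gap:
            intervals[-1][2] = t  # extend the current appearance
        else:
            intervals.append([person, t, t])  # start a new appearance
    return [tuple(i) for i in intervals]

print(appearance_intervals(detections))  # [('CEO', 10, 12), ('CTO', 45, 46)]
```

Each resulting interval is exactly the kind of time-coded metadata record that could drive the same click-to-seek experience as the transcript search.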