A few weeks ago, I wrote about an interesting new video search application that debuted in September called VideoSurf. I mentioned that VideoSurf is a great piece of the overall video search puzzle, alongside some other obvious pieces like video speech-to-text search. But I didn’t raise the question of what should happen next in speech-to-text.
As if on cue, EveryZing (formerly PodZinger, but that was a long, long time ago in Internet time) announced that it is integrating the speech-to-text technology behind its video search engine optimization and site search tools into a consumer-facing video player it calls the EveryZing MetaPlayer. See the example that CEO Tom Wilde shared with me.
As I mentioned in my last post, Nexidia has been doing a slight variant of this (speech-to-phoneme search), which offers some cool features such as multiple-language support. However, it hasn’t been widely implemented, which leads me to speculate that it’s not as easy to integrate as one might hope. I suspect that’s what drove EveryZing to package its solution as a video player (one that handily integrates with whatever Flash-based player environment a company already uses, such as Brightcove). A simple package like this might motivate more organizations like the Dallas Cowboys, a launch client of EveryZing, to try it out.
The extra cool part goes beyond text search: this system gives a video content provider a way to auto-tag video content — even content it doesn’t own but can embed from YouTube and others — and match it to the tagging systems it has already built. Those tags can then trigger ads or relevant sidebar content (like player stats or stock information), all in an automated, scalable way.
We’re still not all the way there, and as I said before, this will eventually get built into the cloud (a la Google), the OS (a la MSFT and Apple), or both. But I like what I see so far.