Video search, part 2 (or, The Plot Thickens)

A few weeks ago, I wrote about an interesting new video search application that debuted in September called VideoSurf. I mentioned that VideoSurf is a great piece of the overall video search puzzle, alongside some other obvious pieces like video speech-to-text search. But I didn’t raise the question of what should happen next in speech-to-text.

As if on cue, EveryZing (formerly PodZinger, but that was a long, long time ago in Internet time) announced that it has integrated the speech-to-text technology behind its video search engine optimization and site-search tools into a consumer-facing video player it calls the EveryZing MetaPlayer. See the example that CEO Tom Wilde shared with me.


The text generated from the video shows up alongside it, letting you search through the text and then jump to the spots in the video where those words are spoken (a little yellow dot above the playline marks each mention, so you can see where in the clip you are).
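Under the hood, this kind of in-video keyword search amounts to searching a time-aligned transcript, where each recognized word carries the timestamp at which it was spoken. Here's a minimal sketch in Python; the data structure and function names are my own illustration, not EveryZing's actual API:

```python
# A time-aligned transcript: each recognized word paired with the
# offset in seconds (its position on the playline) where it was spoken.
transcript = [
    ("touchdown", 12.4),
    ("quarterback", 37.9),
    ("touchdown", 95.2),
    ("replay", 120.0),
]

def find_mentions(transcript, query):
    """Return the playline offsets (in seconds) where the query word
    appears -- the positions a player could mark with yellow dots."""
    query = query.lower()
    return [t for word, t in transcript if word.lower() == query]

print(find_mentions(transcript, "touchdown"))  # [12.4, 95.2]
```

A real player would also need word-level confidence scores and a way to seek the video to each offset, but the core index is just this word-to-timestamp mapping.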


As I mentioned in my last post, Nexidia has been doing a slight variant, speech-to-phoneme search, which has a lot of cool features such as multiple-language support. However, it hasn't been widely implemented, leading me to speculate that it's not as easy to integrate as one might hope. I suspect that's what drove EveryZing to package its solution as a video player (one that handily plugs into whatever Flash-based player environment a company already uses, such as Brightcove). Providing a simple package like this might motivate more organizations like the Dallas Cowboys, a launch client of EveryZing, to try it out.

The extra cool part goes beyond text search: this system gives a video content provider a way to auto-tag video content, even content it doesn't own but can embed from YouTube and elsewhere, thus matching it to the tagging systems the provider has already built. Those tags can trigger ads or relevant sidebar content (like player stats or stock information), all in an automated and scalable way.
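Auto-tagging of that sort can be pictured as matching the generated transcript against a tag vocabulary the publisher already maintains. A hedged sketch, with hypothetical tag names and trigger words standing in for whatever real mapping a site like the Cowboys' would use:

```python
# Hypothetical tag vocabulary a publisher has already built,
# mapping each tag to trigger words found in speech-to-text output.
tag_vocabulary = {
    "player-stats": {"quarterback", "touchdown", "yards"},
    "stock-info": {"nasdaq", "shares", "earnings"},
}

def auto_tag(transcript_text, vocabulary):
    """Match transcript words against the tag vocabulary and return
    the tags whose trigger words appear -- these tags could then
    drive ads or sidebar content automatically."""
    words = set(transcript_text.lower().split())
    return sorted(tag for tag, triggers in vocabulary.items()
                  if words & triggers)

tags = auto_tag("The quarterback threw for 300 yards", tag_vocabulary)
# tags == ["player-stats"]
```

The point of the scalability claim is that nothing here requires a human in the loop: once the speech-to-text output exists, matching it to an existing taxonomy is cheap to run across an entire video library.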

We’re still not all the way there, and as I said before, this will eventually get built into the cloud (a la Google) or the OS (a la MSFT and Apple) or both, but I like what I see so far.

Maybe video search doesn’t have to stink after all.
