The above video consists of several multiple-choice questions, each presented inside a rectangular box. Each question contains a keyword in its body, and different questions have different keywords. I define the task as: retrieve all frames that display a wanted question based on a keyword I input, then select a concrete frame that exhibits the wanted question together with its answer choices. For example, my expected result when using Wine as the keyword is below.
I used pre-trained models from Tesseract for text extraction. As this was a quick attempt, the pipeline is not optimal in terms of running time because it processes images one by one on CPU. Within the scope of this post, I used a sampling strategy plus binary search to find an appropriate video segment, so the running time is still acceptable (less than 15 minutes).
There are some observations that can help:
- a segment can be divided into 2 phases, one of which is before any answer is given. There are short periods where the box containing the question barely changes in size or color, to allow time for responses.
- detecting the rectangle and its size can track these phases, which potentially helps finalize a frame
- the question box has a transparent color that is strongly influenced by the background, so it is hard for image processing techniques such as the Canny edge detector or the Hough transform.
- from point #1, we can leverage the difference between frames
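To make point #1 concrete, a simple way to measure the difference between frames is to compare their grayscale histograms; stable "waiting for answers" periods then show up as stretches of near-zero difference. This is a sketch of the idea only — the `hist_diff` name and bin count are illustrative, not the original code.

```python
import numpy as np

def hist_diff(frame_a, frame_b, bins=64):
    """Dissimilarity between two grayscale frames via normalized
    histograms; 0.0 means the intensity distributions are identical."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 255), density=True)
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 255), density=True)
    return float(np.abs(ha - hb).sum())

a = np.zeros((10, 10), dtype=np.uint8)          # all-black frame
b = np.full((10, 10), 255, dtype=np.uint8)      # all-white frame
assert hist_diff(a, a) == 0.0                   # identical frames: no difference
assert hist_diff(a, b) > 0.0                    # different frames: positive score
```

Because it ignores pixel positions, a histogram difference is robust to small jitter but blind to content that moves without changing colors; for this video that trade-off is acceptable since phase changes alter the on-screen text and box.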
- Sample the video
- For each image in the sampled frames:
    - pass it to a text detector + text recognizer (treating the image as a single word)
    - if the frame contains the input keyword –> append its index to target_indexes
- Find the leftmost and rightmost indexes in target_indexes
- Expand left and right to cover all frames that contain the keyword
- Compute hist_diff and find its local minima
- Return the frame of interest & the corresponding part of the video
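The keyword-matching and frame-selection steps above can be sketched as follows. Assume `texts` maps sampled frame indexes to OCR output and `diffs` maps frame indexes to frame-difference scores; all function names are illustrative, and the outward expansion to neighboring frames is only noted in a comment rather than implemented.

```python
def find_target_segment(texts, keyword):
    """Bound the segment by the sampled frames whose OCR text
    contains the keyword (case-insensitive)."""
    target_indexes = sorted(i for i, t in texts.items()
                            if keyword.lower() in t.lower())
    if not target_indexes:
        return None
    # leftmost/rightmost sampled hits bound the segment; the full pipeline
    # would then expand left and right to cover every frame with the keyword
    return target_indexes[0], target_indexes[-1]

def best_frame(diffs, left, right):
    """Pick the most stable frame in [left, right]: the one with the
    smallest difference score, i.e. a local minimum of hist_diff."""
    candidates = {i: d for i, d in diffs.items() if left <= i <= right}
    return min(candidates, key=candidates.get)

# tiny illustrative example with fabricated OCR/diff values
texts = {0: "What pairs with Wine?", 30: "Wine question", 60: "Other topic"}
assert find_target_segment(texts, "Wine") == (0, 30)
diffs = {0: 0.5, 15: 0.1, 30: 0.3}
assert best_frame(diffs, 0, 30) == 15  # the quietest frame in the segment
```

Picking the minimum-difference frame inside the keyword segment matches observation #1: the clearest shot of the question is during the static "answer time" phase.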