Optical flow estimation is a difficult task given real-world video footage containing camera and object blur. In this paper, we combine a 3D pose and position tracker with an RGB sensor, allowing us to capture video footage together with the corresponding 3D camera motion. We show that this additional camera motion information can be embedded into a hybrid optical flow framework by interleaving an iterative blind deconvolution and warping-based minimization scheme. Such a hybrid framework significantly improves the accuracy of optical flow estimation in scenes with strong blur. Our approach yields improved overall performance compared to three state-of-the-art baseline methods on our proposed ground-truth sequences, as well as on several other real-world sequences captured by our novel imaging system.
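
The interleaving at the heart of this framework can be summarized as alternating two subproblems. Below is a minimal sketch in Python, assuming grayscale float images in [0, 1]; Richardson-Lucy deconvolution and a small Horn-Schunck solver are classical stand-ins for the deconvolution and warping-based minimization halves, not the actual solvers used in the paper.

```python
# Minimal sketch: alternate deblurring and flow estimation. Stand-in solvers
# only; the paper's actual deconvolution and minimization differ.
import numpy as np
from scipy.ndimage import convolve, map_coordinates

def richardson_lucy(blurred, kernel, iters=10):
    # Non-blind deconvolution step: refine a sharp estimate under the
    # current blur kernel.
    estimate = np.full_like(blurred, 0.5)
    flipped = kernel[::-1, ::-1]
    for _ in range(iters):
        ratio = blurred / (convolve(estimate, kernel) + 1e-8)
        estimate = estimate * convolve(ratio, flipped)
    return estimate

def warp(image, flow):
    # Backward-warp an image by a dense flow field of shape (H, W, 2).
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    coords = np.stack([ys + flow[..., 1], xs + flow[..., 0]])
    return map_coordinates(image, coords, order=1, mode='nearest')

def horn_schunck(i0, i1, alpha=10.0, iters=100):
    # Small Horn-Schunck solver: linearized data term plus quadratic
    # smoothness, minimized by Jacobi-style averaging.
    Iy, Ix = np.gradient(i0)   # spatial derivatives
    It = i1 - i0               # temporal derivative
    u, v = np.zeros_like(i0), np.zeros_like(i0)
    avg = np.array([[0.0, 0.25, 0.0], [0.25, 0.0, 0.25], [0.0, 0.25, 0.0]])
    for _ in range(iters):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = u_bar - Ix * common, v_bar - Iy * common
    return np.stack([u, v], axis=-1)

def interleaved_flow(b0, b1, kernel, outer_iters=3):
    # Alternate the two subproblems: deblur both frames with the current
    # kernel, warp frame 1 by the current flow, and accumulate an
    # incremental flow update from the sharpened, warped pair.
    flow = np.zeros(b0.shape + (2,))
    for _ in range(outer_iters):
        s0, s1 = richardson_lucy(b0, kernel), richardson_lucy(b1, kernel)
        flow = flow + horn_schunck(s0, warp(s1, flow))
        # In the full framework the blur kernel would also be re-estimated
        # here, guided by the tracked camera motion; it is kept fixed in
        # this sketch.
    return flow
```

The key point is the alternation itself: each deblurring pass sharpens the inputs for the next flow update, and each flow update improves the alignment available to the next deblurring pass.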

Figure 1. RGB-Motion Imaging System.

Scene blur within video footage is typically due to fast camera motion and/or long exposure times. In particular, such blur can be modeled as a function of the camera trajectory projected into image space during the exposure time. It therefore follows that knowledge of the actual camera motion between image pairs can provide significant information for image deblurring. Fig. 1 illustrates our simple and portable setup, which combines an RGB sensor with a 3D pose and position tracker in order to capture continuous scenes along with real-time camera pose and position information. Our tracker provides rotation (yaw, pitch and roll), translation and zoom information synchronized with the corresponding image frame using the middleware of [Lee et al. 2013].
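
To make the trajectory-to-blur relationship concrete, the sketch below converts tracked camera rotation into a linear motion-blur kernel under a small-angle approximation. The interface and parameter names (yaw/pitch deltas in radians, focal length in pixels, shutter-open fraction) are illustrative assumptions, not the actual API of the middleware of [Lee et al. 2013] or the directional filtering used in the paper.

```python
# Hedged sketch: approximate the in-image blur streak induced by tracked
# camera rotation between two frames.
import numpy as np

def kernel_from_camera_motion(d_yaw, d_pitch, focal_px,
                              shutter_frac=0.5, size=15):
    # Small-angle approximation: a yaw of d_yaw radians shifts the image by
    # roughly focal_px * d_yaw pixels horizontally (d_pitch vertically),
    # scaled by the fraction of the frame interval the shutter was open.
    dx = focal_px * d_yaw * shutter_frac
    dy = focal_px * d_pitch * shutter_frac
    kernel = np.zeros((size, size))
    c = size // 2
    # Rasterize a centered line segment along the in-image motion direction.
    steps = max(int(np.ceil(np.hypot(dx, dy))) * 2, 1) + 1
    for t in np.linspace(-0.5, 0.5, steps):
        x, y = int(round(c + t * dx)), int(round(c + t * dy))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] += 1.0
    return kernel / kernel.sum()

# Example: 0.02 rad of yaw between frames at an 800 px focal length gives a
# roughly 8 px horizontal blur streak.
psf = kernel_from_camera_motion(d_yaw=0.02, d_pitch=0.0, focal_px=800.0)
```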



Related Papers:

W. Li, Y. Chen, J. Lee, G. Ren, and D. Cosker, Blur Robust Optical Flow using Motion Channel, Neurocomputing, 2016. [PDF]

W. Li, Y. Chen, J. Lee, G. Ren, and D. Cosker, Robust Optical Flow Estimation for Continuous Blurred Scenes using RGB-Motion Imaging and Directional Filtering, in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV '14), 2014, pp. 792–799. [PDF] Best Student Paper Award