It is difficult to recover the motion field from real-world footage given a mixture of camera shake and other photometric effects. In this paper we propose a hybrid framework that interleaves a Convolutional Neural Network (CNN) with a traditional optical flow energy. We first construct a CNN architecture built on a novel learnable directional filtering layer. This layer encodes an angle and distance similarity matrix between blur and camera motion, which enhances the blur features of camera-shake footage. The proposed CNN is then integrated into an iterative optical flow framework, which enables modelling and solving both the blind deconvolution and the optical flow estimation problems simultaneously. Our framework is trained end-to-end on a synthetic dataset and yields competitive precision and performance against state-of-the-art approaches.
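To make the directional-filtering idea concrete, below is a minimal, hypothetical numpy sketch. The paper's actual layer is learned end-to-end; the fixed line kernels, the angle/length bins, and the Gaussian weighting used here are illustrative assumptions, not the published implementation. It shows (a) an oriented line kernel of the kind used to model linear motion blur, and (b) an angle-and-distance similarity matrix between a bank of such filters and a blur hypothesis.

```python
import numpy as np

def directional_kernel(length, angle, size=9):
    """Normalized line (linear motion-blur) kernel of a given pixel
    length and orientation (radians), rasterized on a size x size grid."""
    k = np.zeros((size, size))
    c = size // 2
    # Sample points densely along the line segment through the center.
    for t in np.linspace(-length / 2.0, length / 2.0, num=4 * size):
        r = int(round(c + t * np.sin(angle)))
        col = int(round(c + t * np.cos(angle)))
        if 0 <= r < size and 0 <= col < size:
            k[r, col] = 1.0
    return k / k.sum()

def angle_distance_similarity(angles, lengths, blur_angle, blur_length,
                              sigma_a=0.5, sigma_d=2.0):
    """Similarity matrix (n_angles x n_lengths) between a filter bank and a
    blur hypothesis, combining angular agreement (wrapped to the pi-periodic
    line direction) and length agreement with Gaussian weights."""
    da = np.abs(((angles[:, None] - blur_angle) + np.pi / 2) % np.pi - np.pi / 2)
    dd = np.abs(lengths[None, :] - blur_length)
    return np.exp(-(da ** 2) / (2 * sigma_a ** 2)) * \
           np.exp(-(dd ** 2) / (2 * sigma_d ** 2))

# Usage: a bank of 8 orientations and 3 blur lengths; the similarity peaks
# at the filter whose orientation/length best matches the blur hypothesis.
angles = np.linspace(0, np.pi, 8, endpoint=False)
lengths = np.array([3.0, 5.0, 7.0])
sim = angle_distance_similarity(angles, lengths, np.pi / 4, 5.0)
best = np.unravel_index(sim.argmax(), sim.shape)
```

In a learnable version, the kernels and the similarity weighting would be network parameters updated by backpropagation rather than fixed analytic forms; this sketch only illustrates the geometric intuition behind weighting filters by their agreement with the dominant blur direction.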
In this paper, we investigate the problem of recovering optical flow from camera-shake video footage. We first propose a novel CNN architecture for video frame deblurring with an additional directional similarity and filtering layer. In practice, such learnable filters adaptively preserve directional blur information without prior knowledge of the camera motion. We then highlight the benefits of integrating our network into an iterative optical flow framework. Our evaluation demonstrates that the hybrid framework gives overall competitive precision and faster runtime. Its limitations lie in the presence of mixed blur, globally invariant blur and spatial noise; these difficulties could be mitigated by using more comprehensive training data.
W. Li, D. Chen, Z. Lv, Y. Yan, and D. Cosker, "Learn to Model Blurry Motion via Directional Similarity and Filtering," Pattern Recognition, 2017, in press.