[Paper] [Supplementary] [Code]

We distill optical flow labels from single still images and an off-the-shelf monocular depth estimator.

Abstract. This paper deals with the scarcity of data for training optical flow networks, highlighting the limitations of existing sources such as labeled synthetic datasets or unlabeled real videos. Specifically, we introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture. Given an image, we use an off-the-shelf monocular depth estimation network to build a plausible point cloud for the observed scene. Then, we virtually move the camera in the reconstructed environment with known motion vectors and rotation angles, allowing us to synthesize both a novel view and the corresponding optical flow field connecting each pixel in the input image to the one in the new frame. When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data compared to the same models trained either on annotated synthetic datasets or unlabeled videos, and better specialization if combined with synthetic images.
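As a rough sketch of the point-cloud construction step, the snippet below backprojects every pixel of an estimated depth map into 3D camera coordinates. It is NumPy only; the function name, the pinhole intrinsics K, and the assumption that a depth map D0 is already available (from any off-the-shelf monocular network) are ours, not taken from the released code.

    import numpy as np

    def backproject_to_point_cloud(depth, K):
        """Lift every pixel of the depth map D0 into 3D camera coordinates (frame c0).

        depth: (H, W) array of depth values from the monocular network
        K:     (3, 3) pinhole intrinsics matrix
        returns: (H*W, 3) point cloud and the (H*W, 2) source pixel coordinates
        """
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel grid
        pixels = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).astype(np.float64)
        # P = D0(u, v) * K^-1 [u, v, 1]^T : ray through each pixel, scaled by its depth
        points = (pixels @ np.linalg.inv(K).T) * depth.reshape(-1, 1)
        return points, pixels[:, :2]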

Given a single image I0 and its estimated depth map D0, we place the camera in c0 and virtually move it (red arrow) towards a new viewpoint c1. From the depth and virtual ego-motion, we obtain optical flow labels F0→1 and a novel view I1 through forward warping.
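Given such a point cloud, the flow labels follow from reprojecting it under the virtual ego-motion. Below is a minimal sketch under the same assumptions as above, with (R, t) taken as the rigid transform from c0 to c1 coordinates; names and signatures are illustrative, not the paper's code.

    import numpy as np

    def distill_flow(points, src_pixels, K, R, t, H, W):
        """Compute the flow field F0->1 induced by a known virtual camera motion.

        points:     (H*W, 3) 3D points in the source camera frame c0
        src_pixels: (H*W, 2) pixel coordinates (u, v) of those points in I0
        K:          (3, 3) intrinsics, assumed identical for both viewpoints
        R, t:       (3, 3) rotation and (3,) translation mapping c0 coordinates to c1
        returns:    (H, W, 2) optical flow from I0 to the novel view I1
        """
        points_c1 = points @ R.T + t              # express the cloud in frame c1
        proj = points_c1 @ K.T                    # reproject with the same intrinsics
        dst_pixels = proj[:, :2] / proj[:, 2:3]   # perspective division
        # Flow is the displacement of each pixel between the two viewpoints.
        return (dst_pixels - src_pixels).reshape(H, W, 2)

The novel view I1 is then rendered by forward-warping (splatting) I0 along this flow, resolving collisions (e.g., keeping the point closest to the camera) and filling disoccluded regions.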

Citation:

@inproceedings{Aleotti_CVPR_2021,
    title={Learning optical flow from still images},
    author={Aleotti, Filippo and Poggi, Matteo and Mattoccia, Stefano},
    booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
    note={CVPR},
    year={2021}
}