Temporally consistent gradient-based video editing
Presentation* of the paper: “A Variational Model for Gradient-Based Video Editing”, Sadek, Facciolo, Arias, and Caselles, IJCV, 2013.
Presented 2013-06-24 at IMNC, Orsay, France.
Presented 2013-10-04 at Technicolor, Rennes, France.
* Embedded videos are visible only in Adobe Acrobat.
In this work we present a gradient-based variational model for video editing, addressing the problem of propagating gradient-domain information along the optical flow of the video. The resulting propagation is temporally consistent and blends seamlessly with its spatial surroundings. In addition, the model copes with additive illumination changes and handles occlusions and disocclusions. The problem of propagation along the optical flow arises in several video editing applications. Here we consider the application in which a user edits a frame by modifying the texture of an object's surface and wishes to propagate that edit throughout the video.
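To illustrate the gradient-domain idea underlying this kind of editing (not the paper's full variational model), the following is a minimal 1D sketch: a region's gradients are replaced by prescribed edited gradients, and the result is reconstructed so that it exactly matches the original signal at the region boundaries, which is what makes the edit blend seamlessly with its surroundings. In 1D the Poisson reconstruction with Dirichlet boundary conditions reduces to integrating the prescribed gradient and adding a linear ramp correction. The function name and interface are illustrative assumptions, not from the paper.

```python
import numpy as np

def blend_gradient_1d(signal, new_grad, i0, i1):
    """Gradient-domain edit of signal on the interval [i0, i1].

    new_grad holds the prescribed forward differences for the edited
    region (length i1 - i0). The reconstruction keeps the original
    values at i0 and i1, so the edit is seamless at the boundaries.
    Illustrative sketch only; the paper's model is variational and
    also handles optical flow, illumination, and occlusions.
    """
    # Integrate the prescribed gradients starting from the left boundary.
    u = np.empty(i1 - i0 + 1)
    u[0] = signal[i0]
    u[1:] = signal[i0] + np.cumsum(new_grad)
    # Add a linear ramp so the right boundary value is also matched
    # (in 1D this is exactly the Poisson solution with Dirichlet BCs).
    ramp = np.linspace(0.0, signal[i1] - u[-1], len(u))
    out = signal.copy()
    out[i0:i1 + 1] = u + ramp
    return out

# Example: impose a step-like gradient inside a linear signal.
sig = np.arange(10.0)
edited = blend_gradient_1d(sig, np.array([0.0, 4.0, 0.0, 0.0]), 3, 7)
```

In 2D the same principle requires solving a Poisson equation over the edited region; the paper extends this further to the temporal dimension by propagating gradient information along the optical flow.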