A Hybrid Approach for Near-Range Video Stabilization
Media posted on 24.02.2016 by Shuaicheng Liu, Binhan Xu, Chuang Deng, Shuyuan Zhu, Bing Zeng, Moncef Gabbouj
We present a hybrid approach that combines the benefits of 2D methods with those of 3D methods for near-range video stabilization. Near-range videos contain objects that are close to the camera. These videos often contain discontinuous depth variation (DDV), which is the main challenge for existing video stabilization methods. Traditionally, 2D methods are robust to various camera motions (e.g., quick rotation and zooming) in scenes with continuous depth variation (CDV). However, in the presence of DDV, they often produce wobbled results due to the limited expressiveness of their 2D motion models. Alternatively, 3D methods are more robust in handling near-range videos. We show that by compensating rotational motions and ignoring translational motions, near-range videos can be stabilized successfully without sacrificing much stability. However, reconstructing the 3D structure for an entire video is time-consuming and sometimes even impossible due to rapid camera motions. In this paper, we aim to combine the advantages of 2D and 3D methods, yielding a hybrid approach that is robust to various camera motions and handles near-range scenarios well. In particular, we partition the input video into CDV and DDV segments automatically. Then, the 2D and 3D approaches are adopted for the CDV and DDV clips, respectively. Finally, these segments are stitched together seamlessly via a constrained optimization. We validate our method on a large variety of consumer videos.
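The partition-and-route structure of the pipeline can be sketched in a few lines. This is an illustrative stand-in, not the authors' implementation: the per-frame depth-variation score, the threshold, and the segment labels are all hypothetical placeholders for whatever classifier the paper actually uses.

```python
def partition_segments(depth_variation, threshold=0.5):
    """Label each frame CDV or DDV by thresholding a per-frame
    depth-variation score (hypothetical), then group consecutive
    frames with the same label into segments (start, end, label)."""
    labels = ["DDV" if d > threshold else "CDV" for d in depth_variation]
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        # Close the current segment at a label change or at the end.
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i, labels[start]))
            start = i
    return segments

def route_segments(segments):
    """Route each segment to the appropriate stabilizer:
    2D motion models for CDV clips, 3D (rotation-compensating)
    stabilization for DDV clips."""
    return [(s, e, "2D" if lbl == "CDV" else "3D") for s, e, lbl in segments]
```

For example, a five-frame video whose middle frames contain a close-up object would yield three segments, with only the middle one sent to the 3D path; the seamless stitching of adjacent segments via constrained optimization is omitted here.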