Creation and delivery of "RealVR" experiences essentially consists of four main steps: capture, processing, representation, and rendering. In this chapter, we present, compare, and discuss two recent end-to-end approaches, Parallax360 by Luo et al. and MegaParallax by Bertel et al. Both propose complete pipelines for RealVR content generation and novel-view synthesis with head-motion parallax for 360° environments. Parallax360 uses a robotic arm to capture thousands of input views on the surface of a sphere. Based on precomputed disparity motion fields and pairwise optical flow, novel viewpoints are synthesized on the fly using flow-based blending of the nearest two to three input views, which provides compelling head-motion parallax. MegaParallax proposes a pipeline for RealVR content generation and rendering that emphasizes casual, hand-held capturing. The approach introduces view-dependent flow-based blending to enable novel-view synthesis with head-motion parallax within a viewing area determined by the field of view of the input cameras and the capturing radius. We describe both methods, discuss their similarities and differences in the corresponding steps of the RealVR pipeline, and show selected results. The chapter ends by discussing advantages and disadvantages as well as outlining the most important limitations and future work.

References

Adelson, E.H., Bergen, J.R.: The plenoptic function and the elements of early vision. In: Computational Models of Visual Processing. MIT Press (1991)
Anderson, R., et al.: Jump: virtual reality video. In: Proceedings of SIGGRAPH Asia, Article 198
Bertel, T., Campbell, N.D.F., Richardt, C.: MegaParallax: casual 360° scene representation for head-motion parallax.
McMillan, L., Bishop, G.: Plenoptic modeling: an image-based rendering system. In: Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH)
Mildenhall, B., et al.: Local light field fusion: practical view synthesis with prescriptive sampling guidelines.
Overbeck, R.S., Erickson, D., Evangelakos, D., Pharr, M., Debevec, P.: A system for acquiring, compressing, and rendering panoramic light field stills for virtual reality.
Parra Pozo, A., et al.: An integrated 6DoF video camera and system design.
Peleg, S., Ben-Ezra, M., Pritch, Y.: Omnistereo: panoramic stereo imaging.
Richardt, C., Pritch, Y., Zimmer, H., Sorkine-Hornung, A.: MegaStereo: constructing high-resolution stereo panoramas.
Schroers, C., Bazin, J.C., Sorkine-Hornung, A.: An omnistereoscopic video pipeline for capture and display of real-world VR.
Schönberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR)
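Both approaches synthesize novel viewpoints by warping and blending nearby input views along precomputed optical flow. The following Python sketch illustrates the general idea of flow-based blending only; it is not the authors' implementation. The nearest-neighbour sampling, the grayscale images, and the single blend weight `alpha` are simplifying assumptions made here for brevity.

```python
import numpy as np

def warp_with_flow(img, flow, t):
    """Backward-warp `img` by a fraction `t` of an optical flow field.
    img: (H, W) grayscale image; flow: (H, W, 2) pixel displacements."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sample positions shifted by the scaled flow (nearest-neighbour lookup,
    # clamped at the image border).
    sx = np.clip(np.round(xs + t * flow[..., 0]).astype(int), 0, W - 1)
    sy = np.clip(np.round(ys + t * flow[..., 1]).astype(int), 0, H - 1)
    return img[sy, sx]

def flow_blend(left, right, flow_lr, flow_rl, alpha):
    """Synthesize an in-between view: warp both neighbouring input views
    toward the novel viewpoint, then cross-fade with the view-dependent
    weight alpha in [0, 1] (0 = left view, 1 = right view)."""
    warped_l = warp_with_flow(left, flow_lr, alpha)         # partial warp toward right
    warped_r = warp_with_flow(right, flow_rl, 1.0 - alpha)  # partial warp toward left
    return (1.0 - alpha) * warped_l + alpha * warped_r
```

In a full pipeline the flow fields would come from a pairwise optical-flow step during processing, and `alpha` would be derived per frame from the viewer's head position relative to the two nearest captured views.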