Hi all,
I need to bring this thread back up again to get further on my road to 3D+Photo=VR.
These things are simple:
- matching 3D objects into single photos
- 360-degree renderings to use in VR (browser or e.g. Samsung Gear VR)
- using 360-degree panoramic photos as a backdrop and/or for illumination/reflection (LDR/HDR) in larger exterior scenes or for product visualization (cars, objects, whatever)
What about interior panoramic photos with mapped 3D objects?
I came across a studio from Switzerland doing archviz, mapping 3D objects into 360-degree photos for VR; see this example:
http://www.designraum.ch/interaktiv-reader/tag/Panorama+360%C2%B0.html (look for project "Seewürfel")
So basically this seems obvious to me: they use the precise floor-plan data to position the interior objects, as well as the exact XYZ position of the camera where the panoramic photo was taken. If you then render a 360-degree full spherical image from that position (with a backdrop, or with alpha for later compositing), you shouldn't have to worry about FOV at all (I'm a little unsure about this point, just intuitive guessing).
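My FOV intuition, as far as I can tell, comes down to this: an equirectangular render covers every direction around the camera at once, so there is no field of view to match, only the camera position. A minimal Python sketch of the direction-to-pixel mapping (function name and resolution are mine, just for illustration):

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a 3D view direction to (u, v) pixel coordinates in a
    2:1 equirectangular panorama. No FOV is involved: every
    direction around the camera maps to exactly one pixel."""
    theta = math.atan2(x, z)                              # longitude, -pi..pi
    phi = math.asin(y / math.sqrt(x * x + y * y + z * z)) # latitude, -pi/2..pi/2
    u = (theta / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - phi / math.pi) * height
    return u, v

# Looking straight ahead lands in the exact center of the panorama:
u, v = dir_to_equirect(0.0, 0.0, 1.0, 4096, 2048)  # -> (2048.0, 1024.0)
```

So as long as the rendered panorama is taken from the same XYZ as the photo, the two line up pixel for pixel, regardless of how the VR viewer later crops its view.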
BUT
this works for this specific purpose only.
What I'd like to achieve is to take a full spherical image of a room (I am doing that already, the manual way: stitching bracketed images into 2:1 equirectangular panoramas with about 12 EV of dynamic range, 32-bit HDR), develop some design (e.g. built-in furniture), and use the rendered composition in VR as previz and as the final presentation, showing how it would look if it were built in place.
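For what it's worth, the bracket-merging step I do manually can be sketched in a few lines. This is a generic hat-weighted merge assuming linear input values, not any particular stitcher's algorithm; the function name and weighting choice are my own:

```python
import numpy as np

def merge_brackets(images, exposure_times):
    """Merge bracketed LDR exposures (linear floats in 0..1) into a
    32-bit radiance map. A hat-shaped weight favours well-exposed
    pixels and down-weights values near the clipped ends."""
    images = [np.asarray(im, dtype=np.float32) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * im - 1.0)  # weight: 1 at mid-gray, 0 when clipped
        num += w * (im / t)               # each exposure's radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)    # 32-bit float radiance map
```

With real camera files you'd first linearize the response (or use something like OpenCV's Debevec calibration), but the principle is the same.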
Any tips on how to achieve this?
I might just be failing to see the forest for the trees.
I have a perfect reference cube of 1x1x1 meter, btw. But I guess this is of no use with stitched photos, as has been mentioned before.
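Then again, maybe the cube could still help with placing the virtual camera: knowing the real edge length and how many horizontal pixels that edge spans in the panorama gives a rough camera-to-cube distance. A sketch of the idea (my own hypothetical helper, with the assumptions noted in the comments):

```python
import math

def distance_from_cube(edge_m, pixel_span, pano_width_px):
    """Estimate camera-to-object distance from the apparent size of a
    known edge in a 2:1 equirectangular panorama.
    Assumptions: the edge is roughly perpendicular to the view
    direction and sits near the horizon line, where equirectangular
    distortion is smallest."""
    # Horizontal pixels map linearly to longitude: full width = 2*pi radians.
    angle = pixel_span / pano_width_px * 2.0 * math.pi
    # Half the edge and half the angle form a right triangle with the camera.
    return (edge_m / 2.0) / math.tan(angle / 2.0)
```

Not precise, but perhaps enough to anchor the render camera before fine-tuning by eye.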
Thanks in advance, best regards,
Niko