Showing results for tags 'camera matching'.

Found 5 results

  1. Hello, I am looking for some advice on the production of verifiable photomontages for use in landscape impact assessment. We use 3ds Max to produce photomontages of proposed plant developments for the energy industry. We model the development and then match it as closely as we can to a background image, which can be either a single photograph or a stitched panorama. The field of view we have to cover is anything up to 180 degrees. These images are not for interactive QTVR/Panotour-type output; the end product is flat prints for public exhibition and for environmental impact assessment reports. These photomontages are required to be verifiable and to match the geometry and perspective of the camera used to take the stitched images.

     When matching to a wide-angle panoramic stitch we are struggling, as we cannot find a way of getting 3ds Max to produce a projection output that matches the background image accurately. Does anyone know of any techniques, scripts or plug-ins that would allow 3ds Max to project/match a model correctly against a panoramic image? Or is there any other programme that can achieve this? We know it is technically possible, computing-wise, because we use a programme called 'Windfarm' for wind turbine site assessments. It allows you to match a model and DTM to stitched panoramic images of up to 180 degrees, and it also takes into account earth curvature and atmospheric refraction. Unfortunately it cannot import custom, non-wind-turbine models.

     Some methodologies mention V-Ray with the physical camera set to cylindrical projection. We have downloaded the demo and are finding that the 'Warped spherical (old style)' projection creates a nearly convincing effect; however, the overall width of the projection does not quite fit the position of existing objects in the panoramic image. This may be because the GPS measurements are not accurate enough for the purpose, so we are also trying the '.lens' file facility in the demo to see if that will help. Does anyone know if V-Ray is the correct answer to this image-matching problem? If so, can you offer any advice that should, in theory, make V-Ray match exactly the geometry and perspective of a wide-angle stitched panoramic photo? (A sketch of the cylindrical-projection maths is included after this list.) Thanks in advance for any help.
  2. Hi guys. Can anyone advise me on whether it is still possible to match a 3D scene to a background photograph in Cinema 4D using points rather than perspective lines? 3ds Max uses 'CamPoints', which are quite effective (see this video), and I'm aware that before the camera calibration tag was introduced in R14 there was a third-party plugin for C4D from an outfit called Pano4D that also had this functionality (check out from about 2:25 onwards in this video). The Pano4D plugin seems to be discontinued now and their website is inactive.

     The current perspective-line system in the camera calibration tag may work well enough for basic 3D compositing, but I've found it to be very unreliable for accurately compositing architectural scenes, even into photographs that have identifiable x, y and z axes. And in a situation where I have known/surveyed points in the photograph but no perspective lines, it is no use at all. If anyone knows of a current plugin for C4D that can match photos using points, or has another reliable methodology for this that they're willing to share, I'd love to hear from them (there is a point-based solver sketch after this list). Thanks! Jeremy
  3. I am doing a quick project for a friend and he supplied me with an image of his house that he took with his iPhone. When viewed in the typical programs there is nothing amiss about the image, but when I piped it into Max I got some pretty crazy stretching. In terms of bringing the image in and matching the safe frame, I did that correctly, as I do it all the time, so I started investigating the "Pixel Aspect" setting in Max. Playing with this value made things look better, but I couldn't be sure which value was correct. Here is what I did:

     The display aspect ratio (DAR) of the phone is 1136 x 640, so 1136/640 = 1.775. The storage aspect ratio (SAR) of the image is 2048 x 1536, so 2048/1536 = 1.333. According to Wikipedia, the pixel aspect ratio (PAR) is found by PAR = DAR/SAR, or in my case PAR = 1.775/1.333 = 1.332. This value of 1.332 did nothing to fix my problem in Max, nor did its inverse. When I opened the image in Photoshop (where it looked correct, mind you) and applied a pixel aspect correction, the image looked stretched. I saved this out, piped it into Max again and, voila, the image looked correct. The Max settings are set to a PAR of 1.0.

     The piece that confuses me is why Max knows something the other image viewers did not. Why were the other viewers able to auto-correct and not Max, and why does the correction in Max not solve the problem the way it did in Photoshop? What am I missing? I have solved the problem, but I'd like to know why (a short worked version of the PAR arithmetic follows after this list). Thank you
  4. Hello, I'm trying to match up a standard camera with a V-Ray Physical Camera in 3ds Max. I've been able to get them to within about a pixel of each other, but as the image below shows, there is a blue outline when compositing in the separate FumeFX layer, caused by the slight difference between the cameras. I've tried tweaking the camera's FOV very slightly, but I still get the same result. I also tried using Lele's script to convert a standard camera, but that gives me a huge difference. (See the FOV/film-gate sketch after this list.) http://imgur.com/Vky3WNv
  5. Has anybody ever camera matched to an iPhone 5 photograph? I'm happy doing camera matching in Max, but I need to know what the CCD (sensor) size is in order to get any accuracy (there is a sensor-size sketch after this list). Any help?
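
A note on post 1: a stitched panorama approaching 180 degrees is normally a cylindrical (or spherical) projection, which no standard perspective camera can reproduce across that angle. The sketch below is a minimal illustration, assuming the stitch is an ideal cylindrical projection centred on the camera position; the function name, coordinate convention (z up, as in 3ds Max) and all values are illustrative, and a real stitch may deviate from this model depending on the stitching software and lens distortion.

```python
import math

def cylindrical_project(point, cam_pos, pano_width, pano_height,
                        hfov_deg=180.0, heading_deg=0.0):
    """Map a 3D point (x, y, z; z up) onto an ideal cylindrical panorama.

    Assumes the stitch spans `hfov_deg` horizontally and is centred on
    `heading_deg`. Returns pixel (u, v), or None if the point falls
    outside the covered azimuth range.
    """
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]

    # Azimuth relative to the panorama's central heading, wrapped to [-180, 180).
    azimuth = math.degrees(math.atan2(dx, dy)) - heading_deg
    azimuth = (azimuth + 180.0) % 360.0 - 180.0
    if abs(azimuth) > hfov_deg / 2.0:
        return None

    # Elevation above the horizon.
    horiz_dist = math.hypot(dx, dy)
    elevation = math.atan2(dz, horiz_dist)

    # Cylindrical mapping: u is linear in azimuth, v follows tan(elevation)
    # scaled by the cylinder radius in pixels (pixels per radian).
    px_per_deg = pano_width / hfov_deg
    px_per_rad = px_per_deg * 180.0 / math.pi
    u = (azimuth + hfov_deg / 2.0) * px_per_deg
    v = pano_height / 2.0 - math.tan(elevation) * px_per_rad
    return u, v
```

Because u here is linear in azimuth while a perspective render maps azimuth through a tangent, a standard camera can only ever match such a stitch over a narrow slice, which is consistent with the need for a cylindrical/spherical camera type in the renderer.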
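Regarding post 2: matching a camera from known or surveyed points rather than perspective lines is essentially the perspective-n-point (PnP) problem. Below is a minimal sketch using OpenCV's solvePnP; the surveyed coordinates, pixel positions, image size and focal-length guess are all made up for illustration, and the recovered pose would still have to be transferred into C4D (or Max) by hand or by script.

```python
import numpy as np
import cv2

# Hypothetical surveyed points (world metres) and where they appear in the
# photograph (pixels). Four or more well-spread correspondences are needed.
object_points = np.array([
    [0.0, 0.0, 0.0],
    [5.2, 0.0, 0.0],
    [5.2, 0.0, 3.1],
    [0.0, 4.8, 0.0],
    [0.0, 4.8, 3.1],
], dtype=np.float64)

image_points = np.array([
    [612.0,  980.0],
    [1540.0, 1012.0],
    [1538.0,  404.0],
    [402.0,  1104.0],
    [398.0,   422.0],
], dtype=np.float64)

# Rough intrinsics: principal point at the image centre and a guessed focal
# length in pixels (refine later if EXIF data or a calibration is available).
w, h = 4000, 3000
f_guess = 3200.0
camera_matrix = np.array([[f_guess, 0.0, w / 2.0],
                          [0.0, f_guess, h / 2.0],
                          [0.0, 0.0, 1.0]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)    # rotation matrix (world -> camera)
    cam_pos = -R.T @ tvec         # camera position in world space
    print("camera position:", cam_pos.ravel())
```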
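Regarding post 3: if the PAR formula is read strictly, the likely catch is that the DAR in PAR = DAR/SAR should be the aspect ratio the image itself is meant to be displayed at, not the ratio of the phone's screen. A 2048 x 1536 photo is intended as 4:3, which gives PAR = 1.0 (square pixels) and matches the value Max ended up needing. A short worked version, assuming that reading:

```python
# Pixel aspect ratio: PAR = DAR / SAR.
# For a still photo, the display aspect ratio is the ratio the image itself
# is meant to be shown at -- not the ratio of the phone's screen.

sar = 2048 / 1536            # storage aspect ratio of the file, ~1.333
dar_image = 4 / 3            # a 2048 x 1536 photo is intended as 4:3
dar_screen = 1136 / 640      # the iPhone screen ratio, ~1.775 (not relevant here)

print(dar_image / sar)       # 1.0  -> square pixels, matching Max's PAR of 1.0
print(dar_screen / sar)      # ~1.33 -> the value tried above, which did not help
```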
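Regarding post 4: the rendered FOV depends on both the focal length and the film gate (aperture width), so two cameras with the same lens value can still disagree by a fraction of a degree if their gate widths differ, and that is enough to leave a thin outline when compositing a separate pass. A minimal sketch of the relationship; the 36 mm and 35 mm gate values are illustrative only.

```python
import math

def horizontal_fov(focal_length_mm, film_gate_mm):
    """Horizontal field of view (degrees) of an ideal pinhole camera."""
    return 2.0 * math.degrees(math.atan(film_gate_mm / (2.0 * focal_length_mm)))

# The same 40 mm lens gives different FOVs if the two cameras
# disagree about the film gate / aperture width.
print(horizontal_fov(40.0, 36.0))   # ~48.5 deg with a 36 mm gate
print(horizontal_fov(40.0, 35.0))   # ~47.3 deg with a 35 mm gate
```

Checking that both cameras report the same film gate / aperture width, rather than nudging the FOV by eye, is usually the more reliable way to close a sub-pixel gap like this.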
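Regarding post 5: the figures commonly quoted for the iPhone 5 rear camera are roughly a 1/3.2-inch sensor (about 4.54 x 3.42 mm) with a 4.12 mm lens, but treat these as approximations and check them against the EXIF data of the actual photograph. A small sketch of how such values translate into a horizontal FOV, or into a focal length that gives the same horizontal FOV on a 36 mm gate:

```python
import math

# Approximate, commonly quoted figures for the iPhone 5 rear camera --
# verify against the EXIF data of the actual photo before relying on them.
sensor_width_mm = 4.54      # ~1/3.2" sensor, roughly 4.54 x 3.42 mm
focal_length_mm = 4.12      # real focal length as stored in EXIF

# Option A: use the real values directly -- set the renderer's aperture
# width to the sensor width and the camera lens to the EXIF focal length.
hfov = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))
print("horizontal FOV:", round(hfov, 2), "deg")        # ~57.7 deg

# Option B: keep a 36 mm aperture width and scale the focal length so the
# horizontal FOV comes out the same.
equiv_focal = focal_length_mm * 36.0 / sensor_width_mm
print("equivalent focal length:", round(equiv_focal, 1), "mm")   # ~32.7 mm
```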