ryannelson

Members
  • Posts

    132
  • Joined

Personal Information

  • Country
    Canada


  1. This is the right answer. Most of my inquiries come through Instagram.
  2. Seems pretty low-resolution, but if you're looking for RT stuff it's probably sufficient. This won't be TRUE RT because you're still limited to a single viewpoint - where you took the spherical photo. You're still looking at a static photo. This kind of VR is still technically a still photo with an HTML5 twist. To do a spherical photo, you need a special tripod head and a half-decent camera (full-frame preferably). This is the one I have: http://www.manfrotto.ca/product/24329.31708.1017111.0.0/303SPH/_/WTVR_Spherical_Panoramic_Pro_Head Take some measurements from where you're shooting the photo and use those as reference when you render. Bring a long tape measure...
  3. Matching the camera position is important, otherwise your 3D building will not be in the right spot. You won't be matching the lens focal length either, because your viewport will not be matching a single photo, but rather a spherical stitch. Unless you go the route of camera-matching a single photo from the spherical shot-set, rendering out an image you comp into that one photo, and then using that in the spherical stitch. In which case, you'll have some weird-looking fisheye render - and your building will likely span across more than one image. Don't stitch a small panorama in order to have a background for your building either; that'll confuse your stitching software when you go to do the spherical stitch. Keep your set as individual images. When I shoot spherical images for VR I use an 8-15mm lens at 15mm on a full-frame body. 8mm crops the corners into a round photo, which doesn't stitch. 15mm is SUPER wide still...
  4. In a correct 360 photo, the horizon line should be flat and centred - your camera needs to be dead level. Atmospheric distortion, aka atmospheric perspective, refers to the haze you get when things are far away - so sure... but not really. Just match your camera position - as perfectly as possible - and add the 360 photo as the background. Render as 360, then run it through whatever VR software you want.
  5. I agree too that you are being too precious. If you're concerned about your client going to another company, you have to build exclusivity into the contract. That being said, you might find that a hard sell. Don't take it personally or as a reflection of your work if your client goes with someone else. Always work with clients that pay on time too haha
  6. If I were the teacher, I'd make them tell me what the differences are and proceed to correct their own image. The differences between the two are clear, so your question is self-defeating. If you are interested in learning how to create a compelling image, questions are better directed at how to achieve that, not why. But to answer your question... light, materials, composition, and technical quality make some images better than others - objectively speaking.
  7. I think this is a contradictory statement here. If you're starting a business and spend money on hardware, the exact risk is that you don't make money with it. I was fortunate enough to have a client front me the cash to buy a good PC to get going. I'm only running one machine (i7 5820K @ 4.2GHz and 32GB DDR4 RAM) and it's doing a great job; it was around $3k Canadian. As a freelancer, I'll work on the image after work and let it render overnight and through the day if necessary. Works out quite well, actually.
  8. I've run into this problem with V-Ray for Rhino before. What it turned out to be was that the lens effects are not supported by distributed rendering. If you're not using DR, then I'm not sure...
  9. ryannelson

    Vray For Rhino

    I strongly recommend NOT using emissive materials for lighting in V-Ray for Rhino, as you cannot control the light subdivisions. Emissive materials are for highlights, like digital displays. Use actual lights for lighting scenes. If IES lights are too complicated, use a simple rectangular light and frame it out.
  10. I think all visualizers should be photographers but not all photographers should be visualizers. The best visualizations are often strong because of excellent photographic quality - not to be confused with photographic realism. Framing, lighting, depth of field, ambiance etc... all photography skills. I do both and I think my renders are stronger because of experience in shooting.
  11. The "hairy arm technique", as I like to call it, only goes so far if you're working directly for someone. When you take the hairy arm out after they ask, they just ask you to fix the next thing. I empathize with you greatly on lighting; I've had to re-render or absurdly edit the lights/shadows in order to unrealistically bring up shadows or tone down highlights to homogenize the scene. No one appreciates chiaroscuro anymore! I think part of the problem is that most people don't understand the dynamic range difference between camera sensors and eyeballs. Just save out as EXR and play around with that.
  12. I have to give this railclone plugin a chance one of these days... I cut my parametric teeth with Grasshopper but have since moved to 3ds Max for rendering. Railclone seems pretty sweet too!
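The "SUPER wide" claim about 15mm on full-frame in post 3 can be sanity-checked with the standard angle-of-view formula. A minimal sketch (the 36x24mm sensor size is the usual full-frame assumption; note the formula is for rectilinear lenses, so it is only a rough guide for an 8-15mm fisheye):

```python
import math

def fov_deg(focal_mm, sensor_mm):
    """Angle of view for a rectilinear lens (thin-lens approximation)."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Full-frame sensor: 36 mm wide, 24 mm tall
h = fov_deg(15, 36)  # horizontal angle of view
v = fov_deg(15, 24)  # vertical angle of view
print(f"15mm on full-frame: {h:.1f} x {v:.1f} degrees")  # ~100.4 x 77.3
```

At roughly 100 degrees horizontal per frame, a full spherical set still needs multiple overlapping shots per row plus zenith/nadir coverage, which is why the set has to stay as individual images for the stitcher.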
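Post 4's point about a dead-level camera follows directly from how equirectangular 360 images map directions to pixels: pitch 0 always lands on the vertical centre row, so a level horizon comes out flat and centred. A small illustrative sketch (the function name and the 4096x2048 resolution are my own choices, not from the thread):

```python
def equirect_uv(yaw_deg, pitch_deg, width, height):
    """Map a view direction to pixel coords in an equirectangular 360 image.
    Pitch 0 (a perfectly level view) always lands on row height/2."""
    u = (yaw_deg % 360) / 360 * width          # yaw spans the full image width
    v = (0.5 - pitch_deg / 180) * height       # pitch 0 -> vertical centre
    return u, v

# Any yaw at pitch 0 hits the same centre row: the horizon is flat and centred
print(equirect_uv(0, 0, 4096, 2048))    # (0.0, 1024.0)
print(equirect_uv(180, 0, 4096, 2048))  # (2048.0, 1024.0)
```

If the camera is tilted, the horizon becomes a sine wave across the stitched image instead of that straight centre line, which is why it is obvious in a "correct" 360 photo.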
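The EXR advice in post 11 works because EXR stores linear radiance values well above display white, so you can re-grade shadows and highlights after rendering instead of re-rendering. As one hedged illustration (a basic Reinhard curve, my choice of operator, not something the post specifies), bright values roll off smoothly instead of clipping:

```python
def reinhard(x, exposure=1.0):
    """Simple Reinhard tone map: compresses unbounded linear radiance
    into [0, 1) so bright areas roll off instead of clipping to white."""
    v = x * exposure
    return v / (1.0 + v)

# Linear HDR values far above 1.0 keep distinct tones after mapping
for radiance in (0.18, 1.0, 8.0, 64.0):
    print(f"{radiance:6.2f} -> {reinhard(radiance):.3f}")
```

With an 8-bit render those highlight values would all have clipped to 1.0 and no amount of grading could separate them; the EXR keeps the full range so the tone curve can be chosen in post.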