
yp

Members
  • Posts

    70
  • Joined

Personal Information

  • Country
    Germany


  1. You can map textures to objects in SU and create UVs that way: paint the objects (better: a face selection) with image textures and set the proper dimensions in each texture's settings (real-world scale). The objects will come into 3DS already mapped and you can change materials there, e.g. give the facade a brick material with a 5x5 m tile size in SU and swap it in 3DS. It's an easy way to rotate textures on faces too. Note: plain colors have NO UV mapping in SU, therefore no UV mapping on your objects in 3DS.
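     On the 3DS side, swapping the material on an imported, pre-mapped object is only a few lines of MAXScript - a rough sketch (the bitmap path and object name are just placeholders):

         -- assign a brick material to an object that brought its UVs in from SU
         mat = StandardMaterial name:"Brick"
         mat.diffuseMap = BitmapTexture filename:@"C:\maps\brick.jpg" -- placeholder path
         mat.diffuseMap.coords.U_Tiling = 1.0 -- adjust tiling to the UVs from SU
         mat.diffuseMap.coords.V_Tiling = 1.0
         $Facade.material = mat -- hypothetical object name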
  2. Hi guys, this is something I came across multiple times in the past, always wondering what was happening. It never really mattered drastically for my workflow, which is mainly using SU for 80% of the modeling and pre-texturing, then switching to 3DS for further processing, adding props, scattering, final texturing and rendering. Classic for some people, I would say.
     Sometimes objects imported through (at least native) Sketchup .skp files would end up having double vertices. Sometimes I didn't even notice it, sometimes it mattered and I fixed it by welding all vertices with a low threshold. But today I started investigating, because in a recent project I DO run into trouble when it happens - so I had to find out. Turns out it's actually happening on objects which contain multiple materials in Sketchup (-> results in a multi-material in 3DS / unique MatIDs) >> and only where those materials meet! Weird, but narrowed down, that's something...
     I can't change anything on the SU side and I can't use any other export format. I'm using SU 2016 (saving out a Ver. 7 file to import it in Max) and 3DS Max 2016. Any idea what causes this, how to avoid it, or how to batch process it in 3DS? Well, I might find a script that welds all vertices of all scene objects at a given tolerance - should be easy, something like the sketch below - still open for guesses. Thanks Niko
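     A minimal MAXScript sketch of such a weld-all script (the tolerance value is an assumption, adjust it to your import scale):

         -- weld all vertices of every editable-poly object in the scene
         tolerance = 0.01 -- scene units, placeholder value
         for obj in geometry where classOf obj == Editable_Poly do
         (
             obj.weldThreshold = tolerance
             polyop.weldVertsByThreshold obj #{1..(polyop.getNumVerts obj)}
         )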
  3. Well, THAT was a little weird, actually solved it myself with a little DOH... I was looking in the wrong area. Naming is set automatically from the object's name plus the naming extension in the element's name field in the bake textures rollout, e.g. ObjectXY_AO.png. As a sidenote, the macroscript macro_baketextures.mcr (in Max's macroscripts folder) needs to be changed from filetype .tga to .png in this case. Thanks anyway guys and happy modeling! Niko
  4. Not that I know of - but with a well-subdivided plane and Editable Poly > Paint Deformation you can at least start shaping some green. The one-click solution for designing golf courses is probably not in that much demand yet.
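     A rough MAXScript sketch of that starting point (size and segment counts are arbitrary):

         -- dense plane ready for Editable Poly > Paint Deformation sculpting
         p = Plane length:200 width:200 lengthsegs:150 widthsegs:150 name:"Terrain"
         convertToPoly p -- Paint Deformation lives in the Editable Poly rollouts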
  5. Hi guys, I came across a strange behaviour and can't find the reason for it: normally, Unwrap UVW > Pack UVs (+ normalize / rescale) results in a more-or-less well-packed single UV tile. Totally sufficient for quick mapping of rectangular objects. Now I have this drawer object which always ends up using only about one half of the UV tile when I apply a Chamfer modifier (strangely enough it exceeded the half once, but never used the full tile) - which means losing a lot of texture space for detail. Please see the attached image. Any idea why? Maybe it's just me. The object geometry should be perfectly clean, no internal faces or any strange fancy mismodeling. The geometry comes from Sketchup, but this normally works totally fine. (SU 2016 > Max 2016) Big thanks, Niko
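     For reference, the scripted equivalent of what I'm doing, in case anyone wants to reproduce it (the 0.01 spacing is just my guess at the default):

         -- add an Unwrap UVW modifier and pack all UVs into one tile
         obj = selection[1]
         uw = Unwrap_UVW()
         addModifier obj uw
         max modify mode -- unwrap methods need the modify panel active
         modPanel.setCurrentObject uw
         -- pack: method 0 (linear), 0.01 spacing, normalize on, rotate on, fill holes off
         uw.pack 0 0.01 true true false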
  6. Hi guys, how can I set up a custom render naming template to render out baked AO maps for various objects and save them with a custom name derived from the object's name inside Max? The Max help only offers these pretty limited options: the root name of the MAX scene file, the name of the active camera or viewport, the month, the day, the year. The object's name is what I'm missing in this list. Any idea how to solve this in native Max, with a plugin (free ideally, but not necessarily) or with a smart workaround? Thanks a lot, best, Niko
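     For the naming part alone, a little MAXScript helper would do (folder and suffix are just placeholders):

         -- build an output path from the object's scene name
         fn bakeName obj suffix:"AO" ext:".png" =
             (@"C:\bakes\" + obj.name + "_" + suffix + ext)
         -- e.g. bakeName $Drawer01 --> "C:\bakes\Drawer01_AO.png"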
  7. Ach *$!#, shoot. The whole reply gone after sending... To sum it up: yes, you're totally right, thanks for that insight! I didn't know the VFB can't save out multichannel EXR - just always wondered why it didn't. Good to know about the "workaround" of saving a VRIMG and converting it to EXR to get it multichanneled. I actually had some troubles with EXR in the past, but I guess this time it should be fine. Normally I use EXR for all stills anyway. LDR done, sticking with EXR now. Thanks and best Niko
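     For anyone finding this later, the conversion can be kicked off right from MAXScript - a sketch (paths are placeholders; it assumes vrimg2exr.exe from the V-Ray install is on the PATH):

         -- convert a raw .vrimg into a multichannel OpenEXR
         HiddenDOSCommand "vrimg2exr \"C:\\renders\\frame_0001.vrimg\" \"C:\\renders\\frame_0001.exr\"" donotwait:false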
  8. Hi guys, any idea if it is possible to save render elements out multiple times in various formats? I'll be rendering an animation tomorrow and am not 100% sure yet - maybe I will get myself a copy of ArionFX for AE for HDR processing and would therefore save diffuse and some other channels like reflection in EXR format, while color IDs for later RGB masking could remain 8 bit. But I'd like to be on the safe side, time-wise and even file-size-wise: I'd like to save the whole animation's frames out in two versions, one in low-dynamic-range PNG / JPG (safe and small) and the other in 32-bit EXR. Even if it's only the beauty pass, that would work I guess. Any idea how this can be done (if at all)? Maybe I'm missing a point - is there something like a VRIMG batch process so I could extract file types later? Thanks in advance, good night from Berlin Niko
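     In case it helps someone, batch extraction later would look something like this in MAXScript, using V-Ray's vrimg2exr converter (the folder is a placeholder; assumes the tool is on the PATH):

         -- convert every .vrimg in a folder to multichannel EXR
         for f in getFiles @"C:\renders\*.vrimg" do
         (
             local exrName = (getFilenamePath f) + (getFilenameFile f) + ".exr"
             HiddenDOSCommand ("vrimg2exr \"" + f + "\" \"" + exrName + "\"") donotwait:false
         )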
  9. Yeah, no problem Dylan, I know it's a lot to discover and sort out. It might look like a bit of a bombing to tell you about 32 bit, tonemapping and so on - but actually it's not THAT much, and it will help you a lot to know those things and the techniques to handle them and use them for yourself. To clarify your last message: PNG files (as well as many other formats, even the ones you mentioned - you just didn't save them with a transparent background) CAN contain alpha information; they don't necessarily do. Just as a sidenote. A JPG, e.g., will always contain your background - or a different color - but never be transparent in, e.g., Photoshop. But to answer your question: Peter Guthrie's HDRIs are only a half sphere; the lower half of the image is black. That is what you see. Still, depending on which image you're using, check whether your VRAY dome light is set to full or half sphere (half is the default, I guess). Solution: plant more trees to cover up the holes, or place images behind as mentioned by Francisco.
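     If you want to check or force that by script, something like this should do - the property names are from V-Ray's MAXScript exposure, so verify them in the listener for your build:

         -- set every V-Ray dome light to full sphere
         for l in lights where classOf l == VRayLight and l.type == 1 do -- 1 = dome type, assumption
             l.dome_spherical = true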
  10. No, I didn't. Nothing to do with VRAY - it's basic photography. Take a picture of whatever object you like against a visible sky: the object will most likely be underexposed with a visible sky, or visible but with an overexposed sky (as long as you're not using a preprocessed HDR function in your camera). That's exactly what happens in VRAY. Basics of photography. Once again, choose one of the many possibilities mentioned. I don't know if you've looked into the 32-bit tonemapping method I proposed. There are some free tonemapping tools around; maybe some even process EXR or HDR files, and they probably do process unclamped 32-bit TIF. Another approach is processing your rendering within the VRAY frame buffer - you can probably recover information in the overexposed areas there as well. Then it's not a pure white outline around your matted image, but something closer to the later comp. If there are mountains (as I read), a blue outline may still not be the best choice, so go for the other methods instead (matting options in PS, shrinking the mask selection by a pixel, or others).
  11. Sure it is - because you're swapping your background with an image of a different exposure. Your rendering's background will probably be way more exposed. No mystery involved. Use the techniques mentioned, or stick with the rendered background - which, by the way, is a more natural image than comping in a flat blue sky. Otherwise use a 32-bit workflow and tonemapping methods to turn down the background's exposure, then keep it or comp another one in. Depending on the lens length you're using, swapping the background can get unnatural as well.
  12. ...halo around "objects", or around alpha-mapped images like leaves? A picture would help. If it's leaves, then a background closer to the later comp color is better - but mainly check that the image with the alpha channel is not set on a pure white background; the white line will bleed through otherwise.
  13. Export from SU as 2D DWG at 1:1 and set the scale in AutoCAD or similar; it should be correct then. If you start setting the scale in SU, you'll most likely have to rescale things later (which works too, just measure and calculate). If you still aim for the SU linework results, you can just export from SU as PDF, import it into Illustrator, Corel or the like and set the line values. It comes in with each intersection of lines dividing all elements into segments, which can be messy, but it does work pretty well in the end. There are many ways to achieve that: render with a toon style (e.g. for lineweight depending on depth), use plugins, or export natively from the software. Isometric views in SU need plugins - look for TIG's plugins; one needs to be executed through the Ruby console window with commands like axo3060 and the like. Props (trees, people...) are easier to place in the software if they are among other 3D objects; otherwise they can easily be placed later and set to the same line and fill values. Good luck, you'll make it.
  14. True, VraySky is sky-blue. Depending on your HDRI, it'd be blue anyway if your camera's white balance is not set up properly - IF you're using a VRay camera. So, one way or the other, if you have a "blue light source" (an intensely bright blue part of the sky), check the white balance to neutralize it.
  15. Hi all, I need to bring this thread back up again to get further on my road to 3D + photo = VR. These things are simple:
     - matching 3D objects into single photos
     - 360-degree renderings to use in VR (browser or e.g. Samsung Gear VR)
     - using 360-degree panoramic photos as backdrop and/or illumination / reflection (LDR / HDR) for larger exterior scenes or for product visualization (cars, objects, whatever)
     But what about interior panoramic photos with mapped 3D objects? I came across those guys from Switzerland doing archviz, mapping 3D objects into 360-degree photos for VR, see this example: http://www.designraum.ch/interaktiv-reader/tag/Panorama+360%C2%B0.html (look for the project "Seewürfel"). So basically this seems obvious to me: they use the precise floorplan data to position the interior objects, as well as the exact position (XYZ) of the camera where the panoramic photo was taken. If you then render a 360-degree full spherical image from this position (with backdrop, or alpha for later compositing), you wouldn't have to worry about FOV at all (I'm a little unsure about this point, just intuitive guessing). BUT this works for this specific purpose only.
     What I'd like to achieve is to take a full spherical image of a room (I am doing that already, the manual way, stitching bracketed images into 2:1 equirectangular panoramas with about 12 EV, 32-bit HDR), develop some design (e.g. built-in furniture) and use the rendered composition in VR as previz and final presentation, showing how it would look if it were built in place. Any tips on how to achieve this? I might just not be seeing the forest for the trees. I have a perfect reference cube of 1x1x1 m, btw, but I guess that is of no use with stitched photos, as has been mentioned before. Thanks in advance, best regards, Niko
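     For the 360 rendering part at least, this is roughly the V-Ray setup I'd script - the camera type index is an assumption for this build, so verify it in the listener:

         -- configure V-Ray for a 2:1 equirectangular 360 render
         vr = renderers.current -- assumes V-Ray is the active renderer
         vr.camera_type = 1 -- assumption: 1 = Spherical in this build
         vr.camera_overrideFOV = true
         vr.camera_fov = 360.0
         renderWidth = 4096
         renderHeight = 2048 -- 2:1 equirectangular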