
MasterZap

Members
  • Posts

    74
  • Joined

Personal Information

  • Country
    Sweden

10

Reputation

  1. You can't use an Orthographic view with a spherical environment - these are looked up based on direction, and in an Orthographic view, every pixel has the same direction. Turn on perspective (hit "P") and you should see results. /Z
  2. Yes yes, that is correct. I'm just telling you the UI is suboptimal: it implies that "rasterizer" is a subset of "scanline" (as you describe), but on a technical level this isn't really true; the rasterizer is actually something *different* from scanline, rather than a subset. The flag inside mental ray that enables the rasterizer was historically tied to the scanline flag, and that historical dependency was unfortunately replicated when the 3ds max UI was designed. /Z
  3. Rasterizer and proxies should work fine together. Rasterizer is not the same as traditional "scanline" mode. The UI in 3ds max is rather confusing on this point, making it seem as if rasterizer is a superset of "scanline". It's not. There isn't even a relation, really. Basically, mental ray has three totally distinct modes of handling primary rays:
     - raytracing
     - "scanline"
     - rasterizer
     Of these, "scanline" can get you into memory problems, and since mental ray raytracing is so blazingly fast, sometimes, maybe even often, raytracing for primary rays is faster than "scanline". So in most cases I suggest you turn scanline off, at least as soon as you start to run into memory issues. Do *not* confuse the rasterizer with scanline, though; the rasterizer has none of those memory issues. /Z
  4. A good friend of mine made SUPER grass by taking some grass geometry, making an mr proxy out of it (some 5 different variations of grass), and instancing the mr proxy some quadrillion times. Fast rendering, nice looking. (Tip: try the rasterizer for thin geometry.) /Z
  5. Aaaargh, colors specified in sRGB, how I hate this. Mkay, to fix your color, put it into a Utility Gamma/Gain shader, or re-calculate it to the PROPER color manually. First step is to get rid of the 0-255 range and get into the proper 0-1 range by dividing by 255. Then you raise that to the power of the gamma (2.2 in this case). This means that your color (R:188 G:157 B:129), when correctly fed into an A&D material, should be 0.511, 0.344, 0.223 /Z
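     The divide-by-255-then-gamma recipe above can be sketched in a few lines of Python (a minimal sketch; the plain 2.2 power is the simple approximation used in the post, not the piecewise sRGB curve):

     ```python
     # Convert an 8-bit sRGB component to the linear value an A&D
     # material expects: normalize to 0-1, then apply gamma 2.2.
     def srgb_to_linear(c8, gamma=2.2):
         return (c8 / 255.0) ** gamma

     color = (188, 157, 129)
     print(tuple(round(srgb_to_linear(c), 3) for c in color))
     # (0.511, 0.344, 0.223)
     ```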
  6. There is one thing you can try:
     - Make your (empty) proxy object.
     - Select it.
     - Go to the "Modify" panel.
     - Open the MAXScript listener (= hit F11).
     - Type in the lower half (white part of the window): $.flags = 4
     - Now click the "None" button, select your source object, and proceed as normal when creating a proxy.
     The "flags = 4" setting causes the proxy creation to skip a step (pretessellation) that is potentially memory consuming. Try this, it may help some. /Z
  7. Actually, you do need that. Or not the render per se, but the translation step, and the invocation of the geometry shader that writes the data (which is a "render" call). The fact that you also get an image out of the process is a bonus. The process (well, at least the major part of it that consumes time and memory) still needs to be done, since it is the "translation" process (to mental ray proxy format). You can't skip the render entirely, since the "render" really *is* the "creation process": the proxy is created by calling the renderer. But you can avoid the cost of actual pixels by simply zooming to an empty area before making the proxy, and unchecking the "zoom extents" button in the dialog that appears. Then, for all practical purposes, the ONLY thing that will happen is the translation step; the pixels you render are black (or background colored, but you get the idea). /Z
  8. That's probably the dumbest thing I've read in a long time, probably written by some 50+ luddite type of guy. Really. I'm a strong emoticon advocate (note: I don't condone l33tsp34k in formal communication) because I've been around so many cases where light humor was misunderstood because someone either forgot to put in a smiley, or because the recipient didn't understand smilies (this doesn't happen nowadays, when emoticons are almost universally graphical, but back in the day when someone didn't understand colon-right-paren, it could create some... issues). /Z
  9. Hah You guys are right, the "camera parameters to EV" calculation is wrong (it uses the ISO backwards). The "EV to camera parameters" direction, however, is correct. ISO 100, f/16 and shutter 1 should NOT be the same brightness as ISO 50, f/16 and shutter 2; it is shutter 0.5 that is the same. Remember the "sunny 16" rule: same "shutter speed" as "film speed" for f/16. So shutter and ISO should move in the *same* direction (remember the shutter value is the 1/value measurement) to give the "same value". But the EV is actually calculated wrong for any ISO other than 100! Luckily the EV isn't what is actually used; the "camera parameters" are what is actually used. Also, those wanting the "physical camera" should read here /Z
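     A quick sanity check of the claim above (a hedged sketch; `exposure` is my own illustrative helper using relative exposure proportional to ISO times exposure time over f-number squared, with shutter given in the 1/value convention from the post):

     ```python
     import math

     # Relative exposure: proportional to ISO and exposure time,
     # inversely proportional to aperture area (f-number squared).
     # "shutter_value" follows the 1/value convention: shutter 2 means 1/2 s.
     def exposure(iso, f_number, shutter_value):
         t = 1.0 / shutter_value
         return iso * t / f_number ** 2

     # ISO 100, f/16, shutter 1 matches ISO 50, f/16, shutter 0.5 (i.e. 2 s):
     print(exposure(100, 16, 1) == exposure(50, 16, 0.5))          # True
     # ...while ISO 50, f/16, shutter 2 (1/2 s) is two stops darker:
     print(math.log2(exposure(100, 16, 1) / exposure(50, 16, 2)))  # 2.0
     ```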
  10. I also made your life a lot easier, you can drop the "blogspot" bit. I.e. http://www.mentalraytips.com /Z
  11. On the contrary - I'd say that the Kelvin values you see when not using Gamma are the wrong ones. 8000K is nowhere near that deep blue. IMHO, the lighter one is the correct one. (Disclaimer: at least this applies to the mental ray implementation) /Z
  12. Make sure the person asking for 300 DPI really knows he needs 300 DPI, and actually ask about the LPI (Lines Per Inch), or better yet the PPI (Pixels Per Inch), to be totally sure. Just because your printer can print 300 DPI (DOTS per inch) does N O T mean you need a 300 PPI (PIXELS per inch) image.
     Why? Because a dot isn't a pixel. No, a *set* of dots builds a pixel, because in a printer a DOT can be either on or off, nothing in between. (Well, this sort of changes with some newer inkjets, sublimation printers, etc., but I am being very general here.) So for most classical printers, a grid of "dots" builds the halftoning pattern that creates the shades used to represent your pixels. If this is such a printer, it is pointless to resolve the image down to the DPI of the printer. Even half is pushing it. I would suggest 100 PPI is enough for such a 300 DPI printer.
     As a matter of fact, many professional printers are rated in LPI, which really is the "repetition rate" of the halftone pattern. Mind you, halftone dots can be split in half, so the "effective resolution" can be higher than the halftone pattern size, but not much larger, and it can really only resolve edges better than the halftone pattern size, not micro-detail in any meaningful way. Basically, if the actual resolution of your printing device really is 2400 DPI (like some pro magazine printers) then, yes, you may need 300 PPI renderings. Maybe. If even then.
     The sad part is that even the guys in charge of handling the print stuff rarely understand the difference between DPI, LPI and PPI. Not to mention that they tell you "This image should be 10 inches by 5 inches, at 300 DPI", and then when you send them an image of 3000 by 1500 pixels, they get back to you and say "Hey, your image was 72 DPI and way too big, you must re-render it..... we can't use it!" /Z
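     The "10 inches by 5 inches" arithmetic above is just physical size times the PPI you settle on (a minimal sketch; `render_size` is my own name for the helper):

     ```python
     # Pixel dimensions needed for a given print size at a chosen PPI.
     # Note the printer's DPI rating never enters this calculation.
     def render_size(width_in, height_in, ppi):
         return (round(width_in * ppi), round(height_in * ppi))

     print(render_size(10, 5, 300))  # (3000, 1500) - what "300 DPI" really asks for
     print(render_size(10, 5, 100))  # (1000, 500)  - often plenty on a 300 DPI
                                     # halftoning printer
     ```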
  13. You can do this. You simply put the displacement map into the cutout map as well. What you WANT to do is to threshold the cutout map. Just do what you did, but with one addition: for the cutout, pipe the displacement map through the "Output" shader, and set a curve with a sharp discontinuity on it. It should look like a stairstep: everything below a certain level should be 0, and everything above that level should be 1. /Z
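     The stairstep curve amounts to a simple threshold (a hedged sketch; the 0.5 cutoff is an arbitrary example, pick whatever level separates "cut away" from "keep" in your map):

     ```python
     # Stairstep mapping for the cutout: below the threshold -> 0
     # (fully cut away), at or above the threshold -> 1 (fully kept).
     def cutout_threshold(value, threshold=0.5):
         return 0.0 if value < threshold else 1.0

     print([cutout_threshold(v) for v in (0.1, 0.49, 0.5, 0.9)])
     # [0.0, 0.0, 1.0, 1.0]
     ```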
  14. ...and that is EXACTLY what the Round Corners option in the A&D material does! /Z
  15. The Bobo script I post about in my latest blog post is renderer agnostic (actually object agnostic - it simply makes "real instances" of whatever you give it for the PFlow object), so in theory it would even work with vRay. /Z