Interesting news announcement from Vray


Christopher Nichols


Interesting, though not the first such claim in the 3D and 2D graphics world. The text did not describe the technology at all, just the performance claims.

 

I don't mind the idea of better, faster, more. It would be nice if it actually worked. It is a rare innovation claim that does not rely on shortcuts; have they solved a problem with a completely NEW approach? Such is the stuff of patents, not press releases. Still, it will be good to see the goods.

 

I wonder about just how often an architectural illustrator needs to render 50 million polygons. Most buildings are just big boxes with a few holes punched in them. That takes surprisingly few polys to model. Entourage CAN be modeled with 50K polys per car, but a few hundred does remarkably well. (There's the shortcut thing again.)
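The entourage arithmetic above is easy to sanity-check. A minimal sketch; the car count and building budget are hypothetical numbers, not from the post:

```python
# Rough poly-budget arithmetic for a building-plus-entourage scene.
# The 50K-per-car and few-hundred figures come from the discussion above;
# the building budget and car count are made-up illustrative values.
def scene_polys(building_polys, cars, polys_per_car):
    """Total polygon count: one building plus identical entourage cars."""
    return building_polys + cars * polys_per_car

detailed = scene_polys(building_polys=200_000, cars=30, polys_per_car=50_000)
lowpoly = scene_polys(building_polys=200_000, cars=30, polys_per_car=500)

print(detailed)  # 1700000 polys with detailed cars
print(lowpoly)   # 215000 polys with low-poly stand-ins
```

Even generous entourage budgets keep a typical architectural scene well under a million polys, which is why the 50-million figure reads as a film-oriented claim.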

 

Also, the example of figures by a stream with GI was an odd choice. How relevant is that?

 

Film resolution STARTS at 4K, usually it is 8K, but more important is color depth. 10 to 12 bits per pixel is needed to get the most out of renderings for film transfers. I don't know what Star Wars Ep II was done at, though I would like to know. Can this new Vray technology handle deeper color channels? I just bring it up because they went out of their way to talk about film rendering.

 

Chris, you are the resident Vray champ--what do YOU think of what they said? Do they tend to come through on these PR papers? How does the current product perform vs. the 'usual' level?


I use VRay and keep a close eye on their official forum, and can say that everything they announce they come through with, and then usually a bit more. They did not go to the Edwin Braun School of Marketing.

 

Part of the trick to this being possible is that it renders in a stand-alone version of VRay, completely independent of 3ds Max.

 

[ February 08, 2003, 08:57 PM: Message edited by: kid ]


Part of the trick to this being possible is that it renders in a stand-alone version of VRay, completely independent of 3ds Max.
I'm liking this more and more! Does it require MAX as a setup engine, then outside render, or is there a way to use it to render models not from MAX (that sounds doubtful, the more I think of it as I type this)?

Sounds interesting, Chris, keep us informed. But I don't know why they need to render with a raytracer; Electric Image and Renderman mostly don't need a raytracer and have no problem rendering high resolutions or high pixel counts. I think it's time to trash the raytracer and invent something new, maybe GI based.

 

And as Ernest mentioned, a higher color dynamic helps more to produce a great picture (AFAIK SW II is 16 bit per color channel). I'm surprised at what becomes possible when I move the black and white point sliders in my 32 bit per color channel renderings in Lightwave; it's simply amazing. It's just sad that Film Gimp "only" supports 16 bit, but that is way better than Photoshop's crappy implementation.
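The point about deeper channels giving slider headroom can be illustrated with a bit of quantization arithmetic. This is a generic sketch, not how Lightwave or Film Gimp actually process pixels; it just counts how many distinct code values survive an aggressive shadow lift at 8 versus 16 bits per channel:

```python
# Simulate lifting shadows by 4 stops (a 16x gain) on one integer channel
# and count how many distinct code values remain in the boosted shadows.
# Fewer surviving values means coarser steps, i.e. visible banding.
def distinct_levels(bits, gain, fraction=1 / 16):
    """Distinct code values left after applying `gain` to the darkest
    `fraction` of an unsigned integer channel of the given bit depth."""
    max_code = 2**bits - 1
    shadow_codes = range(int(max_code * fraction) + 1)
    boosted = {min(max_code, round(c * gain)) for c in shadow_codes}
    return len(boosted)

print(distinct_levels(8, 16))   # 16 coarse steps left: visible banding
print(distinct_levels(16, 16))  # 4096 steps left: still a smooth gradient
```

That is the whole argument for 16-bit (or float) intermediates: the grade throws away precision, so you want precision to spare.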

 

Just my two cents


OK... more info on this...

 

First, Ingo... Vray at its core is a complete raytracer, same with Brazil, Arnold, etc... there lie the speed and the possibilities. Renderman is JUST trying to catch up on this, by FINALLY introducing raytracing in version 11. It is also painfully slow in Renderman; trust me on this, I use it every day.

 

Anyway, one of the issues with raytracing is that it must load the whole scene into RAM before rendering, as opposed to a scanline renderer like Renderman, which only needs to load what it sees. So if you have a really big scene it will not render. Renderman can handle giant data sets... raytracers have a hard time.

 

From what I gather, the announcement has two parts. First, the Virtual Frame Buffer. Vray (like Brazil) now has an independent frame buffer. What I gather they are doing is giving it the ability to write the frame buffer to disk and add to the image bucket by bucket. What does this mean? It no longer needs to hold the image in RAM and can free that memory for rendering. So an 800x600 image uses as much RAM as an 8000x6000 image (basically zero). Render times are still much higher on the 8K image... still a lot of pixels to render.
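The disk-backed frame buffer idea, as I understand the description, can be sketched like this. This is purely illustrative, not Vray's actual implementation; the point is that only one bucket ever lives in RAM while the full image accumulates on disk:

```python
import os
import tempfile

# Sketch of a disk-backed frame buffer: each finished bucket is written
# straight to its byte offset in a raw file, so peak RAM is one bucket
# regardless of image resolution. Illustrative only, not Vray's code.
BYTES_PER_PIXEL = 3  # 8-bit RGB for simplicity


def write_bucket(f, img_w, x0, y0, bucket):
    """Write one bucket (a list of per-row bytes objects) into the file
    at the offsets where those rows belong in the full image."""
    for row, data in enumerate(bucket):
        f.seek(((y0 + row) * img_w + x0) * BYTES_PER_PIXEL)
        f.write(data)


# "Render" a 64x64 image in 16x16 buckets, one bucket in memory at a time.
w = h = 64
bw = bh = 16
path = os.path.join(tempfile.mkdtemp(), "fb.raw")
with open(path, "wb") as f:
    f.truncate(w * h * BYTES_PER_PIXEL)  # pre-size the frame buffer file
    for y0 in range(0, h, bh):
        for x0 in range(0, w, bw):
            shade = (x0 + y0) % 256  # stand-in for actual shading work
            bucket = [bytes([shade] * bw * BYTES_PER_PIXEL)] * bh
            write_bucket(f, w, x0, y0, bucket)

print(os.path.getsize(path))  # 12288 bytes: full image on disk, one bucket in RAM
```

Scaling the same loop to 8000x6000 only grows the file, not the working set, which is exactly the trade-off described above.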

 

The other part is the high polycount thing. I am NOT sure if this is a standalone thing; at the very least, it is an additional plugin for MAX. It too relies on a special type of disk cache. On the Vray forum, they showed an image of a 28 million poly model rendered with this method with HDRI lighting:

 

http://www.vrayrender.com/stuff/vray_28_mil_poly_hdri_gi.jpg

 

There is no raytracer out there that can load or render such a model, according to them. There are some catches. I think the main one is that it needs a fast hard disk to really work, and I think it is single-threaded (for now). The key is not how fast it rendered; the key is that it DID render. However, all things considered, I am sure that Vray will make it render fast, they are always making things very fast.
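For a sense of why 28 million polys chokes an in-core raytracer, here is some back-of-envelope memory math. The per-triangle byte count and acceleration-structure overhead are assumptions for illustration, not measured Vray figures:

```python
# Back-of-envelope RAM estimate for holding a 28-million-triangle mesh
# fully in core, as a conventional raytracer of that era had to.
# Both constants below are illustrative assumptions.
def mesh_bytes(tris, bytes_per_tri=36, bvh_overhead=0.5):
    """36 bytes/tri roughly covers indexed vertex positions amortized per
    triangle; bvh_overhead adds the acceleration-structure cost on top."""
    return int(tris * bytes_per_tri * (1 + bvh_overhead))

gib = mesh_bytes(28_000_000) / 2**30
print(f"{gib:.2f} GiB")  # about 1.4 GiB, beyond a typical 2003 workstation
```

Even with conservative assumptions the mesh alone lands in the gigabyte range, which is why an out-of-core disk cache (rather than raw speed) is the interesting part of the claim.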


Originally posted by ingo:

But I don't know why they need to render with a raytracer; Electric Image and Renderman mostly don't need a raytracer and have no problem rendering high resolutions or high pixel counts. I think it's time to trash the raytracer and invent something new, maybe GI based.

As a Renderman user, there is one reason why raytracing is better than Renderman... speed... and quality. Trust me, all the big film shops are trying to find a way to make it work in Renderman. There is only one thing that Renderman excels at in terms of speed, and that is motion blur.

Originally posted by Ernest Burden:

Chris, you are the resident Vray champ--what do YOU think of what they said? Do they tend to come through on these PR papers? How does the current product perform vs. the 'usual' level?

Vray always comes through in my book. They are making no bones about the limitations (need for fast disk speed, etc.). 99% of the surprises in Vray are good ones. Peter and Vlado will make this happen.

 

In terms of high color depth... sure. All MAX renderings are 64 bit by default, dithered down to 32, so they can do a high-bit render using the RLA or RPF formats. Or, in MAX 5 / VIZ 4, they can use the 16 bit LogL color depth in TIFF. Or even RGB format (16 bits). As far as I know, all of the renderers have that ability.


Thanks for the info in all three posts, Chris; it gives a great overview. I'm very well aware of the problems with raytracers and memory; since Lightwave has a 128 bit renderer, it uses a huge amount of memory for print-sized renderings. Do you have any information about Vray's color depth? Is it a 16 bit per channel renderer?

 

The 28 million poly picture looks nice; a wireframe added would be better, and I'm glad it's not another teapot rendering. I only made it up to 12,000,000 polygons; it needs a lot of loading time at startup but renders fine. But in real-world jobs I think no one has more than 2,000,000 polygons to render.

 

So now I only have to wait for the standalone version of VRay on Mac OS X :rolleyes:


Originally posted by Ernest Burden:

I wonder about just how often an architectural illustrator needs to render 50 million polygons

When we depart from the RPC world :)

 

We are just starting to use real 3D trees that blow in the wind. Each tree is a quarter of a million polys, we have approx 50 trees in the scene, and we are rendering a 3 minute video,

and Vray is rendering happily in 5-10 mins per frame.

We have been told by our clients that we were chosen for some contracts because our landscaping looks so real (no cardboard-cutout RPCs).

Now that's with instanced trees. It would be nice if every tree were slightly different (other than rotated and scaled),

and that's where this new technology will help.
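One common way to get "every tree slightly different" without storing unique meshes is to derive each instance's parameters deterministically from its instance ID. A hypothetical sketch; the parameter names are made up, not from any real tree plugin:

```python
import random

# Per-instance variation via deterministic seeding: each tree gets its own
# RNG seeded by its instance ID, so every tree differs but the same ID
# always produces the same tree (repeatable across frames of an animation).
# The growth parameters here are illustrative placeholders.
def tree_params(instance_id, base_height=12.0):
    rng = random.Random(instance_id)  # stable seed: repeatable renders
    return {
        "height": base_height * rng.uniform(0.85, 1.15),
        "branch_seed": rng.randrange(2**31),
        "lean_deg": rng.uniform(-4.0, 4.0),
    }

forest = [tree_params(i) for i in range(50)]  # 50 trees, each varied
print(forest[0])  # same instance ID always yields this exact tree
```

The instancing still shares one base mesh in memory; only the tiny parameter set differs per tree, which is why this pairs well with a high-polycount render engine.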


Trying to load 28 million into MAX alone will not work.
It's been a while since I've seen render polys in my viewport.

The general workflow these days is to use parametric models, sub-d surfaces, NURBS, and trees that have a very low viewport count but are high in polys at render.

 

I have an architectural scene that currently weighs in at 19 million polys, runs in realtime in my viewport,

and loads in 15 seconds.

 

Unless you work with 3D scan data, I would doubt that you need to load that many polys into your viewport.

If you still do, I would rethink your modeling technique.

 

The idea here is to never convert anything to an editable mesh unless you have to.

Keep your scene parametric, and it will pay you back with fast loading times, small file sizes, and huge poly-count renders :)
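The "low in the viewport, huge at render" effect falls straight out of subdivision arithmetic: each Catmull-Clark-style level roughly quadruples the quad count. The cage size and levels below are illustrative numbers, not from the post:

```python
# Each subdivision level splits every quad into four, so render-time
# polycount grows as 4^levels while the viewport shows only the cage.
def render_quads(cage_quads, subdiv_levels):
    return cage_quads * 4**subdiv_levels

print(render_quads(5_000, 3))  # 320000 quads from a 5K-quad viewport cage
print(render_quads(5_000, 5))  # 5120000 quads at render time
```

A few thousand cage quads in the viewport can legitimately become millions at render, which is the whole payoff of staying parametric.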

