
Computer buying help thread for October



OK, so can anybody tell me a method for rendering large, high-resolution images (e.g. 7200 x 4800) on a normal config like an i5 with 8 GB RAM?

 

The only way I can do it with the above config is using light cache as both primary and secondary GI, with only 3600 subdivisions and 0.01 scale. Even with such a blurry method it takes almost 24 hours to render my scene (with all glossy reflections interpolated).

 

I feel Vray very easily runs out of 8 GB of RAM even with lower light cache or irradiance map settings. My scene has around 8 million faces with a couple dozen tree proxies.

 

Does anybody know a way to divide the render image into, say, 8 parts, render each part faster, and then automatically stitch them together properly at the end?
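
Something like this could maybe handle the stitching part - a minimal Python sketch using Pillow, assuming the 8 regions are rendered as 2 rows of 4 tiles and saved as tile_0.png through tile_7.png (all names and sizes here are just made up for illustration):

    from PIL import Image  # Pillow

    # Hypothetical layout: 8 tiles of 1800 x 2400 each, arranged
    # 4 across and 2 down to rebuild the full 7200 x 4800 frame.
    TILE_W, TILE_H = 1800, 2400
    COLS, ROWS = 4, 2

    full = Image.new("RGB", (TILE_W * COLS, TILE_H * ROWS))
    for i in range(COLS * ROWS):
        tile = Image.open("tile_%d.png" % i)  # tile_0.png ... tile_7.png
        col, row = i % COLS, i // COLS
        full.paste(tile, (col * TILE_W, row * TILE_H))
    full.save("full_render.png")

Though I worry the GI seams won't match between tiles unless the light cache / irradiance map is computed once for the whole frame and reused.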

 

Please suggest.

Thanks.


Look in the backlogs on the Vray forum or ask a question about it there and you'll get useful advice. There are many ways to save memory in Vray, such as using instances and proxies, and Vray optimization is a common subject.

 

It's been a while since I was a Vray user, but 3600 sounds like way more than I ever used (though light cache/light cache wasn't a method I used much - I found that a light cache with fewer subdivs plus an irradiance map was faster).


Thanks again, AJ - I'll check the other forums. I'd also like to know which renderers you are using, and what you think of Octane or Arion. Is it possible to adapt to them for production work, with a scene face count of 10 million?


10 million? That's an awful lot. I wouldn't expect a CUDA renderer to be able to load that into the memory available on a video card.

 

I'm not doing any rendering lately, but when I do it's mostly mental ray. (Note that I'm not saying mental ray is better - for the most part they're equal, I just happen to have mental ray and not Vray.) I'm hesitant to develop too much enthusiasm for GPU rendering, for reasons that have been discussed a lot here in the past few weeks and that I wrote about yesterday on 3datstech.com. As I see it, there is no hardware to save you from having to use good technique, so instance instead of copying, use proxies, develop an efficient method for foliage and learn the crap out of renderer optimization.
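
To make the instancing point concrete, here is a rough Python sketch (nothing renderer-specific - the numbers and names are made up) of why instances are so much lighter than copies:

    import sys

    # Stand-in for one heavy tree mesh (100k vertex coordinates).
    tree_mesh = [0.0] * 100_000

    # A "copy" duplicates the geometry; an "instance" just points at the
    # shared mesh and adds its own transform.
    copy = list(tree_mesh)
    instance = {"mesh": tree_mesh, "position": (12.0, 0.0, 4.5)}

    print(sys.getsizeof(copy))      # roughly 800 KB for every copied tree
    print(sys.getsizeof(instance))  # a few hundred bytes per instance

    # 50 copied trees cost ~50x the mesh memory; 50 instances cost ~1x.

The same logic is why proxies and instanced foliage are what keep an 8-million-face scene inside 8 GB when naive copies won't.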


Well, I am using proxies for all the trees. The ivy in the scene is what's making it heavy - I keep only the directly visible parts - and the rest are 3D cars. I have seen plenty of even heavier scenes, ten times as complex, rendered in great detail, and I'm curious how they managed it. In my scene the architectural elements are actually quite frugal in mesh complexity; the filler elements like plants, shrubs, ivy and cars are killing the RAM and hence the huge render times. (I wish Nvidia would come out with a new system where you could fit a GPU as the processor and keep adding lots of RAM to feed it.)


Actually, it is now possible to load very large models into the memory of the GPU. We are coming out with a new iRay rendering platform for Revit, 3DS Max, Maya, and SketchUp before the end of the year, and we will be implementing a plugin for Rhino as well.

 

This will be a rendering service that runs through any web browser, and you can take advantage of as many Tesla M2070 GPUs as you need on demand. You can find more information here: http://revuprender.com/index.php?option=com_content&view=article&id=72&Itemid=177

 

We should have a video demonstration up and running soon. If you are interested in a live demo web meeting, let me know. There are some really interesting things happening right now with GPU rendering, and Autodesk will be coming out with a similar service for 3DS Max using the same RealityServer technology some time in the next year. However, we have been working on this for the last two years, so we are, let's just say, ahead of the curve.

 

If any of you will be at Autodesk University, please let us know and we can set up a private demo of your models running on RevUp RealityServer at booth #311. Jeff Mottle will also be filming a live demo of our new products at the event.


Well, revuprender, will your setup help me render my 10-million-face scene in at least a couple of hours, with completely unbiased rendering? The sample images shown on your website seem to be very simplified models - they look like a few hundred faces each. There is no point in rendering a 10 km radius sphere with a few hundred polys in a few minutes on your 6 GB Tesla cards, which are unrealistically expensive.


Hello Rats,

 

The renderings you are seeing on our website are from a very complex Revit model. I agree that it is not a 10-million face count at all, but the Revit implementation is our first focus. We can also support 3DS Max and Maya models, since they can now render with iRay.

 

You could certainly render your 3DS Max model with 10 million faces much faster on our platform than on anything else available. We have seen 40GB models loaded into this same system and rendered almost instantly using (4) Tesla M2070 GPUs. Imagine using 10 or 30 M2070's!

 

We are not proposing that you go out and buy any hardware at all. You simply load your model onto our servers running in a secure datacenter, and you will have access to as many GPUs as you need on a "pay as you go" basis.


OK - if you can use iRay in the new release of 3DS Max 2011, then you can use the new iRay materials and see how those render on our system compared to yours. In any case, we should be able to convert your materials automatically into RevUp RealityServer-compatible materials with no problems.


Sounds interesting. As far as I know, that is not the case for another package called Octane: even if you have N GPUs, the maximum memory considered is the smallest memory available across all the cards - only the processing power is multiplied according to each GPU's capacity. So even with a dozen Tesla 2070s the maximum usable memory is 6 GB, that's it, and the scene has to load into that memory including all bitmaps and the frame buffer. I feel 6 GB is far too little for today's standard of complex 3D scenes, with their many 3D trees, cars, human figures, etc. Sorry to say, RevUp, but I am not at all clear on your claims - I'd rather you showed us some really large scenes rendered on your farm.
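
Roughly, the way I understand the numbers (purely illustrative values, not a real benchmark):

    # Back-of-the-envelope for multi-GPU rendering where VRAM does NOT pool
    # (as I understand Octane's behavior).
    cards = [
        {"name": "Tesla M2070",  "vram_gb": 6.0, "rel_speed": 1.0},
        {"name": "Tesla M2070",  "vram_gb": 6.0, "rel_speed": 1.0},
        {"name": "smaller card", "vram_gb": 1.5, "rel_speed": 0.7},
    ]

    # The whole scene (geometry + bitmaps + frame buffer) must fit on
    # EVERY card, so the smallest card sets the memory ceiling...
    usable_vram_gb = min(c["vram_gb"] for c in cards)

    # ...while compute roughly adds up across the cards.
    total_speed = sum(c["rel_speed"] for c in cards)

    print(usable_vram_gb, total_speed)  # 1.5 GB usable, ~2.7x speed

So adding cards buys speed, not scene size - and 6 GB is the best case even with a dozen Tesla 2070s.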


Hello. Our product RevUp RealityServer is going to be launched near the end of this month. In the meantime, you can see the entire city of Rotterdam being rendered with RealityServer, using the same iRay rendering engine and GPU clustering technology that we are using: http://www.procedural.com/showcases/rotterdam.html

 

I am not sure how the other software you mentioned works, but mental images has made a lot of advancements in clustering GPUs together. The Tesla M2070 is currently the most powerful card we can use, and I gave it as an example, but we are mostly using Tesla 1070 and 2050 GPU cards.

 

The thing to know is that if you are rendering on (4) GPUs and then start using (8) GPUs, you will have exactly twice the performance with RealityServer. One of the nicest parts is that it is 100% scalable. As I am sure you know, it is often very difficult to achieve scalability like this - if you just add twice the CPUs to your system, it is most likely not going to give you twice the performance.

 

I hope this helps give you some more insight into what is to come before you go out and spend $1000's on a CPU-based system that very soon will be lagging behind all the GPU technology that is within your reach.


Ry,

 

I'm curious whether you've been able to achieve, using iray, the production render quality to which we've become accustomed. To be honest, I've been pretty underwhelmed so far by the quality per unit of processing time that these brute-force GPU renderers deliver, which is why I've been warning against irrational exuberance for this tech before it's matured.


AJLynn,

 

I understand your concern with any new groundbreaking technology. However, to be honest, I have the complete opposite impression, both from what I have seen and from being involved in developing a product based on this technology. The rendering performance is much faster, as it should be, since rendering is something the GPU is very good at. As far as quality goes, from what I have seen it is much higher than what you can achieve with the older rendering engines. Using MetaSL materials, which are very high quality, you can get some very nice output.


rats asked to see a very large scene, which is why I sent the link showing the city of Rotterdam. As you can see from the linked page, the scene was rendered in real time using iRay and RealityServer. I have seen this demo in person as well, and I can say that it renders to this quality very fast.

 

We ourselves are still in development, and our first launch is focused on a Revit-to-RealityServer workflow. Most of the models I have in Revit are not so much production quality for rendering as projects that were still in conceptual stages. There is a lot of very clear evidence out there of the performance and quality of iRay and RealityServer, and iRay is now included in 3DS Max 2011 for subscription members.


See, the thing is, that Rotterdam video doesn't show any final rendering of the quality most of our readers are used to producing. What I'm looking for is something that helps us believe in the usability of this technology - some real examples showing high-quality work done on one of these GPU systems that demonstrate real advantages over established CPU-based systems. Can a skilled user who demands high-quality output get more out of, say, a $3000 investment in GPU technology than in CPU? Without that, it's not reasonable to recommend prioritizing GPU capacity over CPU.


RealityServer is not just about buying GPU hardware; it is about buying the special software that allows one to use the GPUs. I never recommended that you go and buy $3000 worth of GPU hardware - you would not get the type of performance I am referring to out of that. What I am recommending here is GPU computing in the cloud.

 

On another note, iRay is now inside 3DS Max, so you can render on the GPU today. You would not need to spend $3000 to take advantage of GPU rendering inside 3DS Max or Maya, and you would get just as good performance, if not better, using the GPU. It is still going to come down to how good an artist you are and how you set up your own custom materials. Take a look directly on mental images' website and you will find high-quality renderings done with iRay that are just as good, if not better, than anything done on the CPU.

 

There is always speculation with new technology. Most people seem to not even know what a GPU is and think it is still just for visualization and viewport enhancement. Most people still do not think of GPUs as processors that can do really advanced calculations many, many times faster than a CPU. By the way, iRay and RealityServer can also run on the CPU; it is much slower, though.

 

Rendering on the GPU is going to be no different than rendering on the CPU, except that you will have better performance, more advanced math calculations, and MUCH more accurate lighting! It is true, though, that iRay can do amazing rendering without much setup at all, where it could have taken days for a highly skilled rendering artist to achieve the same results. Mental images has done an amazing job of streamlining the entire process. You can now get almost instant results, and that is a very powerful design and collaboration tool.

 

There are real-time VR-type platforms running on the GPU that are coming out, but they are not focused on production rendering. RealityServer is more focused on production rendering in real time, and that is what you are asking about. It is maybe not always instant real-time performance, but it is close to it, especially with the cloud solution I am recommending.


Ry,

 

Please forgive me for being blunt, but this is a critical question that remains unanswered, and if I am being hard on you it's because I know you have some knowledge in the area and I'm hoping you can help.

 

So far all the rendering I've seen from iRay has been either:

 

A. Easy,

B. Slow, or

C. Shit.

 

The mental images site has 3 examples, and they're easy (and might be slow - render time is not given). The Autodesk site does not appear to have samples. Some users have posted work here that was good and not easy, but much slower than mental ray on a CPU. And most damning, the demo video from nVidia where they premiered iray showed easy scenes rendered slowly, compared against mental ray CPU images rendered extra slowly because the settings were way off - in what one can only assume was a deliberate effort to make mental ray seem slow - and they weren't selling iray as a realtime or interactive tool, but instead using the language of Maxwell marketing to present it as a higher quality production renderer. Meanwhile the claims of better performance haven't yet materialized, claims of more advanced calculations and more accurate lighting are contradicted by my (pretty vast) understanding of the technical issues, and words like "will", "would" and "going to" are being used more often than one hopes to see.

 

So the whole thing comes off as nVidia trying to use mental images to increase its GPU sales at the expense of customers who would be better served by simply running the software they already have on their CPUs. This is the very essence of a vaporware campaign, which is against the best interests of the customers, and my responsibility is to warn the readers against it until there is sufficient evidence that the product is not vaporware.

 

Of course, if the GPU hype can be backed up by results, then these products show fantastic potential for the users, so I'm asking whether you know of any examples of results that go beyond simple hype and show that technology like iray really does have the ability to replace the functionality of proven software like mental ray, and offer some substantial level of performance improvement or other benefit to users.

