
GPU rendering



Hello everyone,

 

This is my first post here, and I have to admit I am new and inexperienced in the 3D/rendering field, so please forgive me if I say something stupid.

 

Do you think that V-Ray GPU will be able to render quality images/animations? I have read in many places that it will mainly be used to get a real-time preview to speed up your workflow (so you don't have to do test renders). However, it seems (I'm only making assumptions here) that V-Ray RT and V-Ray GPU will be separate products.

 

I have saved some cash for a build to use for GPU rendering. Because V-Ray GPU will use OpenCL, you can use either Nvidia or ATI graphics cards. The way I understand it, the only real limitation of GPU rendering (apart from a possible lack of features) is the amount of video RAM on your graphics card.

 

Also, I have done a bit of research on OpenCL. It seems you will get different performance from Nvidia and ATI depending on how the OpenCL program is written. In the video previews we have seen so far from Chaos Group, they have always used Nvidia GPUs (3x GTX 480s in the last presentation in May, with good results too, but limited to 1.5 GB of VRAM). They also mentioned in their video preview that they are looking forward to the new Tesla cards coming out. So it should be a done deal, right? Well... not exactly (for me, anyway).
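
From my reading, here is a toy example of what "depends on how the OpenCL program is written" can mean (my own sketch, not anything from Chaos Group): 2010-era ATI GPUs use VLIW units that reward explicit float4 vectorization, while Nvidia's scalar cores run both versions about the same.

[code]
/* Two OpenCL kernels doing the same multiply-add over a buffer.
 * On VLIW-based ATI hardware the float4 version tends to run much
 * faster; on Nvidia's scalar cores the two are roughly equal.
 * Kernel names are illustrative, not from any real benchmark. */

__kernel void mad_scalar(__global const float *a,
                         __global const float *b,
                         __global float *out)
{
    size_t i = get_global_id(0);
    out[i] = a[i] * b[i] + 1.0f;            /* one lane per work-item */
}

__kernel void mad_vec4(__global const float4 *a,
                       __global const float4 *b,
                       __global float4 *out)
{
    size_t i = get_global_id(0);
    out[i] = a[i] * b[i] + (float4)(1.0f);  /* four lanes per work-item */
}
[/code]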

 

The thing is, it is highly unlikely that we are going to see an affordable high-VRAM card from Nvidia for some time (my assumption: maybe 7-12 months). What I have been keeping my eye on is the 4 GB Sapphire 5970, the dual-5870 card (~$1,100). To be honest, I have been seriously contemplating getting two of these. In terms of speed they are about as fast as the GTX 480 (in OpenCL), but the 4 GB of VRAM would be a huge advantage if you want to render high-polygon models at high resolutions, possibly with lossless texture formats.

 

Now, my question is: will V-Ray GPU be able to utilize all 4 GB of VRAM? I know it seems silly because they used 3x GTX 480s in their preview, but back in May I believe they said they were still working on compatibility with ATI cards. This seems strange to me, because I thought OpenCL was multi-platform right off the bat. Maybe they are biased? I don't know. I'm not pointing any fingers, but it just seems strange to me.
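
About the multi-platform point, as I understand it: OpenCL source is compiled at run time by whatever vendor driver is present, which is why the same kernel should in principle run on Nvidia or ATI. A bare-bones host-side sketch of that mechanism (my own illustration, nothing to do with V-Ray; error handling omitted):

[code]
/* The same kernel source is handed to whichever GPU the driver
 * exposes; the vendor's run-time compiler does the "porting". */
#include <CL/cl.h>

const char *src = "__kernel void k(__global float *x)"
                  "{ x[get_global_id(0)] *= 2.0f; }";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);

    /* Compiled here, for this device, at run time */
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    return 0;
}
[/code]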

 

And last but not "least" (forgive my plain humor), the Tesla cards: the 3 GB Tesla C2050 at $2,500 and the 6 GB Tesla C2070 at $4,000 (Q3 2010). Considering that the new Tesla cards are going to be about as fast as the GTX 480 (in OpenCL; please correct me if I am wrong!), wouldn't two 4 GB 5970s be a much better buy? You would have nearly twice the speed, more memory, and a spare $400-500.

 

I would like to hear other people's thoughts on this; quite frankly, I have been pulling my hair out trying to come to a conclusion.

 

Thank you for reading my post.


I am as excited about the future as the next guy, but the future is not here yet. I wouldn't spend that much money on graphics cards if the sole reason for the purchase is GPU rendering. Another thing to keep in mind is that in many GPU rendering solutions, the GPUs of multiple cards are used, but the RAM is not shared. So two cards with 4 GB of RAM each will give you 4 GB of RAM, not 8.
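
To put that in concrete terms, here is a rough sketch (my own, not from any renderer) that asks the OpenCL runtime for each GPU's memory size. Every device reports its own CL_DEVICE_GLOBAL_MEM_SIZE, and nothing pools them, so your scene has to fit on each card individually:

[code]
/* List each GPU's own device memory; there is no combined total. */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id plats[4];
    cl_uint nplat = 0;
    clGetPlatformIDs(4, plats, &nplat);

    for (cl_uint p = 0; p < nplat; ++p) {
        cl_device_id devs[8];
        cl_uint ndev = 0;
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 8, devs, &ndev);

        for (cl_uint d = 0; d < ndev; ++d) {
            cl_ulong mem = 0;
            clGetDeviceInfo(devs[d], CL_DEVICE_GLOBAL_MEM_SIZE,
                            sizeof(mem), &mem, NULL);
            printf("GPU %u.%u: %llu MB of device memory\n",
                   p, d, (unsigned long long)(mem >> 20));
        }
    }
    return 0;
}
[/code]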

 

At the moment, 'unbiased' brute-force rendering is the big buzzword. The truth is, it's much easier to code brute force for stupid vector processors. There are years and years of technical papers and algorithms that greatly speed up rendering without visual loss. The Arnold GI renderer is all CPU-based and blows the doors off of every GPU solution to date. I am personally looking forward to the first biased CPU + GPU solution.


Don't buy any of this stuff yet. I cannot stress strongly enough that it is a very bad idea to buy hardware for the purpose of running software that isn't available yet, or that you don't have a plan for yet. Hardware advances too quickly - if you buy it before you are ready to buy whatever GPU rendering solution you're going to use, then by the time you get the software, the next generation of hardware will be out, and it will be cheaper and more powerful.

 

Wait for V-Ray for GPU to hit the market, and for early adopters to try it, test it on various hardware, and write about it on websites.


Thanks for the replies, guys.

 

Lard, I am aware that the memory is not shared between the GPUs, but thanks for the heads-up. I agree that, in terms of quality, CPU render software produces a much better picture. Yeah, I have read about the whole "unbiased" thing, and it seems that every time somebody tries to explain it there is a new theory, hehe. The way I understand it, the main difference is that one is "physically correct" in the way objects' locations are displayed and how they reflect light; it's like "real" lighting versus "faked" lighting, isn't it? I might be totally wrong here.

 

AJLynn, you are probably right; I should wait until V-Ray GPU is released. Isn't it scheduled to come out within two months? The reason I am so eager to get the 4 GB 5970s is that they are limited-edition graphics cards. From what I have read from a number of sources, only 1,000 units are being produced, though it is unclear whether this number applies to the "Toxic" version or the "normal" version, as sources disagree. Since Fermi is considered somewhat of a failure (forgive me, fanboys) - it runs too hot, and its actual performance was scaled down from what was planned - ATI is not as pushed to release something better, and Nvidia has decided to jump to 28 and 32 nm solutions (this could take a while...).

 

Say I wait two months, the 4 GB 5970s from Sapphire are sold out, and nothing new comes out; then I would be stuck with the Teslas (I would spend over twice the money for the performance I would get and, to top it off, have 1 GB less VRAM).


I wouldn't fall for these marketing tricks... you can be pretty sure that there will be 4 GB graphics cards available in the future. If not in two months, then in the next generation of GPUs - and the next generation will come sooner than you think; release cycles are becoming shorter and shorter. I wouldn't bet on ATI when it comes to GPU rendering - all major render software developers are focusing on Nvidia at the moment. That might change quickly, but hardware also changes quickly. I'm pretty sure there will be a new generation of ATI cards available before there are good GPU software solutions for ATI, and the same might apply to Nvidia. Don't invest in today's hardware for a future technology!


Thank you for your input, blumentopferde.

 

To be honest, I'm not really sure we will see any new Quadro cards from Nvidia this year; they are also reported to be based on a new architecture (a new chip), and we all know how long it took them to release the Fermi cards (they were 6 months late, run too hot, and perform worse than promised). Also, they are having problems producing the 40 nm Fermi cards right now, with yields of only about 20-25 percent. If they are having this much trouble with the 40 nm cards, what's to say they are going to be successful with the 32 nm or 28 nm chips that the new Quadro and Tegra cards will be based on?

 

Because the current GTX Fermi cards run too hot, they probably will not be able to handle more VRAM. They would need to do a complete refresh of the 40 nm Fermi cards, and since they want to release the future Quadro and Tegra cards on the 28 or 32 nm architecture, we are most likely going to have to wait until beyond 2011.

 

This, of course, is just my point of view and is based on articles with comments from Nvidia (within a 2-5 month timeframe). Feel free to post your comments; I am sure many of you disagree with me, so please prove me wrong.


Don't fall for any of this crap. There's no such thing as a limited-edition video card - if they sell out the limited edition, they'll introduce the regular edition, and even if they don't make more of that particular model, in a few months there will be something better. And until you actually have software for it, a Fermi card is worse than useless - it burns power while performing no function.

 

Just sit tight, wait, render on your CPU, and don't pull your hair out. We're on the verge of a transition period, but we're not there yet, and there's nothing you can do about it.

 

Let the hardware mature while the software matures. Nvidia's power consumption must come down before putting three 480s in a PC is something a normal person not running a demo should consider. They use more than 300 watts each under load; a PC with three of them would run at over 1,000 watts. That is like putting a space heater under your desk - you would need a dedicated 5,000 BTU air conditioner just to remove the heat the PC adds to your office! The ATI hardware is much better on this front, but there's still some software out there that runs on CUDA only.
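
The arithmetic behind that, if you want to check it: a watt of dissipation works out to about 3.412 BTU/hr (the 150 W I've added for the rest of the PC is just my rough guess):

[code]
/* Back-of-the-envelope heat load for a triple-480 box. */
#include <stdio.h>

int main(void)
{
    double watts = 3 * 300.0 + 150.0;   /* three GTX 480s + rest of PC (guess) */
    double btu_per_hr = watts * 3.412;  /* standard W -> BTU/hr conversion */
    printf("%.0f W is about %.0f BTU/hr of heat\n", watts, btu_per_hr);
    return 0;
}
[/code]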


Just for the record, there is no such thing as physically correct rendering. A physically correct renderer would have to scatter radiation at the atomic level of a rough surface just to calculate a simple Lambert shader (not to mention accommodate the atmospheric conditions and particles in the air). No renderer does that.
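
For reference, the Lambert shader in question is just a dot product; all of the microscopic scattering that actually produces diffuse reflection gets folded into a single albedo constant (a bare-bones sketch of my own):

[code]
/* Lambertian diffuse: reflected light proportional to the cosine
 * between the surface normal and the light direction. */
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* n and l are assumed normalized; returns reflected intensity */
double lambert(vec3 n, vec3 l, double albedo, double light_intensity)
{
    double ndotl = dot(n, l);
    return albedo * light_intensity * (ndotl > 0.0 ? ndotl : 0.0);
}
[/code]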

 

Remember that there are patents on many of the ways renderers approximate the 'true' math of calculating light. A few of the big patents (from Pixar) have expired this year. The buzzwords 'unbiased' and 'physically correct' come from the marketing office, to explain why their renderer is slower (absent the patented techniques).

 

GPUs are faster (sometimes) because it's like firing a shotgun to hit a can 10 feet away. Only a few pellets hit the can (and were needed), but the target was hit. Eventually, engineers will focus the other pellets and get much more out of the potential power.
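
One example of "focusing the pellets" is cosine-weighted importance sampling: aim more rays where the Lambert cosine term is large, so fewer samples are wasted on directions that contribute almost nothing. A minimal sketch (mine, not any particular renderer's code):

[code]
/* Cosine-weighted hemisphere sampling (Malley's method): pick a
 * point on the unit disk, project it up onto the hemisphere. The
 * resulting pdf, cos(theta)/pi, cancels the Lambert cosine term. */
#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double urand(void) { return rand() / (RAND_MAX + 1.0); }

/* Direction in the local frame where z is the surface normal */
void sample_cosine_hemisphere(double *x, double *y, double *z)
{
    double u1 = urand(), u2 = urand();
    double r = sqrt(u1);              /* radius on the unit disk */
    double phi = 2.0 * M_PI * u2;
    *x = r * cos(phi);
    *y = r * sin(phi);
    *z = sqrt(1.0 - u1);              /* cos(theta) */
}
[/code]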

 

That is the problem with regard to RAM. On a (current) GPU renderer, the scene must be loaded into RAM. If you remember, this was the same limitation many CPU renderers had in the late '90s. Not the case with current CPU renderers. I've seen Arnold render a 100-billion (you read that correctly, I said billion) poly scene with full GI and gigs of textures on a workstation with 6 gigs of RAM. It can, because the engineers have done some very creative memory loading, unloading, sharing, caching, etc. There is nothing smart like this going on in GPU rendering to date. It's just firing a shotgun to knock over a can. This will change, but we are still 5 years away.
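
A toy version of that kind of memory management - geometry paged in from disk on demand, with least-recently-used eviction - looks something like this (all names hypothetical; a real out-of-core renderer is far more elaborate):

[code]
/* Fixed-size geometry chunks live on disk and are paged into a
 * small in-memory cache on demand; the least recently used chunk
 * is evicted when the cache is full. */
#include <string.h>

#define CACHE_SLOTS 64
#define CHUNK_BYTES (1 << 20)        /* 1 MB of triangles per chunk */

typedef struct {
    long chunk_id;                   /* -1 = slot empty */
    long last_used;                  /* LRU timestamp */
    unsigned char data[CHUNK_BYTES];
} Slot;

static Slot cache[CACHE_SLOTS];
static long clock_tick = 0;

void cache_init(void)
{
    for (int i = 0; i < CACHE_SLOTS; ++i) cache[i].chunk_id = -1;
}

/* Stub: a real renderer would read geometry from the scene file. */
static void load_chunk_from_disk(long chunk_id, unsigned char *dst)
{
    (void)chunk_id;
    memset(dst, 0, CHUNK_BYTES);
}

unsigned char *get_chunk(long chunk_id)
{
    int victim = 0;
    for (int i = 0; i < CACHE_SLOTS; ++i) {
        if (cache[i].chunk_id == chunk_id) {    /* cache hit */
            cache[i].last_used = ++clock_tick;
            return cache[i].data;
        }
        if (cache[i].last_used < cache[victim].last_used)
            victim = i;                         /* track LRU slot */
    }
    /* Miss: evict the LRU chunk and page the new one in. */
    load_chunk_from_disk(chunk_id, cache[victim].data);
    cache[victim].chunk_id = chunk_id;
    cache[victim].last_used = ++clock_tick;
    return cache[victim].data;
}
[/code]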


I agree with you, AJLynn.

 

In my posts I thought I made it clear that I would much rather get the 4 GB ATI 5970. By the way, I just found out that only the limited-edition Toxic version will have a run of 1,000 units; the regular version will be in abundance.

 

It uses a little more power than the Fermi cards (I think 50 watts more) and is much more powerful overall, yet in OpenCL it performs about the same as the GTX 480:

http://blog.cudachess.org/2010/03/nvidia-gtx-480-first-opencl-benchmark/

Also, the Sapphire card is overclocked out of the box: the "normal" version is clocked the same as a 5870, while the "Toxic" version is clocked only 50 MHz higher. It runs VERY cool - I have not seen this card go above 67 degrees Celsius in benchmark stress tests - and it is very, very quiet (it has a triple-fan Arctic Cooling setup).

 

I have a plan: because I already have a 1 GB 5870 card, I will buy V-Ray GPU when it comes out and test it on my current machine. Then I will be able to make comparisons and see how the Sapphire 4 GB 5970 would perform. The main reason for my concern was that I thought I would not be able to get this card by the time V-Ray GPU is out; now that this has been cleared up, I can rest assured that if it is worth it, I can go this route.

 

Edit: Thank you, lard - a very informative explanation. Much appreciated.


These benchmarks are all to be taken with a few grains of salt, because the release versions of the software aren't out yet, so the things they can test are limited. There are only a few released programs that use OpenCL, and most are for Mac. Also, ATI's software for GPU computing is not yet as mature, and you can expect these numbers to change by the time apps are widely available. Let things settle down before making any decisions. Meanwhile, the 5870 gives you a heck of a lot of display power.


^.^ That is exactly what I had in mind too.

 

I just had a peek at the V-Ray forums, and there are a few respected members saying that the best bet would be to go with Nvidia :/

 

If the OpenCL code in V-Ray GPU is optimized for Nvidia hardware, I just hope we will not see results like those in the link I posted earlier. Seeing as the ray-tracing benchmark program had a number plate saying "Nvidia"... it's not hard to see that those benchmarks are completely biased. I really hope that is not the case with V-Ray GPU; unfortunately, the indications are pointing in that direction:

 

- In all the previews, Nvidia hardware was used.

 

- In the preview, they say they are looking forward to testing the new Tesla cards.

 

- ATI cards are "not supported yet," even though OpenCL should work with ATI right from the start, right? Or is this not the case? I guess it ultimately depends on how they write the program (obviously), but if this is true, then they did in fact optimize the program for Nvidia hardware from the start. (I hope that makes sense, lol)

 

- Respected forum members/staff are telling members that it would be "safer" to use Nvidia hardware.


True, I guess we just have to wait and see.

 

Edit: One more thing I thought I should bring up. There seems to have been a lot of confusion about whether V-Ray RT GPU will be able to render at the same level of quality as the V-Ray CPU product (putting the RAM limit aside). There have been numerous comments (not from this forum, I believe) that it is only meant to speed up your workflow by giving you an instant preview of your work. I thought I should quote some phrases from the May presentation:

 

So the first thing we really have to understand is that V-Ray, V-Ray RT, and V-Ray RT running on GPUs are three completely different products. And, unlike the competitors', these three products actually manage to shade and light the same way. This is extremely important for production pipelines, where you do not want to re-do your shaders, re-do your materials, or export a lot of your data to other applications... With V-Ray you can fluently switch between the three rendering solutions on the fly, and you don't have to worry about not being able to render the same materials, because V-Ray, V-Ray RT, as well as V-Ray RT running on GPUs, will render the same result seamlessly without you having to worry about your setups... V-Ray, V-Ray RT, and V-Ray RT on GPUs will follow the same setup, and they will be able to render the same shaders in the same way...

I thought most people may have missed this, and I think it is a good indication of the picture quality you will be able to achieve with V-Ray RT GPU. If I remember correctly, there is a presentation taking place in London very soon; as soon as there are videos up, I will post them here and maybe quote the interesting parts.

 

Comments are welcome :)

 

Authie
