Quadro Vs. Tesla


The Tesla doesn't have video out. In some ways this is helpful. But don't buy either unless you need it now. Both of those parts are horrendously overpriced versions of the GeForce GTX 470. ($3000 buys you 4.75GB of RAM - seriously, guys?) Hold out with GeForce hardware for now if you can, and wait and see what happens when they get ATI compatibility working, because I'm roughly 85% convinced that we'll see a 4GB version of a Radeon 6000 GPU that will beat those GF100 chips, at under $1000, before spring.


That is correct, Travis. You basically put it in your second PCIe slot and had one main card to hook your display up to. You would use the Tesla when needed for a project.

The new generation changes that, since more people are doing whole HPC workstation builds.

Spec sheet on original one: http://www.nvidia.com/docs/IO/43395/tesla_product_overview_dec.pdf

 

Spec sheet on new ones: http://www.nvidia.com/docs/IO/43395/NV_DS_Tesla_C2050_C2070_jul10_lores.pdf


Yeah, the term is kinda washy, like Andrew says.

 

Personally I would do what Andrew stated to begin with and either go with the GTX 580 or wait to see what ATI pulls out of their sleeve. Quadro and Tesla are just way too expensive for what you get. I mean, you have companies like Boxx pushing a GPU-based rendering machine with (4) GTX 580s for a reason... they function just as well as their Quadro/Tesla counterparts these days. The only real difference I see is that the drivers are developed a lot more cleanly for the Quadro/Tesla lines vs. the GeForce line.

 

And yes, Travis, HPC = high-performance computing.


Yes, but GPU computing isn't just about how many GPUs you have - memory is just as important. Why build a machine with four 1.5GB cards when most scenes I work on exceed 4GB? They'd be useless throughout most of the project. I've also found out that the money to purchase this card must be used before the end of the year, so as much as I'd like to wait to see what ATI produces, I can't. So my question still stands: which card should I get?


I agree. The Tesla won't do anything the Quadro won't do, and the Quadro will be more all-around useful at times when you're not running a GPU renderer. If you have a situation where you want to dedicate the Quadro entirely to GPU rendering and not driving the monitor (which is what the Tesla is really meant for), you can always have a second card in there to drive the monitor.


I have also been investigating Quadro/Tesla versus GeForce. To me the biggest issue was heat/noise, and in that regard the Quadro/Tesla combinations are far better. If someone has other experiences in that regard, I really would like to know before my purchase!! Price is important, but heat/noise/stability far more!


No, it is not. Basically the Quadro uses the same processor as the GeForce, but Nvidia makes the core/shader/memory clocks lower, which makes less heat, which means the fan speed can be reduced, which also creates a longer life for the product.

The gaming cards will be louder than the professional cards.

Edited by Slinger

The problem was that the original Fermi chip (the GF100, used in the 480, 470, 465, and recent Quadros and Teslas) is shit. Terrible design, rush job. Power leaks everywhere, and when the power's not leaking it's poorly managed. Overheats like a Pentium 4 with roid rage. Did you know that it actually has 512 shader cores, but nVidia has never been able to make the chip well enough to turn them all on and have them run correctly? My guess: they rushed it because they had nothing that could really compete with the Radeon 5000 series, and nobody really cared about power consumption on GPUs anyway until nVidia made it a problem. So anyway, when they made Quadros and Teslas out of them, they underclocked the GPUs relative to what they were doing with the GeForces, which made them slower, cooler and more stable, and declared victory.

 

Then they fixed a bunch of the problems when they made the GF104, which is a better design but easier to make because it's much more conservative. It wasn't until the GF110 (the GeForce 580) that they got most of the problems worked out, and were able to use the power more efficiently and turn on all the cores.

 

Hopefully they update the Quadro line soon with the new chips, but I've got the sneaking suspicion they won't until the current ones have had several more months to sell.


The issue with Fermi is it was a totally new architecture. There is an interview where the CEO of Nvidia basically says it is their fault it sucked to begin with, because the design team and engineers were not talking during its creation.

The original intent of the GF100 Fermi was to have the full 512 CUDA cores, but the yields were so bad they laser-cut one shader block (32 cores) off to help stability and temperatures.

 

Now with the GTX 580 out, that is what Fermi was meant to be in the beginning. The bad thing is that Devin says he really needs a card with a substantial amount of video RAM. Like I said originally, I would buy a GTX 580, but his scenes will not be able to load all the data because of the limited VRAM, so he is stuck with what he can get at the moment: the Quadro 6000.

 

 

It is a toss-up... I would go GTX 580, but if you need something with more VRAM and money is no object, get the Quadro 6000. Then again, who knows if Nvidia will create a Quadro 7000 or whatever based off the GF110 GTX 580 architecture chip that has the full 512 CUDA cores.


Does anyone know if Vray GPU computing uses memory differently than regular CPU computing? I don't think it's any different, but maybe I'm wrong and it uses less. It's really going to kill me if they come out with a 7000 a month or two after I buy this thing. I wish they made a card that allowed you to add memory to it.


The memory used in video cards is a bit different. The memory architecture is more parallelized than system memory, which lets it hit higher transfer rates - but with the same limitations as the parallelization of the GPU cores. Say you've got 1GB of video card RAM on 8 lanes: you've got 8x the total throughput you'd have on 1, but you can only allocate 128MB in any one place at a time. But you've also got, say, 480 cores, each of which has its own memory space, and those are simplistic and can't have anything to do with each other because of limitations of the architecture, so if they could make it that way, you could have your memory divided 480 ways. (This is why stats on things like video card cores and RAM throughput look ridiculously high compared to system cores and RAM - they're not actually comparable numbers. Is a GTX 480 GPU really comparable to 480 CPU cores? Of course not, but nVidia would be happy if you thought so.)

Because of those differences, video card RAM has to be very tightly integrated into the design; any inefficiencies like going through DIMM-socket-type pins would screw things up pretty badly, and most real changes in memory setup require testing of a new reference card. It's also more expensive to make these things this way, though the prices come down like everything else. A 1GB video card was ridiculous not long ago; now there are $40 cards with 1GB, but not the most complex types. (They used to make video cards with memory slots, but that was a long time ago, when this stuff had fewer constraints on design.)
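To put rough numbers on that lanes-vs-throughput point, here's a toy Python calculation using the hypothetical figures above (the per-lane bandwidth is a made-up number for illustration, not a real card spec):

```python
# Toy numbers illustrating parallelized video RAM: total throughput scales
# with the number of lanes, but the largest single allocation is limited to
# one lane's share. The per-lane bandwidth is an assumed figure, not a spec.
total_ram_mb = 1024      # 1GB of video card RAM
lanes = 8                # memory split across 8 independent lanes
per_lane_bw_gbs = 10.0   # assumed bandwidth of a single lane, in GB/s

aggregate_bw_gbs = per_lane_bw_gbs * lanes   # 8x the throughput of one lane
max_single_alloc_mb = total_ram_mb // lanes  # only 128MB in any one place

print(aggregate_bw_gbs, max_single_alloc_mb)  # 80.0 128
```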

 

But, ummm, I guess that's not what you're asking :) There's less to load into video card RAM, so 6GB goes farther - 6GB is quite a lot on a video card. Hell, remember when Chaosgroup wowed everybody with the Vray RT-GPU demos: they were running 3x GTX 480 with 1.5GB each - but notice that there have been demos with a lot of geometry or a lot of textures, but not a lot of both. (Each video card needs enough RAM for the whole scene - it doesn't get shared.) So I figure if 4GB consumer cards become a reality next year, and the GF100's problems are behind us, this will all become a lot easier and less expensive to deal with.
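The "RAM doesn't get shared" point is just arithmetic, so here's a few lines of Python that make it concrete, with the 3x GTX 480 demo rig from above plugged in (the scene sizes are example numbers):

```python
# VRAM across multiple cards does not pool: the whole scene must fit on
# EVERY card, so the limit is the smallest single card, not the sum.
def scene_fits(scene_gb, card_vram_gb):
    """True if a scene of scene_gb GB fits on every card in card_vram_gb."""
    return all(scene_gb <= vram for vram in card_vram_gb)

# Three GTX 480s with 1.5GB each, as in the Vray RT-GPU demos.
print(scene_fits(1.2, [1.5, 1.5, 1.5]))  # True - fits on each card
print(scene_fits(4.0, [1.5, 1.5, 1.5]))  # False - 4.5GB total doesn't help
```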


....I haven't made my way to GPU rendering yet, due to a number of reasons - the most pressing being time - but is there an accurate way to measure scene memory requirements in terms of how they relate to RAM?

 

Can I simply add everything up in terms of disk size (geometry + textures), or is there a more complex calculation?


This reminds me of being on a 32-bit Windows machine and needing to be conscious of my RAM usage. I can't remember well, but it seems like I did a fairly straightforward calculation and then allowed 400MB or so of padding for calculations and temporary storage of the irradiance and light caches. That was 4 years ago, though, so my memory of how it actually worked has long been diluted with glasses of wine and old age.
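That back-of-the-envelope calculation might look something like this in Python - the 400MB padding and the texture sizes are illustrative assumptions, and note that compressed textures count at their uncompressed in-memory size, not their disk size:

```python
# Rough scene-memory estimate: geometry size plus textures at their raw
# in-memory size (width x height x bytes per pixel), plus fixed padding
# for working data like the irradiance and light caches. All figures are
# illustrative assumptions, not measured values.
def estimate_ram_mb(geometry_mb, textures, padding_mb=400):
    """textures: list of (width, height, bytes_per_pixel) tuples."""
    tex_mb = sum(w * h * bpp for w, h, bpp in textures) / (1024 * 1024)
    return geometry_mb + tex_mb + padding_mb

# e.g. 600MB of geometry plus two 4096x4096 RGBA (4 bytes/pixel) textures
print(estimate_ram_mb(600, [(4096, 4096, 4), (4096, 4096, 4)]))  # 1128.0
```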

 

According to this guy, you need to match the amount of RAM your scene is utilizing with the amount of RAM on your card. It is towards the bottom of his article.

http://www.jigsaw3d.com/articles/v-ray-rt-gpu-benchmark-test

 

This makes sense if RAM = RAM, more or less, at least after you throw out the specs. That's just a guess, though, but I would think it to be correct.

