
why has GPU rendering not completely eliminated CPU options


John Dollus

So, with all of the advances in GPU cards, lower price points and render-engine optimization over the past few years, why isn't everyone cranking out 10-minute print-quality renders?

Is it because it takes too much time to learn how to set up a scene so it doesn't require a dozen passes and a couple of hours of Photoshop artistry to come up with a presentable image?

 

The latest issue of Building Design and Construction has an article from Gensler boasting about producing top-notch renderings in mere minutes with Octane, using gaming cards and only 10 minutes of training, and yet people here seem to want to sell their licenses of the software.


Almost every GPU renderer on the market is feature-incomplete compared to fully matured CPU engines. This is due to both hardware (GPU) and software (CUDA/OpenCL) limitations; it took Octane 5+ years to integrate features that V-Ray had from the start.

 

Current hardware still imposes limits on scene size: even top compute cards like the nVidia Tesla K40 or Quadro K6000 have 12 GB of RAM, compared to the 32 GB and up you can already fit on a mainboard. That severely restricts what kind of visuals fit into a scene, and many people don't even seem to be aware of it until they find they can't render their project at all.
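
To put a rough number on that scene-size ceiling, here is a back-of-envelope sketch in Python; every asset count and byte size in it is an illustrative assumption, not a figure from any particular engine or project:

```python
# Rough back-of-envelope estimate of how much GPU memory a scene needs.
# All asset counts and byte sizes below are illustrative assumptions.

GB = 1024 ** 3

def texture_bytes(width, height, channels=4, bytes_per_channel=1, mip_overhead=1.33):
    """Uncompressed texture footprint, with ~33% extra for mip maps."""
    return width * height * channels * bytes_per_channel * mip_overhead

def geometry_bytes(triangles, bytes_per_triangle=100):
    """Vertices, normals, UVs and BVH nodes; ~100 B/triangle is a common ballpark."""
    return triangles * bytes_per_triangle

scene = {
    "geometry": geometry_bytes(triangles=30_000_000),   # building, entourage, vegetation
    "textures": 150 * texture_bytes(4096, 4096),        # one hundred fifty 4K maps
}

total = sum(scene.values())
for name, size in scene.items():
    print(f"{name:10s} {size / GB:6.2f} GB")
print(f"{'total':10s} {total / GB:6.2f} GB   (vs. 12 GB on a Tesla K40 / Quadro K6000)")
```

Even with those fairly modest assumptions the scene lands well past 12 GB, while a CPU box would swallow it without blinking.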

 

Some renderers (Redshift) have recently incorporated out-of-core rendering, which bypasses this limit to a certain extent, at a performance cost.
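
Out-of-core rendering is essentially paging: assets stay in system RAM (or on disk) and only the pieces the renderer currently needs are kept resident on the GPU, at the cost of host-to-device transfers. A toy sketch of the idea (my own simplification, not how Redshift actually implements it):

```python
from collections import OrderedDict

class OutOfCoreCache:
    """Toy LRU tile cache with a fixed byte budget standing in for VRAM.
    Real renderers do this asynchronously and at much finer granularity;
    this only illustrates the paging idea and its transfer cost."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.tiles = OrderedDict()   # tile_id -> size in bytes
        self.uploads = 0             # host-to-device transfers (the performance cost)

    def fetch(self, tile_id, size):
        if tile_id in self.tiles:                         # already resident
            self.tiles.move_to_end(tile_id)               # mark as recently used
            return
        while self.used + size > self.budget and self.tiles:
            _, evicted = self.tiles.popitem(last=False)   # evict least recently used
            self.used -= evicted
        self.tiles[tile_id] = size
        self.used += size
        self.uploads += 1                                 # each upload costs PCIe bandwidth

cache = OutOfCoreCache(budget_bytes=2 * 1024**3)          # pretend only 2 GB is free
for ray in range(100_000):
    # rays cycling through 500 tiles of 16 MB each: far more data than fits,
    # so the cache thrashes and almost every fetch becomes a transfer
    cache.fetch(tile_id=ray % 500, size=16 * 1024**2)
print("host-to-device uploads:", cache.uploads)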

 

" so it doesn't require a dozen passes and a couple hours of photoshop artistry "

This has nothing to do with whether the renderer is built on GPU or CPU kernels; the statement doesn't actually follow at all.

At the moment there are "simple" path-tracing engines on both sides (CPU: Maxwell, Corona, etc.; GPU: Octane, Arion, etc.) as well as fully biased solutions (CPU: V-Ray, Mental Ray, etc.; GPU: Redshift, FurryBall), but all of them offer a multi-pass compositing workflow regardless of their kernels, algorithms and external controls.
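
That compositing workflow is engine-agnostic because render elements recombine the same way no matter which kernel produced them. Here is a NumPy sketch of a typical additive "back to beauty" recombination; the pass names and the exact formula vary per engine, this just follows a common V-Ray-style convention and is purely illustrative:

```python
import numpy as np

# Render elements (float32 HDR buffers). In practice these would be loaded
# from EXR files written by any CPU or GPU engine; zeros are placeholders.
h, w = 1080, 1920
passes = {name: np.zeros((h, w, 3), dtype=np.float32)
          for name in ("raw_gi", "raw_light", "diffuse_filter",
                       "reflection", "refraction", "specular", "self_illum")}

# Typical additive recombination: lighting passes are modulated by the
# diffuse colour, then the remaining components are summed on top.
beauty = (
    (passes["raw_gi"] + passes["raw_light"]) * passes["diffuse_filter"]
    + passes["reflection"] + passes["refraction"]
    + passes["specular"] + passes["self_illum"]
)

# From here the usual Photoshop/Nuke-style grading is applied per pass,
# exactly the same whether the elements came from a CPU or a GPU renderer.
```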

 

The speed-up myth: there have been mythical and completely false claims about how much faster GPUs are than CPUs (on the order of 100 to 1,000 times). This was nonsense from the start and has been debunked many times, both by actual comparisons between engines and by raw performance measurements. Even allowing for faster development on the GPU front, at the moment I would say, conservatively, you won't get more than about twice the performance.

 

Cost and reliability: GPUs aren't cheaper, nor do they offer better performance per dollar. Sure, a 780 Ti with 6 GB might be the equivalent of a top i7, and both cost around 500 dollars. But if you want the 12 GB of VRAM of a Tesla K40/Quadro K6000 with identical performance, you're looking at 4,000 dollars. That's an 8x price jump, for slightly better (but still severely limiting) flexibility in scene size.
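
For what it's worth, the arithmetic behind that comparison looks like this (a quick illustrative calculation using the ballpark prices above and the equal-performance assumption stated in the paragraph):

```python
# Ballpark price/performance using the figures quoted above.
# "relative_speed" is normalised to the i7; the equal-speed assumption
# comes from the paragraph, not from any benchmark of mine.
options = {
    "top i7 (CPU)":             {"price": 500,  "relative_speed": 1.0, "memory_gb": 32},
    "GTX 780 Ti 6GB":           {"price": 500,  "relative_speed": 1.0, "memory_gb": 6},
    "Tesla K40 / Quadro K6000": {"price": 4000, "relative_speed": 1.0, "memory_gb": 12},
}

for name, o in options.items():
    perf_per_kdollar = 1000 * o["relative_speed"] / o["price"]
    print(f"{name:26s} ${o['price']:>5}  {o['memory_gb']:>2} GB  "
          f"speed per $1000 = {perf_per_kdollar:.2f}")
# The Tesla/Quadro option buys an 8x worse speed-per-dollar ratio in exchange
# for 12 GB of VRAM instead of 6 GB, which is still well below what a
# workstation mainboard can hold.
```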

 

All in all: GPU rendering, just like five years ago, is still nowhere near a usable state for most production. The handful of studios using it here and there pales in comparison to those who still stick with CPU.

 

With nVidia slowly withdrawing from the GP-GPU segment (which is where those compute cards were aimed), this fad may actually die out eventually, coming full circle back to pure CPU.

Or real-time engines will evolve to allow full-scale visualization, which would again leave current GPU renderers in the dust.

Or maybe something else entirely will happen; who knows.


Thanks for confirming.

It's incredibly annoying when an article comes out (conveniently timed with a software update and conference) extolling the coming rapture of rendering, and it gets read by the CEO, who passes it around to the entire department demanding to know why we can't produce photorealistic renders for 24x36 boards in minutes, like the firm in the article insists is possible.


The marketing of some of the renderers out there has been truly awful, overdone to the extreme.

What's funny is that it happens again and again, practically every year.

 

One big-name example is especially funny ("the only true xyz", "finally true abc", etc., repeated thirty times over).

 

Now I can see how frustrating it is when someone not so knowledgeable falls for these cheap claims and forces them on everyone else.


 

"Seven seconds. That is all it took for Gensler’s Mark Bassett to generate a geometrically accurate, photorealistic rendering from one of the firm’s Revit models."

 

“To put it into perspective, when Pixar was developing the movie ‘Cars 2,’ its average rendering time per frame was 11.5 hours,” says Bassett. “We’re rendering at seven seconds a frame using Octane. It changes everything. There are now Hollywood movies being made using this technology.”

 

LOL?!? :eek:


The same article was sent to me on Monday of this week by one of our architects. He meant well. Although I have not yet had someone ask me why I cannot crank out 3-minute renders where "The lighting and shading were spot on. The materials, finishes, fixtures, and furnishings were precisely represented", I expect that question will be coming soon. When you're talking about producing fast design renders for the purpose of collaborating on and refining spaces, I think they were onto something; that's not how the article was written, though. Although these words were not used, they made this solution sound like the end-all that will essentially eliminate the need for visualization artists, because architects and designers now have what they need to produce marketing-level images (and animations) themselves, so quickly in fact that the game has changed forever. Even if Mark Bassett has put his time into the business and earned the title of "game changer", the article was over the top.

 

Do we all want to get better and progress as individuals and as an industry? Of course. But to claim "The lighting and shading were spot on. The materials, finishes, fixtures, and furnishings were precisely represented" is, as we all know, silly. There is no holy grail of visualization; that's what makes images by seasoned artists so special. The day I no longer have to work at what I love because it's all done for me is the day I find another line of work.

 

Maybe one day we can just Google the specs and our machine will squeeze out a real work of art!!


Hello, my name is Panos and I'm one of the Redshift cofounders/developers.

 

I accidentally bumped into this thread and just wanted to make a couple of comments.

 

First of all, I am in complete agreement that GPU rendering speedup claims are sometimes exaggerated to a ridiculous point. And it personally annoys me because it's creating an unrealistic expectation (and subsequent disappointment) for potential customers, which causes more harm than good to companies like ours.

 

Unless you're talking about some limited test scenario, yes, a GPU renderer will not render a frame (at the same quality) hundreds or thousands of times faster than a CPU renderer. If it did, we wouldn't be having this discussion! :-)

 

But make no mistake, in the vast majority of cases a GPU renderer will beat a CPU renderer. And it will do it by more than just twice as fast! Depending on which CPU/GPU you compare, you will often hear about performance ratios in the range of 5/10/20/30 times faster. Don't take my word for it: talk to anyone who's using a GPU renderer for their day-to-day work and they should be able to confirm it. If GPU rendering wasn't considerably faster why would anyone spend money buying a GPU and accept any limitations on the featureset?
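
The only way to settle the ratio argument honestly is to measure equal-quality renders of your own scenes on your own hardware. Purely as an illustration of how strongly the ratio depends on the workload and the parts being compared, here is a tiny NumPy-vs-CuPy timing sketch; this is not a renderer benchmark, and it assumes CuPy and a CUDA-capable GPU are available:

```python
import time
import numpy as np

def timed(fn, *args, repeat=5):
    """Return the best wall-clock time over a few repeats."""
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

N = 50_000_000
x_cpu = np.random.random(N).astype(np.float32)

cpu_time = timed(lambda a: np.sqrt(a * a + 1.0).sum(), x_cpu)
print(f"CPU: {cpu_time:.3f} s")

try:
    import cupy as cp
    x_gpu = cp.asarray(x_cpu)                  # one-off host-to-device copy

    def gpu_op(a):
        cp.sqrt(a * a + 1.0).sum()
        cp.cuda.Device().synchronize()         # wait for the kernel to finish

    gpu_op(x_gpu)                              # warm-up (kernel compilation)
    gpu_time = timed(gpu_op, x_gpu)
    print(f"GPU: {gpu_time:.3f} s  speedup ~{cpu_time / gpu_time:.1f}x")
except ImportError:
    print("CuPy not installed; skipping the GPU half of the comparison.")
```

The measured ratio will swing wildly depending on the CPU, the GPU, and whether the workload is bandwidth- or compute-bound, which is exactly why blanket "X times faster" claims are meaningless without the hardware and scene attached.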

 

Regarding the comment about NVidia pulling out of GPGPU... can you provide some references? Because from our perspective (and knowing about their roadmaps, SDK development, etc.), I'd say it's actually quite the opposite!

 

Look, you guys are pros and you'll obviously use whatever tool you feel most comfortable with. If some limitation of a GPU renderer is a deal breaker for you... well, that's the end of the story, really! :-)

 

But I would certainly keep an eye on GPU rendering. It's not going to go away. And any limitations you find with it today, they might not be there tomorrow.

 

Regards,

 

-Panos

Edited by panagiotiszompolas

I just LOVE this quote from the linked article

“To put it into perspective, when Pixar was developing the movie ‘Cars 2,’ its average rendering time per frame was 11.5 hours,” says Bassett. “We’re rendering at seven seconds a frame using Octane. It changes everything. There are now Hollywood movies being made using this technology.”

Equating a CARS 2 frame to an architectural Revit model is simply ludicrous. What's even more insulting to the reader is the title: "Gensler...looks to the gaming and moviemaking industry"... wouldn't that imply that all of Hollywood is using Octane? And that the movie and gaming industries use the same tools?

And did you see the animation of the 2 robots they 'made in an hour'? Yeah, it looked like an hour. Well, I'm sure Hollywood is not looking over its shoulder, quaking in its collective boots.

 

And if that's not enough, Gensler had the nerve to brag that they'd hoodwinked a client with their instant renders to win a job.

Good for you Gensler.

 

Edit: And... they touched a nerve with me using the word photo-real. I'm really getting annoyed with the term being applied to anything that's not a SketchUp linework render. The only work that may be called photoreal is work that can pass as a photo. It's an extremely high bar.

Edited by Tommy L

Hey, first of all, nice to meet you :- ) Your renderer is in fact now the most promising of the GPU school; I am very impressed myself.

 

But not to the point of using it ;- ) [not that I could even try it without a 3dsMax plugin, but I will once that's possible]

 

I still own a licence of Octane (I have since the very early alpha, so a few years now, and I am still in touch with its development), so I can compare it against all the renderers I use, and I use quite a few, mainly Corona and V-Ray.

It's absolutely not 5-30 times faster in any real-life scenario. Almost every comparison, and the forums are now full of them, has a slightly biased path tracer like Corona on an average i7-4930K beating Octane on an average 780 Ti. Octane is foremost a fairly clean unbiased path tracer, so while a 'nice result' comes in 5 minutes, production quality comes in 50, and that is keeping things as unbiased, as close to photo-real, as possible. The same happens with iray, which may well be the slowest GPU path tracer out there: the only production studio I know who use it, DeltaTracing, claim about 1 to 2 hours for a full hi-res image, and that is with six Tesla K20s (roughly 580 GTX equivalents, I think). Definitely not impressive.
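
That 5-minute/50-minute gap is roughly what unbiased Monte Carlo convergence predicts: noise falls off as one over the square root of the sample count, so ten times the render time only buys about a 3x cleaner image. A quick back-of-the-envelope check:

```python
import math

# Monte Carlo convergence: standard error scales as 1 / sqrt(samples),
# and samples scale (roughly) linearly with render time.
def relative_noise(render_minutes, reference_minutes=5.0):
    return math.sqrt(reference_minutes / render_minutes)

for minutes in (5, 10, 20, 50):
    print(f"{minutes:>3} min  ->  noise ~{relative_noise(minutes):.2f}x "
          f"of the 5-minute preview")
# 50 minutes gives only ~0.32x the noise of the 5-minute 'nice result',
# which is why the last stretch to production quality is so expensive,
# on GPU or CPU alike.
```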

 

Now, your renderer does seem incredibly fast. But how much of that is the speed of the GPUs versus the underlying technology? After all, the produced imagery doesn't look very photo-real, as quite a few users I have talked to have observed. V-Ray can give you an image in 2 minutes if you accept heavy compromises in realism; it can crunch it out. Or it can run for 50 hours if you want to compute every little GI caustic naturally with brute force. It's a matter of choosing how the engine deals with it, and it's in no way connected to the speed of the CPU.

[disclaimer: I don't think you're aiming at hardcore archviz anyway, and that is the only thing I compare; the quality might be an excellent compromise for animation/media]

 

Regarding people spending money on GPUs and accepting the limitations: let's be honest, most people might not even be aware the limitations exist. Before you came along, the VRAM limit was a very serious thing that pretty much restricted GPU rendering to the small-scale design world and the hobby world. And even so, I don't see anyone jumping massively on the bandwagon. Octane came out 5 YEARS ago and it's still a little corner player, even with OTOY's giant financial backing. The whole segment is minuscule compared to traditional CPU renderers, and I don't see that changing any time soon.

 

Regarding nVidia slowly pulling out of GP-GPU (not the enthusiast market, but full-scale farms rivalling IBM), you will have to forgive me and let me come back later with some linked evidence. To be honest, though, if we look at the roadmaps (CUDA, for example), I don't see much happening at all. I am not a developer, but my brother is, and he can't decide whether he's more disappointed in the existence of CUDA or the non-existent advancement of OpenCL. I still think it will become abandonware eventually.

 

But don't worry, if it DOES change, I will be the first to jump on board :- ) I have no problem eating my words, apologizing and enjoying the new benefits. I try everything there is.

Edited by RyderSK

I think we can all agree from that article that Gensler imagery clearly = ILM/Weta/Blur. You can tell the article is full of BS when people say things like, "Well, there are movies being made with this technology." That's fine, but without supporting evidence your claim is pretty well useless. What movies? Avengers 3 or Sharknado 45?

 

A seven second render will always look like a seven second render, no matter how much you try to zazz it up.

 

By the way, this post is going to be used in movies. Lots of movies. What movies you ask? Those movies.

Edited by VelvetElvis

Currently, animations are cost-prohibitive (running upwards of $15,000 in outsourcing costs for a well-executed 60-second clip), and the results are hardly ever impressive. “Animations are particularly problematic,” says Bassett. “We currently subcontract this work out, where a minute of ‘good’ quality video is still no match for Hollywood standards.”

 

 

This 23-second movie clip was produced by Gensler designers in about an hour using Octane Render. It is illuminated by sunlight, seven artificial lights, plus a light box wall with an illuminated image, all taxing on a renderer, according to Gensler's Mark Bassett.

 

:rolleyes:

 

I don't know what to say.

 

edit: I can't help but think they could save themselves a lot of time/effort/research by purchasing Lumion3D, given the quality level they're targeting.

Edited by beestee

Hi,

 

Nice to meet you too!

 

It's interesting to hear that the Octane comparisons against Corona were unfavorable. I'm pretty surprised by those iray times too... especially given the 6 GPUs! I did a quick search and only found an Octane/Corona comparison on the Corona forums. If you can share any other links, I'd really appreciate it! :-)

 

Regarding the lack of realistic images done with Redshift: in terms of tech we're not actually missing much that would prevent someone from making such imagery. Due to the lack of a 3DSMax version the majority of our users (in Softimage/Maya) belong to the Media/Animation segment where, as you know, the requirements are sometimes different: saturated graphics, AO, heavily tweaked light/bounce settings, unconventional shader setups, heavy AOV postprocessing, etc. All of which are often geared towards 'pretty' and 'controllable' instead of 'photoreal'. We hope that, with the release of our 3DSMax plugin, we should see more realistic archviz examples and hopefully get some useful feedback from users to help us improve on that front! :-)

 

On the topic of Redshift's performance: yes a percentage of the performance gains is not because of the GPU itself but because of our own optimizations which could be applied to a CPU renderer too. But I can confidently say that the majority of the performance gain is indeed because of the GPU performance.

 

It's hard to predict the future of hardware but one trend that has been apparent in the last 2-3 years is that CPU progress has somewhat slowed down. There are not too many killer apps that need 8-16 core CPUs so the market for them is becoming more and more limited. On the other hand, GPUs keep getting faster because of increasing videogame graphics requirements. While the progress has slowed down on that front too, it is expected to start growing faster again given the new videogame console releases (more advanced rendering techniques need better GPUs and more VRAM) as well as the introduction of 4K monitors. Whether this means that the CPU/GPU performance gap will widen or not... this remains to be seen.

 

Finally, regarding those marketing images/videos that some other user posted here... yep, those are the type of exaggerated marketing claims that can easily backfire! :-)

 

-Panos


It's true that GPUs have been evolving much more rapidly than CPUs in recent years, though still with a primary focus on gaming, and the fear remains that a major player like nVidia will again artificially cripple compute performance the way it did between Fermi and Kepler. It definitely looks very positive for real-time engines, but I am still very sceptical about regular offline renderers. Maybe we'll just see how it goes in a few years.

 

Well, let's see then once the Max plugin comes out ! :- )

 

[I've read the article again... oh god, 15k for an animation and it's still NOT Hollywood-grade? What are we doing...]


Yes, but clients expect that 15K to exceed Hollywood standards. Anyone remember that awful Spire animation a few years ago? Hollywood couldn't exceed our standards :)

 

jhv

 

Well, the Spire animation was pretty good, I thought; it just didn't live up to its price tag.


May I ask, is "good enough" today's new normal? Architects and designers are using tools meant to speed up the collaboration process as the final stop before work heads out the door for print, and they are presenting it to their clients as "spot on". This technology is obviously helping the evolution of how we work and think, but Gensler presented this particular solution as a beauty-shot and animation tool, not so much as a collaboration tool. So I ask my fellow artists: if the client sees what is being produced by designers and architects and believes it to be "a spot-on beauty shot", is this new "real-time" technology hurting us or helping us?


End clients are not blind, they are not fools, and they are paying the bills. Whether something is good enough is up to them. I know that if I tried to shoehorn a second-rate piece past my clients, they would send it back with red scribbles all over it.

I rarely use real-time stuff; I occasionally use V-Ray RT in the initial stages of setting up complex lighting, but that's about it.

With my current setup I can output 6K images in about an hour at final settings. That's pulling in the whole farm (12 nodes). How does real-time rendering (such as Octane) perform for high-res stills?
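
Part of the answer for high-res stills is plain memory budgeting: the frame buffer and its render elements have to sit in VRAM alongside the scene. A rough illustrative calculation (the resolution and pass count are assumptions for the sake of the example, not anyone's actual project):

```python
# How much VRAM just the output buffers of a 6K still might need.
# Resolution and pass count are illustrative assumptions.
width, height = 6000, 4000
channels, bytes_per_channel = 4, 4      # RGBA, float32
render_elements = 10                    # beauty plus typical AOVs for comp

framebuffer_gb = (width * height * channels * bytes_per_channel
                  * render_elements) / 1024**3
print(f"{framebuffer_gb:.2f} GB of VRAM for the frame buffers alone")
# ~3.6 GB before any geometry or textures are loaded, which is a big slice
# of a 6 GB gaming card and still significant on a 12 GB Tesla.
```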


For example, I work in an architectural office that uses Revit for design and construction docs. Recently, designers have been producing renders using Revit and Photoshop. That's slow. Using Octane, however, their output would increase. That's great as long as it is for design and collaboration purposes. In this particular case, though, the client is using one of the designers' renders for a fundraising presentation, simply because they don't see the need to pay for a high-end render after seeing the designer's version. The designer's render is the final render, and the client is OK with that. I'm simply saying that I believe we will be seeing more of this; I know I certainly am. Within the architectural office the dynamics are changing. I don't know how that will affect render houses and consultants, but 10 years from now architectural firms will produce presentation materials differently than they do now. I'm watching it happen.


I was just reading an article yesterday about how, in a few years, many of the jobs we once thought of as safe will be taken over by machines. Architects, lawyers and even some doctors were on that list, and I have to say I agree with that prediction, at least in part. Why would someone hire a person to create construction drawings or draw up a contract if they could get a computer to do it? Consider how much liability comes with an employee. This is the way the world is moving: right or wrong, people will always look for the cheaper option unless there's a good reason not to. I think we need to be ready for when the one-button render engine makes its appearance; still renderings especially may become the domain of the designer, not the illustrator.


My only argument against that is that, until computers that can truly "think" are mainstream, they will not take over the world.

 

One thing all those professions have in common is listening to people talk and being able not only to understand but to interpret what they are and are not saying. The same goes for our jobs. I, for one, welcome one-button rendering; then it will truly become like photography, leaving me to get on with the "artsy" side of the job. As it is, far too much of our time is taken up fiddling with settings.

 

jhv

