panagiotiszompolas

Members
  • Posts: 3
  • Joined

Personal Information

  • Country: United States

panagiotiszompolas's Achievements

Newbie (1/14)

10 Reputation
  1. Hi, I agree with pretty much everything you said except for the statement that GPU rendering introduces a more 'gamey' look. Unless you're talking about true realtime GPU renderers (like Unreal, Unity, etc.), the majority of GPU ray tracers out there execute very similar rendering algorithms to their offline CPU counterparts, so there shouldn't be any difference in the look (see the first sketch after these posts). Some of them do have faster rendering modes that take shortcuts (like that Octane video above, hence all the blue light leak), but these modes typically exist alongside more accurate rendering options. -Panos
  2. Hi, nice to meet you too! It's interesting to hear that the Octane comparisons against Corona were unfavorable. I'm pretty surprised by those iRay times too... especially given the 6 GPUs! I did a quick search and only found an Octane/Corona comparison on the Corona forums. If you can share any other links, I'd really appreciate it! :-)

     Regarding the lack of realistic images done with Redshift: in terms of tech we're not actually missing much that would prevent someone from making such imagery. Due to the lack of a 3DSMax version, the majority of our users (in Softimage/Maya) belong to the Media/Animation segment where, as you know, the requirements are sometimes different: saturated graphics, AO, heavily tweaked light/bounce settings, unconventional shader setups, heavy AOV postprocessing, etc. All of which are often geared towards 'pretty' and 'controllable' instead of 'photoreal'. We hope that, with the release of our 3DSMax plugin, we'll see more realistic archviz examples and hopefully get some useful feedback from users to help us improve on that front! :-)

     On the topic of Redshift's performance: yes, a percentage of the performance gains comes not from the GPU itself but from our own optimizations, which could be applied to a CPU renderer too. But I can confidently say that the majority of the performance gain is indeed due to the GPU. It's hard to predict the future of hardware, but one trend that has been apparent in the last 2-3 years is that CPU progress has somewhat slowed down. There are not many killer apps that need 8-16 core CPUs, so the market for them is becoming more and more limited. On the other hand, GPUs keep getting faster because of increasing videogame graphics requirements. While progress has slowed down on that front too, it is expected to pick up again given the new videogame console releases (more advanced rendering techniques need better GPUs and more VRAM) as well as the introduction of 4K monitors. Whether this means the CPU/GPU performance gap will widen or not... that remains to be seen.

     Finally, regarding those marketing images/videos that another user posted here... yep, those are the type of exaggerated marketing claims that can easily backfire! :-)

     -Panos
  3. Hello, my name is Panos and I'm one of the Redshift cofounders/developers. I accidentally bumped into this thread and just wanted to make a couple of comments.

     First of all, I am in complete agreement that GPU rendering speedup claims are sometimes exaggerated to a ridiculous point. It personally annoys me because it creates unrealistic expectations (and subsequent disappointment) for potential customers, which causes more harm than good to companies like ours. Unless you're talking about some limited test scenario, yes, a GPU renderer will not render a frame (at the same quality) hundreds or thousands of times faster than a CPU renderer. If it did, we wouldn't be having this discussion! :-)

     But make no mistake, in the vast majority of cases a GPU renderer will beat a CPU renderer, and by more than just a factor of two. Depending on which CPU/GPU you compare, you will often hear about performance ratios in the range of 5/10/20/30 times faster (the second sketch after these posts shows how such a like-for-like measurement can be set up). Don't take my word for it: talk to anyone who's using a GPU renderer for their day-to-day work and they should be able to confirm it. If GPU rendering weren't considerably faster, why would anyone spend money buying a GPU and accept any limitations on the featureset?

     Regarding the comment that NVidia is pulling out of GPGPU... can you provide some references? Because from our perspective (knowing about their roadmaps, SDK development, etc.), I'd say it's actually quite the opposite!

     Look, you guys are pros and you'll obviously use whatever tool you feel most comfortable with. If some limitation of a GPU renderer is a deal breaker for you... well, that's the end of the story, really! :-) But I would certainly keep an eye on GPU rendering. It's not going to go away. And any limitations you find with it today might not be there tomorrow.

     Regards,
     -Panos
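A note on the claim in the first post, that GPU ray tracers run essentially the same algorithms as their offline CPU counterparts: the sketch below is plain CUDA, not Redshift or Octane code, and the toy shade() function and its constants are invented purely for illustration. It shows how a single shading routine can be compiled for both processors with the __host__ __device__ qualifier, so the CPU path and the GPU path execute the same arithmetic.

// A minimal sketch (not Redshift or Octane code): the same shading routine,
// compiled for both CPU and GPU via __host__ __device__. Identical arithmetic
// on either processor produces identical pixel values, i.e. the same "look".
#include <cstdio>
#include <cuda_runtime.h>

// Toy Lambertian response to a fixed "sky" direction (0, 1, 0).
// The albedo and lighting are invented for illustration only.
__host__ __device__ float shade(float nx, float ny, float nz)
{
    float cosTheta = ny > 0.0f ? ny : 0.0f;   // clamp N.L
    const float albedo = 0.8f;
    return albedo * cosTheta;                 // same math => same image
}

__global__ void shadeKernel(float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = shade(0.0f, 1.0f, 0.0f);     // GPU path
}

int main()
{
    const int n = 256;

    float cpuValue = shade(0.0f, 1.0f, 0.0f); // CPU path: the exact same call

    float* d_out = nullptr;
    cudaMalloc(&d_out, n * sizeof(float));
    shadeKernel<<<(n + 63) / 64, 64>>>(d_out, n);
    float gpuValue = 0.0f;
    cudaMemcpy(&gpuValue, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_out);

    printf("CPU: %f  GPU: %f\n", cpuValue, gpuValue); // values match
    return 0;
}

The printed CPU and GPU results match here; production renderers differ in scale and in optimization work, not in the kind of algorithm they run, which is why a GPU ray tracer need not look any 'gamier' than a CPU one.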
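On the speedup ratios mentioned in the third post: a like-for-like comparison means giving both processors the same amount of work (same resolution and sample count, i.e. the same quality) and comparing wall-clock times. The sketch below is a measurement template under that assumption, not a benchmark: samplePixel() is a toy stand-in for tracing a pixel, and the single-threaded CPU loop will overstate the GPU's advantage compared to a fully threaded CPU renderer.

// A measurement-template sketch, not a benchmark: run the same sample count
// ("quality") on CPU and GPU and report the wall-clock ratio. samplePixel()
// is a toy stand-in for tracing one pixel.
#include <chrono>
#include <cstdio>
#include <math.h>
#include <cuda_runtime.h>

__host__ __device__ float samplePixel(int pixel, int samples)
{
    float acc = 0.0f;
    for (int s = 0; s < samples; ++s)
        acc += sinf(pixel * 0.001f + s * 0.01f);  // stand-in for one sample
    return acc / samples;
}

__global__ void renderGPU(float* img, int pixels, int samples)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < pixels) img[i] = samplePixel(i, samples);
}

int main()
{
    const int pixels  = 1 << 20;  // ~1 Mpixel image
    const int samples = 256;      // identical sample count on both devices

    // CPU timing (single-threaded; a fair test would use all cores).
    float* cpuImg = new float[pixels];
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < pixels; ++i)
        cpuImg[i] = samplePixel(i, samples);
    auto t1 = std::chrono::steady_clock::now();
    double cpuMs = std::chrono::duration<double, std::milli>(t1 - t0).count();

    // GPU timing (kernel only; transfers excluded for brevity).
    float* d_img = nullptr;
    cudaMalloc(&d_img, pixels * sizeof(float));
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    renderGPU<<<(pixels + 255) / 256, 256>>>(d_img, pixels, samples);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float gpuMs = 0.0f;
    cudaEventElapsedTime(&gpuMs, start, stop);

    printf("CPU %.1f ms, GPU %.1f ms, speedup %.1fx\n",
           cpuMs, gpuMs, cpuMs / gpuMs);

    cudaFree(d_img);
    delete[] cpuImg;
    return 0;
}

A fairer CPU baseline would use every core (e.g. via OpenMP) and a complete comparison would include memory transfers; the reported number is simply the ratio of the two times at equal quality, which is how figures like 5x or 20x should be read.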