
How will the upcoming iRay and GPUs affect individual and small render houses?



I currently use V-Ray exclusively and am curious how the upcoming new technologies are going to impact my workflow.


Is iRay going to be integrated with Mental Ray in upcoming Max versions?


Will this new technology force one to use Mental Ray to take advantage of it?


Is the cost of new hardware to utilize this technology affordable for small and individual render houses?


Can the hardware be integrated with a current system by simply swapping out graphics cards, adding cards etc?


From what I believe, having read the blurb in the Max 2011 announcement, iRay will not be integrated into Mental Ray with the next release of Max, anyway.


Instead, the next release has the Quicksilver renderer, a hardware-based renderer that relies on graphics card resources.


So, unfortunately, all the render farms that wish to utilise hardware renderers will need to upgrade to decent graphics cards, which is something a lot of them won't have, as a good GPU has never been a vital ingredient of a successful farm.


However, I may be wrong and people may correct me; my understanding is limited.


This is an interesting question.

In my mind, your question is actually three separate questions.

How will iRay affect my workflow?

How will iRay affect my pipeline?

And, how will iRay affect my hardware needs?


One thing to consider is that iRay is a platform, not an application. It’s a way of harnessing the power of multiple GPUs over a network (including the World Wide Web). This means that from the hardware standpoint, you need not invest in new hardware. Application developers will either provide the computing power as part of their application (SaaS), or will subcontract the hardware portion to render farms which utilize GPUs – or both. Of course you will be able to deploy your own cluster of GPUs if you want to, but the idea is that your computation needs will be satisfied only when you need them, and the hardware utilized by others when you don’t – an idling render farm is quite a waste of money.


From the pipeline perspective, this really has to do with who will port their tools – or design new ones – to iRay. The currently available applications which use iRay are somewhat limited (see Jeff’s video of the iRay presentation at the nVidia conference last year). Basic things like changing light direction or camera position can be done with very fast visual feedback, but the ability to truly author the content by accessing the entire gamut of visual parameters is not currently available on the iRay platform. We have yet to see tools that can do more than provide fast navigation, so it is too early to understand how iRay will affect the common pipeline and how tools that use it will replace existing ones.


The workflow is in reality the most important piece of the puzzle. However, it’s too early to understand how it will change since, as I said above, there are no tools yet which harness iRay for complete 3D visual authoring. In order to understand how GPUs will affect the way we create CGI, we can take a wider look at the GPU visual-computing arena. Tools like Octane Render (http://www.refractivesoftware.com), Furry Ball (http://furryball.aaa-studio.cz/) and MachStudio Pro (http://www.studiogpu.com) are already changing the workflow landscape by harnessing the power of GPUs to author visual content in real time and near real time.


I don’t know much about Octane or Furry Ball, but I can tell you about MachStudio Pro as I work for StudioGPU.


In a GPU tool like MachStudio Pro (http://www.studiogpu.com/machstudio), the workflow is akin to “painting with light”. In fact, the ability to author visual content in real time has created the means to bring a “digitally tactile” sense of working with paints on a canvas. Such ability puts the emphasis back on the artistic skills of the user, far more so than the technical ones. You can make it look any which way you want. So… how do you want to make it look? This question can be answered through imagination, or experimentation. Whatever. The point is that the tools which help you imagine and experiment are among your most valuable tools as an artist, and the industry knows this. That is why tools which empower you to be an artist are the tools which will eventually affect your workflow.


The notion that one day we will be modeling while viewing our content in photorealism is simply misleading. So is the notion that someday there will be a “magic button” which will produce a photoreal image in one click – we like to clarify and ask, “So you want a ‘Make Art’ button?”. Even in the most classic 3D arts (like sculpting) the artist will deal with surface qualities (shaders) and presentation qualities (lighting) only after the shape has been sufficiently developed, if not completely finished. The artist is then using different tools and materials than the ones he used to create the shape. So it will be with visualization tools. The current move is to provide CG artists with the means to interact with the visual qualities of their scene in real time, effectively empowering them to express themselves with the least amount of technical limits.


This is the big workflow news: it’s not the technology but how we use it; it’s not the hardware, but the application. As the field of CG develops, we are being handed better and better tools to express ourselves. And since expression is so important for us, the tools which will change the workflow landscape will be those which empower and inspire us to express ourselves in the most natural and direct ways.

