How come Maxwell isn't mainstream?


M V

I was just browsing through the Visualizer of the Week forum, and almost every person in there is using Vray. My understanding is that Vray's speed may be why it's the renderer of choice, but I am really interested in using Maxwell because of its accurate results. Will Maxwell ever be able to compete with Vray?


It's used in my office and I won't touch it. It can produce great results very easily, BUT it carries a render-time hit of about 10-15x. My colleagues render to about 2000 pixels with limited polys (no trees or cars) and it takes them about 15 hours to get an image. I render to 4000 pixels with millions of polys and 3D entourage in generally under 8 hours using C4D.

 

In my opinion, it's just far too slow. It's so slow that I can't even commit the time to learn it, which is sad, because it has some really nice features.


It's slow. When it was new we were all very hopeful for it, but in the time since, Vray and mental ray have improved a lot, while a lot of customers soured on NextLimit because of the PR disaster and customer-service SNAFU with the nonfunctional "RC" versions leading up to the Maxwell 1.0 release (and many who liked the idea of a Maxwell-like product but did not like NextLimit switched to similar software like Fry).


Short answer:

I agree with Matt and the others who posted, with the exception of the point about Maxwell being boring.

 

Longer answer:

If anyone tried rendering full-blown GI with Vray when it first came out, it was equally unproductive and frustrating. Thankfully, it has stood the test of time and grown into a really powerful and, more importantly, very productive tool. NextLimit's product management team was also a problem right out of the gate, which is unfortunate, as their RealFlow support has always been top notch in my experience.

I also think Maxwell's biggest strength is also its biggest weakness (the same goes for any unbiased engine). For example, I can light an interior space with Maxwell in about 5 minutes and know with certainty that it'll be correct and look great when I come back in the morning (or on Monday morning :) ). That has to do with integrating fixture-schedule blocks in AutoCAD, a fairly solid background in architectural lighting design, a predefined light-fixture library set up for Max, and a handy script that replaces all of the AutoCAD blocks with Maxwell fixtures at the push of a button. I still haven't had a lot of success doing that with Vray; there are just too many variables to consider.
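
The block-swap workflow described above can be sketched roughly like this. This is a hypothetical illustration, not the poster's actual script: the block names, file paths, and FIXTURE_LIBRARY table are all made up, and a real version would live inside Max (e.g. as MaxScript) rather than standalone Python.

```python
# Hypothetical sketch of the fixture-swap idea: map each AutoCAD block name
# from a fixture schedule to a pre-built Maxwell fixture asset, so a whole
# schedule can be swapped in one pass. All names below are invented.

# Fictional lookup table: AutoCAD block name -> Maxwell fixture asset file
FIXTURE_LIBRARY = {
    "LT-RECESSED-6IN": "fixtures/recessed_6in.mxs",
    "LT-PENDANT-A": "fixtures/pendant_a.mxs",
    "LT-SCONCE-WALL": "fixtures/wall_sconce.mxs",
}

def resolve_fixtures(block_names):
    """Return (matches, unmatched) for a list of imported block names."""
    matches = {}
    unmatched = []
    for name in block_names:
        asset = FIXTURE_LIBRARY.get(name)
        if asset is not None:
            matches[name] = asset   # this block gets replaced by the asset
        else:
            unmatched.append(name)  # flag schedule entries with no library item
    return matches, unmatched

matches, unmatched = resolve_fixtures(
    ["LT-RECESSED-6IN", "LT-PENDANT-A", "LT-UNKNOWN"]
)
```

The useful design point is the unmatched list: blocks with no library entry get reported instead of silently skipped, which is what keeps the fixture schedule and the render scene in sync.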

 

I also think the development teams for pretty much all other engines took a sincere interest in the approach Maxwell took and are catching up very quickly with similar offerings.


Wasn't there some controversy about Maxwell a while back? No response from them regarding bugs, or something like that? That said, even the recent 50%-off Maxwell sale couldn't propel me to buy. :p

 

It did me. Frankly, a bargain.

 

I think the 'controversy' was that after the beta, the RC versions took a step backwards and caused a lot of problems. Users were annoyed there was no instant fix, and the forums went a bit quiet while everyone beavered away trying to fix the problems. Then v1 came and was nice and stable, and v2 rocks.


  • 1 month later...

This is an interesting thread, as I was just looking at the Maxwell site and wondering the same thing.

Surely if the gripe with Maxwell is purely down to speed, then this is a moot point, as computers are only going to get quicker, and if the output of Maxwell is physically accurate, that sets a very definite benchmark to aspire towards.

 

This morning I have been trying to choose colours and finishes for my new kitchen using Vray, and to be honest it is so subjective in terms of the scene lighting and other settings that it's almost pointless. Sure, I can make things 'look' nice, but whether the kitchen will ever look like that when it's installed is a completely different matter. Now, whether Maxwell offers a more realistic result is another question, but it seems to market itself along those terms. Only the users will have better insight into this.


A lot of people were turned off by both the technical difficulties and the difficulties of dealing with NextLimit. I think it's telling that most of the user base was seriously pissed off for months at a time. After selling the software in beta with a promised full-release date, NL missed the date, then released what they referred to as a "release candidate" but what was actually, by industry standards, a pre-alpha (not feature-complete, and with too many bugs to let it out the door). The company reps and their internet proxies then took a very belligerent attitude towards customers on the web forums. Many customers were lost for good, and because of that Maxwell doesn't seem to have as much market presence as people originally thought it would.

 

Personally, I never managed to make the software anywhere near as useful to me as mental ray, but I never had very much money in it, so I've written the whole thing off and tried to sell my license. Others who bought more copies at full price haven't forgiven them as easily.


But it seems to market itself along those terms. Only the users will have better insight into this.

 

I think this is subjective. For me, the market is after something that is tactile, which is different from something that is photo-real. There is a general, hard-to-pin-down reaction to imagery that "feels" computer generated, whereas something that "feels" natural is more acceptable to the brain. Things that feel natural are photography, watercolor, oils, acrylic, pencil, etc. The thing they all have in common is a tactile feel to their look. For me, this tactile feeling increases your level of engagement: a feeling that you could reach out and touch the project, even though it is still a 2D surface on paper. This is not the kind of 3D rendering that technology will give you, but rather an emotional pull towards a tactile object.

 

Maxwell is an engine that gives you very nice color balance and leans towards the photo-real, more so than other engines, but that is only one realm in which you can create an emotional reaction.


All of which I agree with, but forgive me if I'm wrong: isn't the premise of Maxwell essentially to mimic photographic techniques, which is an art form in itself? Certainly some of the examples on their site are capable of stirring an emotional response, in my view, not just clinical representations of real life. I seem to remember one of my criticisms of V-Ray when it first came out was that every render looked the same.

 

Personally, I'd be keen to try out Maxwell, as I'm getting pretty bored with the way my V-Ray renders are looking, but that could be purely down to the user rather than the tools. :rolleyes:


Yes, Maxwell can produce nice photoreal renders. I think I was partially working through what makes or breaks a render for me, which is how tactile it feels.

 

Maybe you can look into different lighting and post-production color-grading techniques? You might be able to get a different look that will reinvigorate you while still letting you use Vray.

 

In full disclosure, I am a bit of a naysayer when it comes to Maxwell in general. I cannot justify the time it takes to render an image. I need speed, and from everything I see, Maxwell cannot produce the speed I need. I need to be able to render several iterations quickly to make sure everything is looking and feeling the way I need it to. I still haven't used an engine that can compare to Vray in this sense.


Sure, I haven't tried it yet, so I'm curious. The issue of speed is not so important for me. Not to say that I don't have to consider the production schedule, quite the opposite, but inevitably computers are getting faster and faster, and by the looks of it v2 is a faster version of Maxwell than v1. So the question is, once CPU speed is no longer a big issue, what do we want in the end? For me that would be superior-quality renderings; whether Maxwell has the edge I don't know from experience.

 

Whatever the route, Vray RT is obviously a big step in this direction, to be fair, as real-time feedback on changes is king.


Is there any talk of giving Maxwell a GPU-powered engine? That would make sense to me.

 

If the node licenses were free, like Vray's or Mental Ray's, then I'd buy a license. As a sole practitioner with a farm, that would be a nice purchase, but having to license my nodes as well is a bummer.

 

Oh, and one other thing: that speed-comparison link really shows how BAD the times are for version 1.7. I'm not sure comparing against yourself is too good an idea. Maybe they should have posted some tests against another unbiased engine like Fry....

Edited by Tommy L

So the question is, once CPU speed is no longer a big issue, what do we want in the end? For me that would be superior-quality renderings; whether Maxwell has the edge I don't know from experience.

 

I don't think CPUs or GPUs are fast enough yet for production unbiased rendering. They will be some day, but I feel we are still missing the next big step that will make unbiased results really shine. IMO, we are missing the SSD equivalent in the CPU/GPU world.
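
To illustrate why "computers will get quicker" only goes so far: unbiased engines converge by Monte Carlo sampling, and the standard error of such an estimate falls as 1/sqrt(N), so halving the noise costs roughly four times the samples (and render time). The toy estimator below is plain Python and has nothing to do with Maxwell's actual code; it just shows the convergence rate on a simple integral.

```python
# Toy Monte Carlo estimator: the error shrinks only as 1/sqrt(N),
# which is why unbiased renders clean up so slowly near the end.
import random

def mc_estimate(n, seed=0):
    """Monte Carlo estimate of the integral of x^2 on [0, 1] (true value 1/3)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(n):
        x = rng.random()
        total += x * x
    return total / n

true_value = 1.0 / 3.0
for n in (1_000, 4_000, 16_000):
    err = abs(mc_estimate(n) - true_value)
    print(f"N={n:6d}  error ~ {err:.5f}")
```

Quadrupling N at each step should roughly halve the error, which is the same economics a renderer faces: the last bit of noise is by far the most expensive to remove, no matter how fast the hardware gets.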

 

But then again, that's just one man's opinion.


A real-time engine for Maxwell Studio is under development, and I have to say it looks incredibly promising.

 

 

I'm sure they will be able to include it in the 3D apps at some later stage.

 

EDIT - see here - http://www.maxwellrender.com/pdf/Maxwell_Render_Interactive_Preview_Info.pdf

 

In a few words, what is this?

It is our new interactive engine. The goal of this new feature is to provide a much more intuitive and efficient workflow, dramatically reducing setup times and learning curve, and improving the user experience. Under the hood this render engine is a hybrid of the Maxwell Render v2 core technology - plus other optimization algorithms we have been working on.

Do users need any specific graphic hardware to run it?

No. Any system capable of running Maxwell Render v2 can be used with the new interactive engine. Although this engine uses the GPU for certain tasks, it is mostly CPU based so no special graphic hardware is required.

So is it using CUDA?

No. While CUDA is a very powerful technology, especially promising in the rendering area, we consider that for a renderer like Maxwell Render it is still not good to force customers to spend money on dedicated hardware that might be expensive and could be obsolete soon, given the high speed of changes in this area. We do not want to tie customers to specific hardware vendors when there is no standard in this area yet. OpenCL looks like a very interesting option for the future, but the fact is that it still needs time to evolve into any kind of standard used in complex development cycles. For simple renderers, GPUs can be extremely fast, but for a state of the art raytracer like Maxwell Render that can work with large geometries under any kind of complex lighting environment, using multilayered materials, generating several render channels, etc. etc. modern CPUs are able to provide similar performance, even better in some cases, as shown in the videos of this new engine.

What are the limitations of this new preview engine? Is it biased? Does it provide a different set of parameters that users have to learn? Can users use normal Maxwell materials...?

The same as the Maxwell Render v2 core render engine, the interactive engine is unbiased, so in the end it converges to the same solution as the normal core engine. The main difference is that it provides a much faster preview than the normal engine so it is perfect for scene setups. For complex indirect lighting, caustics etc. the normal engine might provide better performance. As Siggraph attendees have seen this week, the new preview engine works with normal scenes of any size (some of the demos have scenes of 2 million triangles) and with any kind of material. The render options are also the same used in the normal engine.

How much will the new interactive preview feature cost?

It will be free for Maxwell Render v2 customers.

Edited by mattclinch

Of course, I agree with Travis.

 

Here's the problem with relying on the GPU to do "unbiased" rendering (as if a little bias were such a bad thing, whatever those disinforming wanks were saying in that hype video about the new CUDA render project we all saw, and no complaining, you know it's true). Compare a CPU and a GPU architecturally: the CPU handles fewer simultaneous operations, which can be quite complex, while the GPU handles many simultaneous operations that are not at all complex. The factors working against the GPU are that lack of complexity and the limitations of "many" in computation. GPUs have more transistors than CPUs, but only by a factor of two or three, and a lot of those GPU transistors go toward managing the protocol for that absurd number of simultaneous operations. So the number of transistors that can be actively used for doing work isn't very different.

 

What this all adds up to is that the assertion that a GeForce card has far more computing power than an i7 is absurd on its face. What it has is much more highly specialized capacity for doing a pretty small number of things.

 

Which is why it can, by hook (just Monte Carlo brute-force it; you've got freakin' 800 threads) or by crook (just put the damnable model through DirectX already, like we should have been doing years ago), either very quickly produce an image that you can't really use in a high-end presentation, or very slowly produce one that you can. Which you've been able to do for years.

 

So don't put all your eggs in the GPU basket. Somebody's going to have to make some kind of breakthrough before GPUs will be useful for the same tasks as CPUs, and there is no way to tell when or if that will happen.


I'm not quite sure why no one has brought up Multilight in this discussion. I know it affects render time, but being able to get, essentially, a day and a night rendering out of "one" render session is extremely valuable. In fact, with proper preparation, you get many different lighting scenarios.

