
3ds Max 2010 Architectural Visualization chapter


Fillock


Hi all

I just read the 3ds Max 2010 Architectural Visualization chapter about color management. I have been working for many years as a 3D generalist, but this is not a topic I know much about. I have some maybe stupid questions, but I still don’t understand some of the main concepts around gamut:

How do gamut and bit depth add up? Do both relate to the amount of color, or does gamut only make sense when we talk about 8-bit-per-channel colors? In Photoshop I can have the Adobe RGB profile on 16-bit and floating-point pictures too.

When Windows doesn’t support color management but can still use millions of colors, does gamut then tell something about how many or how few of those colors your screen, for example, can show out of the amount of colors the OS can output?

If I take a picture in Adobe RGB and convert it to sRGB and back again, has the picture lost information since sRGB has a smaller gamut? Or does it only relate to how, for example, Photoshop “reads” pictures?

Any words or links would be great, thanks :)


  • Administrators

As I wrote the chapter you are speaking about, I'll answer your questions here one by one.

 

How do gamut and bit depth add up? Do both relate to the amount of color, or does gamut only make sense when we talk about 8-bit-per-channel colors?

 

Bit depth quantifies how many unique colors there can be in an image or on a display. Color gamut defines which colors are in an image, or which colors a device like a printer or display can reproduce.
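
Just to put numbers on the "how many" part, here's a quick sketch in Python (nothing from the chapter, purely illustrative arithmetic): bit depth only sets how many steps each channel can have, and says nothing about which real-world colors those steps map to.

```python
# Bit depth controls how many values each channel can take,
# and therefore how many unique R/G/B combinations are possible.
for bits in (8, 16):
    levels = 2 ** bits          # steps per channel
    total = levels ** 3         # possible R/G/B combinations
    print(f"{bits}-bit per channel: {levels:,} levels, {total:,} possible colors")

# 8-bit:  256 levels    -> 16,777,216 possible colors
# 16-bit: 65,536 levels -> 281,474,976,710,656 possible colors
```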

 

In Photoshop I can have the Adobe RGB profile on 16-bit and floating-point pictures too.

 

If your source image natively has a large bit depth, like a RAW image from a digital camera or EXR output from a rendering engine, it may be more appropriate to archive and edit your images in a very large color space like ProPhotoRGB, but in many cases AdobeRGB or sRGB can work just as well. It really depends on what you are trying to do with the images.

 

When Windows doesn’t support color management but can still use millions of colors, does gamut then tell something about how many or how few of those colors your screen, for example, can show out of the amount of colors the OS can output?

 

By default, Windows assumes that everything viewed in a non-color-managed application is in the sRGB color space. Windows 7 is somewhat color management aware, but XP is not. However, whether or not Windows or an application is color management aware has nothing to do with which colors a device like your display can output. The gamut (which colors) a display can output is determined solely by the characteristics of the display. I don't know if that answered your question; what you asked did not really make a lot of sense.

 

If I take a picture in Adobe RGB and convert it to sRGB and back again, has the picture lost information since sRGB has a smaller gamut? Or does it only relate to how, for example, Photoshop “reads” pictures?

 

When a digital camera takes a photo, it is in a RAW format, but in order to edit it, it must be converted into an RGB format, which has a color space. If your camera allows you to save in RAW, then this RGB conversion happens when you open the photo in your application (Photoshop, Lightroom, etc.). If you have a consumer camera that saves only to JPG and the like, then this conversion happens in camera. This is why you generally have the option of the sRGB or AdobeRGB color space in the camera.

For people who don't know how to use color management, or very basic users (your mom, dad, grandparents), selecting sRGB is the best option, as this will give the most predictable results. However, in doing so you're losing a ton of color data. If you must do an in-camera conversion, and you're a pro and using color management, always select the largest color space you can to preserve as much of the color data as possible. If you save in RAW, then convert your images into ProPhotoRGB and edit and archive them in that space, even though you may convert to a smaller color space for other outputs.

 

Once your image is opened in Photoshop, it will have a profile assigned to it (either the one you converted into from the RAW file, or the one you set in camera). You can "Assign" a different color space to an image, and this only changes the interpretation of the underlying RGB values. This is non-destructive. Of course, coming from a camera you would not want to do this, but I wanted to explain the difference between this and "Convert", which DOES physically change the underlying RGB data values. If you convert from a larger color space like AdobeRGB to a smaller color space like sRGB, then you do lose color data. How that conversion takes place is determined by your rendering intent. Some rendering intents clip colors, while others will compress colors during the conversion. If you go from a smaller color space to a larger color space, you can end up with banding problems depending upon the colors in the image.
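
If seeing it in code helps, here's a minimal sketch of the Assign vs. Convert distinction using Python with the Pillow library (this isn't from the chapter, and the filenames are just placeholders; Photoshop does the equivalent internally):

```python
import io
from PIL import Image, ImageCms

img = Image.open("photo_adobergb.jpg")   # placeholder: file with an embedded AdobeRGB profile
src_profile = ImageCms.ImageCmsProfile(io.BytesIO(img.info["icc_profile"]))
srgb = ImageCms.createProfile("sRGB")
srgb_bytes = ImageCms.ImageCmsProfile(srgb).tobytes()

# "Assign": the pixel values are untouched; only the attached profile
# (the interpretation of those numbers) changes. Non-destructive.
img.save("assigned_srgb.jpg", icc_profile=srgb_bytes)

# "Convert": the pixel values are recalculated so they represent the same
# colors in sRGB. The rendering intent decides whether out-of-gamut colors
# are clipped or compressed (perceptual is the default here).
converted = ImageCms.profileToProfile(img, src_profile, srgb)
converted.save("converted_srgb.jpg", icc_profile=srgb_bytes)
```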

 

Hopefully that answers some of your questions.

 

Cheers,

Jeff


Yes, I know you wrote the chapter :) It's fantastic to find a 3D book at this high a level. Thanks for answering, it really helps me sort things out.

So if I understand you right, bit depth relates to how many chunks a color can be divided into, but a higher bit depth doesn’t give a higher amount of unique colors?

OK. My question wasn’t easy to understand, so I’ll try to explain it better: gamut tells something about which and how many colors, but which and how many of what kind of colors? Since you say it doesn’t relate to how many colors the OS can handle, does gamut maybe refer to all the colors the eye can see?

Thanks for clarifying about photography and color spaces. I take a lot of photos but feel I don’t need the RAW format, since the pictures only serve as a starting point for my 3D texturing. Now I can still use JPG but keep more color information in them.

In your chapter you wrote that most 3D applications don’t support color management (and then I won’t get color-managed output either). But how do 3D applications handle color-managed textures as input? And the same for realtime applications: is it possible to say something general about how to handle color-managed textures as input to this kind of application?

Thanks once again :)


  • Administrators
So if I understand you right, bit depth relates to how many chunks a color can be divided into, but a higher bit depth doesn’t give a higher amount of unique colors?

 

If you're talking about a display, then a higher-bit display WILL be capable of displaying a higher number of unique colors. The same applies to an image, but it depends how many colors are in the image. If the image is all black, then you don't have any more unique colors, but you could.
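
To make the "it depends how many colors are in the image" part concrete, here's a small sketch (assuming Pillow and NumPy, with a placeholder filename) that counts the colors an image actually uses versus the maximum its bit depth allows:

```python
import numpy as np
from PIL import Image

img = Image.open("texture.png").convert("RGB")   # placeholder 8-bit-per-channel image
pixels = np.asarray(img).reshape(-1, 3)

actual = len(np.unique(pixels, axis=0))          # colors actually present in the image
possible = (2 ** 8) ** 3                         # colors an 8-bit/channel image could hold

print(f"{actual:,} unique colors used out of {possible:,} possible")
# A completely black image would report 1 unique color, whatever its bit depth.
```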

 

Gamut tells something about which and how many colors, but which and how many of what kind of colors? Since you say it doesn’t relate to how many colors the OS can handle, does gamut maybe refer to all the colors the eye can see?

 

Not quite. In terms of color management, gamut refers to which colors are contained within a color space. A color space can be a synthetically generated mathematical description like sRGB, AdobeRGB, etc., or a color space that describes the colors a particular device like a display or printer can output. If you look at the diagram below, you'll see a chromaticity diagram.

 

[Chromaticity diagram (Colorspace.png): the visible spectrum with the sRGB, AdobeRGB and ProPhotoRGB gamuts overlaid]

 

That horseshoe shape in the background describes the gamut of colors humans can see. On top of it are the three most common synthetically generated color spaces (sRGB, AdobeRGB and ProPhotoRGB). You'll note, however, that the ProPhotoRGB color space contains colors that are actually outside the range of colors we can see. So it could also be said that the gamut of colors contained within the ProPhotoRGB color space is larger than the visible spectrum of light. In very simple terms, you could consider a color space an empty box of crayons. The gamut of that color space is defined by WHICH crayons you put into that box. Gamut and color space are very closely related to one another: gamut is a more generic description of WHICH colors, whereas a color space describes WHICH colors as they relate to the visible spectrum of light. Does that make sense?

 

In your chapter you wrote that most 3D applications don’t support color management (and then I won’t get color-managed output either). But how do 3D applications handle color-managed textures as input? And the same for realtime applications: is it possible to say something general about how to handle color-managed textures as input to this kind of application?

 

An image that is color managed will have an ICC profile embedded in it. That profile is simply a definition of the color space into which the raw RGB numbers in the image are to be mapped. Consider it a scale: the number 6 means nothing unless you say 6 km, 6 miles, 6 cm, etc. So if you had an image that was mapped into that color space (i.e. it had the sRGB profile embedded in it), and you opened it in a 3D application which was not color managed, it would simply ignore that profile.
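
To show what "embedded" and "ignored" mean in practice, here's a small sketch (Pillow again, placeholder filename) that reads the ICC profile out of a file; a non-color-managed application simply never looks at this data:

```python
import io
from PIL import Image, ImageCms

img = Image.open("texture.jpg")                  # placeholder texture file
icc_bytes = img.info.get("icc_profile")          # the raw embedded ICC data, if any

if icc_bytes:
    profile = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
    print("Embedded profile:", ImageCms.getProfileDescription(profile))
else:
    # No profile at all: Windows and most applications will just assume sRGB.
    print("No embedded profile; the RGB numbers carry no declared 'scale'.")
```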

 

Those raw RGB values would then be interpreted by the assumed underlying color space of the OS, which is by default sRGB. Now this is where it gets a bit more complicated. Five or more years ago, when most displays could only show colors which approximated the sRGB color space, everything worked pretty well. Even though 3D apps are not color managed, the OS assumes sRGB and your display could output approximately sRGB, so your textures would look correct without color management in the framebuffer and in Photoshop. Once wide-gamut displays came out (which have gamuts/color spaces that approximate the much larger AdobeRGB), things became more complicated. Now you have an image that the OS is assuming to be in the sRGB color space, but your display is interpreting it as AdobeRGB. All of those raw RGB values are being mapped to a much larger color space; in essence you're assigning the wrong scale to the RGB values. As a result the images will look much more saturated.

 

In the chapter I describe a process for opening images from a rendering engine in Photoshop using the display profile. If you're working on a texture that you will eventually put into the rendering engine, however, I would convert it to the sRGB color space first and embed the sRGB profile when you save. The profile will be ignored, but the underlying RGB values will at least be converted into the space the OS is going to assume. If you're on a wide-gamut display it won't look the same as in Photoshop, though.
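
Roughly, that texture step looks like this in code (a sketch with Pillow, not the exact chapter workflow; Photoshop's Convert to Profile plus saving with the profile embedded does the same thing):

```python
import io
from PIL import Image, ImageCms

src = Image.open("texture_wide_gamut.jpg")       # placeholder: texture in a larger color space
src_profile = ImageCms.ImageCmsProfile(io.BytesIO(src.info["icc_profile"]))
srgb = ImageCms.createProfile("sRGB")

# Convert the pixel values into sRGB so they match what the OS (and the
# non-color-managed rendering engine) will assume, then embed the sRGB
# profile when saving. The engine ignores the profile, but the numbers are right.
texture = ImageCms.profileToProfile(src, src_profile, srgb)
texture.save("texture_srgb.png", icc_profile=ImageCms.ImageCmsProfile(srgb).tobytes())
```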

 

 

Hopefully this has helped clarify some things. Color management is not an easy thing to understand. It took me a long time to fully understand how all of this works too.


Well, I'm far from fully understanding color management, but this was my first serious round with the topic and I have a much better understanding of it now, thanks both to your help here and to your chapter. It's not often I find a writer helping to clarify questions related to their printed text the way you do here. That's very generous and it helped me a lot.

