Research notes.

During the first term I did a lot of research into lighting and rendering for The Deep, Unhatched and Patience. Some of it I scanned and uploaded earlier on, but here are the rest of my notes.

mia_light_surface node.

Simulates a direct light source on the shader itself. Could be applicable for Patience (film noir) and The Deep (bedroom and living room scenes).

You would use the mia architectural material and attach the mia_light_surface node, under the Advanced tab, to the light contribution called additional colour.

Ensure Final Gather is enabled.

Increase the Final Gather (FG) contribution on the light surface node.

Refl contrib- this attribute calculates reflections.

This node is good to use in conjunction with a direct, illuminating light source, not just by itself.

Under the mental ray tab in the attribute editor you can enable the light shape option, such as: cylinder, sphere etc.
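As a rough maya.cmds sketch of that hookup (assuming the Mayatomr plug-in is loaded; the attribute names additional_color, fg_contrib and refl_contrib, and the outValue output, are assumptions to check against your Maya version):

```python
import maya.cmds as cmds

mat  = cmds.shadingNode('mia_material_x', asShader=True, name='glowWallMat')
surf = cmds.shadingNode('mia_light_surface', asTexture=True, name='glowSurface')

# Drive the material's Additional Color slot (Advanced tab) with the light surface node.
cmds.connectAttr(surf + '.outValue', mat + '.additional_color', force=True)   # names assumed

# Push the Final Gather and reflection contributions up so the glow shows in FG and reflections.
cmds.setAttr(surf + '.fg_contrib', 4.0)     # attribute name assumed
cmds.setAttr(surf + '.refl_contrib', 1.0)   # attribute name assumed

# Final Gather must be enabled in the render settings for the surface to actually light the scene.
cmds.setAttr('miDefaultOptions.finalGather', 1)
```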

Detailed Overview of Depth-Map Shadows.

Increasing the resolution will make depth map shadows less jagged, but it will also increase render time. Do it in K's: 1024 (1K), 2048 (2K), etc.

Mid Dist point- averages distance to determine where a shadow should be.

When mid-point dist is turned off, noise in the shadow increases.

Filter size- controls the degree of softness in the shadows by blurring the edges of the shadow map.

Tip- use a low resolution map with a large filter size to keep render time down while still getting soft shadows.
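The tip in script form- these depth map attributes (dmapResolution, dmapFilterSize, useMidDistDmap) are standard Maya light attributes, so only the light name here is made up:

```python
import maya.cmds as cmds

light = cmds.spotLight(name='keyLight')   # returns the new light's shape node

cmds.setAttr(light + '.useDepthMapShadows', 1)

# Low-resolution map plus a large filter size: soft shadows without the render-time
# cost of a huge depth map.
cmds.setAttr(light + '.dmapResolution', 1024)   # 1K map
cmds.setAttr(light + '.dmapFilterSize', 5)      # larger = softer, blurrier edges
cmds.setAttr(light + '.useMidDistDmap', 1)      # mid distance averaging reduces noise
```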

Bias- offsets the shadow closer to or further away from the light source.

Shadow colour- lightening it softens the shadows, and you can also tint them a different colour.

The downside of depth map shadows is that they do not work well with transparent objects or textures with an alpha.

Raytraced Shadows.

More accurate than depth map shadows.

However it increases render time.

Light radius- controls how soft and blurry the shadow gets as it falls further from the object casting it.

Shadow rays- increases the number of shadow rays calculated, which reduces how noisy the shadows are.

Ray depth limit- determines how many surfaces (bounces) the shadow calculation takes into account.
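The same attributes as a maya.cmds sketch (lightRadius, shadowRays and rayDepthLimit are the standard Maya raytrace shadow attributes on the light shape):

```python
import maya.cmds as cmds

light = cmds.spotLight(name='rtKeyLight')   # returns the new light's shape node

cmds.setAttr(light + '.useRayTraceShadows', 1)
cmds.setAttr(light + '.lightRadius', 2.0)    # larger radius = softer shadow, but noisier
cmds.setAttr(light + '.shadowRays', 16)      # more rays = less noise, more render time
cmds.setAttr(light + '.rayDepthLimit', 2)    # how many bounced surfaces still receive shadows
```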

You can also go into the render globals and change the shadow quality settings- this increases the realism and quality of the raytraced shadows.

The general principle for most of these attributes is that increasing them also increases render time.

Mia_Physical_Sky node.

Multiplier- in effect the intensity; you can overdrive this.

RGB conversion– controls the luminance values that are returned to the camera; setting these options higher increases the returned luminance.

Haze controls the overall amount of medium that is in the environment. This can help scatter the light to shift things more to the red side of the spectrum.

Red/blue shift- counteracts haze. This can be done by increasing or decreasing the amount of red/blue shift. This is the kind of attribute where a little bit of change can make a dramatic difference in the render.

Saturation– controls the overall strength of colouration in the environment.

Horizon height– controls the cut-off point where the sky meets the horizon in the physical sun and sky.

Horizon height- this will have an effect on the overall colour of the image.

I would expect the horizon height to be quite low for Unhatched so that the tree(s) can be in focus.

Horizon blur- controls the blur of the line between the sky and the horizon.

Ground colour- controls the colour of the ground below the horizon. Application- it can emulate grass for Unhatched.

Sun disk intensity- controls visibility of the inner disk of the sun.

Sun glow intensity- controls the outer glow of the sun. Tip: you can reduce the glow intensity in order to see what you're doing when adjusting the sun disk intensity.

Sun disk scale- controls the size of the inner disk.
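A sketch of dialling these in with maya.cmds, assuming Physical Sun and Sky has already been created from the render settings (which gives a node usually called mia_physicalsky1) and that the attribute names match the architectural shader declaration:

```python
import maya.cmds as cmds

sky = 'mia_physicalsky1'   # node name assumed; created by enabling Physical Sun and Sky

cmds.setAttr(sky + '.multiplier', 1.0)        # in effect the intensity; can be overdriven
cmds.setAttr(sky + '.haze', 0.3)              # more medium = warmer, hazier light
cmds.setAttr(sky + '.redblueshift', -0.05)    # tiny changes make a dramatic difference
cmds.setAttr(sky + '.horizon_height', -0.1)   # keep the horizon low, e.g. for Unhatched
cmds.setAttr(sky + '.horizon_blur', 0.2)
cmds.setAttr(sky + '.ground_color', 0.2, 0.35, 0.1, type='double3')   # grassy ground
cmds.setAttr(sky + '.sun_glow_intensity', 0.5)   # drop the glow while adjusting the disk
cmds.setAttr(sky + '.sun_disk_scale', 4.0)
cmds.setAttr(sky + '.sun_disk_intensity', 1.0)
```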

Use background: allows the use of a custom background (e.g. a texture or movie).

This option is good for compositing purposes.

To use this option you have to:

connect an mib_lookup node; to brighten the texture, use a multiply/divide node- connect the texture to input 2 of the M/D node and connect the M/D node into the texture slot of the mib_lookup.
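A rough maya.cmds version of that wiring, following the routing described above. The node and attribute names for the mental ray side (mib_lookup_spherical, its tex slot, and the sky's background / use_background slots) are assumptions; some Maya builds may want a mentalrayTexture node in the lookup's texture slot instead of a straight colour connection.

```python
import maya.cmds as cmds

sky    = 'mia_physicalsky1'                                          # node name assumed
plate  = cmds.shadingNode('file', asTexture=True, name='bgPlate')    # sky/clouds texture or movie
gain   = cmds.shadingNode('multiplyDivide', asUtility=True, name='bgBrighten')
lookup = cmds.shadingNode('mib_lookup_spherical', asTexture=True, name='bgLookup')

cmds.setAttr(plate + '.fileTextureName', 'sourceimages/clouds.jpg', type='string')   # example path

# Texture into input 2 of the multiply/divide node; input 1 acts as the brightness gain.
cmds.setAttr(gain + '.operation', 1)   # 1 = multiply
cmds.setAttr(gain + '.input1', 1.5, 1.5, 1.5, type='double3')
cmds.connectAttr(plate + '.outColor', gain + '.input2', force=True)

# Brightened result into the lookup's texture slot, lookup into the sky's background.
cmds.connectAttr(gain + '.output', lookup + '.tex', force=True)      # slot name assumed
cmds.connectAttr(lookup + '.outValue', sky + '.background', force=True)
cmds.setAttr(sky + '.use_background', 1)
```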

Application of use background- this could be used to plug in a blue sky with white clouds as the background, which could avoid the process of tracking in After Effects when adding a background behind a moving camera.

Mia_Physical_Sun Node.

Good for realistic outdoor scenes.

Found inside mental ray lights.

Can be connected to any maya light.

It is easier and quicker to create the physical sun and sky from the render globals.

Rotating the main directional light can give us a decent representation of the time of day.

There are only a few controls for the physical sun because most of the control is in the physical sky options.

The options for the mia_physical sun node are:

Shadow softness- softening the shadows can give an impression of the scale of the environment.

Softer= bigger environment.

Samples- reduces noise but increases render time.

A good application of shadow softness could be in Unhatched, to give the impression that the environment is bigger than what is actually modelled.
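As a maya.cmds sketch, assuming the node created alongside the sky is called mia_physicalsun1 and that the declaration names shadow_softness and samples are correct:

```python
import maya.cmds as cmds

sun = 'mia_physicalsun1'   # node name assumed; created with the physical sky

# Softer shadows read as a larger environment - e.g. to make Unhatched feel bigger than it is.
cmds.setAttr(sun + '.shadow_softness', 2.0)
cmds.setAttr(sun + '.samples', 16)   # more samples = less noise, more render time
```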

Final Gather.

Ensure Mental Ray is turned on.

Under mental ray tab enable Final Gather (FG).

With FG- use raytrace shadows.

The ambient colour attribute on the Lambert can trick the renderer into thinking the surface is giving off light.

Irradiance (under mental ray tab)- controls quantity of Final Gather contribution.

Irradiance- similar to ambient colour. This attribute works better when an image is attached (like a ramp); the FG points will then cast some illumination from that colour onto the surroundings.

Irradiance colour- controls the amount of environment colour that is brought back onto the object. White= full colour. Black= none.

Final Gather Accuracy.

Tip: diagnose final gather displays the FG points.

FG accuracy controls how many rays each FG point will emit. With more complex scenes, an FG accuracy of 400-500 is not out of the question.

Density attribute is the quantity of points assigned to the scene.

Point interpolation- controls the number of FG points that are averaged together before the indirect illumination is displayed.
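Those quality controls as maya.cmds calls on the render settings node, assuming the usual miDefaultOptions attribute names (finalGatherRays for accuracy, finalGatherPresampleDensity for point density, finalGatherPoints for interpolation):

```python
import maya.cmds as cmds

# Assumes the mental ray plug-in is loaded so miDefaultOptions exists.
cmds.setAttr('miDefaultOptions.finalGather', 1)
cmds.setAttr('miDefaultOptions.finalGatherRays', 450)               # accuracy: 400-500 for complex scenes
cmds.setAttr('miDefaultOptions.finalGatherPresampleDensity', 1.0)   # point density
cmds.setAttr('miDefaultOptions.finalGatherPoints', 30)              # point interpolation
```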

Final Gather Intensity.

Can use a plane as a light source with Final Gather- increase its ambient colour.
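A tiny sketch of the plane-as-light-source trick with an ordinary Lambert, pushing the ambient colour above white so Final Gather treats the plane as an emitter (object and shader names are just for the example):

```python
import maya.cmds as cmds

panel = cmds.polyPlane(name='fgLightPanel', width=10, height=10)[0]

glow = cmds.shadingNode('lambert', asShader=True, name='fgPanelMat')
sg   = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='fgPanelSG')
cmds.connectAttr(glow + '.outColor', sg + '.surfaceShader', force=True)
cmds.sets(panel, edit=True, forceElement=sg)

# Over-bright ambient colour so the surface reads as emissive to the FG rays.
cmds.setAttr(glow + '.ambientColor', 2.0, 2.0, 2.0, type='double3')

cmds.setAttr('miDefaultOptions.finalGather', 1)
```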

Render globals-

Scale- increase or decrease the light calculation. This can give the scene a tint.

Save and re-use FG point data.

There are two options for this: rebuild and the final gather file; together they allow you to save and re-use FG point data.

Rebuild “on” means that point data is re-calculated every frame or every time I render.

Rebuild “off” means that if the camera moves, new FG points are calculated and added to the existing map.

Rebuild “freeze” will not do any kind of calculation and will rely entirely on the stored information.
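A sketch of saving and re-using the map, assuming the render settings attributes are finalGatherFilename and finalGatherRebuild (the enum order for on/off/freeze should be checked in your build):

```python
import maya.cmds as cmds

# Point mental ray at a reusable FG map file (example path).
cmds.setAttr('miDefaultOptions.finalGatherFilename', 'fgmaps/theDeep_bedroom', type='string')

# Rebuild mode: recalculate every render, append new points as the camera moves,
# or freeze and rely entirely on the stored map.
cmds.setAttr('miDefaultOptions.finalGatherRebuild', 1)   # e.g. 0 = on, 1 = off, 2 = freeze (order assumed)
```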

Render Passes.

Render passes utilise something in Mental Ray called frame buffers.

Frame buffers work by calculating components of the scene independently: a calculation phase for the direct light, one for the indirect light, shadows, reflections, refractions, etc. All of those aspects are calculated independently of each other and then composited into one image; with render passes the same information is generated anyway, but rendered out into individual files.

Render layers, by contrast, do a separate render for each layer. So when rendering out The Deep using render layers, we could easily be doubling or tripling the render time.

So the benefit of render passes is that you don't really add any render time compared with a standard colour render.

Application- we have to be mindful of what materials we would be using in the scene:

Maya materials work fine with render passes (completely compatible).

Mental Ray materials can be an issue: render passes are only compatible with materials that have “_passes” at the end of the name. Due to the limitations of mental ray materials with render passes, the mia_material_x_passes material can only output the following render passes (a minimal assignment sketch follows this list):

Beauty.

Diffuse.

Direct irradiance.

Indirect.

Reflection.

Refraction.

Specular.

Translucence.
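A minimal maya.cmds sketch of assigning a passes-compatible material, assuming the mental ray plug-in is loaded and that the shading group exposes the usual miMaterialShader slot (the sphere and node names are just for the example):

```python
import maya.cmds as cmds

obj = cmds.polySphere(name='submarineProxy')[0]   # stand-in geometry for the example

shader = cmds.shadingNode('mia_material_x_passes', asShader=True, name='subShader')
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='subShaderSG')

# mental ray materials are usually wired through the shading group's miMaterialShader slot.
cmds.connectAttr(shader + '.message', sg + '.miMaterialShader', force=True)
cmds.sets(obj, edit=True, forceElement=sg)
```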

Mental Ray Physical light node.

Can be attached to any Maya light.

Go to light attributes, under mental ray.

Attach through the light shader attribute of the light.

Physical light is based on real light physics.

Intensity of physical light node is derived from the colour.

Cone- this is the cone angle if the node is attached to a spot light.

Threshold- a cut-off point that you can define for the overall intensity of the light.

If you tick the option “use light shape”, you can dictate the angle you want by rotating the area light.

Using the physical light node can create hotspots or points of burn. To fix these you can use lens shaders such as simple_exposure or photographic_exposure.
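Both steps- the physical light on the light and an exposure lens shader on the camera- as a maya.cmds sketch, assuming the mental ray slots are exposed as miLightShader on the light shape and miLensShader on the camera shape, and that the exposure node type is mia_exposure_simple:

```python
import maya.cmds as cmds

light = cmds.spotLight(name='ceilingSpot')                    # returns the new light's shape node
phys  = cmds.createNode('physical_light', name='ceilingPhys')

# Attach through the light's mental ray Light Shader slot.
cmds.connectAttr(phys + '.message', light + '.miLightShader', force=True)

# Intensity is derived from the colour, so over-bright values go straight into the colour.
cmds.setAttr(phys + '.color', 150, 140, 120, type='double3')

# Tame hot spots / burn with an exposure lens shader on the rendering camera.
expo = cmds.createNode('mia_exposure_simple', name='exposureLens')
cmds.connectAttr(expo + '.message', 'perspShape.miLensShader', force=True)
```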

Camera Settings.

Film gate- would be used if you were sending the work back out to film or matching it up with film footage, where things need to be aligned correctly and matchmoved. In that case the film gate would be very important.

Resolution gate is the most appropriate for me: all the films that I'm going to be working on are 100% computer animation with no live action, so this is the camera attribute that I will be using.

Caustic light patterns.

Caustic photons- create the light patterns which you see through glass and water.

Global illumination- responsible for secondary illumination and colour bouncing.

Photon intensity determines the amount of energy of the photons leaving the light.

Exponent is the energy drop-off rate of the photons (1.000 is pretty much constant); as it increases, the photons lose their energy more quickly.

The number of caustic photons determines the look of your caustic pattern.
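The photon side in maya.cmds; the dynamic mental ray attribute names on the light shape (emitPhotons, photonIntensity, exponent, causticPhotons, globIllPhotons) are assumptions to verify on your light:

```python
import maya.cmds as cmds

# Caustics and/or global illumination must be enabled in the render settings first.
cmds.setAttr('miDefaultOptions.caustics', 1)
cmds.setAttr('miDefaultOptions.globalIllum', 1)

light = cmds.spotLight(name='photonSpot')   # returns the new light's shape node

cmds.setAttr(light + '.emitPhotons', 1)          # attribute names assumed
cmds.setAttr(light + '.photonIntensity', 8000)   # energy leaving the light
cmds.setAttr(light + '.exponent', 2.0)           # above 1.0 the photons lose energy faster
cmds.setAttr(light + '.causticPhotons', 20000)   # shapes the look of the caustic pattern
cmds.setAttr(light + '.globIllPhotons', 20000)   # quantity depends on the size of the scene
```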

Portal light- this is good for indirectly lit scenes. It has to be used with an area light.

You attach this to a normal light through the light shader slot (under custom shaders).

Turn on “use light shape” (this forces the mental ray light shape to be used) and tick “visible”.

Portal light: used in conjunction with final gather.

The “visible” attribute blocks final gather rays coming from an external source and uses indirect illumination.

The standard area light options such as colour and intensity are overridden when the portal light is applied.

These kinds of settings have to be adjusted on the portal light itself instead.
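A rough sketch of the portal setup, assuming the node type is mia_portal_light, the same miLightShader slot as above, and that the area light's “use light shape” / “visible” checkboxes are the areaLight / areaVisible attributes:

```python
import maya.cmds as cmds

areaTrn   = cmds.shadingNode('areaLight', asLight=True, name='windowPortal')
shapes    = cmds.listRelatives(areaTrn, shapes=True)
areaShape = shapes[0] if shapes else areaTrn          # handle transform or shape being returned

portal = cmds.createNode('mia_portal_light', name='windowPortalShader')
cmds.connectAttr(portal + '.message', areaShape + '.miLightShader', force=True)

cmds.setAttr(areaShape + '.areaLight', 1)     # "use light shape" (attribute name assumed)
cmds.setAttr(areaShape + '.areaVisible', 1)   # "visible" (attribute name assumed)

# Colour/intensity are now driven from the portal shader, and FG should be on.
cmds.setAttr('miDefaultOptions.finalGather', 1)
```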

Photons.

Ensure global illumination is on.

The quantity of photons is dependent on the size of your scene.

Caustic photons- create light patterns (see through glass and water)

Global illumination- responsible for secondary illumination and colour bouncing.

Before I researched this I thought it would be applicable to The Deep, but having researched photons I came to the conclusion that this will be useful information to apply later on when the opportunity presents itself; at present I cannot see an application for it in either The Deep or Unhatched.

Depth of Field Lens Shader.

This works on a per-camera basis.

Select the camera.

Scroll down to the mental ray section.

Click the checker box next to the lens shader attribute.

Scroll down to Lenses and select physical_lens_dof.

There are two simple options:

Plane is the attribute that determines what is in focus. It is the distance from the camera; anything outside this range will be out of focus.

To determine the distance from the camera you need to turn on “object details” under the “Heads Up Display” menu.

When entering numbers in to the plane option, make sure they’re negative.

Radius controls strength of blur.

To reduce noise, go into the render globals; under anti-aliasing quality (raytrace/scanline quality), increase the max sample level.
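The same steps as a maya.cmds sketch, assuming the lens shader node type is physical_lens_dof with plane and radius parameters and that the camera exposes a miLensShader slot:

```python
import maya.cmds as cmds

cam = 'perspShape'   # whichever rendering camera you are using

dof = cmds.createNode('physical_lens_dof', name='shotDOF')
cmds.connectAttr(dof + '.message', cam + '.miLensShader', force=True)

# Focus distance goes in as a negative number; read the distance off
# Heads Up Display > Object Details first.
cmds.setAttr(dof + '.plane', -12.0)   # example distance
cmds.setAttr(dof + '.radius', 0.5)    # strength of the blur

# Reduce the grain by raising the max sample level in the render settings.
cmds.setAttr('miDefaultOptions.maxSamples', 2)   # attribute name assumed
```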


Render Tests & Solutions.

The first successful batch render test was done in uncompressed Tiff. In total it took 4 hours, 12 minutes and 35 seconds to render out 2,700 images.

One of the things that I forgot when checking over the render settings was the principle: don't render what you don't see- I set the first (number of frames) on the iceberg layer. Something I will remember next time I send a render job like this.

The only issue with the rendered images was the Submarine Colour render layer. The problem is that the submarine renders in silhouette, even though I have placed lights on the render layer. Fahran, my supervisor, and Kofi looked over the render settings and could not see anything wrong.

A rendered image- as can be seen, it is pretty much a silhouette.

The first batch render was in uncompressed Tiff. From a photographer's perspective, having as much data in the file as possible is better, which is why I used this format: the uncompressed data could be useful for compositing.

But for our case Andy wants Targa.

So Fahran, Kofi and I came to the conclusion that sending a short Targa sequence to the render farm would be the next best thing to test. We also suspected that the silhouette problem was due to the picture format.

This next test didn't work; the render farm yielded the same results as before.

The next test to resolve this problem is to create a new render layer and change only the render settings, not the layer attributes (in the attributes you can set a render layer to be occlusion, normal map, diffuse, shadow, etc.), which is where I suspect the problem lies.

00:08:14- 5 frames. Targa

This test was unsuccessful and yielded the same results.

In the afternoon I submitted about 7 render tests to the render farm; each time the renders came out the same, with the textures not showing up.

I then had a talk with Alex Caldow about directory settings on the render farm before submitting a job. It then occurred to me that in the render farm settings there is a field for a file directory called “file dir”; this could be where the render farm references the textures from. This turned out to be the problem.

So I sent off a 900-frame render to the render farm. If 2,700 frames takes roughly 4 hrs, then 900 frames should take about 1 hr 20 mins.

An advantage of working with Targa as opposed to uncompressed Tiff is that it renders out faster, which was very noticeable when doing the 5-image render test.

Conclusion.

So in conclusion, the problem was a lack of understanding of what to put into the file dir[ectory] field, which came down to the documentation provided: it did not explain the purpose of the different fields for submitting a Maya job, or what you are meant to put into them.


Render Job Submitted.

I’ve submitted a job to the render farm.

To test the render farm I set the quality up quite high, perhaps more than was necessary for the occlusion.

I used the following render layers:

Occlusion.

Submarine color.

Iceberg color.

The reason I separated the iceberg and submarine is that it allows for greater control over these objects in post-production, particularly with depth of field. Previously, when compositing a single-frame image from Maya in Photoshop, I found it difficult to isolate the submarine and iceberg because they were all one image; so I decided that this time round I would use render layers to separate these objects.

I followed the render farm documentation; I found certain areas lacking in information, but I figured out what was what through trial and error.

One of the main things that I have learnt from this experience is to always use a backslash \ and not a forward slash /. Using forward slashes made the render job fail quickly. So a method I will use next time is to copy and paste the directory and then replace all the / with \ (a tiny sketch of this is below).
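Since the fix is just swapping the slash direction, a one-line Python snippet (with a made-up example path) does the conversion before pasting into the submission form:

```python
# Convert a copied Maya-style path into the backslash form the farm expects (example path only).
farm_dir = "N:/TheDeep/scenes/renders".replace("/", "\\")
print(farm_dir)   # N:\TheDeep\scenes\renders
```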

Andy also wanted bubbles to be put in the Maya scene, which I forgot to do. So I will send off another render farm job later with the bubbles as a separate render layer and then composite all the layers in After Effects.


Render Farm Software.

Info.

The render farm software at college is now available. Andy is after a 3D pan around the submarine underwater, like a still frame. There are two applications for this camera animation:

  1. One of the things that we will show in our formative presentation.
  2. To do a render farm test to understand the interface and see what the render farm is capable of.

The nearest example of what Andy wants is Super Smash Bros.: when the game is paused you can move the camera around to see the fight scene from different angles.

I am currently going through the documentation now (6/12/10). The intention is to submit a render job to see how long it takes.

The benefits of testing the render farm now are:

Find out if there are any technical issues with the software and see if these are avoidable for the future.

Practice- the more I do it, the quicker I will be, so submitting jobs near the end of the year won't take so long and hopefully fewer mistakes will be made.

See what the render farm is capable of. If I overload it then I will know and understand its limits, and be able to think about how I can organise the renders so that they are more manageable.

Frame Breakdown.

Dimensions in Maya: 1280 by 533.

Dimensions in Photoshop: 1280 by 720. This will allow for the black bars to be put in After Effects.

Final Note.

The small amount of fog light that I had spent a long time trying to match to Andy's concept (see below) has now been fixed. Instead of using a Maya light with a ramp shader attached to it, I used a cone volume primitive. Whenever a volume primitive is created, a fog light is created at the same time, so once the cone volume primitive was made, all it took was tweaking some settings such as fog depth and strength and I had the result I was looking for.


Light Test for Storyboard.

I showed some of the underwater light tests I had done over the weekend, and so far the feedback has been positive. Fahran and I discussed what render layers we would need for the outside shots to be composited. These render layers are: alpha, occlusion and colour. I also added a normal map render layer so that the shot could have a slight bit of colour.

For the render layers, I'm considering splitting the submarine, iceberg and any other objects in the scene onto different render layers, giving us more control in post-production for areas like depth of field.

The storyboard is making a lot of progress, so I decided to take an exterior shot from it so that we can have a finished shot for our presentation.

I got started on it instantly and got quite a quick result…

First light test for the storyboard shot (the submarine slowly approaching an iceberg). I made the shot dark because the film is called The Deep; in addition, I thought at this stage the submarine was deep underwater, so less light would reach it at this depth.

The feedback I got was that it was way too dark. Andy showed me a concept of the type of lighting he was after. Here’s the concept piece:

Andy's concept piece he sent me to use as reference for the lighting.

This second attempt took me two days to get to. What took the longest was trying to get that lovely misty orange light emitted from the submarine's interior; I was trying all these complex methods, attaching different textures and mental ray lights to different Maya lights, and in the end it was just not working. It then occurred to me to try something simple, like adding a ramp to the light's colour to create some sort of gradient. Again it took many attempts to get something similar to the concept piece, and in the end I settled for this. It's nowhere near as accurate and I'm sure there's a better method for getting this kind of light.
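The “simple” version that ended up working is just a ramp driven into the light's colour; a minimal sketch of that idea (the light and ramp names are made up for the example):

```python
import maya.cmds as cmds

interior = cmds.spotLight(name='subInteriorGlow')   # the light spilling from the interior
ramp = cmds.shadingNode('ramp', asTexture=True, name='interiorGlowRamp')

# Warm misty orange falling off to a dim ember tone.
cmds.setAttr(ramp + '.colorEntryList[0].color', 1.0, 0.55, 0.2, type='double3')
cmds.setAttr(ramp + '.colorEntryList[0].position', 0.0)
cmds.setAttr(ramp + '.colorEntryList[1].color', 0.3, 0.12, 0.03, type='double3')
cmds.setAttr(ramp + '.colorEntryList[1].position', 1.0)

cmds.connectAttr(ramp + '.outColor', interior + '.color', force=True)
```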

In order to get this piece to look finished, I imported the composited Photoshop file of this shot into After Effects and added some bubbles. Then, to finish it off, I added a hue and saturation adjustment layer to give the shot a cool blue/green grade. So here's the lighting test…

Using pre-comps and layer masks I was able to use Gaussian blur to blur the iceberg and feather the edges of the submarine. This is why I think separating these objects into different render layers will help avoid using layer masks.

Once I get the go ahead from Andy I will animate this shot of the submarine slowly approaching the iceberg.

Here’s a selection of renders that were a product of the tests I did just to get the light coming from the submarine right.

One thing that I've learnt from this experience is that once I get a result I'm happy with, I should do three things:

1) Save the file and name it accordingly.

2) Save the rendered image.

3) Take a screenshot of the light settings.


Interior Lighting Tests.

This is the backlog of renders that I have saved during the different lighting tests I've done for The Deep. So here's the collection:

Light bulb lighting test. This sums up the process I went through to get a more true-to-life result with the look of the light bulb and fog light.

Environment fog experiment. My view is that this will just add to the render time, and it would be easier to add something like this in post-production software like After Effects.

More to come as I update this post…


Unhatched Lighting Test Feedback.

Showed Tom Ritchie, the director of Unhatched, the light tests I had done for him. Due to the project's early stage of development I didn't feel that I had much to go on, only “9am on a summer's day”.

Tom had a look at the lighting tests with the paint effects tree that I had done last week. The feedback was positive and here are some of the next things Tom would like me to test:

  • Try a slight yellow tone
  • Try a three point lighting set up.
  • Multiple directional lights (imitating the sun).
  • Background is too plain, but this has yet to be developed.