I’ve seen a couple of posts recently on getting more control over HDR-based lighting setups, specifically in terms of getting crisper shadows. In particular, Reyante Martinez and Greg Zaal each posted some great setups that used the colour of an HDRI map as the input to the strength socket of a background shader. I thought I’d add my own experiments to the mix.
Here’s a basic world setup for Cycles. Just an HDR map plugged into a background node. No other lights in the scene.
Here’s a setup based on one that Greg posted in his original article. It gives stronger shadows by plugging the HDRI map into both the colour and strength inputs of the background node.
Of course there are many ways to achieve this sort of effect, and so the question becomes which is the most effective, and which gives you the most control. The setup above gives you some control over the shadows, but by plugging the map into the strength input it becomes more difficult to adjust the overall strength of the environment lighting (incidentally, Greg posted some great improvements to the above setup in his original article). What I really want is to be able to control the contrast of the lighting without affecting its colour, whilst still maintaining control over its overall brightness. I tried a few setups with this aim in mind. This was my first one:
With this first setup I was mainly concerned with making sure my math checked out, so that I could get my head around future setups. Here I normalised the input to the colour socket of the background shader, giving the colour input a uniform intensity. I then extracted the value from the HDRI map, processed it separately, and used it as the input for the strength socket of the background shader. By putting this input through a Power (math) node first (or any other manipulation you prefer) I could control the strength and contrast of the lighting (raising its intensity to a higher power results in more contrast).
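To check the math, here's the per-pixel arithmetic of that setup sketched in plain Python (not Blender nodes). The function name and the `power` parameter are my own labels; `power` corresponds to the exponent on the Power math node, and "value" is the HSV value, as a Separate HSV node would give it.

```python
def split_hdri_pixel(rgb, power=2.0):
    """Split an HDRI pixel into a normalised colour and a strength value.

    rgb:   a linear (r, g, b) sample from the HDRI map
    power: contrast exponent; values > 1 darken dim areas relative to
           bright ones, which is what sharpens the shadows
    """
    r, g, b = rgb
    value = max(r, g, b)                        # HSV "value" of the pixel
    if value == 0:
        return (0.0, 0.0, 0.0), 0.0            # pure black stays black
    colour = (r / value, g / value, b / value)  # uniform-intensity colour
    strength = value ** power                   # contrast-adjusted strength
    return colour, strength
```

Multiplying `colour` by `strength` with `power=1.0` just reconstructs the original pixel; raising the power pushes bright and dim regions apart without shifting the hue.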
Notably, this setup resulted in a lot more noise. I think this is because my setup messes up the Multiple Importance Sampling for the world (which I had turned on for all my renders). This convinced me that I should try and modify the input to the colour socket of the background node, rather than the strength, which should be kept to a uniform value, in order to avoid unnecessary noise. This setup also results in some crazy backgrounds, and glossy reflections that don’t make sense. To fix these issues I made use of the light paths node and a few mix nodes to blend in the unaltered HDRI map where appropriate, namely for glossy, transmission and camera rays. I also switched to using a Brightness/Contrast node to affect the lighting strength. The result is the following setup:
The node group contains the following nodes:
This node group provides a lot of flexibility whilst remaining easy to work with. You can control how much your environment affects your lighting’s contrast and overall brightness, and adjust how this affects other aspects of your render, like glossy reflections, transmission rays and the background as viewed by the camera. Here are a few examples:
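The branching that the Light Path and Mix nodes perform can be sketched as a plain-Python function. This is my own paraphrase of the logic, not the node group itself; the `glossy_mix` and `transmission_mix` parameters are hypothetical names for the mix factors.

```python
def world_colour(original, adjusted, is_camera, is_glossy, is_transmission,
                 glossy_mix=1.0, transmission_mix=1.0):
    """Pick the colour a ray sees, mimicking the Light Path + Mix nodes.

    original:   unaltered HDRI colour
    adjusted:   contrast-adjusted colour used to light the scene
    *_mix:      how strongly glossy/transmission rays see the original
                (1.0 = fully original, 0.0 = fully adjusted)
    """
    def lerp(a, b, t):
        # Blend two colours channel by channel, like a Mix node.
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    if is_camera:
        return original                       # background seen by the camera
    if is_glossy:
        return lerp(adjusted, original, glossy_mix)
    if is_transmission:
        return lerp(adjusted, original, transmission_mix)
    return adjusted                           # diffuse lighting rays
```

The point is that only the diffuse lighting sees the contrast-adjusted map unconditionally; everything the viewer perceives directly (background, reflections, refractions) can be dialled back towards the untouched HDRI.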
Different contrast adjustments:
Adjusting the look of Glossy Reflections:
The same effect shown above for glossy reflections can also be applied to transmission rays. So far it seems to work pretty well. If you give it a go I’d enjoy hearing your thoughts on it and how it worked for you. You can download it here.
Note: The HDR map I used is not included. You can download it from BlendedSkies.com, which is a great resource for HDR panoramas as well as other stuff like pre-tracked footage and backplates for compositing renders onto.
This fella was just a very quick little model I made for a larger project, but I kinda liked him and wanted to share him, so I stuck him up on Blendswap with a CC-0 licence. He’s ready to go if you’re rendering in Cycles. Hope you like him!
Not much to look at, is it? But it was an interesting experiment. The idea was to push Cycles’ physical realism to the absurd extreme of building a pinhole camera in Blender. A pinhole camera has no lens or aperture; instead the light just passes through a small hole in the front of the camera and forms an inverted image on the camera’s back wall (this is why the image above appears upside-down). The construction of a pinhole camera is very simple – it’s a box with a hole in one side – so I figured that because Blender now has a physically accurate ray tracer in the form of Cycles, it was probably possible to build one that worked in Blender. Here’s how mine looked:
The “real” camera – i.e. a Camera object for rendering – is situated inside the pinhole camera, facing the back wall. The scene needed to be lit extremely brightly in order for enough light to find its way through the tiny hole in the camera to illuminate the back wall. The two lamps have intensities of 100,000 for the key light and 20,000 for the fill light. The Cycles preview outside the camera looked something like this:
As you can see from the final render, the results are very noisy. Even more so when you consider that the small, noisy image you see is the result of 100,000 samples. I set the number of bounces for rendering to 3 (i.e. one bounce for direct lighting, one for a small amount of indirect lighting, plus one extra bounce because we are viewing everything on the diffuse surface of the inside of the camera). It was actually fairly quick to render for that many samples, as it was only a small image, and a relatively simple one at that, ignoring the rather strange setup: it took about an hour. The only post-processing I did was to brighten the image a bit.
Whilst the final result isn’t that impressive, you can clearly make out Suzanne, the cube and the cone. You can also see that the image is slightly blurry. With pinhole cameras there is no depth of field; instead the sharpness of the whole image is determined by the size of the pinhole – the smaller the hole, the sharper the image. Of course, the smaller you make the pinhole, the less light gets in, and so the dimmer the image becomes. For our virtual pinhole camera this also means more noise if we try to bump the image back up to the same brightness, so there is a tradeoff between noise and sharpness that we have to take into account.
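That tradeoff can be made concrete with a little geometry. A point in the scene doesn't project to a point on the back wall but to a small disc (the hole's shadow, slightly magnified), while the light admitted scales with the hole's area. This is my own back-of-the-envelope sketch, not anything from the blend file, and it ignores diffraction, which Cycles doesn't simulate anyway:

```python
import math

def pinhole_tradeoff(hole_diameter, camera_depth, subject_distance):
    """Geometric blur spot size and relative light for a pinhole camera.

    hole_diameter:    diameter of the pinhole
    camera_depth:     pinhole-to-back-wall distance
    subject_distance: pinhole-to-subject distance
    (all in the same units)
    """
    # A point source projects to a disc of roughly this diameter.
    blur_spot = hole_diameter * (1 + camera_depth / subject_distance)
    # Admitted light scales with the area of the hole.
    relative_light = math.pi * (hole_diameter / 2) ** 2
    return blur_spot, relative_light
```

Halving the hole diameter roughly halves the blur but cuts the light (and hence, at equal brightness, the effective sample budget) to a quarter, which is why sharpening the virtual camera makes the render noisier.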
Anyway, it’s hardly a useful way to go about creating images, but it is an interesting experiment, and a great demonstration of what Cycles can do.
You can download the blend file to have a go with it yourself from Blendswap (CC-Zero).
I’ve been experimenting more with Cycles lately, and one thing I’m really impressed with already is the speed with which it handles instanced objects. I thought I’d share some silly experiments I made in the process, and also a tip for using instances.
It’s great fun to use instancing to create fractal-like structures out of repeating objects. The image above I call the monkeybulb – after the Mandelbulb fractal – and is made up of over 9000 Suzannes (though even this is small fry compared to Agus3D’s instanced forest). It’s made by repeatedly extruding all the individual faces of a cube, and then smoothing the results. I then create a Suzanne object that I parent to the fractal mesh, and turn on Dupli Faces to create repeated instances of Suzanne. The image below uses a similar strategy, using a few array modifiers to duplicate a plane.
One thing that’s important to note is that I don’t use an array modifier to duplicate the cubes seen in the image. This would result in a non-instanced version of the scene that would render slower – as the array modifier generates new geometry rather than instancing the same cube over and over. Instead I use array modifiers to duplicate a plane, then apply these modifiers, and parent the cube to the applied mesh, once again using Dupli Faces to handle the duplication of the cube. The difference in render time that this produces is quite something, as demonstrated by the renders below. Because the array modifier doesn’t produce instances, both rendering and creation of the BVH structure are significantly slower than when using Dupli Faces.
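To put a number on the difference, here's a toy accounting of what each approach asks the renderer to store. The function and its figures are my own illustration, not Blender internals; the 507 is Suzanne's approximate vertex count.

```python
def geometry_footprint(verts_per_object, copies, instanced):
    """Rough count of unique vertices the renderer must store and
    feed into the BVH.

    instanced=True  ~ Dupli Faces: one mesh, reused with a transform
                      per copy
    instanced=False ~ applied array modifier: every copy is real,
                      separate geometry
    """
    if instanced:
        return verts_per_object           # one mesh shared by all copies
    return verts_per_object * copies      # geometry duplicated outright

# The monkeybulb: ~9000 Suzannes at ~507 verts each.
as_instances = geometry_footprint(507, 9000, instanced=True)
as_real_geometry = geometry_footprint(507, 9000, instanced=False)
```

With instancing, adding another thousand Suzannes costs only a thousand transforms; with the array modifier it costs another half-million vertices of mesh data.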
Anyway, just thought it was a fun thing to experiment with and also a worthwhile tip to know. So far as I know, both Dupli Faces/Verts and particle systems create genuine instances, whilst the array modifier does not. Hope it’s useful and not trivially obvious to all but me!
Edit: Here’s one more fractal-like render. This time an Apollonian gasket made up of spheres. I used a development build to make use of the Object Info node, so that I could randomise the colour of each sphere.