I’ve seen a couple of posts recently on getting more control over HDR-based lighting setups, specifically in terms of getting crisper shadows. In particular, Reynante Martinez and Greg Zaal each posted some great setups that used the colour of an HDRI map as the input to the strength input of a background shader. I thought I’d add my own experiments to the mix.
Here’s a basic world setup for Cycles: just an HDR map plugged into a background node. No other lights in the scene.
Here’s a setup based on one that Greg posted in his original article. It gives stronger shadows by plugging the HDRI map into both the colour and strength inputs of the background node.
Of course, there are many ways to achieve this sort of effect, so the question becomes which is the most effective and which gives you the most control. The setup above gives you some control over the shadows, but by plugging the map into the strength input it becomes more difficult to affect the overall strength of the environment lighting (incidentally, Greg posted some great improvements to the above setup in his original article). What I really want is to be able to control the contrast of the lighting without affecting its colour, whilst still maintaining control over its overall brightness. I tried a few setups with this aim in mind. This was my first one:
This setup was my first attempt, and I was mainly concerned with making sure my math checked out so that I could get my head around future setups. Here I normalised the input to the colour input of the background shader, giving it a uniform intensity. I then extracted the value from the HDRI map, processed it separately, and used it as the input to the strength input of the background shader. By putting this input through a power (math) node first (or any other manipulation you prefer) I could control the strength and contrast of the lighting (raising the intensity to a higher power results in more contrast).
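For anyone who likes building node setups from Python, here’s a rough sketch of this first setup using Blender’s API. This isn’t the file from the post: the HDR path and the contrast exponent are placeholders, and I’m assuming the standard Cycles node and socket names:

```python
import bpy

# Sketch of the first experiment: normalised colour into the Color
# socket, value raised to a power into the Strength socket.
world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links
nodes.clear()

env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//sky.hdr")  # placeholder path

# Extract the per-pixel intensity of the map.
value = nodes.new("ShaderNodeRGBToBW")
links.new(env.outputs["Color"], value.inputs["Color"])

# Normalise the colour: dividing by the intensity leaves only
# the hue and saturation.
normalise = nodes.new("ShaderNodeMixRGB")
normalise.blend_type = 'DIVIDE'
normalise.inputs["Fac"].default_value = 1.0
links.new(env.outputs["Color"], normalise.inputs["Color1"])
links.new(value.outputs["Val"], normalise.inputs["Color2"])

# Raise the intensity to a power: higher exponents give more contrast.
contrast = nodes.new("ShaderNodeMath")
contrast.operation = 'POWER'
links.new(value.outputs["Val"], contrast.inputs[0])
contrast.inputs[1].default_value = 2.0  # placeholder exponent

background = nodes.new("ShaderNodeBackground")
links.new(normalise.outputs["Color"], background.inputs["Color"])
links.new(contrast.outputs["Value"], background.inputs["Strength"])

output = nodes.new("ShaderNodeOutputWorld")
links.new(background.outputs["Background"], output.inputs["Surface"])
```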
Notably, this setup resulted in a lot more noise. I think this is because it messes up the Multiple Importance Sampling for the world (which I had turned on for all my renders). This convinced me that I should modify the input to the colour socket of the background node rather than the strength, which should be kept at a uniform value to avoid unnecessary noise. This setup also results in some crazy backgrounds, and glossy reflections that don’t make sense. To fix these issues I made use of the light path node and a few mix nodes to blend in the unaltered HDRI map where appropriate, namely for glossy, transmission and camera rays. I also switched to using a Brightness/Contrast node to affect the lighting strength. The result is the following setup:
The node group itself contains the following nodes:
This node group provides a lot of flexibility whilst remaining easy to work with. You can control how much your environment affects your lighting’s contrast and overall brightness, and adjust how this affects other aspects of your render, like glossy reflections, transmission rays and the background as viewed by the camera. Here are a few examples:
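If you want to see the core trick in script form, here’s a hedged sketch of the light path blending, building on the snippet above (again, the node names are the standard API ones and the values are placeholders): the Brightness/Contrast-adjusted colour drives the lighting, while camera, glossy and transmission rays see the untouched map.

```python
# Continuing the previous snippet: blend the unaltered map back in for
# the ray types where the adjusted lighting would look wrong.
light_path = nodes.new("ShaderNodeLightPath")

# Fac = 1 for camera, glossy or transmission rays; 0 otherwise.
cam_or_glossy = nodes.new("ShaderNodeMath")
cam_or_glossy.operation = 'MAXIMUM'
links.new(light_path.outputs["Is Camera Ray"], cam_or_glossy.inputs[0])
links.new(light_path.outputs["Is Glossy Ray"], cam_or_glossy.inputs[1])

visible = nodes.new("ShaderNodeMath")
visible.operation = 'MAXIMUM'
links.new(cam_or_glossy.outputs["Value"], visible.inputs[0])
links.new(light_path.outputs["Is Transmission Ray"], visible.inputs[1])

# Brightness/Contrast on the normalised colour now controls the lighting.
adjust = nodes.new("ShaderNodeBrightContrast")
adjust.inputs["Contrast"].default_value = 0.5  # placeholder amount
links.new(normalise.outputs["Color"], adjust.inputs["Color"])

blend = nodes.new("ShaderNodeMixRGB")
links.new(visible.outputs["Value"], blend.inputs["Fac"])
links.new(adjust.outputs["Color"], blend.inputs["Color1"])  # lights the scene
links.new(env.outputs["Color"], blend.inputs["Color2"])     # seen directly
links.new(blend.outputs["Color"], background.inputs["Color"])

# Strength stays uniform this time, which keeps the importance sampling
# happy; drop the earlier power-node link and use a plain value.
for link in list(background.inputs["Strength"].links):
    links.remove(link)
background.inputs["Strength"].default_value = 1.0
```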
Different contrast adjustments:
Adjusting the look of Glossy Reflections:
The same effect shown above for glossy reflections can also be controlled for transmission rays. So far it seems to work pretty well. If you give it a go, I’d enjoy hearing your thoughts on how it works for you. You can download it here.
Note: The HDR map I used is not included. You can download it from BlendedSkies.com, which is a great resource for HDR panoramas as well as other stuff like pre-tracked footage and backplates for compositing renders onto.
This fella was just a very quick little model I made for a larger project, but I kinda liked him and wanted to share him, so I stuck him up on Blendswap with a CC-0 licence. He’s ready to go if you’re rendering in Cycles. Hope you like him!
This post is about how we perceive colour and how this relates to art and computer generated images. I find working digitally that I sometimes forget that there’s a real world out there, of which a digital image is only an approximate representation. It’s interesting to step back from that once in a while and learn a bit more about how light and colour are perceived by our eyes and our brains. What follows is a grossly simplified description of colour vision and a little of how that relates to colours in digital images.
Why are leaves green?
Perhaps this isn’t even the right question. Colour exists primarily in our minds – leaves are only “green” because we perceive them to be green. But I don’t want to go down a philosophical rabbit-hole, so I’ll stick to why – from a biological perspective – we perceive them this way.
Our perception of the colour of an object depends on several things:
- The range of wavelengths present in the incident light.
- The wavelengths of light that the object reflects or absorbs.
- The receptor cells in our eyes that detect the reflected light.
- Our brain’s interpretation of the signals from those receptors.
It’s important to realise that our categorising of things into colours has more to do with our brains than it does with light itself. White light consists of a mix of all the visible wavelengths of light, from around 400 to 700 nanometres in wavelength, and a given object will absorb some and reflect others, depending on what it is made of. It is a mistake, then, to think that a red object reflects only “red” light; it just happens to reflect wavelengths in the red region of the spectrum more strongly.
The principal molecule that gives leaves their colour is called chlorophyll. Leaves are packed full of chlorophyll because it is a key component in photosynthesis, which lets plants use energy from sunlight to convert water and carbon dioxide into sugars. The absorbance spectrum of chlorophyll looks something like this:
To make the best use of the incident light from the sun, which after scattering through the atmosphere is blueish in colour (observe the colour of the sky), chlorophyll absorbs as much blue light as possible, as evidenced by the main peak in the absorbance spectrum above around the indigo-blue wavelengths. The light in the trough of low absorption in the middle, around the yellow-green region of the spectrum, gets reflected instead, resulting in the green colour of chlorophyll.
However, when we compare the abundance of different wavelengths present in the light reflected from a leaf with how we represent colour digitally, it initially appears that something is seriously lacking. Colour in digital images is represented by a measly three numbers: our RGB values. How can this simple triplet of numbers compare to the far more complex spectrum of wavelengths we receive from a real object?
Colour and the Eye
The reason concerns how colour works in the eye. The cells in our retina that detect colour (called cone cells) are full of pigments that respond to light. When light hits these pigments they change shape, and through a chain reaction with other molecules in the cell they generate an electrical impulse. But these pigments, like chlorophyll, also respond to a spectrum of wavelengths, and can’t really differentiate between them. To detect colour, then, we need three different kinds of receptor cell, each with a different photopigment whose absorbance spectrum responds more or less strongly to different regions of the visible spectrum of light. It’s a misconception that these correspond to “red”, “green” and “blue” light, but the net effect is that by comparing the responses from these different receptors, we can tell whether the light we see contains more reddish, greenish or blueish wavelengths.
It isn’t a coincidence, then, that the digital RGB colour model uses three colour values: by mixing different amounts of these three colours we can replicate the effect of (almost) a whole spectrum of wavelengths. The cells in our eyes crunch the data down into three channels in a very similar way, so we notice very little difference. (Indeed, this is why the RGB colour model was designed with three channels.)
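To make that concrete, here’s a toy calculation showing how a whole reflected spectrum collapses into just three numbers. To be clear, the sensitivity curves below are made-up Gaussians for illustration, not real cone response data:

```python
import numpy as np

# Wavelengths across the visible range, in nanometres.
wavelengths = np.linspace(400, 700, 301)

def bell(peak, width):
    """A made-up bell-shaped sensitivity curve."""
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

# Stand-ins for the S, M and L cone sensitivities (real curves are
# broader, lopsided, and overlap far more than this).
cones = {"S": bell(445, 25), "M": bell(540, 35), "L": bell(565, 40)}

# A leaf-like reflected spectrum: a broad bump around green.
leaf = 0.2 + 0.8 * bell(550, 40)

# Each cone's response is the spectrum weighted by its sensitivity,
# so the entire curve reduces to a single number per cone type.
responses = {name: np.trapz(leaf * s, wavelengths) for name, s in cones.items()}
print(responses)  # three numbers are all that survive of the full spectrum
```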
But what about Magenta?
The next thing to talk about is how we get from the idea of wavelengths representing colours to the familiar idea of a colour wheel. Whilst I’ve shown already that most colours we see are made up of a whole mix of wavelengths, I’ve also been acting as if each wavelength of light could in theory be assigned a specific colour. Indeed it can, as evidenced by certain kinds of light sources like lasers that produce light of a single wavelength, which we see as coloured light, or by looking at light through a prism, split up into a rainbow of different wavelengths. However, the wavelengths of light extend off in either direction from the ends of the visible spectrum, into kinds of radiation we can’t see. So how do we end up with colours like magenta, which we perceive as being somewhere between red and blue?
The answer this time lies in the brain. Once the receptors in our eyes receive light, they pass it on as an electrical signal up the optic nerve to the visual cortex, at the back of the brain in the occipital lobe. Here the three-channel model of colour is abandoned and replaced with a four-colour opponent-processing model. Neurons early in the visual cortex are clustered in regions called blobs, some of which compare the signals from the different cone cells to decide whether light is more blue or more yellow, others whether it is more red or more green. The output from these clusters determines how we perceive colour, and the grouping of colours into opposing pairs results in our idea of complementary colours: blue with yellow, red with green and so on.
This is also why we can’t imagine colours such as blueish yellow or reddish green: the parts of the brain that perceive these colours treat them as being each other’s opposites. Our eyes could easily receive light consisting mostly of blue and yellow wavelengths, but we would simply perceive it as being more or less colourless. Non-opposing colours, on the other hand, can mix together, giving us orange between red and yellow, and cyan between green and blue. This also results in magenta: a mix of blue and red light from either end of the spectrum that we perceive as a single colour, even though it has no analogue on the visible spectrum itself (this is called an extra-spectral colour).
So, why are leaves green?
Because we perceive them to be green. A burning ball of hydrogen 150 million kilometres away emits light of all kinds of wavelengths, which travels to Earth, scatters through the atmosphere, and gets partially absorbed by photosynthetic chemicals. The rest of that light gets reflected, then absorbed by pigments in the cells of our eyes. The signal gets simplified, then processed and compared, passed on to other parts of the brain, and finally we come to the conclusion that leaves are green. Simple, right?
(Apologies for the crappy audio.)
A quick mini-tutorial on a couple of sculpt brush setups I came up with. They work quite nicely for refining planes when sculpting both hard and soft surfaces, and can be used to quickly sharpen planes and smooth the transitions between them.
I made two versions: one that works as a scrape brush, and one that works as a fill brush. Both are pretty useful. You can download a blend containing the brushes, called Trim Scrape and Trim Fill, from Blendswap. Alternatively, if you just want to make them yourself, you can see the relevant settings highlighted below. The key to how the brushes work is having a nice hard-edged falloff on the brush’s curve and making use of autosmooth. So far I’ve been messing about with them doing some dynamic topology sculpting and they work pretty nicely. Hope you find them useful!
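If you’d rather set the brushes up from Python than click through the UI, something along these lines should get you close. This is a sketch only: the autosmooth value is a guess, and I’m approximating the custom falloff curve with the constant preset:

```python
import bpy

def make_trim_brush(name, tool):
    """Create a sculpt brush with a hard-edged falloff and autosmooth."""
    brush = bpy.data.brushes.new(name=name, mode='SCULPT')
    brush.sculpt_tool = tool
    # Autosmooth is the other half of the trick: it relaxes the surface
    # as you flatten it, smoothing the transitions between planes.
    brush.auto_smooth_factor = 0.35  # guessed value, tune to taste
    # Approximate the hard-edged falloff with the constant curve preset;
    # the original brushes use a custom curve that drops off sharply.
    brush.curve_preset = 'CONSTANT'
    return brush

make_trim_brush("Trim Scrape", 'SCRAPE')
make_trim_brush("Trim Fill", 'FILL')
```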
You can find Roberto’s blog, featuring loads more awesome sculpting time-lapses and some great resources at ThisRoomThatIKeep.blogspot.co.uk.
Just want the blend? Skip to the end.
This is something I made for work recently and have since been playing around with. It’s pretty fun, so I thought I’d share it with you. It started life as a node setup for rendering images or video as a halftone pattern, similar to how images in a newspaper look when viewed close up. It was an interesting challenge, as it required mimicking the CMYK colours used in traditional printing: the input image has to be converted to CMYK values, and then further manipulated to create the halftone pattern.
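To give a flavour of what the conversion involves, here’s the colour math the node setup mimics, as a plain Python sketch (my own naive RGB-to-CMYK formula, ignoring the real ink profiles that proper print workflows use):

```python
def rgb_to_cmyk(r, g, b):
    """Convert RGB values in [0, 1] to naive CMYK values in [0, 1]."""
    k = 1.0 - max(r, g, b)  # black ink: the inverse of the brightest channel
    if k >= 1.0:
        return 0.0, 0.0, 0.0, 1.0  # pure black needs no coloured ink
    # Remove the part covered by the black ink, then rescale.
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(0.8, 0.3, 0.1))  # an orange: lots of magenta and yellow
```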
If you’ve noticed I’ve been updating the site a bit less frequently of late, this (and busy times at Gecko Animation) is pretty much why. Along with the talented and patient folks at No Starch Press, I’ve been writing a book about creating art with Blender and GIMP. It covers everything from modeling to sculpting, through to textures, materials, lighting and rendering.
The book takes you through three different projects: a gruesome bat monster, a robotic spider, and an overgrown temple deep in the jungle. But this isn’t just a simple step-by-step tutorial. Whilst you can use the book that way, I chose each of the projects to provide a unique set of challenges, and I use them to help explain how to apply Blender and GIMP in your own work. The book is filled with examples from my other works too, as well as detailed descriptions of Blender’s tools, and guides to getting the most out of Blender and GIMP with your own custom UIs, brushes and materials.
The book also comes with a DVD containing all of the project files and resources used in creating each of the projects, along with some extra goodies like brushes, matcap materials, textures, and sculpting alphas.
Here’s a more detailed breakdown of what’s covered in the book:
- Introductions to Blender and GIMP for new users.
- Working with reference images and concept art in Blender and GIMP.
- Modeling, from blocking out basic forms, to creating complex meshes.
- Sculpting both organic and hard-surface models.
- Retopology to turn complex sculpts into simple models with good topology.
- Creating hair and fur with Blender’s particle systems.
- Baking textures (Ambient Occlusion, Displacement, Normals, Colours) from models.
- Painting textures using both Blender and GIMP.
- Creating materials for the Blender Internal and Cycles renderers: creating materials for BI with the Properties editor, and building up complex Cycles shaders with the Node editor.
- Lighting, again with both Blender Internal and Cycles renderers.
- Rendering and compositing the final scenes, adding post-processing effects with compositing nodes and adding final touch-ups in GIMP.
The book will be published in February/March. You can pre-order it now from Amazon, the Blender.org Store or from the No Starch Press website. If you order from No Starch, you get a free E-Book edition of the book when you purchase the print edition.
It’s been a big project putting the book together, and I hope it’s resulted in something really useful. So if you’ve enjoyed the tutorials on this site I hope you’ll give it a look.
I created a couple of base meshes recently to give me something to start with when sculpting heads and bodies, and I thought I’d make them available. I’ve also included a GLSL matcap material I created that works quite well for sculpting. You’re free to use them for whatever you want. If you just want the matcap, you can download the image directly to use in whatever application you prefer. Here’s a sculpt I made with the bust basemesh:
Download Basemeshes (zipped .blend format, at BlendSwap.com)
Download Basemeshes (zipped .obj format)
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
I got Virgilio Vasconcelos’ new book, Blender 2.5 Animation Cookbook, in the post from Packt Publishing the other day, and they asked me to write a review. I’ve been doing a bit of rigging of my own lately, so it came along at a useful time for me; I’ve already picked up a couple of useful tips. It seems like a good guide to rigging so far, with info on most common rigging tasks, and I think enough information for beginners to get their heads around the topic.
Whilst the title suggests that the book centres on animation, it’s really split about 50/50 between rigging and animation, which means that as long as you can model, it covers pretty much everything you need to get animating, even if you don’t want to use a ready-made rig like Mancandy or Pantin. The rigging portion takes you through all the common aspects of rigging a biped, and whilst I’d differ on how to implement one or two aspects of some of the rigs, I’m by no means a champion rigger, and it never hurts to know more than one method. The book covers both creating and weight painting a deform rig, then creating separate control rigs for different control methods (i.e. IK/FK), and keeps the two nice and separate (as they should be!). I think the lack of much discussion of using Python for rigging is a bit of an oversight, given how useful even a little Python can be when building a rig. Also, a few of the controls that Virgilio outlines, like the isolation controls for the head and shoulders, only switch between on and off, and it isn’t difficult to make controls that smoothly interpolate between the two (go see Nathan’s mammoth rigging tutorials on cmiVFX for how to do that, or the sketch below). Overall though, the coverage is pretty solid, and the book takes you through all the common hang-ups, like creating a foot-roll rig, stretchy and bendy limbs, rigging eyes, facial rigging with lattices and shape keys, and creating interfaces for options like IK/FK switching, limb isolation and changing parent spaces. The book is particularly focussed on cartoony rigging, but it’s all applicable to more realistic rigs too.
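Here’s roughly what I mean, sketched with Blender’s Python API. The object and bone names are made up, but the idea carries over to any rig: drive a world-space Copy Rotation constraint’s influence with a custom property, and you get a 0 to 1 isolation slider instead of an on/off switch.

```python
import bpy

# Sketch only: the object and bone names here are hypothetical.
rig = bpy.data.objects["rig"]
head = rig.pose.bones["head"]

# A world-space Copy Rotation from the root overrides the rotation the
# head inherits from its parent; influence 1 = fully isolated.
con = head.constraints.new('COPY_ROTATION')
con.target = rig
con.subtarget = "root"
con.target_space = con.owner_space = 'WORLD'

# Expose a 0-1 custom property on the bone and drive the influence with it.
head["isolate"] = 1.0
driver = con.driver_add("influence").driver
var = driver.variables.new()
var.name = "iso"
var.targets[0].id = rig
var.targets[0].data_path = 'pose.bones["head"]["isolate"]'
driver.expression = "iso"
```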
On the animation side (which I know less about) the book teaches the layered technique, with a nice progression through creating key poses, then extremes, breakdowns, and finally refining timing and tweaking curves in the graph editor. After demonstrating the basics, the book moves on to a few common animation tasks, like having a character interact with props, creating walk cycles and animating speech. There’s also plenty of discussion of the principles of animation, which really are more important than the technical side. Anticipation, moving holds, squash and stretch, and symmetry are all talked about, and there are also some great tips on rendering silhouettes and mirrored previews of your animation to help spot your mistakes. There’s a little discussion at the very end of the book on using the grease pencil to plan and refine your animations. As I’m not much of an animator I can’t speak to any shortcomings the book might have, but it has a nice breadth, and I like the deference it pays to traditional animation.
All the source .blend files for the book are available through Virgilio’s website; they’re a great resource when trying to pick apart how a rig works so you can implement something similar in your own projects. If you’re interested in buying the book, I’d suggest you go and check them out. Also available is the main character rig, “Otto”, used in the book. All in all, the book should be a nice reference for anyone looking to start with rigging and animation. It’s an easy book to flip through to find the topic you’re stuck on, and whilst I disagreed with a couple of solutions, it has an answer for most problems that might stump newbie blender-heads.
This is a rig I created as a test for the Two Rivers Partnership, a small animation and VFX studio in London I work for. They’ve kindly allowed me to release the rig to the blender community.
You can download the rig here.
Hair used to be something I really hated having to do in CG, and to this day you’ll see more than a fair share of baldies amongst my works. However, with more and more updates to Blender’s hair tools, it’s getting easier (and even fun!) to create characters and creatures with hair. This tutorial/guide covers working with hair particles in Blender, including particle systems, combing/cutting/styling hair, and using the child particle settings.
I’ve been trying to replicate in Blender some of the brushes I use a lot in ZBrush, in particular the rake brush, which I find very useful. Here are three brushes, as well as some extra alphas to try with them (in Blender or whichever other sculpting app you use).
- A rake brush, great for refining forms.
- A rasp brush, basically a textured scrape brush that removes material and adds a fine texture to the surface.
- A slash brush that scrapes wrinkles/scratches into the surface. Use it softly to create a base for wrinkles, or scrape harder to create rough, rocky surfaces.
You can download the brushes from Blendswap.com.
And here are some rake alphas I made:
My favourite, and you probably know this one already. Textures are free (though the super-high-resolution versions require premium membership), the licence allows use in pretty much any kind of project as long as you don’t redistribute the textures themselves, and they have a huge range of fantastic-quality textures. The site has a particularly good range of metal and concrete textures, as well as some great tiling textures (though the hi-res versions of most of the latter also require paid membership).
Featuring a similar licence to CGTextures, though not quite the range, this is another great texture resource. It also has some great reference for different kinds of buildings, as well as some cool aerial views of cities.
One of the few texture libraries to feature a Creative Commons (CC-BY) licence. Campbell’s collection features some great man-made stuff like air-con units, various containers and buildings, and some nice textures of roads and other man-made terrain elements. The quality isn’t quite on a par with CGTextures and the like, but there’s a great range, and most of the textures are shot under good (overcast/diffuse) lighting conditions, which makes them nice and easy to work with.
This is a relatively new site, but already it has a fantastic range and great quality images. Requires registration, but the license is fairly liberal (no direct redistribution, and give credit if possible).
Features a decent range of photo textures, including a good-sized collection of old medieval brick walls. I can’t find specific licence info, but the site says the textures are free to use in both commercial and non-commercial projects.
Features a lot of really nice textures, though perhaps geared more towards web design than CG. Whilst the site has a search function, it lacks a thumbnail gallery view, which makes it difficult to find specific textures. Still worth browsing through, as it has a few gems tucked away. There’s also a pretty neat Flickr pool associated with the site.
This one only came up on BlenderNation today, so I haven’t had the chance to look around it much. I did notice, though, that it has a great collection of leaves with the backgrounds already alpha’d out. Very handy indeed.
Has some great stuff, though a lot of the textures are bundled into packs without full previews of what’s inside. Many are links to external sites too, but there are some really nice ones to find, so check them out.
A bit of a bonus: this site has a rather cool collection of free high-resolution iris textures, taken with equipment specially designed for photographing people’s eyes.
Anatomy is a subject dear to my heart, and drawing and modelling anatomical forms is a skill I’m always striving to improve. With that in mind I wanted to write a post about some of the resources that I have found most useful in improving my anatomy knowledge, as well as sharing some of my own studies.
Edit: I’ve updated the .blend download to fix an issue with the rendered matcap images not anti-aliasing correctly when used as matcaps, but to get the most out of the matcap images at the bottom you may want to crop the outer few pixels from the edge of each image. Alternatively, if you’re using them in Blender, just set the “size” option in the texture mapping settings to 0.98 in each direction.
Whilst doing some sculpting today I found myself casting around for some nice matcap images to apply to my sculpt. ZBrush Central has some great ones that are well worth checking out, but these all come in the rather unhelpful .zmt format, which, if you don’t have ZBrush, is difficult if not impossible to convert to something you can use in Blender. So I thought it would be simple enough to build my own matcap generator in Blender that I could use to create whatever material I liked and render it as a matcap-style sphere. It was a fairly quick process, and the results work rather well as matcap images for my sculpts. I thought I’d share the generator .blend and a few of the matcaps I generated, for anyone interested in doing some sculpting in Blender, or just showing off the results.
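In case you’re curious how it works, the core of the generator is very simple. Here’s a rough sketch of the idea in Python (not the actual .blend, and using current API names): an orthographic camera framing a smooth-shaded sphere so that it exactly fills a square render.

```python
import bpy

scene = bpy.context.scene
scene.render.resolution_x = scene.render.resolution_y = 512  # square output

# An orthographic camera looking straight along +Y at the origin.
cam = bpy.data.objects.new("MatcapCam", bpy.data.cameras.new("MatcapCam"))
cam.data.type = 'ORTHO'
cam.data.ortho_scale = 2.0  # matches the diameter of a radius-1 sphere
cam.location = (0.0, -5.0, 0.0)
cam.rotation_euler = (1.5707963, 0.0, 0.0)
scene.collection.objects.link(cam)
scene.camera = cam

# The sphere that fills the frame; assign any material you like to it.
bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0, location=(0.0, 0.0, 0.0))
bpy.ops.object.shade_smooth()

# Rendering this gives a matcap: with an orthographic camera, every
# normal on the visible hemisphere appears at exactly one pixel
# position, which is precisely the lookup a matcap shader performs.
```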
You can download the generator .blend file here; see after the jump for the matcap images themselves.