I've seen a couple of posts recently on getting more control over HDR-based lighting setups, specifically in terms of getting crisper shadows. In particular, Reynante Martinez and Greg Zaal each posted some great setups that used the colour of an HDRI map as the input to the strength socket of a background shader. I thought I'd add my own experiments to the mix.
Here's a basic world setup for Cycles: just an HDR map plugged into a background node, with no other lights in the scene.
Here's a setup based on one that Greg posted in his original article. It gives stronger shadows by plugging the HDRI map into both the colour and strength inputs of the background node.
Of course there are many ways to achieve this sort of effect, and so the question becomes which is the most effective, and which gives you the most control. The setup above gives you some control over the shadows, but by plugging the map into the strength input it becomes more difficult to affect the overall strength of the environment lighting (incidentally, Greg posted some great improvements to the above setup in his original article). What I really want is to be able to control the contrast of the lighting without affecting its colour, whilst still maintaining control over its overall brightness. I tried a few setups with this aim in mind. This was my first one:
This setup was my first attempt, and I was mainly concerned with making sure my math checked out so that I could get my head around future setups. Here I normalised the input to the colour socket of the background shader, giving it a uniform intensity. I then extracted the value from the HDRI map, processed it separately, and used the result as the input to the strength socket. By putting this value through a power (math) node first (or any other manipulation you prefer) I could control the strength and contrast of the lighting (raising the intensity to a higher power results in more contrast).
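To make the math concrete, here's a plain-Python sketch of what those nodes are doing (my own illustration, not part of the node setup itself): the HDRI colour is divided by its value to normalise it, and the value is raised to a power to control contrast.

# Plain-Python sketch of the math in the setup above (illustration only).
def split_hdri_pixel(r, g, b, power=2.0):
    value = max(r, g, b)  # the pixel's value (its brightness)
    if value == 0.0:
        return (0.0, 0.0, 0.0), 0.0
    # Normalised colour: uniform intensity, hue and saturation preserved.
    colour = (r / value, g / value, b / value)
    # Strength: raising the value to a higher power increases contrast.
    strength = value ** power
    return colour, strength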
Notably, this setup resulted in a lot more noise. I think this is because it messes up the Multiple Importance Sampling for the world (which I had turned on for all my renders). This convinced me that I should modify the input to the colour socket of the background node rather than the strength, which should be kept at a uniform value in order to avoid unnecessary noise. This setup also results in some crazy backgrounds, and glossy reflections that don't make sense. To fix these issues I made use of the light paths node and a few mix nodes to blend in the unaltered HDRI map where appropriate, namely for glossy, transmission and camera rays. I also switched to using a Brightness/Contrast node to affect the lighting strength. The result is the following setup:
The node group it uses contains the following nodes:
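If you'd rather wire up something similar by hand or in a script, here's a rough bpy sketch of the core idea, written against the Cycles world nodes. It's my own simplified approximation of the node group (the full group in the download has more controls), and the HDRI path is a placeholder:

import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links
nodes.clear()

env = nodes.new('ShaderNodeTexEnvironment')
env.image = bpy.data.images.load('/path/to/your.hdr')  # placeholder path
adjust = nodes.new('ShaderNodeBrightContrast')  # contrast control for lighting
lp = nodes.new('ShaderNodeLightPath')
max1 = nodes.new('ShaderNodeMath')
max2 = nodes.new('ShaderNodeMath')
max1.operation = max2.operation = 'MAXIMUM'
mix = nodes.new('ShaderNodeMixRGB')
bg = nodes.new('ShaderNodeBackground')
out = nodes.new('ShaderNodeOutputWorld')

# Camera, glossy and transmission rays see the unaltered HDRI; everything
# else (i.e. the actual diffuse lighting) sees the contrast-adjusted copy.
links.new(lp.outputs['Is Camera Ray'], max1.inputs[0])
links.new(lp.outputs['Is Glossy Ray'], max1.inputs[1])
links.new(max1.outputs['Value'], max2.inputs[0])
links.new(lp.outputs['Is Transmission Ray'], max2.inputs[1])
links.new(max2.outputs['Value'], mix.inputs['Fac'])

links.new(env.outputs['Color'], adjust.inputs['Color'])
links.new(adjust.outputs['Color'], mix.inputs['Color1'])  # Fac = 0: adjusted
links.new(env.outputs['Color'], mix.inputs['Color2'])     # Fac = 1: unaltered
links.new(mix.outputs['Color'], bg.inputs['Color'])
links.new(bg.outputs['Background'], out.inputs['Surface'])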
This node group provides a lot of flexibility whilst remaining easy to work with. You can control how much your environment affects your lighting's contrast and overall brightness, and adjust how this affects other aspects of your render, like glossy reflections, transmission rays and the background as viewed by the camera. Here are a few examples:
Different contrast adjustments:
Adjusting the look of Glossy Reflections:
The same effect shown above for glossy reflections can also be controlled for transmission rays. So far it seems to work pretty well. If you give it a go I'd enjoy hearing your thoughts on it and how it worked for you. You can download it here.
Note: The HDR map I used is not included. You can download it from BlendedSkies.com, which is a great resource for HDR panoramas as well as other stuff like pre-tracked footage and backplates for compositing renders onto.
Previously I posted some pictures of the 3D printed version of the spider bot from my book, Blender Master Class. In my original post I promised I'd put together a post detailing some of the process of altering the model for 3D printing, as a guide for others wishing to 3D print their own models. The book itself describes the modelling of the original model so I won't go into that, but I'll cover the most important changes I made.
(Apologies for the crappy audio.)
A quick mini tutorial on a couple of sculpt brush settings I came up with. They work nicely for refining planes when sculpting both hard and soft surfaces, and can be used to quickly sharpen planes and smooth the transitions between surfaces.
I made two versions: one that works as a scrape brush, and one that works as a fill brush. Both are pretty useful. You can download a blend containing the brushes, called Trim Scrape and Trim Fill, from Blendswap. Alternatively, if you just want to make them yourself you can see the relevant settings highlighted below. The key to how the brushes work is having a nice hard-edged falloff on the brush's curve, and making use of autosmooth. So far I've been messing about with them doing some dynamic topology sculpting and they work pretty nicely. Hope you find them useful!
You can find Roberto’s blog, featuring loads more awesome sculpting time-lapses and some great resources at ThisRoomThatIKeep.blogspot.co.uk.
Just want the blend? Skip to the end.
This is something I made for work recently and have since been playing around with. It's pretty fun, so I thought I'd share it with you. It started life as a node setup for rendering images or video as a halftone pattern, similar to how images in a newspaper look when viewed close up. It was an interesting challenge, as it required mimicking the CMYK colours used in traditional printing. To do this the input image has to be converted to CMYK values, and then further manipulated to create the halftone pattern.
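For the curious, here's a plain-Python sketch of the naive RGB-to-CMYK conversion involved (my own illustration of the principle; the node setup does the equivalent with compositing nodes, and real print workflows use ICC profiles rather than this formula):

# Naive RGB -> CMYK conversion, all values in 0..1 (illustration only).
def rgb_to_cmyk(r, g, b):
    k = 1.0 - max(r, g, b)           # black: inverse of the brightest channel
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0    # pure black
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k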
We've been busy updating the Gecko Animation site at work lately, and we decided to get rid of some older content on there. Whilst this tutorial by @Laxy doesn't really fit on the Gecko site anymore, it's still a cool resource, so I said I'd put it up here. It's slightly older, but the principles of animation don't really change, so it should still be of interest!
I've been experimenting more with Cycles lately, and one thing I'm really impressed with already is the speed with which it handles instanced objects. I thought I'd share some silly experiments I made in the process, and also a tip for using instances.
It's great fun to use instancing to create fractal-like structures out of repeating objects. The image above, which I call the monkeybulb (after the mandelbulb fractal), is made up of over 9000 suzannes (though even this is small fry compared to Agus3D's instanced forest). It's made by repeatedly extruding all the individual faces of a cube, and then smoothing the results. I then create a suzanne object that I parent to the fractal mesh, and turn on Dupli Faces to create repeated instances of suzanne. The image below uses a similar strategy, using a few array modifiers to duplicate a plane.
One thing that's important to note is that I don't use an array modifier to duplicate the cubes seen in the image. That would result in a non-instanced version of the scene that would render slower, as the array modifier generates new geometry rather than instancing the same cube over and over. Instead I use array modifiers to duplicate a plane, then apply these modifiers and parent the cube to the applied mesh, once again using Dupli Faces to handle the duplication of the cube. The difference in render time this produces is quite something, as demonstrated by the renders below. Because the array modifier doesn't produce instances, both rendering and the building of the BVH structure are significantly slower than when using Dupli Faces.
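For reference, here's a minimal bpy sketch of the Dupli Faces approach (the object names are placeholders; in practice I set this up through the UI):

import bpy

parent = bpy.data.objects['DupliPlane']  # the arrayed-and-applied plane mesh
child = bpy.data.objects['Cube']         # the object to instance

child.parent = parent        # the child rides on the parent's faces
parent.dupli_type = 'FACES'  # one instance of the child per face of the parent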
Anyway, I just thought it was a fun thing to experiment with, and also a worthwhile tip to know. So far as I know, both dupli faces/verts and particle systems create genuine instances, whilst the array modifier does not. Hope it's useful and not trivially obvious to all but me!
Edit: Here's one more fractal-like render, this time an Apollonian gasket made up of spheres. I used a development build to get access to the Object Info node, so that I could randomise the colour of each sphere.
I started learning Python a bit more seriously late last year, and since then it's saved my bacon more times than I can count. It's a terrific tool both inside and outside of Blender, for automating tedious batch processes and adding little functions to Blender that you just wish were there sometimes. Whilst learning Python and the bpy API more fully will take some (well spent) time, you can get to know Blender's Python API very easily thanks to the autocomplete function (Ctrl-Space), which presents you with a list of available options when exploring the API from Blender's Python console.
One use of the Python console that is very quick to get the hang of is using for loops to change the properties of multiple objects at the same time. To get a list of all the selected objects, you can use bpy.context.selected_objects. Then you can run through the list with a for loop, which will do the same thing to each entry in the list. For example, the following sets the display type of all the selected objects to wireframe:
for x in bpy.context.selected_objects: x.draw_type = 'WIRE'
Another example, useful if you're working on a complex render setup, is batch-assigning object indices to a selection:
for x in bpy.context.selected_objects: x.pass_index = 1
Another useful example is setting the display level of all your objects' Subsurf modifiers to zero. This greatly speeds up your viewport performance when working with a complex scene, and whilst Blender lets you do this using the simplification tools in the scene tab, you can't (to my knowledge) do it just for the viewport (the simplify options affect both the viewport and render time). This time we use try/except, so that Blender can skip any objects that don't have a Subsurf modifier to change:
for x in bpy.context.selected_objects:
    try:
        x.modifiers['Subsurf'].levels = 0
    except KeyError:
        print("No subsurf on this object")
A handy one when working with Blender's camera tracking tools (or anything else that generates a lot of empties) is to use a for loop to change the draw type and size for a whole bunch of empties at once. It's handy for shrinking them down to keep them out of the way:
for x in bpy.context.selected_objects:
    try:
        x.empty_draw_type = 'CUBE'
        x.empty_draw_size = 0.1
    except KeyError:
        print("This one isn't an empty.")
You can find more options that you can change this way by exploring the API with autocomplete, or if you have a specific property in mind that you want to change, you can simply right-click it in Blender's UI and select Copy Data Path to copy the last part of the data path to the clipboard. You can either save these little snippets as scripts or simply type them in when you need them - they're pretty short. If you're saving them as a script, remember to add the line "import bpy" at the beginning of the script (you don't need to do this in the console); there's a quick example below. Anyway, I just wanted to share something in Python that might be pretty easy for beginners to grasp, and that I've found really useful. Let me know in the comments if you have any good ones of your own!
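For example, here's the wireframe snippet from earlier in saved-script form; the import is the only addition:

import bpy  # needed in a script; the console imports this for you

for x in bpy.context.selected_objects:
    x.draw_type = 'WIRE'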
I've been playing around with Alchemy for a while now, and it's a superbly fun little program for coming up with visual ideas. For those of you who don't know of it, it's a 2D drawing application with all sorts of chaotic tools for creating shapes and patterns, from which you can then start picking out shapes to develop ideas. With the recent addition of .xcf import to Blender, allowing you to import layered xcf files and automatically set them up for rendering, I started getting some ideas about combining Alchemy and Blender via GIMP. Here's a little video showing the process I used to create the images above, plus a hint at how I applied the same method in 3D to create an animated abstract 3D character.
Here are a couple of other results (click the second for an animated walkcycle):
Anyway, let me know if you think of anything cool using this technique.
I've been working on some character rigs recently, and trying to learn and use more Python to speed up the process and improve my rigs. One of my favourite uses that I've come across so far is using a for loop to do batch operations on bones, which is super handy when building complex rigs, as you can rename bones and change options on a whole bunch of bones at once, rather than having to do things one bone at a time by hand. I thought I'd share some of my favourite Python snippets that have come in handy so far.
When in edit mode, select the bones you want to change and put the following in a new text block, comment out any operations you don't want to perform, and hit Alt-P to run the script:
import bpy

bones_list = bpy.data.armatures['RIG NAME'].bones
bones_selected = bpy.context.selected_bones
##Some tools for renaming bones and removing trailing numbers once you’ve renamed them. Handy when duplicating parts of your deform rig to create your control rigs. Switch bones_selected for bones_list to perform an operation on every bone in your rig.##
for item in bones_selected:
    item.name = item.name.replace("DEF-", "CON-")
    item.name = item.name.replace(".001", "")
    item.name = item.name.replace(".002", "")
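##A more general alternative (my own addition, not part of the original script): strip any trailing ".NNN" suffix with a regular expression, whatever the number.##
import re
for item in bones_selected:
    item.name = re.sub(r"\.\d+$", "", item.name)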
##Some tools for changing bone properties. use_deform turns off the bone's "deform" option; always do this for bones you don't want to directly deform your mesh (control/helper bones). show_wire makes bone shapes visible in solid view mode, even if they only have edges.##
for item in bones_selected:
    item.use_deform = False
    item.show_wire = True
##Here are a couple of useful ones to try in pose mode. rotation_mode lets you set whether your bones use Euler or quaternion rotations, and custom_shape lets you set the bone shape used for all the bones you have selected.##
bones_selected_pose = bpy.context.selected_pose_bones
for item in bones_selected_pose:
    item.rotation_mode = 'XYZ'
    item.custom_shape = bpy.data.objects['NAME OF BONE SHAPE OBJECT']
Depth of field can be a beautiful effect, adding aesthetic interest to an image and serving a narrative purpose by drawing the viewer's eye to the subject. But getting DoF right using Blender's compositing nodes can be tricky, as there are a few things to understand about how the defocus node works, and some limitations that you need to be aware of. This post documents my own investigations, and hopefully should be useful for those new to the topic. It also touches on a few things that I haven't worked out how to fix yet, so if there's anyone out there with input I'd love to hear your thoughts.
So I got a nice shiny new monitor delivered yesterday, and relegated the old one to being my secondary display. With that in mind, I wanted an elegant solution for switching my Wacom tablet between the two screens. By default, the tablet area gets stretched over the whole two screens, which is no good, as the aspect ratio is all wrong and even the slightest sideways motion of the pen sends the cursor zooming across the screen. After a bit of research and fiddling, and more than a little guidance from @thejikz and @DavisSorenson on twitter, I've finally got things set up how I want, so I thought I'd write up a post on my findings for anyone else who's stuck on the subject. Also, next time I reinstall or update Ubuntu I'll have something to look up.
Restricting the Tablet to a Single Screen
On Ubuntu 11.04, with recent versions of the Wacom tools, this is accomplished via a coordinate transformation matrix. I know, not exactly the kind of user-friendly terminology you were perhaps hoping for, but once you get the hang of it it's not too difficult to work out what you need. I won't explain the process here, as the Ubuntu forums have a great thread on the topic with lots of explanation. Scroll to post #8 for a neat explanation of what each component of the matrix should be. Once you have that it's just a matter of using the following commands.
Find out the device names for your tablet:
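xinput --list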
Then set the coordinate transform matrix for your device using:
xinput set-prop "Wacom Intuos3 9x12 stylus" --type=float "Coordinate Transformation Matrix" 0.533333 0 0 0 1 0 0 0 1
Switch the device name for the one you got from "xinput --list", and replace the CTM with whatever yours happens to be. For me the above command restricts my tablet to my left-hand monitor. To switch to the right I use:
xinput set-prop "Wacom Intuos3 9x12 stylus" --type=float "Coordinate Transformation Matrix" 0.466666 0 0.533333 0 0.875 0.125 0 0 1
Setting up buttons
With those commands worked out, all I had to do was map them to my tablet buttons. This was done by first creating new keyboard shortcuts using Ubuntu's regular keyboard shortcuts editor. I just created new shortcuts that issued the commands above, and mapped them to something reasonably obscure so they wouldn't conflict with other apps. In my case I chose Shift + Print Screen and Shift + Pause.
Then I mapped my tablet buttons to activate these keyboard shortcuts as follows:
xsetwacom --set "Wacom Intuos3 9x12 pad" Button 1 "key Shift Pause"
xsetwacom --set "Wacom Intuos3 9x12 pad" Button 3 "key Shift Print"
Finally, I created a script that issues these commands at start-up and added it to my start-up applications, so that I get my tablet configured how I like it every time I start Ubuntu. And that's pretty much it. Not exactly plug and play, but it gets the job done!
This is just a quick little video tutorial to test out my new screen recording setup and to show off how to use GIMP's Resynthesizer plugin. One thing to note that I thought of afterwards: using a harder selection gives slightly better results than the soft selection I used. Just use a harder brush, or select the areas with the lasso tool.
Recorded using Nathan Vegdahl’s handy record screen.py script.
If your version of GIMP doesn't come with the plugin pre-installed, you can find it in Ubuntu's package browser, or download Linux and Windows versions from: http://www.logarithmic.net/pfh/resynthesizer
Hair used to be something I really hated having to do in CG, and to this day you'll see more than a fair share of baldies amongst my works. However, with more and more updates to Blender's hair tools, it's getting easier (and even fun!) to create characters and creatures with hair. This tutorial/guide covers working with hair particles in Blender, including particle systems, combing/cutting/styling hair, and using the child particle settings.
It seemed a fitting name for him. This was a project I worked on just for fun, that I recorded as a timelapse. I thought I'd put together a post here as well with some supplementary information. Here is the timelapse video to get things rolling:
Something that is really handy to bear in mind when creating a material is how it would interact with light in the real world. Even if you aren't interested in creating photorealistic renders, it's likely that you still want to be able to create materials like wood, metal, glass etc., and a vital part of knowing how to do this is knowing how real-world materials interact with light. Thankfully this is simply a matter of knowing some basic physics, and being able to apply this knowledge to Blender's material editor. The following are a few rules from the real world about how light works, and how they apply to CG.
With the recent Google Summer of Code project by Jason Wilkins to improve the sculpt tools, there have been some massive improvements to Blender's capabilities for organic and even hard surface sculpting. I wanted to share some experiments of mine, plus a few tips.
I got a pair of 3D anaglyph glasses off Amazon the other day, and had a go at creating anaglyph 3D images. The method is actually extremely simple: all you need is a right eye and a left eye image (place two cameras slightly apart but pointed at the same point in the scene, and render both), which are combined to make a single anaglyph.
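One common way to do the combining for red/cyan glasses is to take the red channel from the left-eye render and the green and blue channels from the right-eye render. Here's a sketch of that step using Pillow, with placeholder filenames (an illustration of the principle, not necessarily how I combined them in the post):

from PIL import Image

left = Image.open("left_eye.png").convert("RGB")
right = Image.open("right_eye.png").convert("RGB")

r, _, _ = left.split()   # red channel from the left-eye render
_, g, b = right.split()  # green and blue channels from the right-eye render
Image.merge("RGB", (r, g, b)).save("anaglyph.png")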
Modelling a head in Blender. This video covers me modelling the head poly-by-poly and then doing some low-detail sculpting to fill out the forms.
The basic modelling was done in Blender 2.49, and I switched to JWilkins' Google Summer of Code sculpt build for the sculpting, so I could play with the new tools. In particular, I think the improved clay brush and the scrape and fill tools are brilliant, and well worth seeking out a build for if you're into sculpting. You can find a recent build at graphicall.org; just look for a JWilkins GSoC build appropriate to your OS.
Hopefully I will find time to follow this one up with a timelapse of some higher detail sculpting and maybe texturing at a later date.
GIMP may not have the technical bells and whistles of Photoshop, but if you know your way around it, it still gets the job done. The following are a few of the tools and techniques I find most useful when working on textures in the GIMP.
What do I know about lighting, after all? After my previous post on setting up a three-layer SSS shader, a few people asked for tips on how to light their characters to best show them off. As it happens, this is something I'm rather interested in, and I've had a blog post on the subject brewing for a while, so I've finally found the motivation to write it up. I didn't want to spend too much time talking about specific settings for this tutorial, so instead you can download this blendfile from BlendSwap.com for some ideas on the specifics of how to set up your scene.
Something that gives a lot of people trouble when creating characters is implementing convincing subsurface scattering (SSS). Blender's SSS shader comes with a wealth of options that make it easy to customise how light scatters under a surface, but that also make it tough to hit on exactly which options make for convincing skin. Using a node-based approach, one can create a three-layer SSS shader that gives good, reasonably physically correct results, and also makes adjustments fairly straightforward. In this tutorial I use Blender 2.5, but I used almost the exact same setup for my Blending Life entry in 2.49 and got much the same results (just without such fast ray-tracing, thanks Blender devs!).
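To give a taste of the idea in miniature, here's a plain-Python sketch of the three-layer blend; the layer names, radii and weights are illustrative assumptions, not the tutorial's actual values. Each layer scatters light to a different depth, and the final result is a weighted mix:

# Three layers with different scatter radii, mixed with weights summing to 1.
# All numbers here are illustrative assumptions, not the tutorial's values.
layers = [
    {"name": "shallow", "radius": 0.5, "weight": 0.40},  # fine surface detail
    {"name": "mid",     "radius": 2.0, "weight": 0.35},  # dermal scattering
    {"name": "deep",    "radius": 8.0, "weight": 0.25},  # deep, reddish scatter
]

def blend_layers(per_layer_rgb):
    """Weighted sum of the three layers' shading results (RGB tuples)."""
    return tuple(
        sum(l["weight"] * rgb[i] for l, rgb in zip(layers, per_layer_rgb))
        for i in range(3)
    )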