Last week I did a live webcast presentation for O’Reilly, where I talked about tips and tricks for creating cycles node materials. I covered different ways of working with node groups, tips for using vertex paint in your materials, as well as how the Light Paths node works and how it can be used for creating different kinds of glass shaders. I also had a nice Q&A with the live audience at the end.
I’ve seen a couple of posts recently on getting more control over HDR-based lighting setups, specifically in terms of getting crisper shadows. In particular, Reynante Martinez and Greg Zaal each posted some great setups that used the colour of an HDRI map as the input to the strength socket of a background shader. I thought I’d add my own experiments to the mix.
Here’s a basic world setup for cycles. Just an HDR map plugged into a background node. No other lights in the scene.
Here’s a setup based on one that Greg posted in his original article. It gives stronger shadows by plugging the HDRI map into both the colour and intensity inputs of the background node.
Of course there are many ways to achieve this sort of effect, and so the question becomes which is the most effective, and which gives you the most control. The setup above gives you some control over the shadows, but by plugging the map into the strength input it becomes more difficult to affect the overall strength of the environment lighting (incidentally, Greg posted some great improvements to the above setup in his original article). What I really want is to be able to control the contrast of the lighting without affecting its colour, whilst still maintaining control over its overall brightness. I tried a few setups with this aim in mind. This was my first one:
With this first attempt I was mainly concerned with making sure my math checked out, so that I could get my head around future setups. Here I normalised the input to the colour socket of the background shader. This gave the colour input for the background node a uniform intensity. I then extracted the value from the HDRI map, processed it separately, and used it as the input for the strength socket of the background shader. By putting this input through a power (math) node first (or any other manipulation you prefer) I could control the strength and contrast of the lighting (raising its intensity to a higher power results in more contrast).
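The math behind this is simple enough to sketch outside of Blender. Here’s the per-pixel logic in plain python (just an illustration of the node math – the function name is mine, and I’m taking “value” to be the maximum of the RGB channels, as an HSV-style Value output would give it):

```python
def split_hdri_pixel(r, g, b, power=2.0):
    """Mimic the node setup: normalise the colour to uniform
    intensity, and turn the pixel's value into a strength with
    adjustable contrast."""
    # Value of the pixel (HSV-style: the max of the channels).
    value = max(r, g, b)
    if value == 0.0:
        return (0.0, 0.0, 0.0), 0.0
    # Normalised colour: uniform intensity, hue and saturation preserved.
    colour = (r / value, g / value, b / value)
    # Strength: raising the value to a higher power increases contrast.
    strength = value ** power
    return colour, strength
```

With power=1 you get the original lighting back (colour times value equals the original pixel); higher powers darken the dim parts of the map relative to the bright parts, giving crisper shadows.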
Notably, this setup resulted in a lot more noise. I think this is because it messes up the Multiple Importance Sampling for the world (which I had turned on for all my renders). This convinced me that I should modify the input to the colour socket of the background node rather than the strength, which should be kept at a uniform value to avoid unnecessary noise. This setup also results in some crazy backgrounds, and glossy reflections that don’t make sense. To fix these issues I made use of the Light Paths node and a few mix nodes to blend in the unaltered HDRI map where appropriate, namely for glossy, transmission and camera rays. I also switched to using a Brightness/Contrast node to affect the lighting strength. The result is the following setup:
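The blending logic of the Light Paths part of the setup boils down to a simple decision per ray type. Reduced to plain python (again just an illustration of the node logic, not Blender API code):

```python
def world_colour(ray_type, original, adjusted):
    """Pick which version of the HDRI a ray should see, mimicking
    the Light Paths node plus mix nodes described above."""
    # Rays that need to look correct to the eye (the visible
    # background, reflections and refractions) get the untouched map.
    if ray_type in ('camera', 'glossy', 'transmission'):
        return original
    # Everything else - i.e. diffuse lighting - gets the
    # contrast-adjusted version.
    return adjusted
```

This way the contrast tweak only changes how the environment lights the scene, while the background and reflections stay true to the original map.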
The node group in this setup contains the following nodes:
This node group provides a lot of flexibility whilst remaining easy to work with. You can control how much your environment affects your lighting’s contrast and overall brightness, and adjust how this affects other aspects of your render, like glossy reflections, transmission rays and the background as viewed by the camera. Here are a few examples:
Different contrast adjustments:
Adjusting the look of Glossy Reflections:
The same effect shown above for Glossy reflections can also be controlled for transmission rays. So far it seems to work pretty well. If you give it a go I’d enjoy hearing your thoughts on it and how it worked for you. You can download it here.
Note: The HDR map I used is not included. You can download it from BlendedSkies.com, which is a great resource for HDR panoramas as well as other stuff like pre-tracked footage and backplates for compositing renders onto.
I’ve been working on a new script at Gecko for a side-project of ours, for which we need to generate a lot of video files at different sizes and in different codecs. Because we were processing the frames for these videos in blender, we wanted a solution that could just take those rendered frames and produce all of the videos we needed. Enter a nifty command line tool called qt_tools and some python scripting!
The script I came up with lets you specify a list of different output files that you want to create, and define settings for each using different settings files. Then you can either manually set off an export of each of these videos from your rendered frames, or have the export automatically happen after you finish rendering your animation. And because qt_tools has access to all of the codecs and so forth that quicktime does, you have a lot of flexibility in what kinds of video you can make (more than blender has natively as far as quicktime is concerned anyway).
You can also use the script just to create a single file. This is great for when you want to render out a playblast or a low quality render and get a quicktime you can quickly send to a client, but don’t want to render straight to quicktime in case you get a dropped frame, or when you already have some of the animation rendered. I’ve even included an Auto Open option to automatically open the resulting file in Quicktime Player 7 once finished.
Important: The script requires you have qt_tools installed for it to work in its current state (see To Do list), which means you’ll need to be on a Mac too.
1. Download, Install, and Enable the addon.
2. Look in the Render Tab of the Properties Editor, under Multi-Quicktime.
3. Click “Add Multi Quicktime Output” to create at least one output file.
4. Configure a new settings file or browse for an existing one if you’ve created one before. Clicking on the configure button will bring up the same settings dialog that Quicktime uses natively. Set other properties as desired:
5. Either manually generate your quicktime outputs with “Generate Outputs from Frames”, or enable “Auto Generate After Render” and render your animation.
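Under the hood, the script just shells out to qt_tools for each output. Here’s a minimal sketch of what that looks like from python (the flag names follow qt_export’s options as I remember them, so double-check them against qt_export’s help output before relying on this; the paths are made up for illustration):

```python
import subprocess

def build_qt_export_command(first_frame, fps, settings_file, output_path):
    """Assemble a qt_export command line (qt_export is the export
    utility that ships with qt_tools)."""
    return [
        'qt_export',
        '--sequencerate=%s' % fps,            # treat the frames as an image sequence at this rate
        '--loadsettings=%s' % settings_file,  # settings saved earlier from the QuickTime dialog
        first_frame,                          # e.g. 'render/frame_0001.png'
        output_path,                          # e.g. 'out.mov'
    ]

def run_export(cmd):
    # Actually run the export; this requires qt_tools installed (Mac only).
    subprocess.check_call(cmd)
```

One of these commands gets built and run for every output you’ve defined, which is all the “multi” part of the addon really is.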
The script is currently very Mac OS centric, and in particular it is very tailored to my personal setup.
- I use Djv for playing back frames, but it doesn’t open quicktime files, so the auto open feature defaults to using Quicktime Player 7.
- qt_tools could easily be replaced by FFMPEG or another command line video tool if that’s what you prefer (though you would then have to do more work defining codec options in the script, I think).
So far we’re still testing the script but I’m pretty happy with how it’s working. If you do try it out please let me know of any bugs/usability issues.
Previously I posted some pictures of the 3D printed version of the spider bot from my book, Blender Master Class. In my original post I promised I’d put together a post detailing some of the process of altering the model for 3D printing, as a guide for others wishing to 3D print their own models. The book itself describes the modeling of the original model so I won’t go into that, but I’ll cover the most important changes I made.
Not much to look at, is it? But it was an interesting experiment. The idea was to push cycles’ physical realism to the absurd extreme of building a pinhole camera in blender. A pinhole camera has no lens or aperture; instead the light just passes through a small hole in the front of the camera, and forms an inverted image on the camera’s back wall (this is why the image above appears upside-down). The construction of a pinhole camera is very simple – it’s a box with a hole in one side – so I figured that because Blender now has a physically accurate ray-tracer in the form of Cycles, it was probably possible to build one that worked in blender. Here’s how mine looked:
The “real” camera – i.e. a Camera object for rendering – is situated inside the pinhole camera, facing the back wall. The scene needed to be lit extremely brightly in order for enough light to find its way through the tiny hole in the camera to illuminate the back wall. The two lamps have intensities of 100,000 for the key light and 20,000 for the fill light. The cycles preview outside the camera looked something like this:
As you can see from the final render, the results are very noisy. Even more so when you consider that the small, noisy image you see is the result of 100,000 samples. I set the number of light bounces for rendering to 3 (i.e. one bounce for direct lighting, one for a small amount of indirect lighting, plus one extra bounce because we are viewing everything on the diffuse surface of the inside of the camera). It was actually relatively quick to render for that many samples, as it was only a small image, and a simple scene at that, ignoring the rather strange setup. It took about an hour, and the only post-processing I did was to brighten the image a bit.
Whilst the final result isn’t that impressive, you can clearly make out suzanne and the cube and cone. You can also see that the image is slightly blurry. With pinhole cameras there is no depth of field; instead the focus of the whole image is determined by the size of the pinhole – the smaller the hole the sharper the image. Of course the smaller you make the pinhole, the less light gets in, and so the dimmer the image becomes. This also means that for our virtual pinhole camera we get more noise if we try to bump the image up to the same brightness, so there is a tradeoff between noise and sharpness that we have to take into account.
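Real pinhole photographers even have a rule of thumb for this tradeoff: the sharpest image comes from a pinhole diameter of roughly d = c·√(f·λ), where f is the distance from the pinhole to the back wall, λ is the wavelength of light, and c is a constant of about 1.9 (this is Lord Rayleigh’s formula). I haven’t tried dialling my blend file in to match it, so treat this as a back-of-the-envelope sketch rather than anything I’ve verified in cycles:

```python
from math import sqrt

def optimal_pinhole_diameter(depth_m, wavelength_m=550e-9, c=1.9):
    """Rayleigh's rule of thumb for the sharpest pinhole:
    d = c * sqrt(f * lambda). Returns the diameter in metres."""
    return c * sqrt(depth_m * wavelength_m)

# For a 5 cm deep camera box and green light (~550 nm),
# this gives a pinhole of roughly a third of a millimetre.
d = optimal_pinhole_diameter(0.05)
```

Smaller than that and diffraction blurs the image in the real world (which cycles doesn’t simulate); larger and the geometric blur described above takes over.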
Anyway, it’s hardly a useful way to go about creating images, but it is an interesting experiment, and a great demonstration of what cycles can do.
You can download the blend file to have a go with it yourself from blendswap (CC-Zero).
If you’ve noticed I’ve been updating the site a bit less frequently of late, this (and busy times at Gecko Animation) is pretty much why. Along with the talented and patient folks at No Starch Press, I’ve been writing a book about creating art with Blender and GIMP. It covers everything from modeling to sculpting, through to textures, materials, lighting and rendering.
The book takes you through three different projects: a gruesome bat monster, a robotic spider, and an overgrown temple deep in the jungle. But this isn’t just a simple step-by-step tutorial. Whilst you can use the book that way, I chose each of the projects to provide a unique set of challenges, and I use the projects to help explain how to use GIMP and Blender in your own work. The book is filled with examples from my other works too, as well as detailed descriptions of Blender’s tools, and guides to getting the most out of Blender and GIMP with your own custom UIs, brushes and materials.
Plus, the book comes with a DVD containing all of the project files and resources used in creating each of the projects, along with some extra goodies like brushes, mat-cap materials, textures, and sculpting alphas.
Here’s a more detailed breakdown of what’s covered in the book:
- Introductions to Blender and GIMP for new users.
- Working with reference images and concept art in Blender and GIMP.
- Modeling, from blocking out basic forms, to creating complex meshes.
- Sculpting both organic and hard-surface models.
- Retopology to turn complex sculpts into simple models with good topology.
- Creating hair and fur with Blender’s particle systems.
- Baking textures (Ambient Occlusion, Displacement, Normals, Colours) from models.
- Painting textures using both Blender and GIMP.
- Creating materials for the Blender Internal and Cycles renderers: creating materials for BI with the Properties editor, and building up complex cycles shaders with the Node editor.
- Lighting, again with both Blender Internal and Cycles renderers.
- Rendering and compositing the final scenes, adding post-processing effects with compositing nodes and adding final touch-ups in GIMP.
The book will be published in February/March. You can pre-order it now from Amazon, the Blender.org Store or from the No Starch Press website. If you order from No Starch, you get a free E-Book edition of the book when you purchase the print edition.
It’s been a big project putting the book together, and I hope it’s resulted in something really useful. So if you’ve enjoyed the tutorials on this site I hope you’ll give it a look.
Over at GeckoAnimation, @laxy and I recently got to do some visual effects for the last episode in the latest series of Red Dwarf. We’re both fans of the show and really proud to have been able to contribute to it, and after the premiere last night I can safely say it’s as funny as ever. We were dead chuffed to see that some of our work had made it into the opening titles too, so now we get to see a bit of our stuff in every episode!
Anyway, the series starts tonight at 9pm on Dave, with Episode 6 (the one we worked on) going out sometime in November. Watch it!
I’ve been experimenting more with cycles lately, and one thing I’m really impressed with already is the speed with which it handles instanced objects. I thought I’d share some silly experiments I made in the process and also a tip for using instances.
It’s great fun to use instancing to create fractal-like structures out of repeating objects. The image above – which I call the monkeybulb, after the mandelbulb fractal – is made up of over 9000 suzannes (though even this is small fry compared to Agus3D’s instanced forest). It’s made by repeatedly extruding all the individual faces of a cube, and then smoothing the results. I then create a suzanne object that I parent to the fractal mesh, and turn on Dupli Faces to create repeated instances of suzanne. The image below uses a similar strategy, using a few array modifiers to duplicate a plane.
One thing that’s important to note is that I don’t use an array modifier to duplicate the cubes seen in the image. This would result in a non-instanced version of the scene that would render slower – as the array modifier generates new geometry rather than instancing the same cube over and over. Instead I use array modifiers to duplicate a plane, then apply these modifiers, and parent the cube to the applied mesh, once again using Dupli Faces to handle the duplication of the cube. The difference in render time that this produces is quite something, as demonstrated by the renders below. Because the array modifier doesn’t produce instances, both rendering and creation of the BVH structure are significantly slower than when using Dupli Faces.
Anyway, just thought it was a fun thing to experiment with and also a worthwhile tip to know. So far as I know, both duplifaces/verts and particle systems create genuine duplicates, whilst the array modifier does not. Hope it’s useful and not trivially obvious to all but me!
Edit: Here’s one more fractal-like render. This time an apollonian gasket made up of spheres. I used a development build to get use of the Object Data node, so that I could randomise the colour of each sphere.
I’ve been experimenting with Growl lately, which is a great notifications system for OS X with a whole load of useful controls, including notification by email, as well as a python API for doing notifications. After seeing Jason van Gumster’s render music script, I thought it would be a good project to try and combine the two. The result is a Growl notifier addon for blender that gives you notifications about rendering, which is really useful if you’re setting off an animation to render overnight and want to be kept up to date with how it’s doing, or even if you’re just web browsing whilst you wait for a quick render and want to be notified when it’s done.
I started learning python a bit more seriously late last year, and since then it’s saved my bacon more times than I can count. It’s a terrific tool both inside and outside of blender, for automating tedious batch processes and adding little functions to blender that you just wish were there sometimes. Whilst learning python and the bpy API more fully will take some (well spent) time, you can get to know blender’s python API very easily thanks to the Autocomplete function (Ctrl-Space), which will present you with a list of available options when exploring the API from blender’s python console.
One use of the python console that is very quick to get the hang of is using for loops to change the properties of multiple objects at the same time. To get a list of all the selected objects, you can use bpy.context.selected_objects. Then you can run through the list with a for loop, which will do the same thing to each entry in the list. For example, the following sets the display type of all selected objects to wireframe:
for x in bpy.context.selected_objects: x.draw_type = 'WIRE'
An example that might be useful if you’re working on a complex render setup might be to batch assign object indices to a selection:
for x in bpy.context.selected_objects: x.pass_index = 1
Another useful example is setting the display level for all your objects’ subsurf modifiers to zero. This greatly speeds up your viewport performance when working with a complex scene, and whilst blender lets you do this using the simplification tools in the scene tab, you can’t (to my knowledge) do it just for the viewport (the simplify options affect both the viewport and render time). This time, we use a try/except block to allow blender to skip objects that don’t have subsurf modifiers that need changing:
for x in bpy.context.selected_objects:
    try:
        x.modifiers['Subsurf'].levels = 0
    except KeyError:
        print("No subsurf on this object")
A handy one when working with blender’s camera tracking tools (or anything else that generates a lot of empties) is to use a for loop to change the draw type and size for a whole bunch of empties at once. It’s useful for shrinking them down to keep them out of the way:
for x in bpy.context.selected_objects:
    try:
        x.empty_draw_type = 'CUBE'
        x.empty_draw_size = 0.1
    except KeyError:
        print("This one isn't an empty.")
You can find more options that you can change this way by exploring the API with autocomplete, or if you have a specific property in mind that you want to change you can simply right click it in blender’s UI and select Copy Data Path to copy the last part of the data path to the clipboard. You can either save these little snippets as scripts or simply type them in when you need them - they’re pretty short. If you’re saving them as a script, remember to add the line “import bpy” at the beginning (you don’t need to do this in the console). Anyway, I just wanted to share something in python that might be pretty easy for beginners to grasp, and that I’ve found really useful. Let me know in the comments if you have any good ones of your own!
Here are some models I made over the past year; I put together some turntable renders of them for a little showreel. Most of these were modelled in blender and sculpted in zbrush. The turntable renders were done with v-ray and composited back in blender.
Hmm, maybe it was a power station, maybe a factory or a warehouse. All we know now is it’s falling apart. A project I began a long time ago, which has been gathering dust on my hard drive, so I thought I’d post it up. At some point in the future I might resurrect it in its originally intended animated form, but for now here are some stills.
All the modelling was done in blender 2.49, with some sculpting and texturing in blender 2.5. Most of the texturing was done in GIMP, with a whole bunch of textures from CGTextures.com. Rendering was done in V-Ray, thanks to Andrey Izrantsev’s fantastic blender 2.49 to vray exporter (though the newer 2.5 version is even better). Compositing was done in blender 2.5.
Click images for full size (1920×1080) versions.
Done for Gecko Animation Ltd, which I am now a part of!
A short experimental piece featuring a blend of Macro Photography and CG Animation. It started with an idea from David Parvin, of Two Rivers Partnership, involving abstract forms and macro photography. With the footage shot, Jonathan Lax and myself worked on constructing a narrative from the forms we saw by combining CG elements with the original footage.
The CG elements were all done in blender (though some of the fluid sim stuff was done with RealFlow) and rendered in BI. Lots of animated node materials for the shifting surfaces and displacements.
Director of Photography – David Parvin
Live Action Shoot – David Parvin, Jonathan Lax, Tobin Brett
CG Animation and Effects – Ben Simonds, Jonathan Lax
Editing – Jonathan Lax
Music and Sound FX – Alistair Lax
I’ve been playing around with Alchemy for a while now, and it’s a superbly fun little program for coming up with visual ideas. For those of you who don’t know of it, it’s a 2D drawing application with all sorts of chaotic tools for creating shapes and patterns, from which you can then start picking out shapes to develop ideas. With the recent addition of .xcf import to blender, allowing you to import layered xcf files and automatically set them up for rendering, I started getting some ideas about combining alchemy and blender via GIMP. Here’s a little video showing the process I used to create the images above, plus a hint at how I applied the same method in 3D to create an animated abstract 3D character.
Here are a couple of other results (click the second for an animated walkcycle):
Anyway, let me know if you think of anything cool using this technique.
I created a couple of base meshes recently to give me something to start with when sculpting heads and bodies, and I thought I’d make them available. I’ve also included a GLSL matcap material I created that works quite well for sculpting. You’re free to use them for whatever you want. If you just want the matcap you can download the image directly to use in whatever application you prefer. Here’s a sculpt I made with the bust basemesh:
Download Basemeshes (zipped .blend format, at BlendSwap.com)
Download Basemeshes (zipped .obj format)
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
I got Virgilio Vasconcelos’ new book, Blender 2.5 Animation Cookbook in the post from PACKT publishing the other day, and they asked me to write a review. I’ve been doing a bit of rigging of my own lately, so it came along at a useful time for me; I’ve already picked up a couple of useful tips. It seems like a good guide to rigging so far, with info on most common rigging tasks, and I think enough information for beginners to get their heads around the topic.
Whilst the title suggests that the book centres on animation, it’s really split about 50/50 between rigging and animation, which means that as long as you can model, it covers pretty much all you need to get animating, even if you don’t want to use a ready-made rig like Mancandy or Pantin. The rigging portion takes you through all the common aspects of rigging a biped, and whilst I’d differ on how to implement one or two aspects of some of the rigs, I’m by no means a champion rigger, and it never hurts to know more than one method. The book covers both creating and weight painting a deform rig, and then creating separate control rigs for different control methods (i.e. IK/FK), and keeps the two nice and separate (as they should be!). I think the lack of much discussion on using python for rigging is a bit of an oversight, given how useful knowing even a little bit of python can be when building a rig. Also, a few of the controls that Virgilio outlines, like the isolation controls for the head and shoulders, only switch between on and off, and it isn’t difficult to make controls that smoothly interpolate between the two (go see Nathan’s mammoth rigging tutorials on CMI VFX for how to do that). Overall though, the coverage is pretty solid, and the book takes you through all the common hang-ups, like creating a foot roll rig, stretchy and bendy limbs, rigging eyes, facial rigging with lattices and shapekeys, and creating interfaces for options like IK/FK switching, limb isolation and changing parent spaces. The book is particularly focussed on cartoony rigging, but it’s all applicable to more realistic rigs too.
On the animation side (which I know less about) the book teaches the layered technique, and has a nice progression through creating key poses, then extremes, breakdowns, and finally refining timing and tweaking curves in the graph editor. After demonstrating the basics the book moves on to a few common animation tasks, like having a character interact with props, creating walk cycles and animating speech. There’s also plenty of discussion of the principles of animation, which really is more important than the technical side. Anticipation, moving holds, squash and stretch, and symmetry are all talked about, and there are also some great tips on rendering silhouettes and mirrored previews of your animation to help spot your mistakes. There’s also a little discussion at the very end of the book on using grease pencil to plan and refine your animations. As I’m not much of an animator I can’t speak to any shortcomings the book might have but the book has a nice breadth, and I like the deference it pays to traditional animation too.
All the source .blend files for the book are available through Virgilio’s website, which are a great resource when trying to pick apart how a rig works so you can implement something in your own projects. If you’re interested in buying the book, I’d suggest you go and check them out. Also available is the main character rig “Otto”, used in the book. All in all the book should be a nice reference for anyone looking to start with rigging and animation. It’s an easy book to flip through and find the topic you’re stuck on, and whilst I disagreed with a couple of solutions, the book has an answer for most problems that might stump newbie blender-heads.
Depth of field can be a beautiful effect, adding aesthetic interest to an image and serving a narrative purpose in drawing the viewer’s attention to the subject. But getting DoF right using Blender’s compositing nodes can be tricky, as there are a few things to know about how the defocus node works, and some limitations that you need to work around. This post documents my own investigations, and hopefully should be useful for those new to the topic. It also touches on a few things that I haven’t worked out how to fix yet, so if there’s anyone out there with input I’d love to hear your opinions.
Sculpted in ZBrush and rendered in blender. Just a rough sculpt but I liked the look of him.
Testing out the rig and shapekeys for a face I made.
Some testing with blender’s v-ray exporter and Lee Perry-Smith’s awesome free head model and textures (link). Just trying to match some lighting from a few movies.
I also did some testing with blender internal using a three layer setup (see my tutorial for more on that), and was pretty impressed at the results:
This is a rig I created as a test for the Two Rivers Partnership, a small animation and VFX studio in London I work for. They’ve kindly allowed me to release the rig to the blender community.
You can download the rig here.