(Apologies for the crappy audio.)
A quick mini tutorial on a couple of sculpt brush settings I came up with. They work nicely for refining planes when sculpting both hard and soft surfaces, and can be used to quickly sharpen planes and smooth transitions between surfaces.
I made two versions: one that works as a scrape brush, and one that works as a fill brush. Both are pretty useful. You can download a blend containing the brushes, called Trim Scrape and Trim Fill, from BlendSwap. Alternatively, if you just want to make them yourself, you can see the relevant settings highlighted below. The key to how the brush works is having a nice hard-edged falloff on the brush's curve, and making use of autosmooth. So far I've been messing about with them doing some dynamic topology sculpting and they work pretty nicely. Hope you find them useful!
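If you'd rather set up the basics from the Python console, something like the following creates the two brushes. This is just a sketch: the autosmooth value here is a placeholder (match it to the highlighted settings), and you'll still need to set the hard-edged falloff on the curve by hand in the Tool Shelf.

```python
import bpy

# Create the two sculpt brushes (names match the BlendSwap versions).
for name, tool in (("Trim Scrape", 'SCRAPE'), ("Trim Fill", 'FILL')):
    brush = bpy.data.brushes.new(name=name, mode='SCULPT')
    brush.sculpt_tool = tool
    # Autosmooth is the key setting; 0.5 is a guess, tune to taste.
    brush.auto_smooth_factor = 0.5
```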
You can find Roberto’s blog, featuring loads more awesome sculpting time-lapses and some great resources at ThisRoomThatIKeep.blogspot.co.uk.
Not much to look at is it? But it was an interesting experiment. The idea was to push Cycles' physical realism to the absurd extreme of building a pinhole camera in Blender. A pinhole camera has no lens or aperture; instead the light just passes through a small hole in the front of the camera, and forms an inverted image on the camera's back wall (this is why the image above appears upside-down). The construction of a pinhole camera is very simple – it's a box with a hole in one side – so I figured that because Blender now has a physically accurate ray-tracer in the form of Cycles, it was probably possible to build one that worked in Blender. Here's how mine looked:
The “real” camera – i.e. a Camera object for rendering – is situated inside the pinhole camera, facing the back wall. The scene needed to be lit extremely brightly in order for enough light to find its way through the tiny hole in the camera to illuminate the back wall. The two lamps have intensities of 100,000 for the key light and 20,000 for the fill light. The Cycles preview outside the camera looked something like this:
As you can see from the final render, the results are very noisy. Even more so when you consider that the small, noisy image you see is the result of 100,000 samples. I set the number of bounces for rendering to 3 (i.e. one bounce for direct lighting, one for a small amount of indirect lighting, plus one extra bounce because we are viewing everything on the diffuse surface of the inside of the camera). It was actually really quick to render, as it was only a small image, and a relatively simple one at that, ignoring the rather strange setup. It took about an hour, and the only post-processing I did was to brighten the image a bit.
Whilst the final result isn't that impressive, you can clearly make out Suzanne and the cube and cone. You can also see that the image is slightly blurry. With pinhole cameras there is no depth of field; instead the focus of the whole image is determined by the size of the pinhole – the smaller the hole, the sharper the image. Of course the smaller you make the pinhole, the less light gets in, and so the dimmer the image becomes. This also means that for our virtual pinhole camera we get more noise if we try to bump the image up to the same brightness, so there is a tradeoff between noise and sharpness that we have to take into account.
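For a real pinhole camera there's a sweet spot: shrink the hole too far and diffraction starts blurring the image again. A commonly quoted rule of thumb (due to Rayleigh) for the optimal pinhole diameter d, given the distance f from pinhole to image plane and the wavelength of light λ, is roughly:

```latex
d \approx 1.9\sqrt{f\lambda}
```

Cycles doesn't simulate diffraction though, so in the virtual version only the geometric blur applies – the image just keeps getting sharper (and dimmer, and noisier) as the hole shrinks.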
Anyway, it’s hardly a useful way to go about creating images, but it is an interesting experiment, and a great demonstration of what Cycles can do.
You can download the blend file to have a go with it yourself from BlendSwap (CC-Zero).
Blender Master Class will begin shipping next week, and for a week starting today you can get the book (print or e-book edition) for 40% off at Nostarch.com. Just use the coupon code WILLITBLEND at the checkout.
For more about the book, check out this post.
— Ton Roosendaal (@tonroosendaal) February 15, 2013
You can now buy my book, Blender Master Class, on the Blender.org E-Store. It's roughly the same price as ordering anywhere else, and you'll also be supporting the Blender Foundation.
Just want the blend? Skip to the end.
This is something I made for work recently and have since been playing around with. It’s pretty fun, so I thought I’d share it with you. It started life as a node setup for rendering images or video as a halftone pattern, similar to how images in a newspaper look when viewed close up. It was an interesting challenge as it required mimicking the CMYK colours used in traditional printing. To do this the input image has to be converted to CMYK values, and then further manipulated to create the halftone pattern.
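The node setup itself is too big to reproduce here, but the maths it has to mimic is the standard RGB-to-CMYK conversion, which is worth spelling out. Here it is as a plain Python sketch:

```python
def rgb_to_cmyk(r, g, b):
    """Convert RGB (each 0.0-1.0) to CMYK (each 0.0-1.0)."""
    k = 1.0 - max(r, g, b)      # black is the complement of the brightest channel
    if k == 1.0:                # pure black: avoid dividing by zero
        return (0.0, 0.0, 0.0, 1.0)
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return (c, m, y, k)

# Pure red separates into full magenta and yellow, no cyan, no black:
print(rgb_to_cmyk(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0, 0.0)
```

Each of the four channels then gets turned into its own grid of dots, rotated to a different screen angle, just as in traditional print.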
We’ve been busy updating the Gecko Animation site at work lately, and we decided to get rid of some older content on there. Whilst this tutorial by @Laxy doesn’t really fit on the Gecko site anymore, it’s still a cool resource, so I said I’d put it up here. It’s slightly older, but the principles of animation don’t really change, so it should still be of interest!
If you’ve noticed I’ve been updating the site a bit less frequently of late, this (and busy times at Gecko Animation) is pretty much why. Along with the talented and patient folks at No Starch Press, I’ve been writing a book about creating art with Blender and GIMP. It covers everything from modeling to sculpting, through to textures, materials, lighting and rendering.
The book takes you through three different projects: a gruesome bat monster, a robotic spider, and an overgrown temple deep in the jungle. But this isn’t just a simple step-by-step tutorial. Whilst you can use the book that way, I chose each of the projects to provide a unique set of challenges, and I use the projects to help explain how to use GIMP and Blender in your own projects. The book is filled with examples from my other works too, as well as detailed descriptions of Blender’s tools, and guides to getting the most out of Blender and GIMP with your own custom UIs, brushes and materials.
The book also comes with a DVD containing all of the project files and resources used in creating each project, plus some extra goodies like brushes, mat-cap materials, textures, and sculpting alphas.
Here’s a more detailed breakdown of what’s covered in the book:
- Introductions to Blender and GIMP for new users.
- Working with reference images and concept art in Blender and GIMP.
- Modeling, from blocking out basic forms, to creating complex meshes.
- Sculpting both organic and hard-surface models.
- Retopology to turn complex sculpts into simple models with good topology.
- Creating hair and fur with Blender’s particle systems.
- Baking textures (Ambient Occlusion, Displacement, Normals, Colours) from models.
- Painting textures using both Blender and GIMP.
- Creating materials for the Blender Internal and Cycles renderers: materials for BI with the Properties editor, and building up complex Cycles shaders with the Node editor.
- Lighting, again with both Blender Internal and Cycles renderers.
- Rendering and compositing the final scenes, adding post-processing effects with compositing nodes and adding final touch-ups in GIMP.
The book will be published in February/March. You can pre-order it now from Amazon, the Blender.org Store or from the No Starch Press website. If you order from No Starch, you get a free E-Book edition of the book when you purchase the print edition.
It’s been a big project putting the book together, and I hope it’s resulted in something really useful. So if you’ve enjoyed the tutorials on this site I hope you’ll give it a look.
Over at Gecko Animation, @laxy and I recently got to do some visual effects for the last episode in the latest series of Red Dwarf. We’re both fans of the show and really proud to have been able to contribute to it, and after the premiere last night I can safely say it’s as funny as ever. We were dead chuffed to see that some of our work had made it into the opening titles too, so now we get to see a bit of our stuff in every episode!
Anyway, the series starts tonight at 9pm on Dave, with Episode 6 (the one we worked on) going out sometime in November. Watch it!
I’ve been experimenting more with Cycles lately, and one thing I’m really impressed with already is the speed with which it handles instanced objects. I thought I’d share some silly experiments I made in the process, and also a tip for using instances.
It’s great fun to use instancing to create fractal-like structures out of repeating objects. The image above, which I call the monkeybulb (after the mandelbulb fractal), is made up of over 9,000 Suzannes (though even this is small fry compared to Agus3D’s instanced forest). It’s made by repeatedly extruding all the individual faces of a cube, and then smoothing the results. I then create a Suzanne object that I parent to the fractal mesh, and turn on Dupli Faces to create repeated instances of Suzanne. The image below uses a similar strategy, using a few array modifiers to duplicate a plane.
One thing that’s important to note is that I don’t use an array modifier to duplicate the cubes seen in the image. This would result in a non-instanced version of the scene that would render slower – as the array modifier generates new geometry rather than instancing the same cube over and over. Instead I use array modifiers to duplicate a plane, then apply these modifiers, and parent the cube to the applied mesh, once again using Dupli Faces to handle the duplication of the cube. The difference in render time that this produces is quite something, as demonstrated by the renders below. Because the array modifier doesn’t produce instances, both rendering and creation of the BVH structure are significantly slower than when using Dupli Faces.
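That workflow – apply the array modifiers on the plane, then let Dupli Faces handle the instancing – can also be scripted. A rough sketch (object names here are placeholders, and it assumes you're in object mode):

```python
import bpy

plane = bpy.data.objects['Plane']  # the arrayed plane (placeholder name)
cube = bpy.data.objects['Cube']    # the object to instance on its faces

# Apply all array modifiers so the plane becomes real geometry.
bpy.context.scene.objects.active = plane
for mod in list(plane.modifiers):
    if mod.type == 'ARRAY':
        bpy.ops.object.modifier_apply(modifier=mod.name)

# Parent the cube to the plane and instance it on every face.
cube.parent = plane
plane.dupli_type = 'FACES'
```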
Anyway, just thought it was a fun thing to experiment with and also a worthwhile tip to know. So far as I know, both Dupli Faces/Verts and particle systems create genuine instances, whilst the array modifier does not. Hope it’s useful and not trivially obvious to all but me!
Edit: Here’s one more fractal-like render, this time an Apollonian gasket made up of spheres. I used a development build to make use of the Object Info node, so that I could randomise the colour of each sphere.
I’ve been experimenting with Growl lately, which is a great notifications system for OS X with a whole load of useful controls, including notification by email, as well as a Python API for sending notifications. After seeing Jason van Gumster’s render music script, I thought it would be a good project to try and combine the two. The result is a Growl notifier addon for Blender that gives you notifications about rendering, which is really useful to have if you’re setting off an animation to render overnight and want to be kept up to date with how it’s doing, or even if you’re just web browsing whilst you wait for a quick render and want to be notified when it’s done.
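I won't reproduce the whole addon here, but the basic mechanism is easy to sketch: Blender's render_complete handler fires when a render finishes, and the gntp package talks to Growl. A rough outline (not the addon itself – it assumes you have the gntp package installed and Growl running):

```python
import bpy
import gntp.notifier

# Register the application and its notification type with Growl.
growl = gntp.notifier.GrowlNotifier(
    applicationName="Blender",
    notifications=["Render"],
)
growl.register()

def notify_render_done(scene):
    # Called by Blender once the render (or animation) has finished.
    growl.notify(
        noteType="Render",
        title="Blender",
        description="Render of %s finished." % scene.name,
    )

bpy.app.handlers.render_complete.append(notify_render_done)
```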
I started learning Python a bit more seriously late last year, and since then it’s saved my bacon more times than I can count. It’s a terrific tool both inside and outside of Blender, for automating tedious batch processes and adding little functions to Blender that you just wish were there sometimes. Whilst learning Python and the bpy API more fully will take some (well spent) time, you can get to know Blender’s Python API very easily thanks to the autocomplete function (Ctrl-Space), which presents you with a list of available options when exploring the API from Blender’s Python console.
One use of the Python console that is very quick to get the hang of is using for loops to change the properties of multiple objects at the same time. To get a list of all the selected objects, you can use bpy.context.selected_objects. Then you can run through the list with a for loop, which will do the same thing to each entry in the list. For example, the following sets the draw type of all the selected objects to wireframe:
for x in bpy.context.selected_objects: x.draw_type = 'WIRE'
An example that might be useful if you’re working on a complex render setup might be to batch assign object indices to a selection:
for x in bpy.context.selected_objects: x.pass_index = 1
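A handy variation: combine the loop with enumerate to give each selected object its own unique index, rather than assigning the same one to all of them:

```python
# Give each selected object a unique pass index, starting from 1
# (0 is Blender's default "no index" value).
for i, x in enumerate(bpy.context.selected_objects):
    x.pass_index = i + 1
```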
Another useful example is setting the display level for all your objects' subsurf modifiers to zero. This greatly speeds up your viewport performance when working with a complex scene, and whilst Blender lets you do this using the simplification tools in the Scene tab, you can’t (to my knowledge) do it just for objects in the 3D Viewport (the simplify options affect both the viewport and render time). This time, we use try/except so that Blender skips objects that don’t have a subsurf modifier to change:
for x in bpy.context.selected_objects:
    try:
        x.modifiers['Subsurf'].levels = 0
    except KeyError:
        print("No subsurf on this object")
A handy one when working with Blender’s camera tracking tools (or something else that generates a lot of empties) is to use a for loop to change the draw type and size for a whole bunch of empties. It’s handy for shrinking them down to keep them out of the way:
for x in bpy.context.selected_objects:
    try:
        x.empty_draw_type = 'CUBE'
        x.empty_draw_size = 0.1
    except KeyError:
        print("This one isn't an empty.")
You can find more options that you can change this way by exploring the API with autocomplete, or if you have a specific property in mind that you want to change you can simply right-click it in Blender's UI and select Copy Data Path to copy the last part of the data path to the clipboard. You can either save these little snippets as scripts or simply type them in when you need them - they're pretty short. If you're saving them as a script, remember to add the line "import bpy" at the beginning of the script (you don't need to do this in the console). Anyway, I just wanted to share something in Python that might be pretty easy for beginners to grasp, and that I've found really useful. Let me know in the comments if you have any good ones of your own!
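For example, the wireframe snippet from earlier saved as a script in the Text editor (run it with Alt-P) just needs that import line added:

```python
import bpy

# Set the draw type of all selected objects to wireframe.
for x in bpy.context.selected_objects:
    x.draw_type = 'WIRE'
```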
My friend and partner in crime at Gecko Animation, Jonathan Lax, recently put together some breakdowns of the shots we did for our short film Assembly: Life in Macrospace, which won Best Designed short film at the Suzanne Awards last year. Have a watch if you’re interested in how it was made.
All of the CG shots were done in Blender, though some of the fluid sim was done with RealFlow and then imported into Blender as .obj files using a little Python script I wrote. The rendering was all done in Blender Internal, and post-processing with a mix of Blender and After Effects. We mainly used AE for the depth of field, as Blender’s defocus node is kind of slow and has some issues with foreground blur, whereas the Frischluft lens blur plugin for AE is pretty darn fantastic. The grading and other effects were all done with Blender.
If you haven’t seen the original itself you can find it here.
Here are some models I made over the past year; I put together some turntable renders for a little showreel. Most of these are modelled in Blender and sculpted in ZBrush. The turntable renders were done with V-Ray and composited back in Blender.
Hmm, maybe it was a power station, maybe a factory or a warehouse. All we know now is it’s falling apart. A project I began a long time ago, which has been gathering dust on my hard drive, so I thought I’d post it up. At some point in the future I might resurrect it in its originally intended animated form, but for now here are some stills.
All the modelling was done in Blender 2.49, with some sculpting and texturing in Blender 2.5. Most of the texturing was done in GIMP, with a whole bunch of textures from CGTextures.com. Rendering was done in V-Ray, thanks to Andrey Izrantsev’s fantastic Blender 2.49 to V-Ray exporter (though the newer 2.5 version is even better). Compositing was done in Blender 2.5.
Click images for full size (1920×1080) versions.
Done for Gecko Animation Ltd, which I am now a part of!
A short experimental piece featuring a blend of Macro Photography and CG Animation. It started with an idea from David Parvin, of Two Rivers Partnership, involving abstract forms and macro photography. With the footage shot, Jonathan Lax and myself worked on constructing a narrative from the forms we saw by combining CG elements with the original footage.
The CG elements were all done in Blender (though some of the fluid sim stuff was done with RealFlow) and rendered in BI. Lots of animated node materials for the shifting surfaces and displacements.
Director of Photography – David Parvin
Live Action Shoot – David Parvin, Jonathan Lax, Tobin Brett
CG Animation and Effects – Ben Simonds, Jonathan Lax
Editing – Jonathan Lax
Music and Sound FX – Alistair Lax
I’ve been playing around with Alchemy for a while now, and it’s a superbly fun little program for coming up with visual ideas. For those of you who don’t know of it, it’s a 2D drawing application with all sorts of chaotic tools for creating shapes and patterns, from which you can then start picking out shapes to develop ideas. With the recent addition of .xcf import to Blender, allowing you to import layered xcf files and automatically set them up for rendering, I started getting some ideas about combining Alchemy and Blender via GIMP. Here’s a little video showing the process I used to create the images above, plus a hint at how I applied the same method in 3D to create an animated abstract 3D character.
Here are a couple of other results (click the second for an animated walkcycle):
Anyway, let me know if you think of anything cool using this technique.
I created a couple of base meshes recently to give me something to start with when sculpting heads and bodies, and I thought I’d make them available. I’ve also included a GLSL matcap material I created that works quite well for sculpting. You’re free to use them for whatever you want. If you just want the matcap, you can download the image directly to use in whatever application you prefer. Here’s a sculpt I made with the bust basemesh:
Download Basemeshes (zipped .blend format, at BlendSwap.com)
Download Basemeshes (zipped .obj format)
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
I’ve been working on some character rigs recently, and trying to learn and use more Python to speed up the process and improve my rigs. One of my favourite uses that I’ve come across so far is using a “for” loop to do batch operations on bones, which is super handy when building complex rigs, as you can do batch renaming and changing of options on a whole bunch of bones at once rather than having to do things one bone at a time by hand. I thought I’d share some of my favourite Python snippets that have come in handy so far.
When in edit mode, select the bones you want to change and put the following in a new text block, comment out any operations you don't want to perform, and hit Alt-P to run the script:
import bpy

bones_list = bpy.data.armatures['RIG NAME'].bones
bones_selected = bpy.context.selected_bones
##Some tools for renaming bones and removing trailing numbers once you’ve renamed them. Handy when duplicating parts of your deform rig to create your control rigs. Switch bones_selected for bones_list to perform an operation on every bone in your rig.##
for item in bones_selected:
    item.name = item.name.replace("DEF-", "CON-")
    item.name = item.name.replace(".001", "")
    item.name = item.name.replace(".002", "")
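The pairs of replace calls above only catch .001 and .002; a regular expression strips any trailing number Blender appends when duplicating. Here's the idea in plain Python (the bone names are just examples):

```python
import re

def strip_suffix(name):
    """Remove a trailing Blender duplicate suffix like '.001' or '.027'."""
    return re.sub(r"\.\d+$", "", name)

print(strip_suffix("CON-spine.001"))  # CON-spine
print(strip_suffix("CON-spine"))      # unchanged: CON-spine
```

In the rig script you'd use it as item.name = strip_suffix(item.name) inside the loop.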
##Some tools for changing bone properties. use_deform turns off the bone's "deform" option; always do this for bones you don't want to directly deform your mesh (control/helper bones). show_wire makes bone shapes visible in solid view mode, even if they only have edges.##
for item in bones_selected:
    item.use_deform = False
    item.show_wire = True
##Here are a couple of useful ones to try in pose mode. rotation_mode lets you set whether your bones use Euler or quaternion rotations, and custom_shape lets you set the bone shape used for all the bones you have selected.##
bones_selected_pose = bpy.context.selected_pose_bones
for item in bones_selected_pose:
    item.rotation_mode = 'XYZ'
    item.custom_shape = bpy.data.objects['NAME OF BONE SHAPE OBJECT']
I got Virgilio Vasconcelos’ new book, Blender 2.5 Animation Cookbook, in the post from PACKT Publishing the other day, and they asked me to write a review. I’ve been doing a bit of rigging of my own lately, so it came along at a good time for me; I’ve already picked up a couple of useful tips. It seems like a good guide to rigging so far, with info on most common rigging tasks, and I think enough information for beginners to get their heads around the topic.
Whilst the title suggests that the book centres on animation, it’s really split about 50/50 between rigging and animation, which means that as long as you can model, it covers pretty much everything you need to get animating, even if you don’t want to use a ready-made rig like Mancandy or Pantin. The rigging portion takes you through all the common aspects of rigging a biped, and whilst I’d differ on how to implement one or two aspects of some of the rigs, I’m by no means a champion rigger, and it never hurts to know more than one method. The book covers both creating and weight painting a deform rig, and then creating separate control rigs for different control methods (i.e. IK/FK), and keeps the two nice and separate (as they should be!). I think the lack of much discussion on using Python for rigging is a bit of an oversight, given how useful knowing even a little Python can be when building a rig. Also, a few of the controls that Virgilio outlines, like the isolation controls for the head and shoulders, only switch between on and off, and it isn’t difficult to make controls that smoothly interpolate between the two (see Nathan’s mammoth rigging tutorials on cmiVFX for how to do that). Overall though, the coverage is pretty solid, and the book takes you through all the common hang-ups, like creating a foot-roll rig, stretchy and bendy limbs, rigging eyes, facial rigging with lattices and shape keys, and creating interfaces for options like IK/FK switching, limb isolation and changing parent spaces. The book is particularly focussed on cartoony rigging, but it’s all applicable to more realistic rigs too.
On the animation side (which I know less about) the book teaches the layered technique, and has a nice progression through creating key poses, then extremes, breakdowns, and finally refining timing and tweaking curves in the Graph editor. After demonstrating the basics the book moves on to a few common animation tasks, like having a character interact with props, creating walk cycles and animating speech. There’s also plenty of discussion of the principles of animation, which really is more important than the technical side. Anticipation, moving holds, squash and stretch, and symmetry are all talked about, and there are also some great tips on rendering silhouettes and mirrored previews of your animation to help spot mistakes. There’s also a little discussion at the very end of the book on using the grease pencil to plan and refine your animations. As I’m not much of an animator I can’t speak to any shortcomings the book might have, but it has a nice breadth, and I like the deference it pays to traditional animation too.
All the source .blend files for the book are available through Virgilio’s website; they’re a great resource when trying to pick apart how a rig works so you can implement something similar in your own projects. If you’re interested in buying the book, I’d suggest you go and check them out. Also available is the main character rig, “Otto”, used in the book. All in all, the book should be a nice reference for anyone looking to start with rigging and animation. It’s an easy book to flip through and find the topic you’re stuck on, and whilst I disagreed with a couple of solutions, the book has an answer for most problems that might stump newbie blender-heads.
I’ve started doing some anatomy practice based on the idea outlined at the Art of Anatomy forums, to try and improve my anatomy knowledge. The aim is to study each part of the body piece by piece, and I’ve started with the feet.
Depth of field can be a beautiful effect, adding aesthetic interest to an image and serving a narrative purpose in drawing the viewer’s attention to the subject. But getting DoF right using Blender’s compositing nodes can be tricky, as there are a few things to understand about how the defocus node works, and some limitations you need to be aware of. This post documents my own investigations, and hopefully should be useful for those new to the topic. It also touches on a few things that I haven’t worked out how to fix yet, so if there’s anyone out there with input I’d love to know your opinions.