I had never seen such a good example of how much the ears can move up and down when the face stretches. We are talking about nearly 1 cm here!!!
2018.10.01 v02.01: section 17 added, on colour blindness.
This is an updated version of a post I wrote 5 years ago, and one I think is still very relevant after spending 5 more years in the industry.
If you want to become a rigger/character TD, or even if you already are one, you might be interested in the following notes. They will help you make animators’ lives easier, allowing them to pull off better shots in the same amount of time or even quicker, which will, in return, make the director and producers happy.
The same way it is important for a modeler to understand part of the rigging process, a rigger should also get a grasp of the animation workflow.
So how can you become a rigger that animators are happy to work with?
– Start with a good attitude, ask for feedback from the animators and embrace changes, be willing to discard that crazy automated function you thought would be really cool but animators find unusable when doing actual animation work. As Eamonn Butler put it when we visited Double Negative London in 2006, character TDs work for the animators, not the other way round.
– Educate yourself and stay informed: read rigging books and DVDs, read blogs! Rigging Dojo is one of my favorite blogs; they keep posting remarkable interviews. Participate in public forums where you can interact with other professionals, and check out other rigs! Currently I am watching Josh Sobel’s tutorials, a set in which he explains how he made the adorable Kayla rig. Watch showreels, even student showreels! There have been instances where interns or fresh graduates we hired were more knowledgeable than staff who had been at the company for years.
Personally, I learnt rigging through several DVDs, and these should be compulsory for anyone wanting to become a good character TD, so here they are:
Jason Schleifer’s “Maya Fast Animation Rigs”. The DVD is well over 5 years old and only 1h20, but it is still a must for anyone who wants to be liked by their peers. Interestingly enough, I just checked and Jason now offers his educational material on a donation basis, so you can pay what you think is fair.
Jason Schleifer’s “Animator Friendly Rigging”. That set will take a few hours to go through but covers the most important rigging concepts.
The Fahrenheit DVDs. I actually started with these, which was really hardcore, but it was a very good learning experience. I am not sure they are still available.
Jason Osipa’s “Stop Staring: Facial Modeling and Animation Done Right”. A new edition just came out and I should probably get that one too. Jason goes into facial rigging at length, and you should definitely spend some time understanding what he is talking about.
Have you ordered them all? Okay, so while you wait for them, I want to mention a few things that irritate seasoned animators when using a new rig:
1. Slow rigs. Rig speed is the hot topic of 2015. In a talk given at Siggraph this year, Disney said the following: “Rig speed is of paramount importance to animation pipelines. Real-time performance provides immediate feedback to artists thereby increasing the number of possible iterations and ultimately leading to higher quality animation.”
Dreamworks and Pixar had already understood this, and real-time performance was one of the goals they achieved with last year’s releases of Premo and Presto, their respective animation software. In their footsteps, Autodesk finally unlocked the full potential of modern computers with the Parallel Rig Evaluation toolset in Maya 2016. Maya users can now troubleshoot slowdowns and calculate mesh deformation and some deformers using all the CPU AND GPU cores present on a system, dramatically speeding up the framerate in Maya’s viewport. Make sure you read this previous article I wrote: http://www.olivier-ladeuix.com/blog/2015/11/26/maya-monday-maya-2016-parallel-rig-evaluation/
– If you are stuck with Maya 2015 or below, the old-school method of speeding up rigs involves replacing all expressions with nodes and creating low-poly or proxy versions for the animators to work with. When working on body mechanics, animators don’t need the facial blend shapes or corrective shapes to be active, and when working on facial performance, body corrective blend shapes don’t need to be active either; just a proxy version of the body gives an idea of the overall motion.
Disney Feature artist Sergi Caballer seems to have a nice feature where the animators can quickly isolate part of the body to speed up the rig. (read this great interview with Sergi on the Rigging Dojo website)
2. Rotation orders.
This is probably my biggest pet peeve. When creating your rig, pay special attention to rotation orders and use smart rotation orders for the limbs. By default Maya uses the XYZ order, meaning that Z carries all the other axes, and you want to stay away from this as it is often not the desirable order. For the head, for example, it means that as soon as the animator rotates the head in Y, the X and Z axes start to line up, causing a gimbal lock and making it difficult to work in the graph editor. By using ZXY or XZY instead, the lateral rotation of the head will never line up with the forward rotation. Jason Schleifer spends a chapter on rotation orders in his DVDs, so please refer to it.
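To see why the default order bites, here is a small self-contained sketch (plain Python, no Maya required) composing Euler rotations in XYZ order: once the middle axis hits 90 degrees, the X and Z curves become redundant and a whole degree of freedom disappears.

```python
import math

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_xyz(x, y, z):
    """Compose an XYZ-order rotation: X innermost, Z carrying the others."""
    return matmul(rz(z), matmul(ry(y), rx(x)))

def close(m1, m2, tol=1e-9):
    return all(abs(m1[i][j] - m2[i][j]) < tol
               for i in range(3) for j in range(3))

d = math.radians
# With the middle axis (Y) at 90 degrees, X and Z collapse into one axis:
# only the difference between them matters, so a degree of freedom is gone.
locked_a = euler_xyz(d(20), d(90), d(0))
locked_b = euler_xyz(d(30), d(90), d(10))  # same X minus Z difference
print(close(locked_a, locked_b))  # True: gimbal lock

# Away from 90 degrees the same two poses are genuinely different:
free_a = euler_xyz(d(20), d(45), d(0))
free_b = euler_xyz(d(30), d(45), d(10))
print(close(free_a, free_b))  # False
```

This is exactly why ZXY or XZY works better for a head: the axis most likely to reach 90 degrees should not sit in the middle of the rotation order.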
3. Consistent translation direction. (thanks Richie)
Animators need to be able to animate in the graph editor to troubleshoot problems or translate a character without using the global control so make sure your translations are consistent. If Y means up for the IK foot, it should also mean up for the hips and IK hands, same for X and Z.
4. GUIs. As I edit this paragraph 5 years later, it looks like Dreamworks agrees with me on this one, so go and read: “Premo, the Dreamworks animation software“.
Personally I don’t like GUIs. Maya’s viewport is already so cluttered that I can’t see the point of having to keep looking sideways at an additional window covering, at best, a quarter of my screen, when controls could just be where they belong: on the surface of the geometry, toggled with a hotkey. Most of the time another quarter of the screen will be covered by the pose library, leaving the user with hardly enough space for the camera view, perspective view and graph editor.
I tried an interesting concept a while back with a rig where, instead of picking the usual NURBS controls, you select an invisible polygon encompassing the surface of the limb. Keith Lango posted a similar setup a while ago using ZooTrigger; myself, I just parent-constrain the NURBS control or the joint to an invisible proxy box instead.
This could prove complex when selecting facial controls though, so, fast forward to 2015, I just realised that NURBS controls AND geometry can easily be toggled through a script, so animators don’t need GUIs as often, as long as the controls are streamlined and don’t overlap each other too much, obviously (see point 10). I have perfected a few toggle scripts and will share them with you in a few days.
5. IK/FK switches with no pop.
It is obvious that animators need IK and FK for legs and arms, but you should also consider a script that allows animators to switch very easily from IK to FK and back without going through the T pose. Some people call this IK snap or IK match. A script was provided with the good old Norman rig to do that kind of thing.
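To illustrate what an IK/FK matching script computes, here is a minimal sketch on a hypothetical two-bone planar arm (pure Python, not any particular rig’s script): the FK pose gives a wrist position, and a law-of-cosines IK solve recovers angles that land on the exact same pose, so the switch doesn’t pop.

```python
import math

L1, L2 = 3.0, 2.0  # hypothetical upper-arm and forearm lengths

def fk(shoulder, elbow):
    """Wrist position of a 2-bone planar chain from FK angles (radians)."""
    ex = L1 * math.cos(shoulder)
    ey = L1 * math.sin(shoulder)
    wx = ex + L2 * math.cos(shoulder + elbow)
    wy = ey + L2 * math.sin(shoulder + elbow)
    return wx, wy

def ik(wx, wy):
    """Solve shoulder/elbow angles reaching the wrist (law of cosines)."""
    d2 = wx * wx + wy * wy
    # Clamp for numerical safety when the target sits on the reach limit.
    cos_elbow = max(-1.0, min(1.0, (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)))
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(wy, wx) - math.atan2(L2 * math.sin(elbow),
                                               L1 + L2 * math.cos(elbow))
    return shoulder, elbow

# The animator poses the arm in FK...
fk_pose = (math.radians(40), math.radians(30))
wrist = fk(*fk_pose)

# ...then switches to IK: the solver recovers matching angles, so no pop.
ik_pose = ik(*wrist)
print(all(abs(a - b) < 1e-9 for a, b in zip(fk_pose, ik_pose)))  # True
```

Going the other way (IK to FK) is even simpler: copy the solved joint rotations onto the FK controls before toggling the blend attribute.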
6. Elbow and knee lock. Animators should be able to lock an elbow to a table.
7. Scaling. Rigs should be scalable! Numerous times I have run into situations where we needed to scale the rigs and the rigs didn’t allow for it.
8. World orient/local orient switches for the head, spine, arms… Animators should be able to switch the alignment of the FK head, neck and arms between the world and their default parent, and ideally a tool should allow switching the parent space on the fly.
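The maths behind a pop-free space switch is simple: when the parent changes, recompute the control’s local transform under the new parent so its world transform stays put. A minimal 2D sketch, using complex numbers as stand-ins for rigid transforms (a real tool would do the same with 4x4 matrices):

```python
# A 2D rigid transform as (rotation, translation) using complex numbers:
# applying T to a point p gives T.rot * p + T.pos.
import cmath, math

def compose(parent, child):
    """world = parent applied after child."""
    pr, pt = parent
    cr, ct = child
    return (pr * cr, pr * ct + pt)

def inverse(t):
    r, p = t
    return (1 / r, -(p / r))

def apply(t, point):
    r, p = t
    return r * point + p

rot = lambda deg: cmath.exp(1j * math.radians(deg))

# Hypothetical setup: a head control currently parented under the neck.
neck_world = (rot(30), 1 + 2j)
head_local = (rot(10), 0 + 1j)
head_world = compose(neck_world, head_local)

# Switch the head to world space: recompute the local transform under the
# new parent (identity here) so the world pose does not change.
new_parent_world = (rot(0), 0 + 0j)
new_local = compose(inverse(new_parent_world), head_world)

p = 0.5 + 0.25j  # an arbitrary point rigidly attached to the head
before = apply(head_world, p)
after = apply(compose(new_parent_world, new_local), p)
print(abs(before - after) < 1e-12)  # True: no pop when switching spaces
```

The key line is `new_local = inverse(new_parent) * old_world`: every on-the-fly parent-space tool boils down to that one product, evaluated on the frame where the switch happens.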
9. Bendy bows and noodle arms. I like this a lot as it helps make the rig look more organic, or more cartoony in the case of noodle arms, as seen below.
10. Cluttered viewports. NURBS controls don’t need to cover the entire model: make them clean and simple, and if possible make them invisible (cf. GUIs). A simple circle is more than enough for selecting a control; there is no need for an entire box shape, especially for fingers or arms, which will inevitably start overlapping and make it impossible to select anything.
As a side note, when using circles for the FK spine, try to add an indentation at the front and back of the circle, just to be able to quickly assess the offset of the spine controls. I normally take the centre vertex of the NURBS circle, shift it up a bit, then adjust the bezier handles as below.
11. Inconsistency in the controls’ boolean channels. In boolean attributes, 0 should mean “No” and 1 should mean “Yes”, i.e. Shoulder Parent 0 should mean that the IK hand is not parented to the shoulder. You could actually do the opposite, as long as it is consistent throughout the rig.
12. Set me free. Animators are artists: they might want to break an elbow, pull fingers to create smear frames, or pull the lips much higher than reality would suggest in order to stylise the motion. Don’t restrain animators to what is anatomically possible, unless you want them to come up with really stiff animation.
13. Pole vector switches. Pole vectors should be centred right in front of the knees or elbows, not closer to one joint or the other, and they should have a world switch so they don’t follow the orientation of the foot or IK hand at all times.
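A common way to place that pole vector (a generic sketch, not any particular studio’s tool) is to project the knee onto the hip-ankle line and push out from that projection through the knee, which centres the pole on the chain by construction:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return scale(v, 1 / length)

def pole_vector_position(hip, knee, ankle, distance=2.0):
    """Place the pole vector on the knee's bend plane, centred on the chain.

    Project the knee onto the hip-ankle line, then push out from that
    projection through the knee, so the pole sits straight in front of the
    knee rather than biased towards either end joint.
    """
    line = sub(ankle, hip)
    t = dot(sub(knee, hip), line) / dot(line, line)
    projection = add(hip, scale(line, t))
    return add(knee, scale(normalize(sub(knee, projection)), distance))

# Hypothetical leg: hip above ankle, knee bent forward in +Z.
hip, knee, ankle = (0.0, 10.0, 0.0), (0.0, 5.0, 1.0), (0.0, 0.0, 0.0)
print(pole_vector_position(hip, knee, ankle))  # → (0.0, 5.0, 3.0)
```

Placing the pole this way on rig build (or in the IK/FK match script) means the knee never swims when the control is first constrained.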
14. Automated overlap/bakeable simulation. Automated overlap on secondary elements can add so much life to a rig without the animator having to do much work. Think about little appendages like ears, tails, ponytails, bits of clothing, or the foot straps in the following Disney demo. Animators should ultimately be able to make changes to the automated result, however, which is why the joint animation should be bakeable.
On “Messy Goes to Okido”, I would first pose Messy’s tail in its trademark curled pose, then run a script to simulate the overlap, and finally create a motion blending the two. This gave a lifelike feel that was still designed.
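The idea behind that kind of bakeable overlap can be sketched in a few lines (this is the general technique, not the production script): each frame the follower eases towards the lead value, producing a lagged, damped copy of the curve that can be baked to keys and then edited by hand.

```python
def simulate_overlap(lead_curve, follow=0.2):
    """Bake a delayed, damped copy of an animation curve.

    Each frame the follower eases towards the lead value by `follow`
    (0 < follow <= 1): smaller values lag more. The result is one value
    per frame, ready to be set as keyframes the animator can edit.
    """
    value = lead_curve[0]
    baked = []
    for lead in lead_curve:
        value += (lead - value) * follow
        baked.append(value)
    return baked

# A lead control snapping from 0 to 90 degrees; the follower drags behind
# and settles, which reads as overlap/follow-through on a tail or ear.
lead = [0, 0, 0, 90, 90, 90, 90, 90]
tail = simulate_overlap(lead, follow=0.5)
print([round(v, 1) for v in tail])
# → [0.0, 0.0, 0.0, 45.0, 67.5, 78.8, 84.4, 87.2]
```

Running the same pass per joint down a chain, each following its parent, gives the classic wave-through-the-tail look.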
15. Autorigs/programming. In production nowadays there isn’t much room for a TD to rig all the characters from scratch, by hand, which is why I don’t consider myself a real character TD. Instead, you might want to start automating the way you create rigs using MEL or Python, since that’s what the cool kids use these days.
When I worked at EA I had to work on several prototype games, and autorigs are what gave us the ultra-quick turnaround required. Rigging and skinning were so fast that we were able to start animating the same day instead of relying on weekly turnarounds. I have been a long-time advocate of abAutoRig, so click the link below to see how autorigs work.
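To give a flavour of what autorig code automates, here is a tiny hypothetical sketch (the names and template are made up) that expands a one-sided limb template into fully named left/right joint chains, exactly the kind of repetitive, error-prone work worth scripting first:

```python
# Hypothetical minimal autorig step: expand a one-sided limb template into
# left/right joint chains with a consistent naming convention. A real
# autorig would then create the joints, controls and skinning from this.
LIMB_TEMPLATE = {
    "arm": ["clavicle", "shoulder", "elbow", "wrist"],
    "leg": ["hip", "knee", "ankle", "ball"],
}

def build_joint_names(template, sides=("L", "R")):
    """Return (joint_name, parent_name) pairs for every limb and side."""
    joints = []
    for limb, chain in template.items():
        for side in sides:
            parent = None  # limb roots get parented later, under the spine
            for bone in chain:
                name = f"{side}_{limb}_{bone}_jnt"
                joints.append((name, parent))
                parent = name
    return joints

for name, parent in build_joint_names(LIMB_TEMPLATE):
    print(f"{name} <- {parent}")
```

Once the naming and hierarchy come from data like this, mirroring, renaming a show’s convention, or adding a spare joint is a one-line template change instead of an afternoon of clicking.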
16. Pose and animation libraries. In production it is paramount for rigs to be compatible with standard pose and animation library scripts like Kurt Rathjen’s Studio Library, Lionel Gallat’s PoseLib, or the very user-friendly Sal Pose Manager by Salwan Badra for 3dsmax. Pose libraries allow animators to work with extremely complex rigs and stay “on model”, which is very important when hiring junior staff and a time saver for senior animators. Believe it or not, I have worked on TV productions where the rigs weren’t compatible with those scripts, and animators had to waste an incredible amount of time recreating poses or animation from scratch; often this meant they didn’t bother.
At AnimSquad, which is often referred to as the Disney animation school since most teachers are Disney Animation Studios supervisors, we had a pose library that separated poses by whether the character’s head was facing the camera, looking screen right or looking screen left, as you normally use every single tertiary control to cheat the pose so it looks appealing from every angle.
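Under the hood, a pose library mostly stores control/attribute/value triples and writes them back, optionally blended. A minimal stand-alone sketch (a plain dict stands in for the scene here; real tools like Studio Library query the rig through the Maya API):

```python
import json

def capture_pose(rig, controls=None):
    """Snapshot attribute values from a rig (a dict stands in for the scene)."""
    controls = controls if controls is not None else rig.keys()
    return {ctrl: dict(rig[ctrl]) for ctrl in controls}

def apply_pose(rig, pose, blend=1.0):
    """Apply a stored pose, optionally blending towards it."""
    for ctrl, attrs in pose.items():
        for attr, target in attrs.items():
            current = rig[ctrl][attr]
            rig[ctrl][attr] = current + (target - current) * blend

rig = {"head_ctrl": {"rx": 0.0, "ry": 0.0}, "jaw_ctrl": {"rz": 0.0}}
neutral = capture_pose(rig)                     # save the default pose

smile = {"head_ctrl": {"rx": 10.0, "ry": -5.0}, "jaw_ctrl": {"rz": 15.0}}
apply_pose(rig, smile, blend=0.5)               # blend half-way to the pose
print(rig["jaw_ctrl"]["rz"])                    # 7.5

apply_pose(rig, neutral)                        # snap back to the default
print(rig == neutral)                           # True

print(json.loads(json.dumps(smile)) == smile)   # poses serialise to JSON
```

This is also why rig compatibility matters: the library can only capture and re-apply what it can enumerate, so controls hidden behind odd naming or locked channels simply fall out of the pose.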
17. Controller colours and colour blindness. 1 in 10 men and 1 in 200 women are affected by colour blindness. For most, green is a difficult colour to see, so refrain from using red/green or green/yellow combinations. Instead, favour widely contrasting colours like red and blue. I know it looks pretty ugly in terms of colour harmony, but a rig’s primary goal is to be efficient, not pretty. If you don’t know whether you are colour blind, test your sight here; personally I am moderately colour blind and can’t see half of the plates: http://www.color-blindness.com/ishihara-38-plates-cvd-test/#prettyPhoto
Alright, that’s it for now and as a treat for making it so far, check out some really cool rigging showreels and “making-of” videos to get an additional creative boost:
Pixar animation software part 1
Pixar animation software part 2
Premo, the Dreamworks animation software
Rob and Ron by Tumblehead
World orient Head and shoulder
Norman World Align spine
Looney tunes shorts online
Have fun rigging!
3dsmax TDs have become increasingly envious since the release of Maya 2016 and the Parallel Rig Evaluation toolkit, which allows Maya TDs to finally use all the CPU and GPU cores in parallel to speed up the viewport display, but until now I hadn’t seen as thorough a demonstration as the following one.
Have a look, it is really interesting for both TDs and animators as they give some really good tips on how to speed up rigs for animators. Also have a look at the related Rigging Dojo article, which is pretty funny:
and here is a new article from Autodesk
I can’t wait to use Maya 2016 in production!
I was watching this today and couldn’t resist highlighting the following. I could watch this on loop, so I made it a loop! ;-)
More seriously, I am depicting the situation at Disney as being black or white when the reality is far more complex.
Some movies like Pirates of the Caribbean are well known to rely heavily on motion capture, and Disney Research has published several papers featuring attempts to replace keyframe animators. The following one is particularly chilling, as they try to prove that human motion capture can even be used to animate non-humanoid characters with human motion data, using Pixar’s Luxo as an example. If someone called this blasphemy I would probably agree…
Academy Originals just posted an inspirational video interview with Dreamworks Head of Character Animation Simon Otto, which in Dreamworks lingo would probably translate to HTTYD HOCA ;-) (click above for the video)
As a fine art hobbyist and compulsive doodler I couldn’t help smiling throughout.
“Animator Simon Otto (“How to Train Your Dragon”, “How to Train Your Dragon 2”, “Kung Fu Panda”) takes viewers inside his creative process in an exploration of where ideas come from”
Premo, the Dreamworks animation software
At the beginning of “Messy Goes to Okido”’s production, our TD needed some help rigging various props and environments, so I gave him a hand for a few weeks, mostly on the “Taste Buddies” episode. I never got credited for this, but I don’t mind much since I don’t really want to advertise those skills too much. I am an animator and don’t want to land rigging jobs.
Among those props was Lolly’s ice cream van, which was a lot of fun to rig.
It was a bit of a challenge as I hadn’t done that kind of stuff in Max for a very long time, but at the end of the day it didn’t take too long since I was able to use the exact same techniques I would have used in Maya. I could have spent a bit more time on some areas, but TV series unfortunately require a really fast turnaround, so the entire rig had to be done and tested in less than two weeks, if I remember correctly.
If you live in the UK, you can see the rig in action on BBC iPlayer right here:
I had to dust off this blog to post these really cool character studies for Disney’s latest movie, Big Hero 6. Other than that, I eventually moved back to London and am currently working on Okido, a fun kids’ TV show which I will feature very soon. London’s grass is so much greener! ;-)
A character walking into a room: what better way to depict a character’s personality, especially when done by Disney animators.
And in case you don’t know that one
With the rise of companies like Disney, Blue Sky, Sony and Illumination Mac Guff relying entirely on off-the-shelf Autodesk Maya, which most animation students are familiar with, Dreamworks and Pixar had to revamp their ageing proprietary software to attract and retain talent. Presto at Pixar and Premo at Dreamworks seem to have now totally leapfrogged the commercial Autodesk offering by making the most of the numerous cores that current CPUs have offered for years, in addition to on-board GPUs.
Dreamworks used to be really secretive about Emo, their home-made animation software, but things are changing.
With the release of Dean DeBlois’ “How to Train Your Dragon 2”, several videos and articles have emerged showcasing Dreamworks’ new Premo animation software running on the latest Apollo technology. The technology looks so groundbreaking that ASIFA gave Dreamworks an Ub Iwerks award at this year’s Annie Awards.
Premo looks very fast and intuitive. Instead of having to keep a separate sizable GUI on the screen, the controls are right where you expect them to be: they magically appear when the cursor hovers over an actionable area, signifying to the animator that the highlighted area can be animated and freeing up a huge amount of screen real estate compared to GUIs.
Additional controls like IK/FK switches can still, I am guessing, be accessed through the related spreadsheets when needed.
This is very refreshing, as the idea had been suggested for years by Keith Lango, and I also relayed the information on this blog in 2010 (read the article here: You want to be a rigger huh!).
Premo also offers a dramatic speed improvement over Emo: animators no longer need to wait for a recalculation after each action, and rigs can play back in real time in the viewport without needing proxy models.
Don’t believe me? Watch the following videos!
How DreamWorks reinvented animation software to make HTTYD2
Dear blog, it has been a while since we last spoke. I was intending to finish and upload the shots I did while attending AnimSquad, but things went a bit crazy last December ;-)
The last few months have been really busy with a secret project I can finally talk about.
I have been privileged to be involved with “Tofu Fury”, a launch game for Amazon’s Fire Phone, a pretty cool mobile device that, among many other features, can track the user’s head thanks to four additional cameras and simulate 3d to a depth that hasn’t been seen on a mobile device yet.
If you want to know more about the phone, The Verge has it all covered with several articles. The Mayday feature will be a great help to my mom and dad, who just don’t understand how to operate a smartphone.
I still can’t divulge too much about our game, but it is coming very soon, and it was awesome to have it demoed on stage by Amazon CEO Jeff Bezos himself when the device finally got announced two weeks ago. Funnily enough, he called the game “Angry Tofu” because the idea delighted him, and the title caught on with the press, to the dismay of our producer and studio owner.
Working on the game was a great challenge: on top of being the animator, I was also the character TD, and it was the first time I relied so much on morph targets (blend shapes) in Unity, the game engine we used.
Since blend shapes were really new in the version of Unity we used, and there were so many horror stories on the Unity forum, I tried everything I could to stay away from them and use bone and joint deformation instead, but blend shapes were ultimately the best way to create the stylised deformations we were after.
As the main character is 100% animated with blend shapes, “Tofu Fury” is a testament that blend shapes are perfectly reliable in Unity. I even went to the extent of using blend shapes for some of the bad guys’ and non-playable characters’ (NPCs) skin deformation, which is traditionally handled through joints, and the result was visually more pleasing and less resource intensive, at least in the tests we did with Martin, our programmer most involved with character animation.
Ok, I think that is all I can say for now so here is a video of Jeff Bezos demoing our game.
Ah, one more thing. Like most cool kids I just created an Instagram page for my Life drawings and sketches. If you are also an Instagram user, feel free to follow me at
There has been some development in the secrecy surrounding Pixar’s animation software in the past few weeks but before we get started, you might want to refresh yourself with the previous article I wrote about Menv and Presto.
Pixar officially revealed their software Presto (aka Menv 13) to the world in a jaw-dropping tech demo illustrating the benefits of relying on the GPU, and Nvidia’s latest tech for that matter.
Maya and other 3d animation software look antiquated compared to Presto. Unlike Autodesk with their mono-threaded CPU viewport, Pixar’s engineers are quite obviously listening to their users.
In the following video you will see some features animators have been screaming for while no one seemed to be listening.
Forget about having to constantly disconnect your sight from your model and having to keep half of your screen free for a silly GUI:
– Invisible on-viewport local trigger controls!
– Realtime animation WITH hair!
– Realtime shadows
The pose library is not that different from other software’s, but some people might be interested to see it:
Here is the extract from the demo, followed by the full presentation showing the realtime lighting engine, to which I am very partial: