Back from 2 weeks of reservist duty and another 2 weeks of settling personal matters.
Been exploring Maya tools for some time. I've read up on the steps in character setup, particularly joint setup, IK/FK switching, smooth vs rigid binding, IK splines, various constraints, set driven keys, and 'iconic' rigs (Maya Character Animation by Jae-Jin Choi). Facial rigs seem out of the question for the time being. I also spent some time on soft/rigid bodies and fields (turbulence, uniform, ...), though I'm unsure of their necessity.
My main concern at this stage is skin binding. Given user-supplied models and geometries, we are supposed to create a skeleton, bind it, and rig controls for the model. Character setup is a highly individual process for each model (I think Jovan Popovic said something about this in one of his papers), though the same rig might be reused for similar models. Neither rigid nor smooth binding in Maya, used automatically, seems able to give the user precise control of vertex weighting: painting skin weights in smooth bind, using deformation lattices in rigid bind, and creating and fine-tuning joint flexors are, in my opinion, processes better done manually than automatically. We might be able to provide sliders for users to adjust the drop-off rate and maximum influences for each vertex in smooth binding. I would also need guidance on whether to include set driven keys in automatic rigging, because from what I've learnt that is largely a matter of user preference as well.
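To make the drop-off rate and max-influences sliders concrete, here is a toy sketch of distance-based smooth-bind weighting. This is illustrative only, not Maya's actual smooth-bind algorithm; the function name and the inverse-distance falloff are my own assumptions about how such controls could behave.

```python
import math

def smooth_bind_weights(vertex, joints, dropoff_rate=4.0, max_influences=2):
    """Toy distance-based skin weighting, loosely mimicking the
    drop-off rate / max influences controls of a smooth bind.
    Illustrative sketch only -- NOT Maya's real algorithm."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Raw influence falls off with distance; a higher dropoff_rate
    # makes the falloff steeper, localizing each joint's effect.
    raw = [(name, 1.0 / (1e-6 + dist(vertex, pos)) ** dropoff_rate)
           for name, pos in joints.items()]

    # Keep only the strongest max_influences joints.
    raw.sort(key=lambda pair: pair[1], reverse=True)
    raw = raw[:max_influences]

    # Normalize so the kept weights sum to 1.
    total = sum(w for _, w in raw)
    return {name: w / total for name, w in raw}

joints = {"hip": (0.0, 0.0, 0.0),
          "knee": (0.0, -1.0, 0.0),
          "ankle": (0.0, -2.0, 0.0)}
weights = smooth_bind_weights((0.0, -0.9, 0.1), joints)
```

A vertex just above the knee ends up dominated by the knee joint, with a small contribution from the hip; raising dropoff_rate would shrink the hip's share further, which is exactly the kind of per-vertex adjustment a slider could expose.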
I have also come across a few ways of rigging a character, but I still find the iconic representation the most intuitive of all. I'm trying to rig my own character, but apart from IK in the knees and elbows, splines in the spine, and constraints applied to the joints, I am not yet able to replicate the full-body rig achieved in MotionBuilder. My idea of autorigging is rather similar to MotionBuilder's, in that it lets users input preferred settings. The difference is that MotionBuilder requires an input skeleton, whereas we could implement a system that generates one, perhaps modelled after Jovan Popovic's idea of fitting a given skeleton. The weights applied to each vertex could then potentially be computed by machine learning.
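Wherever those per-vertex weights come from (hand-painted, slider-tuned, or learned), they ultimately feed a skinning step. Below is a minimal 2D linear blend skinning sketch showing where the weights get used; the function names and the 2D rotation-plus-translation joint transforms are my own simplifications for illustration.

```python
import math

def blend_skin(vertex, weights, transforms):
    """Linear blend skinning in 2D: each joint transform
    (rotation angle, translation) moves the vertex, and the
    results are averaged using the per-vertex weights."""
    x, y = vertex
    out_x = out_y = 0.0
    for joint, w in weights.items():
        angle, (tx, ty) = transforms[joint]
        c, s = math.cos(angle), math.sin(angle)
        # Weighted sum of each joint's transformed position.
        out_x += w * (c * x - s * y + tx)
        out_y += w * (s * x + c * y + ty)
    return (out_x, out_y)

# A vertex near the elbow, influenced equally by upper and lower arm.
v = (1.0, 0.0)
transforms = {"upper": (0.0, (0.0, 0.0)),           # identity
              "lower": (math.pi / 2, (0.0, 0.0))}   # 90-degree bend
bent = blend_skin(v, {"upper": 0.5, "lower": 0.5}, transforms)
# Blended result lies halfway between (1, 0) and (0, 1): (0.5, 0.5)
```

The blended point drifting toward the joint is the classic volume-loss artifact of linear blend skinning at sharp bends, which is one reason the weighting (and hence any learned weighting) matters so much.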