Math Alert
The contents of this section are slightly more math heavy and less programming heavy. The material is useful to know, but if it's a bit much and you just want a smooth constant-speed interpolation, you can safely skip to the next section, where we simply apply the techniques learned here via a provided API. Don't be intimidated though! I, the author, sincerely hope that the material here is presented in a way that can be grasped, even if unfamiliar, given a little bit of patience.

What if we wanted to maintain constant velocity around the curve? We can accomplish this by linearizing the transition motor. Let's step back for a second. Motors are the result of an exponential map, but to see why this might be plausible, let's look at complex numbers first, as they are likely more familiar. Recall Euler's formula:
\[ e^{i\mkern1mu\theta} = \cos\theta + i\mkern1mu\sin\theta \]
The reason this works is that if we Taylor expand the LHS:
\[ e^{i\mkern1mu\theta} = 1 + i\mkern1mu\theta - \frac{\theta^2}{2} - \frac{i\mkern1mu\theta^3}{6} + \dots = \left(1 - \frac{\theta^2}{2} + \dots\right) + i\mkern1mu\left(\theta - \frac{\theta^3}{6} + \dots\right) \]
we seemingly miraculously end up with a well-defined rotation, recognized on the RHS of Euler's formula. The \(i\) is the key ingredient. Because the square of \(i\) is \(-1\), repeated multiplication of \(i\) does not grow to infinity. Instead, it "rotates" with a well-defined periodicity. Suppose I had two rotations \(r_1\) and \(r_2\) as below:

\[ r_1 = \cos\theta_1 + i\mkern1mu\sin\theta_1 \]
\[ r_2 = \cos\theta_2 + i\mkern1mu\sin\theta_2 \]
and suppose I want a rotation that takes me halfway from \(r_1\) to \(r_2\). How would we produce such a rotation? The answer is obvious. We simply produce a rotation whose angle is the average of \(\theta_1\) and \(\theta_2\) (assuming that we bisect the shorter arc between them). Then, the desired rotation is simply given as

\[ \cos{\frac{\theta_1 + \theta_2}{2}} + i\mkern1mu\sin{\frac{\theta_1 + \theta_2}{2}} \]
This was easy because of the form in which I expressed \(r_1\) and \(r_2\) to you. The angles we needed to blend between were in plain sight! What if the rotation was given as \(\alpha + i\mkern1mu\beta\) instead? Well, in this case, we can retrieve the angle by taking the logarithm of the rotation. Let's do this precisely. Suppose now that the rotations are given as follows:
\[ r_1 = \alpha_1 + i\mkern1mu\beta_1 \qquad r_2 = \alpha_2 + i\mkern1mu\beta_2 \]
and again we are asked to find the rotation \(r\) halfway between \(r_1\) and \(r_2\). The first thing we can do is identify a new quantity, which I'll call \(r_\delta\). Suppose that \(r_\delta\) represents the incremental rotation which, applied twice to \(r_1\), takes us to \(r_2\). That is:
\[ r_\delta^2 r_1 = r_2 \tag{1} \]
Then, we can multiply both sides by \(r_1^\dagger\) (the complex conjugate of \(r_1\)) to isolate \(r_\delta\) on the LHS. Solving for \(r_\delta\) can proceed as follows:
\[\begin{aligned}
r_\delta^2 r_1 &= r_2 \\
r_\delta^2 &= r_2 r_1^\dagger \\
2\ln{r_\delta} &= \ln{\left(r_2 r_1^\dagger\right)} \\
r_\delta &= \exp{\left(\frac{1}{2}\ln{\left(r_2 r_1^\dagger\right)}\right)} \\
r_\delta r_1 &= \exp{\left(\frac{1}{2}\ln{\left(r_2 r_1^\dagger\right)}\right)} r_1
\end{aligned}\tag{2}\]
In the last step, we right multiplied by \(r_1\) (our starting rotation) since \(r_\delta\) was the halfway rotation between \(r_1\) and \(r_2\). Hopefully, taking the natural logarithm of a complex number isn't too scary. After all, we know that \(r_1\) and \(r_2\) have corresponding angles \(\theta_1\) and \(\theta_2\) (defined by the arctangents of their \(\beta\) and \(\alpha\) components), along with polar forms that make simplifying the RHS above easy:

\[\begin{aligned}
r_\delta r_1 &= \exp{\left(\frac{1}{2}\ln{\left(r_2 r_1^\dagger\right)}\right)} r_1 \\
&= \exp{\left(\frac{1}{2}\left(\ln{e^{i\mkern1mu\theta_2}} + \ln{e^{-i\mkern1mu\theta_1}}\right)\right)} r_1 \\
&= \exp{\left(\frac{i\mkern1mu(\theta_2 - \theta_1)}{2}\right)} \exp{(i\mkern1mu\theta_1)} \\
&= \cos{\frac{\theta_1 + \theta_2}{2}} + i\mkern1mu\sin{\frac{\theta_1 + \theta_2}{2}}
\end{aligned}\]
Different path but same result! Now, this might seem needlessly complicated to achieve what could have been done more easily by reading off the angles, but this is only because converting complex numbers to their polar representations is relatively easy. The angle can be determined by taking the arctangent of the ratio of the imaginary and real components. The most important step to appreciate in the second method we used above is the part where we divide both sides by \(2\) (see the equation marked \((1)\) above). The exponent there was our desired subdivision: we wanted to split the arc in two, so the incremental rotation ended up being squared to take us from \(r_1\) to \(r_2\). If we wanted to subdivide the arc into \(n\) segments, we would have needed a power of \(n\). By taking the logarithm of both sides, we linearized the rotation so that we could simply divide our arc into the correct number of subdivisions.
For a motor in 3D projective geometric algebra, there is a closed-form solution for the logarithm, which means we can apply the same trick as above! We technically also don't yet know how to exponentiate the logarithm of a motor, but Klein provides implementations of both the exp and log functions, taking bivectors to motors and vice versa, that we can use. The derivation of both is a bit involved to flesh out here, but code demonstrating how this is done can be referred to here (a fuller derivation will be the subject of a future post).

Taking the journey above as inspiration, given two motors \(m_1\) and \(m_2\), we have a recipe for blending between them smoothly. First, we multiply \(m_2\) by \(\widetilde{m}_1\) (the reversion operator is the Geometric Algebra analogue of the complex conjugate). This gives us \(m_2 \widetilde{m}_1\), which is the motor that takes \(m_1\) to \(m_2\). Next, we take the logarithm of \(m_2 \widetilde{m}_1\), divide the logarithm by the number of segments in our interpolation, re-exponentiate, and finally multiply by \(m_1\) to produce the interpolated result. If this was difficult to follow, feel free to refer again to the process we went through for complex numbers above. The RHS of equation \((2)\) is precisely what we want after substituting \(r\)s for \(m\)s.
Huh? This doesn't look like the slerp I'm familiar with
Chances are, you're used to seeing slerp in the following form (credit: Wikipedia):
\[ \frac{\sin{[(1 - t)\phi]}}{\sin{\phi}} p_1 + \frac{\sin{[t\phi]}}{\sin{\phi}} p_2 \]
The derivation used with exponentials and logarithms is completely equivalent, but it might take some staring (or pencil and paper) to work out why that is so. The key lies in realizing that the formula given here uses \(\phi\), which is the angle of the arc subtended by the two points (computed via the inner product \(p_1 \cdot p_2\)). This angle already captures the information provided by the logarithm, and the sine ratios afterwards reconstitute the non-linearized map in place of exponentiation. The issue with this formula is that it does not generalize well to dual quaternions or motors, because the angle of the subtended arc isn't quite as easy to compute.

Spherical Interpolation
We can now implement our motor blend function as follows:
```c++
// Blend between two motors with a parameter t in the range [0, 1]
kln::motor slerp(kln::motor const& a, kln::motor const& b, float t)
{
    // Starting from a, the motor needed to get to b is b * ~a.
    // To perform this motion continuously, we can take the principal
    // branch of the logarithm of b * ~a, and subdivide it before
    // re-exponentiating it to produce a motor again.

    // In practice, this should be cached whenever possible.
    kln::line motor_step = log(b * ~a);

    // exp(log(m)) = exp(t*log(m) + (1 - t)*log(m))
    //             = exp(t*log(m)) * exp((1 - t)*log(m))
    motor_step *= t;

    // The exponential of the step here can be cached if the blend occurs
    // with fixed steps toward the final motor. Compose the interpolated
    // result with the start motor to produce the intermediate blended
    // motor.
    return exp(motor_step) * a;
}
```
Voilà. A motor slerp, also known as a "dual quaternion slerp." Now, you may be thinking, isn't this slower? The answer is yes; log and exp both require transcendentals, after all. However, the choice between slerp and nlerp isn't necessarily as cut and dried as you may think. First, higher-quality interpolation can mean that fewer keyframes are needed to produce the desired result. Second, as is evident in the code snippet above, the logarithm (called motor_step) can be cached if the motors do not change from frame to frame. This effectively cuts the cost of the slerp in half at the cost of some memory.
With this blend function, we can now sample our animation clip at any time.

```c++
// Given a skeleton, an instance of the skeleton, a clip, and a timestamp,
// transforms the instance to the correct pose sampled from the clip.
void animate_sample(skeleton const&    parent,
                    skeleton_instance& instance,
                    clip const&        active_clip,
                    int32_t            timestamp_ms,
                    // scratch is a mutable pose with sufficient memory
                    // to hold our interpolated joint poses.
                    pose&              scratch)
{
    pose* previous;
    pose* next;
    float t;
    // This function isn't provided, but it takes a clip and timestamp
    // and produces the poses that straddle the requested time and the
    // interpolation parameter.
    query_pose_endpoints(active_clip, timestamp_ms, &previous, &next, &t);

    for (uint32_t i = 0; i != parent.size; ++i)
    {
        // This could use slerp or nlerp if we wanted. A possible
        // implementation of this slerp function was given above.
        scratch.joint_poses[i] =
            slerp(previous->joint_poses[i], next->joint_poses[i], t);
    }

    // Reuse our keyframe forward kinematic routine from above
    animate_keyframe(parent, instance, scratch);
}
```
Of course, there are myriad optimizations that should jump out at us from the implementation given here, but as a starting point, and considering how few lines of code we used, it's not bad in my opinion! Example optimizations include caching the logarithms from the previous frame, or reworking the code above so that all the temporary interpolated results do not need to reside in memory at once. The code provided here was written this way in the interest of remaining terse.

What about inv_bind_pose?

We defined this kln::motor on our joint and never used it. "What gives?" you might ask. Well, we didn't use it because we didn't need to transform to the joint's local coordinate space. This will be needed for skinning, which will be the subject of a future tutorial. I'm impressed you noticed this (if you did)!
Conclusion
We have developed from the ground up the barebones makings of an animation library. To come anywhere close to resembling a production library, it would need animation blending, vertex skinning/morphing, animation retargeting, and a whole host of other features, but at the very least, it should have illustrated the basic underpinnings of modeling kinematic motion with Geometric Algebra and Klein. Of course, there's much more to geometry than rigid motion, so stay tuned for future write-ups on collision detection and many other topics!
Feedback? Questions? Comments? Suggestions on what you'd like to see next? Feel free to drop by our
discord
and say hi!