## Preface

There seems to be a great deal of confusion and apprehension regarding quaternions in the game industry. In this author's experience, everything from commercial engine source to forum and newsgroup posts to books directed towards game developers tends to treat the concept as some sort of nebulously defined magic box, impossible to comprehend by anyone not initiated into the higher mathematical arcana. The first three sections of this article attempt to dispel this myth by deriving quaternions and their properties entirely through algebraic manipulation.

To this end, several shortcuts have been taken, of which two stand out in particular. First, both complex numbers and quaternions are assumed to inherit the arithmetic properties of real numbers, unless explicitly shown otherwise. At least several proofs make such implicit assumptions -- this is not a good habit, and the reader is urged not to make such assumptions in general (alternatively, the reader is urged to stop giggling, edit the page and make the definitions more robust). Second, complex exponentiation, which gives a far more elegant view of rotations, is not covered in the section on complex arithmetic -- that discussion is postponed to a later section that deals with quaternion derivatives in the context of angular velocity.

We start by illustrating the utility of quaternions in game development. The first issue that quaternions address, compared to rotation matrices, is numerical drift. It is common, particularly in physics simulations, for a rotation to be updated incrementally over the lifetime of an object. Rotation matrices can quickly become non-orthogonal as a result; this is generally solved either by orthogonalizing the matrix with Gram-Schmidt, which tends to bias the matrix towards a particular axis, or by least squares, which can be expensive. Quaternion-based rotation shines here, since normalizing a quaternion is trivial, and besides, conjugation by a non-unit quaternion is still a rotation. The second issue is that of interpolation. It is common, particularly in skeletal animation playback, to be given some sort of smooth path between two endpoint rotations, and to need intermediate rotations along that path. This is a non-trivial task with matrices -- not so with quaternions. Similarly, it is often required to interpolate over the *shortest path* between two rotations, which is also trivial with quaternions (see slerp), and difficult with matrices.
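To make the renormalization point concrete, here is a minimal Python sketch; the `(w, x, y, z)` tuple layout and the function name are illustrative assumptions, not an established API:

```python
import math

def normalize(q):
    # Divide each component by the magnitude: one sqrt and four divides,
    # versus a full Gram-Schmidt pass over a 3x3 matrix.
    w, x, y, z = q
    m = math.sqrt(w*w + x*x + y*y + z*z)
    return (w/m, x/m, y/m, z/m)

# A unit quaternion that has drifted after many incremental updates:
drifted = (0.7075, 0.0, 0.7062, 0.0)
restored = normalize(drifted)
print(sum(c*c for c in restored))  # ~ 1.0
```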

## Complex arithmetic

One way to motivate the definition of quaternions is through complex numbers. A complex number $ z $ is defined as $ z = x + \mathbf{i} y $ , where $ x $ and $ y $ are real numbers, and $ \mathbf{i}^2 = -1 $ . It is tempting to view $ z $ as a two-dimensional vector, with $ x $ and $ y $ viewed as coordinates; then we can identify each complex number $ x+\mathbf{i}y $ with the two-dimensional vector $ (x,y) $ . This allows us to immediately define addition and subtraction: $ (a,b)+(c,d)=(a+c,b+d) \Longrightarrow a+\mathbf{i}b + c+\mathbf{i}d = (a+c)+\mathbf{i}(b+d) $ , with subtraction defined similarly. That is, addition and subtraction are defined as if $ \mathbf{i} $ were any other variable.

Continuing with the vector analogy for the time being, we switch from Cartesian coordinates to polar coordinates. Given a vector $ v = (x,y) $ , we can rewrite it as $ v = |v| (\cos \theta, \sin \theta) $ , where $ \theta $ is the angle between $ (x,y) $ and the horizontal axis, and $ |v| $ is the magnitude of the vector $ v $ , written $ |v| = \sqrt{x^2+y^2} $ . The angle $ \theta $ is simply $ \tan^{-1}\frac{y}{x} $ . Analogously, given a complex number $ z = x + \mathbf{i}y $ , we can rewrite it as $ z = |z| (\cos \theta + \mathbf{i} \sin \theta) $ . Here, $ \theta $ is the angle between $ x + \mathbf{i}y $ and the horizontal axis, and $ |z| $ is the *complex modulus* of the number $ z $ , written $ |z| = \sqrt{x^2+y^2} $ . The angle $ \theta $ is defined as in the vector case, and is called the *argument* of $ z $ , written $ \arg{z} $ . The bracketed term runs over the unit circle as $ \theta $ runs from $ 0 $ to $ 2\pi $ , which suggests that we might be able to define 2d rotations with a single complex number.
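The polar form can be spot-checked with Python's built-in complex type; note that practical code computes the argument with `atan2` rather than $ \tan^{-1}\frac{y}{x} $ , so that the quadrant comes out right -- `cmath.phase` does exactly that:

```python
import cmath, math

z = 3 + 4j
modulus = abs(z)         # |z| = sqrt(x^2 + y^2)
theta = cmath.phase(z)   # arg z, computed as atan2(y, x)

# Reconstruct z from |z|(cos theta + i sin theta):
z_back = modulus * (math.cos(theta) + 1j * math.sin(theta))
print(modulus, z_back)   # 5.0 and ~ (3+4j)
```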

Moving on to multiplication, we can, once again, treat $ \mathbf{i} $ as any variable, and expand $ (a+\mathbf{i}b)(c+\mathbf{i}d) $ as $ ac+a\mathbf{i}d+b\mathbf{i}c+\mathbf{i}^2bd = (ac-bd)+\mathbf{i}(ad+bc) $ . To define division, we let $ \frac{a+\mathbf{i} b}{c + \mathbf{i}d} = x+\mathbf{i}y $ , or $ a+\mathbf{i}b = (x + \mathbf{i}y)(c + \mathbf{i}d) = (cx-yd) + \mathbf{i}(xd+yc) $ , and solve the resulting system of two equations for $ x $ and $ y $ , yielding $ x=\frac{ac + bd}{c^2 + d^2}, y =\frac{bc - ad}{c^2 + d^2} $ , and therefore $ \frac{a+\mathbf{i} b}{c + \mathbf{i}d} = \frac{(ac + bd)+\mathbf{i}(bc - ad)}{c^2+d^2} $ . Note that $ c^2+d^2 $ is simply the squared modulus of the denominator $ c+\mathbf{i}d $ . If the modulus of the denominator were equal to one, the squared modulus would equal one as well, and the fraction $ \frac{a+\mathbf{i} b}{c + \mathbf{i}d} $ , or, to employ different notation, $ (a+\mathbf{i} b) \cdot (c + \mathbf{i}d)^{-1} $ , would equal simply $ (ac + bd)+\mathbf{i}(bc - ad) $ . Note the similarity to the multiplication formula. Now suppose $ a^2+b^2=1 $ as well; then the modulus of the quotient equals one as well:

$ (ac+bd)^2+(bc-ad)^2 $

$ = (a^2 c^2 + b^2 d^2 + 2abcd) + (b^2 c^2 + a^2 d^2 - 2abcd) $

$ = a^2 c^2 + b^2 d^2 + b^2 c^2 + a^2 d^2 $

$ = a^2(c^2+d^2) + b^2(c^2+d^2) $

$ = a^2 + b^2 = 1. $

Similarly, we can show that the product of two complex numbers of modulus one is another complex number of modulus one -- in fact, we can generalize this by noting that $ |z_1 z_2| = |z_1| |z_2| $ . This further supports the intuition that it might be possible to describe rotations with complex numbers, since the modulus is simply the distance from the origin -- that is, multiplying two numbers on the unit circle results in another number on the unit circle.
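Both facts are easy to verify numerically, again with Python's complex type as a stand-in for any implementation:

```python
import math

z1 = complex(math.cos(0.3), math.sin(0.3))   # |z1| = 1
z2 = complex(math.cos(1.1), math.sin(1.1))   # |z2| = 1
print(abs(z1 * z2))                          # ~ 1.0: the unit circle is closed under multiplication

w1, w2 = 2 - 1j, -3 + 0.5j
print(abs(w1 * w2), abs(w1) * abs(w2))       # equal up to rounding
```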

If complex numbers can represent rotations, we're likely going to need inverse rotations, as well. That is, given a complex number $ z $ , we wish to find the number $ z^{-1} $ , such that $ z\cdot z^{-1} = 1 $ . Let $ z=a+\mathbf{i}b $ , and $ z^{-1} = c+\mathbf{i}d $ . Then their product is $ (ac-bd)+\mathbf{i}(ad+bc) $ . If this is to equal $ 1 $ , a real number, the imaginary component $ ad+bc $ must equal zero, and the real component $ ac-bd $ must equal one. Solving the resulting system of equations for $ c $ and $ d $ , we obtain $ c = \frac{a}{a^2+b^2} $ and $ d = -\frac{b}{a^2+b^2} $ , which gives $ z^{-1}=\frac{a-\mathbf{i}b}{|z|^2} $ . The numerator $ a-\mathbf{i}b $ is called the *conjugate* of $ z $ , and is written $ \overline{z} $ . It is clear that if $ z $ is unit length, then its conjugate is its inverse. Further, we obtain the identity $ |z|^2 = z\overline{z} $ .
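The inverse-through-conjugate identity translates directly into code; `cinv` below is a hypothetical helper illustrating $ z^{-1} = \overline{z}/|z|^2 $ :

```python
import math

def cinv(z):
    # z^{-1} = conj(z) / |z|^2; the denominator is a real number
    return z.conjugate() / (abs(z) ** 2)

z = 2 - 1j
print(z * cinv(z))                          # ~ (1+0j)

u = complex(math.cos(0.7), math.sin(0.7))   # unit modulus
print(cinv(u), u.conjugate())               # agree up to rounding: conjugate == inverse
```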

Let's step aside for a moment and recall rotations of two-dimensional vectors. These can be represented by the matrix $ \begin{bmatrix}\cos t & \sin t\\-\sin t & \cos t\end{bmatrix} $ . Let $ a = \cos t $ and $ b = -\sin t $ . Then, given a vector $ (x,y) $ , the rotated vector is $ (ax-by, ay+bx) $ . But this is just the vector representation of the complex product $ (a+\mathbf{i}b)(x+\mathbf{i}y) $ ! Further, we had posited that the number $ a+\mathbf{i}b $ is a rotation if $ a^2+b^2=1 $ ; applied to the rotation matrix $ \begin{bmatrix}a & -b\\b & a\end{bmatrix} $ , this is simply the determinant of that matrix -- which we know must equal one if the matrix is a rotation. Further yet, $ \arg(a+\mathbf{i}b) $ is the angle of rotation; alternatively, given an angle $ \theta $ , we can generate a rotation by that angle by switching to polar form and letting $ |z|=1 $ .
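To see the matrix and the complex product agree, the following sketch (with the text's conventions $ a = \cos t $ , $ b = -\sin t $ ) applies both to the same vector:

```python
import math

t = 0.8
a, b = math.cos(t), -math.sin(t)
x, y = 2.0, 1.0

# Matrix form: (x, y) -> (a x - b y, a y + b x)
mx, my = a * x - b * y, a * y + b * x

# Complex form: (a + ib)(x + iy)
zc = complex(a, b) * complex(x, y)
print((mx, my), (zc.real, zc.imag))  # the same pair
```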

## Quaternions

We can now move on to rotations in 3d. The last paragraph defined a 2d rotation with a pair of numbers $ a $ and $ b $ , subject to the restriction $ a^2+b^2=1 $ . These pairs correspond to a single angle $ \theta $ by the relation $ \theta = \arg(a+\mathbf{i}b) $ . In the second description, different values of $ \theta $ correspond to the same rotation, while in the first, the pair of numbers is unique. One way to look at this is as follows: one unconstrained number has one degree of freedom, and two unconstrained numbers have two degrees of freedom. Adding a constraint to a pair of numbers reduces the number of degrees of freedom back to one; our particular constraint also limits the range that each individual component can take (specifically, neither $ a $ nor $ b $ may exceed $ 1 $ in absolute value).

In 3d, rotations can be described by *four* numbers. Intuitively, if we take the unit sphere and rotate it in a random direction, exactly two (antipodal) points will remain fixed; the rotation can then be defined by the vector through either fixed point, and the angle of rotation about that vector. If we let the vector be a unit length axis times the angle of rotation, in radians, say, we obtain three numbers; angular velocity is defined this way. Euler angles are another way to define a 3d rotation with three numbers. For rotations, however, both of these approaches suffer from the same problem as the rotation angle in 2d, namely, different vectors can correspond to the same rotation. Picking four numbers and imposing the constraint $ a^2+b^2+c^2+d^2=1 $ solves the problem in the same way: we add a coordinate, remove a degree of freedom from the resulting vector, and in doing so preserve the number of degrees of freedom, while constraining the range of values that each component can take.

This suggests that if we are to build, on top of the complex numbers, a number system that can represent three-dimensional rotations with a single number, we will want to add *two* imaginary components to the definition, not one. We could represent quaternions by pairs of complex numbers $ (a,b) $ , which almost works -- the complex-style product $ (ac-bd, bc+ad) $ is fairly close to quaternion multiplication, but isn't quite there. Instead, we have to write $ (a,b)(c,d) = (ac-b\overline{d}, ad+b\overline{c}) $ . Other placements of the conjugates are perfectly fine, and give algebras in their own right; the problem is that the resulting algebras have little to do with quaternions. Without investigating this too deeply, we simply note that this definition is still consistent with complex multiplication, since the conjugate of a real number is the same number.

To switch from vector notation to number notation, we need to postulate a new quantity $ \mathbf{j} $ , in order to express that the two complex terms that make up a quaternion cannot simply be added together. This is defined in the same way as $ \mathbf{i} $ : we define $ \mathbf{j} $ by saying that $ \mathbf{j}^2 = -1 $ , but $ \mathbf{i} \neq \mathbf{j} $ . We can now write a quaternion $ q $ as $ q = a + b \mathbf{j} $ , where $ a $ and $ b $ are complex numbers. Applying the multiplication rule from the last paragraph yields $ (a+b\mathbf{j})(c+d\mathbf{j}) = ac-b\overline{d} + (ad+b\overline{c})\mathbf{j} $ . Let $ q_1 $ and $ q_2 $ stand for $ a+b\mathbf{j} $ and $ c+d\mathbf{j} $ respectively. Further, let $ a_1, a_2 $ denote the real and imaginary components of $ a $ respectively, and apply the same indexing scheme to the rest of the numbers involved. Then expanding the complex products $ ac, b\overline{d} $ etc. and grouping the terms yields, after straightforward if somewhat tedious algebra,

$ q_1\cdot q_2 = (a_1 c_1 - a_2 c_2 - b_1 d_1 - b_2 d_2) $

$ + \mathbf{i}(a_1 c_2 + a_2 c_1 + b_1 d_2 - b_2 d_1) $

$ + \mathbf{j}(b_1 c_1 + b_2 c_2 + a_1 d_1 - a_2 d_2) $

$ + \mathbf{ij}(-b_1 c_2 + b_2 c_1 + a_1 d_2 + a_2 d_1). $

The product $ \mathbf{ij} $ does not reduce to anything we're familiar with; it certainly does not equal either $ \mathbf{i} $ or $ \mathbf{j} $ , since that would imply that one of these quantities equals one. Besides, it appears to denote the fourth component of the quaternion, so it seems natural to give it a name, and let $ \mathbf{k}=\mathbf{ij} $ . Following our tradition of letting squares of letters in the middle of the alphabet equal negative one, we let $ \mathbf{k}^2=-1 $ . At first, this definition seems to conflict with the previous one, since if $ \mathbf{k}=\mathbf{ij} $ , then $ \mathbf{k}^2=(\mathbf{ij})^2 = \mathbf{i}^2\mathbf{j}^2 = (-1)(-1) = 1 $ . The way this is rectified is important. When we wrote $ (\mathbf{ij})^2 = \mathbf{i}^2\mathbf{j}^2 $ , we tacitly assumed that quaternions commute, the way that real numbers do. That is, we assumed that $ (\mathbf{ij})^2 = (\mathbf{ij})(\mathbf{ij}) = \mathbf{i}(\mathbf{j}\mathbf{i})\mathbf{j} = \mathbf{i}(\mathbf{i}\mathbf{j})\mathbf{j} = (\mathbf{ii})(\mathbf{jj}) = \mathbf{i}^2\mathbf{j}^2 $ . This is clearly not the case, so we have the inequality $ \mathbf{i}\mathbf{j}\neq\mathbf{j}\mathbf{i} $ . In fact, to make the identity $ \mathbf{k}^2=(\mathbf{ij})^2 = -1 $ work, we must accept that $ \mathbf{i}\mathbf{j} = -\mathbf{j}\mathbf{i} $ .

The rest of the identities regarding $ \mathbf{i}, \mathbf{j} $ and $ \mathbf{k} $ follow immediately:

$ \mathbf{i}\mathbf{k} = \mathbf{i}(\mathbf{i}\mathbf{j}) = (\mathbf{i}\mathbf{i})\mathbf{j} = -\mathbf{j} $

$ \mathbf{k}\mathbf{i} = (\mathbf{i}\mathbf{j})\mathbf{i} = -(\mathbf{j}\mathbf{i})\mathbf{i} = -\mathbf{j}(\mathbf{i}\mathbf{i}) = \mathbf{j} $

$ \mathbf{j}\mathbf{k} = \mathbf{j}(\mathbf{i}\mathbf{j}) = \mathbf{j}(-\mathbf{j}\mathbf{i}) = -(\mathbf{j}\mathbf{j})\mathbf{i} = \mathbf{i} $

$ \mathbf{k}\mathbf{j} = (\mathbf{i}\mathbf{j})\mathbf{j} = \mathbf{i}(\mathbf{j}\mathbf{j}) = -\mathbf{i} $

$ \mathbf{i}\mathbf{j}\mathbf{k} = \mathbf{i}\mathbf{j}(\mathbf{i}\mathbf{j}) = \mathbf{i}\mathbf{j}(-\mathbf{j}\mathbf{i}) = -\mathbf{i}(\mathbf{j}\mathbf{j})\mathbf{i} = \mathbf{i}\mathbf{i} = -1. $

Briefly going back to the definition of the quaternion product, we notice that parts of the sums bear a suspicious resemblance to the components of the vector cross product; writing the quaternion $ q $ as a scalar component combined with a vector component denoting the "imaginary" parts, that is, $ q=(s,v) $ , we can obtain the following identity: $ q_1\cdot q_2 = (s_1,v_1)\cdot(s_2,v_2) = (s_1 s_2 - v_1\cdot v_2, s_1 v_2 + s_2 v_1 + v_1\times v_2) $ . This further contributes to our initial intuition that quaternions might have a relationship to axis-angle representations of rotations, if it turns out that the scalar part somehow corresponds to the angle, and the vector part somehow corresponds to the axis.
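The scalar+vector form of the product is compact enough to implement directly. The sketch below (the `(s, (x, y, z))` tuple layout is an arbitrary choice) also confirms the identities $ \mathbf{ij}=\mathbf{k} $ , $ \mathbf{ji}=-\mathbf{k} $ and $ \mathbf{i}^2=-1 $ numerically:

```python
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def qmul(p, q):
    # (s, v)(t, u) = (s t - v.u, s u + t v + v x u)
    s, v = p
    t, u = q
    c = cross(v, u)
    return (s*t - dot(v, u),
            tuple(s*u[k] + t*v[k] + c[k] for k in range(3)))

i = (0.0, (1.0, 0.0, 0.0))
j = (0.0, (0.0, 1.0, 0.0))
k = (0.0, (0.0, 0.0, 1.0))
print(qmul(i, j))   # (0.0, (0.0, 0.0, 1.0)): ij = k
print(qmul(j, i))   # (0.0, (0.0, 0.0, -1.0)): ji = -k
print(qmul(i, i))   # (-1.0, (0.0, 0.0, 0.0)): i^2 = -1
```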

With complex numbers, we had the identity $ |z|^2 = z\overline{z} $ , where the conjugate $ \overline{z} $ was obtained by negating the imaginary component of $ z $ . We state, without proof, similar results for quaternions: $ |q|^2 = q\overline{q} $ , where $ \overline{q} = \overline{a+\mathbf{i}b+\mathbf{j}c+\mathbf{k}d} = a-\mathbf{i}b-\mathbf{j}c-\mathbf{k}d $ , from which it follows that $ q^{-1} = \frac{\overline{q}}{|q|^2} = \frac{\overline{q}}{q\overline{q}} $ . Another familiar property, $ |q_1 q_2| = |q_1| |q_2| $ , can be established as follows. Let $ q_1 = (s, v) $ , and $ q_2 = (t, u) $ , in the scalar+vector notation. Expanding their product gives

$ |(s,v)\cdot (t,u)|^2 = $

$ |(s t - v\cdot u, s u + t v + v \times u)|^2 = $

$ (s t - v\cdot u)^2 + (s u + t v + v \times u)^2 = $

$ s^2 t^2 + s^2 |u|^2 + t^2 |v|^2 + |v|^2 |u|^2 \cos^2 x + |v|^2 |u|^2 \sin^2 x + 2 s u \cdot (v \times u) + 2 t v \cdot (v \times u) = $

$ s^2 t^2 + s^2 |u|^2 + t^2 |v|^2 + |v|^2 |u|^2 (\cos^2 x + \sin^2 x) + 2 s u \cdot (v \times u) + 2 t v \cdot (v \times u) = $

$ s^2 t^2 + s^2 |u|^2 + t^2 |v|^2 + |v|^2 |u|^2 = $

$ s^2 (|u|^2 + t^2) + |v|^2 (|u|^2 + t^2) = $

$ (s^2 + |v|^2)(|u|^2 + t^2) = $

$ |q_1|^2 |q_2|^2. $

Here $ x $ denotes the angle between $ v $ and $ u $ , and the triple products $ u \cdot (v \times u) $ and $ v \cdot (v \times u) $ vanish because a cross product is perpendicular to both of its factors. The modulus is a real number, so taking square roots concludes the proof.
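The same property can be spot-checked against the component formula derived earlier (components ordered $ 1, \mathbf{i}, \mathbf{j}, \mathbf{k} $ ; the function names are illustrative):

```python
import math

def qabs(q):
    return math.sqrt(sum(c*c for c in q))

def hprod(q1, q2):
    # Component form of the quaternion product, as derived in the text
    a1, a2, b1, b2 = q1
    c1, c2, d1, d2 = q2
    return (a1*c1 - a2*c2 - b1*d1 - b2*d2,
            a1*c2 + a2*c1 + b1*d2 - b2*d1,
            b1*c1 + b2*c2 + a1*d1 - a2*d2,
            -b1*c2 + b2*c1 + a1*d2 + a2*d1)

q1 = (0.5, -1.0, 2.0, 0.25)
q2 = (1.5, 0.5, -0.75, 2.0)
print(qabs(hprod(q1, q2)), qabs(q1) * qabs(q2))  # equal up to rounding
```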

*(note to self -- fill in the details for associativity of scalar multiplication)*

## Rotations with quaternions

So far, quaternions seem to share a number of essential properties with complex numbers, in particular properties that we had used to describe two-dimensional rotations using complex numbers. In fact, all of the properties of complex numbers still carry over when two of the imaginary components are set to zero; in this manner, quaternions are an extension of complex numbers, much in the same way as complex numbers are an extension of real numbers. This seems to suggest that describing the rotations themselves should be a fairly straightforward extension of the complex case. We could employ the trick we used with complex numbers, by associating components of a 2d rotation matrix with the components of a complex number. While this works, the algebra is lengthy to say the least. Further, two problems spring to mind immediately: first of all, we need to rotate a 3d vector, which has three elements, not four. Which ones do we pick? Second, suppose we zero out a specific element, how do we ensure that the resulting quaternion has that element zeroed out as well? In other words, we want the rotation operator to return a vector whenever we pass it a vector.

The first question is mostly rhetorical; it's more convenient to leave out the real component, since we've already been occasionally treating the remaining three as a vector. The second question is more interesting; to rephrase it, we want to find a quaternion function that preserves zeroed out real components. It's easy to see that simple multiplication of a vector by an arbitrary quaternion does not accomplish this, since given two quaternions $ q_1 = (s,v) $ and $ q_2 = (t,u) $ , the real component of the product is equal to $ s t - v\cdot u $ . If $ t=0 $ , this reduces to $ -v\cdot u $ , and the vector part reduces to $ s u + v \times u $ . It's reasonable to expect being able to zero out the real component by multiplying the product with yet another quaternion; let this quaternion be denoted $ (r,w) $ .
.

Expanding the product $ (-v\cdot u, s u + v \times u)\cdot (r, w) $ , we obtain $ (-v\cdot u, s u + v \times u)\cdot (r,w) = (-r v\cdot u - (s u + v \times u)\cdot w, -(v\cdot u) w + r (s u + v \times u) + (s u + v \times u) \times w) $ , and requiring that the real part is zero leads to the constraint $ -r v\cdot u - (s u + v \times u)\cdot w = 0 $ , or, equivalently, $ (s u + v \times u)\cdot w + r v\cdot u = 0 $ . Since the vector $ u $ is arbitrary, while the quaternion $ (r,w) $ may depend only on $ (s,v) $ , the vector $ \frac{1}{r} w $ must satisfy $ (s u + v \times u)\cdot \frac{1}{r}w = -v \cdot u $ *independently of the value of* $ u $ . For this to happen, the term $ v \times u $ must not contribute to the dot product; that is, $ w $ must be a scalar multiple of $ v $ . To that end, let $ w/r = c v $ . Then

$ (s u + v \times u) \cdot c v + v \cdot u = 0 $

$ c s v \cdot u + c v \cdot (v \times u) + v \cdot u = 0 $

$ c s v \cdot u + v \cdot u = 0 $

$ c = -\frac{1}{s} $

$ w/r = -\frac{1}{s} v $

$ w = -\frac{r}{s} v. $

Additionally, since $ |(s,v)\cdot(0,u)\cdot(r,w)| = |(s,v)| |(0,u)| |(r,w)| $ , and rotations are length-preserving, we must have $ |(s,v)| = 1/|(r,w)| $ -- that is, $ s^2+|v|^2 = \frac{1}{r^2 + |w|^2} $ . Plugging in the value for $ w $ , we obtain $ s^2+|v|^2 = \frac{1}{r^2 + |v|^2 r^2/s^2} $ , or

$ 1 = (s^2 + |v|^2)(r^2 + \frac{r^2}{s^2} |v|^2) = s^2 r^2 + 2 r^2 |v|^2 + \frac{r^2}{s^2} |v|^4 $

$ \frac{s^2}{r^2} = s^4 + 2 s^2 |v|^2 + |v|^4 $

$ \frac{s^2}{r^2} = (s^2 + |v|^2)^2 $

$ \frac{r}{s} = \frac{1}{s^2+|v|^2} = \frac{1}{|(s,v)|^2}, $

and plugging this back into the definition of $ (r,w) $ yields $ (r,w) = \frac{(s,-v)}{|(s,v)|^2} = \frac{\overline{(s,v)}}{|(s,v)|^2} = (s,v)^{-1} $ .

Having run out of letters that score highly in Scrabble, we rewrite the results as follows: given a unit quaternion $ q $ and a vector $ v $ , the transformation $ v' = q v q^{-1} = q v \overline{q} $ is a rotation (note that results thus far have not ruled out the possibility of this transformation being a *reflection*, not a rotation; we forego discussion of this in favor of instead establishing a one-to-one correspondence between rotations and opposite pairs of unit quaternions later in this chapter). It remains, however, to determine how to construct a specific quaternion for a specific rotation. As mentioned in the preface, an elegant solution can be obtained with quaternion exponentiation, but this would require defining it, and motivating that definition. Deferring this to a later section, we present a geometric approach instead.

Suppose we wanted to rotate an arbitrary 3d vector $ u $ by $ t $ radians about a unit-length axis $ r $ . If we run through values of $ t $ between $ 0 $ and $ 2\pi $ , the tip of $ u $ will trace out a circle perpendicular to $ r $ . The center of the circle, call it $ c $ , equals the projection of $ u $ onto $ r $ -- that is, $ r(u\cdot r) $ . The circle is the base of a cone with its apex at the origin; the radius of the circle is the distance between $ u $ and the center of the circle, that is, $ |u-c| $ . If $ t=0 $ , the transformed vector is simply $ u $ . The vector $ x $ from the center of the circle to $ u $ is $ x = u - c = u - r(u\cdot r) = (r\times u)\times r $ , by a reverse application of the BAC-CAB identity. In order to construct the circle, we need a second vector $ y $ in the circle's plane, preferably perpendicular to $ x $ . Expressing this, we obtain $ y \cdot ((r \times u) \times r) = 0 $ , implying that $ y $ must be collinear with either $ r $ or $ r \times u $ . The former is clearly impossible -- $ r $ is perpendicular to the circle, whereas we're seeking a vector in the plane of the circle. Therefore, $ y $ must be collinear with $ r \times u $ -- but $ |x| = |(r \times u)\times r| = |(r \times u)| |r| \sin\frac{\pi}{2} = |(r \times u)| = |y| $ , and $ |x| $ is already equal to the radius of the circle, so $ y = r\times u $ . It follows that if we denote by $ u' $ the rotated version of $ u $ , then $ u' = c + x \cos t + y \sin t = r(u\cdot r) + (\cos t)(r \times u)\times r + (\sin t)(r \times u) $ . A brief check reveals that if $ u $ is collinear with the axis of rotation, then all terms vanish except for the first, and $ u'=u $ . Letting $ t=0 $ gives $ u'=u $ as well, since the sine term vanishes, and the cosine term reduces to $ u-r(u\cdot r) $ . Finally, letting $ t=\pi $ yields the well-known formula for reflecting an outgoing vector about a unit length normal, $ u' = 2r(u\cdot r)-u $ .

Expanding $ q u \overline{q} $ , where $ q = (s,v) $ is a unit quaternion, gives

$ (s,v)(0,u)(s,-v) = (-v\cdot u, s u + v \times u)(s,-v) $

$ =(-s(v\cdot u) + s(u\cdot v) + (v \times u)\cdot v, v(u\cdot v) + s (s u + v \times u) - (s u + v \times u) \times v) $

$ =(0, v(u\cdot v) + s^2 u + 2 s v \times u - (v \times u) \times v) $

$ =(0, (u |v|^2 - (v \times u) \times v) + s^2 u + 2 s v \times u - (v \times u) \times v) $

$ =(0, u |v|^2 + (1 - |v|^2) u + 2 s v \times u - 2 (v \times u) \times v) $

$ =(0, u + 2 s v \times u - 2 (v \times u) \times v). $
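The final line is notable: it rotates a vector with just two cross products and a handful of multiply-adds. A sketch, assuming a unit quaternion $ q = (s,v) $ built with the (here underived) half-angle form:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rotate_fast(q, u):
    # u' = u + 2 s (v x u) - 2 (v x u) x v, valid for unit q = (s, v)
    s, v = q
    c = cross(v, u)
    d = cross(c, v)
    return tuple(u[k] + 2.0*s*c[k] - 2.0*d[k] for k in range(3))

# Rotate the y axis by 60 degrees about the x axis (half-angle 30 degrees):
h = math.pi / 6
q = (math.cos(h), (math.sin(h), 0.0, 0.0))
print(rotate_fast(q, (0.0, 1.0, 0.0)))  # ~ (0, 0.5, 0.866)
```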