
Algorithms for 2D Multi-Touch Rotate,
Scale & Translate (RST) Gestures

Date: July 8, 2014

Homogeneous Coordinate System

Before getting started, a mathematical coordinate system needs to be chosen. Instead of the usual Cartesian coordinate system of Euclidean geometry, the computer graphics field employs the homogeneous coordinates of projective geometry. Doing so allows all of the common transformations to be implemented as matrix multiplications, which can then be composed into a single matrix.

\[Cartesian \ \ Coordinates \qquad\qquad Homogeneous \ \ Coordinates\]
\[\begin{bmatrix}x \\ y \end{bmatrix} \qquad \longrightarrow \qquad \begin{bmatrix}x \\ y \\ w\end{bmatrix}\]

Transformation Matrices

In a multi-touch gesture, there are three main types of transformations involved: rotation, scaling and translation. To represent those transformations mathematically, we use matrices.

\[Translation \ Matrix = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ T_x & T_y & 1 \end{bmatrix} \]
\[T_x = Translation \ Along \ x \ Axis \\ T_y = Translation \ Along \ y \ Axis\]
\[Scaling \ Matrix = \begin{bmatrix} S_x & 0 & 0 \\ 0 & S_y & 0 \\ 0 & 0 & 1 \end{bmatrix} \]
\[S_x = Scaling \ Along \ x \ Axis \\ S_y = Scaling \ Along \ y \ Axis\]
\[Rotation \ Matrix = \begin{bmatrix}\cos(\theta)& \sin(\theta)& 0 \\ -\sin(\theta) & \cos(\theta) & 0\\ 0 & 0 & 1\end{bmatrix}\]
\[\theta = Clockwise \ Angle \ of \ Rotation\]
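To make these definitions concrete, here is a minimal sketch of the three matrices in Python with NumPy (the function names are my own). Because the translation components sit in the bottom row, these matrices are written for row vectors: a point \([x, \ y, \ w]\) multiplies the matrix from the left.

```python
import numpy as np

def translation(tx, ty):
    # Translation components in the bottom row (row-vector convention)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [tx,  ty,  1.0]])

def scaling(sx, sy):
    return np.array([[sx,  0.0, 0.0],
                     [0.0, sy,  0.0],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    # Matches the matrix above; clockwise when the y-axis points down,
    # as in typical screen coordinates
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c,   s,   0.0],
                     [-s,   c,   0.0],
                     [0.0,  0.0, 1.0]])

# A homogeneous point (w = 1) is transformed by left-multiplication:
p = np.array([1.0, 0.0, 1.0])
print(p @ translation(2.0, 3.0))  # [3. 3. 1.]
```

Composing transformations is then just matrix multiplication; for example, `translation(-x, -y) @ rotation(theta) @ translation(x, y)` rotates around the point \((x, y)\) rather than the origin.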

Multi-Touch Gestures

There are many different strategies for implementing transformation-based gestures. A classic example is the work by Kruger et al., who designed and implemented a gesture called RNT [3]. It combines rotation and translation and requires only a single source of input (e.g., a finger or stylus) to trigger the transformations. Its variant, TNT, was found to be faster and preferred by participants in a user study [4]. Wobbrock et al. approached gesture design through an end-user elicitation methodology in which 1080 gestures were elicited from 20 non-technical users for 27 different commands [7]. A follow-up study revealed that this design approach is better than having a small number of system designers select the interaction technique, since participants preferred gestures authored by larger groups of end-users [5]. Hancock et al. surveyed five different rotation and translation techniques based on a number of factors, including their degrees of freedom (DOF), precision, and completeness [2]. These examples deal with transformations in two dimensions. For full 3D interaction, Hancock et al. designed the first force-based interaction paradigm that supports full 6DOF manipulation on interactive surfaces [1]. Reisman et al. also enabled interaction with 3D content on a 2D surface by extending the principles of RST (the simultaneous rotate, scale, and translate gesture described in this blog post) into three dimensions [6].


  1. Hancock, M., Cate, T.T., and Carpendale, S. Sticky tools: Full 6DOF force-based interaction for multi-touch tables. In Proceedings of ITS 2009.
  2. Hancock, M., Carpendale, S., Vernier, F.D., Wigdor, D., and Shen, C. Rotation and translation mechanisms for tabletop interaction. In Proceedings of TABLETOP 2006.
  3. Kruger, R., Carpendale, S., Scott, S.D., and Tang, A. Fluid integration of rotation and translation. In Proceedings of CHI 2005.
  4. Liu, J., Pinelle, D., Sallam, S., Subramanian, S., and Gutwin, C. TNT: Improved rotation and translation on digital tables. In Proceedings of GI 2006.
  5. Morris, M.R., Wobbrock, J.O., and Wilson, A.D. Understanding users' preferences for surface gestures. In Proceedings of GI 2010.
  6. Reisman, J.L., Davidson, P.L., and Han, J.Y. A screen-space formulation for 2D and 3D direct manipulation. In Proceedings of UIST 2009.
  7. Wobbrock, J.O., Morris, M.R., and Wilson, A.D. User-defined gestures for surface computing. In Proceedings of CHI 2009.

Designing a 4DOF Multi-Touch Gesture

In our example, we will design a 4DOF gesture that rotates, scales, and translates an object using more than one source of input (e.g., fingers). These transformations only take place on a static z-plane, so they are two-dimensional. Let's say that a rotating pinch gesture has been performed on an object using multiple fingers (\(n \geq 2\)). At \(time = 1\), we will assume that you have an array encapsulating a minimum of two touches:

\[\left[ touch_{1, \ t = 1}(x_1, y_1, w_1), \ touch_{2, \ t = 1}(x_2, y_2, w_2) \right]\]

The same goes for \(time = 2\),

\[\left[ touch_{1, \ t = 2}(x_1, y_1, w_1), \ touch_{2, \ t = 2}(x_2, y_2, w_2) \right]\]

Our gesture algorithm uses the following math:

\[M' = M \, T \, R \, S \, T'\]
where \(M\) is the object's original transformation matrix, \(T\) is a translation matrix from the object's current position to the World's origin, \(R\) is a rotation matrix, \(S\) is a scale matrix, \(T'\) is a translation matrix from the World's origin to the object's new position, and \(M'\) is the object's new transformation matrix. Since the matrices above are written for row vectors (the translation components occupy the bottom row), transformations compose from left to right: \(T\) is applied first and \(T'\) last.

You might be wondering why there are two translation matrices. When rotating an object, we must first select the centre or point of rotation. The same holds true when performing a scaling transformation. The rotation matrix in its current state only rotates around the World's origin. If we want an object to rotate around itself, we must first translate the object to the World's origin, perform the rotation, then translate the object back to its original or new position. Therefore, in our algorithm, \(T\) is constructed by using the negative of \(touch_{1, \ t = 1}\)'s values.
\[T_x = -x_{1, \ t = 1}\] \[T_y = -y_{1, \ t = 1}\]
and \(T'\) is constructed by using \(touch_{1, \ t = 2}\)'s values. By only using the first touch's positional vectors, our algorithm does not require more than one touch for translation transformations.
\[T'_x = x_{1, \ t = 2}\] \[T'_y = y_{1, \ t = 2}\]
In the calculation of both \(R\) and \(S\), two vectors are required: the line between the first and second touch points at \(time = 1\) and at \(time = 2\). These are calculated by subtracting \(touch_{1, \ t = 1}\) from \(touch_{2, \ t = 1}\) and \(touch_{1, \ t = 2}\) from \(touch_{2, \ t = 2}\). Let's call the resulting vectors \(L_{t = 1}\) and \(L_{t = 2}\).
\[L = touch_2 - touch_1 = \begin{bmatrix}x_2 \\ y_2 \\ w_2\end{bmatrix} - \begin{bmatrix}x_1 \\ y_1 \\ w_1\end{bmatrix}\]
We can determine the rotation angle needed for \(R\) by calculating the angle between these two vectors. This is accomplished by first normalizing \(L_{t = 1}\) and \(L_{t = 2}\), then calculating the dot product and taking the inverse cosine of the result.
\[\theta = \cos^{-1}(\hat L_{t = 1} \cdot \hat L_{t = 2})\]
Note that the inverse cosine only yields the magnitude of the angle; the direction of rotation can be recovered from the sign of the 2D cross product of \(L_{t = 1}\) and \(L_{t = 2}\).
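In practice, a signed angle is usually what you want, since the inverse cosine alone cannot tell a clockwise rotation from a counter-clockwise one. A minimal sketch (assuming the touch vectors are given as \((x, y)\) pairs), using `atan2` of the 2D cross and dot products, which also avoids the explicit normalization:

```python
import math

def rotation_angle(l1, l2):
    # Signed angle from l1 to l2; the sign follows the 2D cross product
    # (the z-component of the corresponding 3D cross product)
    dot = l1[0] * l2[0] + l1[1] * l2[1]
    cross = l1[0] * l2[1] - l1[1] * l2[0]
    return math.atan2(cross, dot)

print(rotation_angle((1.0, 0.0), (0.0, 1.0)))  # 1.5707963... (pi / 2)
```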
The uniform scale factor for \(S\) is calculated by dividing the magnitude of \(L_{t = 2}\) by the magnitude of \(L_{t = 1}\). For example, by pinching one's fingers, the magnitude (length) of the vector \(L_{t = 2}\) will be smaller than \(L_{t = 1}\), resulting in a scale factor smaller than 1. This would cause the object to shrink, which is the standard action associated with the pinch gesture.
\[ S_x = S_y = \frac{ || L_{t = 2} || }{ || L_{t = 1}|| } \]
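As a sketch, with a guard (my own addition) against a degenerate zero-length first vector, which can occur when both touches report the same position:

```python
import math

def scale_factor(l1, l2):
    # || L_{t=2} || / || L_{t=1} ||
    m1 = math.hypot(l1[0], l1[1])
    m2 = math.hypot(l2[0], l2[1])
    return m2 / m1 if m1 > 0.0 else 1.0

print(scale_factor((3.0, 4.0), (1.5, 2.0)))  # 0.5 -- a pinch shrinks the object
```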
Finally, the object's new transformation matrix \(M'\) is calculated by multiplying all these matrices together.
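Putting all of the above together, here is one possible end-to-end sketch of a single frame of the gesture in Python with NumPy. The function and parameter names are my own, and a signed `atan2` angle stands in for the inverse-cosine step; with the row-vector matrices used in this post, the transformations compose from left to right.

```python
import math
import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [tx, ty, 1.0]])

def scaling(s):
    return np.array([[s, 0.0, 0.0], [0.0, s, 0.0], [0.0, 0.0, 1.0]])

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def rst_update(M, t1_old, t2_old, t1_new, t2_new):
    """One frame of the RST gesture.

    M is the object's current 3x3 transformation matrix; each touch is an
    (x, y) pair (touches 1 and 2 at time = 1 and time = 2).
    Returns the object's new transformation matrix M'.
    """
    # L_{t=1} and L_{t=2}: vectors from the first touch to the second
    l1 = (t2_old[0] - t1_old[0], t2_old[1] - t1_old[1])
    l2 = (t2_new[0] - t1_new[0], t2_new[1] - t1_new[1])
    # Signed rotation angle and uniform scale factor
    theta = math.atan2(l1[0] * l2[1] - l1[1] * l2[0],
                       l1[0] * l2[0] + l1[1] * l2[1])
    s = math.hypot(*l2) / math.hypot(*l1)
    T = translation(-t1_old[0], -t1_old[1])   # first touch -> World origin
    Tp = translation(t1_new[0], t1_new[1])    # World origin -> new position
    # Row-vector convention: transformations apply left to right
    return M @ T @ rotation(theta) @ scaling(s) @ Tp

# Both fingers slide +10 along x: a pure translation of the object
M2 = rst_update(np.eye(3), (0, 0), (1, 0), (10, 0), (11, 0))
print(M2[2, :2])  # [10.  0.]
```

A production implementation would also need to handle more than two touches (for example, by tracking the two oldest ones) and reject frames where the two touches coincide.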