Human activity understanding from three-dimensional data, such as that from depth cameras, requires viewpoint-invariant matching. In this paper, we propose a new method of constructing invariants that distinguishes isometries based on rotation, which preserve handedness, from those involving reflection, which swap left and right. The state of the art in viewpoint invariants uses either global descriptors, such as moments or spherical harmonic magnitudes, or local methods, such as feature matching. None of these methods easily distinguishes rotations from reflections, a distinction essential for understanding left- versus right-handed gestures. We show that the distinction between rotation and reflection is contained in the imaginary part of certain weighted inner products of moment vectors, and we demonstrate how such reflection-sensing viewpoint invariants can be applied to depth-map data for activity understanding.
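As a minimal illustration of the underlying idea (a two-dimensional sketch, not the paper's construction for 3-D moment vectors), the imaginary part of a rotation-invariant product of complex moments is unchanged by rotation but flips sign under reflection. The function names and the particular moment product below are illustrative choices, not taken from the paper:

```python
import numpy as np

def complex_moment(z, p, q):
    """Complex moment c_pq = sum of z^p * conj(z)^q over a point set."""
    return np.sum(z**p * np.conj(z)**q)

def chirality_invariant(points):
    """Im(c_20 * c_12^2) of a centered 2-D point set.

    Under rotation by theta, c_pq picks up e^{i(p-q)theta}, so the
    product c_20 * c_12^2 is rotation-invariant; under reflection
    (z -> conj(z)) the product is conjugated, so its imaginary part
    flips sign. This is the 2-D analogue of a reflection-sensing
    invariant.
    """
    pts = points - points.mean(axis=0)      # remove translation
    z = pts[:, 0] + 1j * pts[:, 1]
    c20 = complex_moment(z, 2, 0)
    c12 = complex_moment(z, 1, 2)
    return np.imag(c20 * c12**2)

rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 2))          # a generic (chiral) point set

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
F = np.array([[1.0, 0.0],
              [0.0, -1.0]])                 # reflection across the x-axis

i0 = chirality_invariant(pts)
i_rot = chirality_invariant(pts @ R.T)      # rotated copy: same value
i_ref = chirality_invariant(pts @ F.T)      # reflected copy: sign flips
```

For the rotated copy `i_rot` agrees with `i0`, while for the reflected copy `i_ref` equals `-i0`, so the sign of this quantity separates the two kinds of isometry.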