The measurement of a body's position in 3-D space.
Body Tracking is a software and hardware process that simultaneously monitors each part of the body, from limbs to fingertips. The body's position and movement can then be interpreted to draw conclusions about what a person is doing.
Body Tracking Wireframe
Body tracking refers both to tracking the "skeleton" wireframe (the green stick figure) and its orientation within the designated environment. For example, by comparing positional data, it's easy to determine where a participant's body parts are in relation to each other (hands on hips, in pockets, folded, etc.) and where the body is within the environment. For instance, are they leaning against a wall on the left side of the interactive space, or standing in the middle of it, perhaps on one foot?
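To make this concrete, here is a minimal sketch of how positional comparisons like these might look in code. The coordinate system, room width, and joint names ("pelvis", feet) are illustrative assumptions, not part of any particular tracking SDK: positions are (x, y, z) tuples in meters, with x measured from the left edge of the tracked area and y as height.

```python
# Hypothetical sketch: classifying where a tracked body is in the space.
# All joint names, units, and thresholds are assumptions for illustration.

ROOM_WIDTH = 4.0  # assumed width of the tracked area, in meters

def zone_in_room(pelvis):
    """Classify the body's horizontal position as left / middle / right."""
    x = pelvis[0]
    if x < ROOM_WIDTH / 3:
        return "left"
    if x < 2 * ROOM_WIDTH / 3:
        return "middle"
    return "right"

def on_one_foot(left_foot, right_foot, threshold=0.15):
    """Treat the body as standing on one foot if one foot is noticeably
    higher than the other (difference in y greater than the threshold)."""
    return abs(left_foot[1] - right_foot[1]) > threshold

# A body near the left wall, with one foot raised:
print(zone_in_room((0.5, 0.0, 2.0)))
print(on_one_foot((0.0, 0.0, 2.0), (0.2, 0.3, 2.0)))
```

The same pattern of comparing a joint's coordinates against the bounds of the space, or against another joint, underlies most of the positional questions described above.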
The "skeleton" (green stick figure) refers to the computer-generated wireframe version of the participant's body. The computer keeps what is essentially a copy of the participant's body positions and builds the skeleton piece by piece. As the participant moves around the space, the skeleton is updated in real time to remain as close to a one-to-one copy of their body as possible (depending on factors such as lighting, distance from the hardware, and other variables).
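A skeleton like this can be modeled as little more than a set of named joints with 3-D positions, refreshed every frame. The sketch below assumes a hypothetical sensor that delivers each frame as a dictionary of joint positions; the joint names and the update interface are illustrative, not any specific SDK's API.

```python
# Minimal sketch of a skeleton wireframe: named joints mapped to (x, y, z)
# positions, copied in from each incoming sensor frame. Joint names and the
# frame format are assumptions for illustration.

class Skeleton:
    JOINTS = ["head", "left_hand", "right_hand", "pelvis",
              "left_foot", "right_foot"]

    def __init__(self):
        # Every joint starts at the origin until the first frame arrives.
        self.joints = {name: (0.0, 0.0, 0.0) for name in self.JOINTS}

    def update(self, frame):
        """Copy the latest tracked positions into the wireframe."""
        for name, position in frame.items():
            if name in self.joints:
                self.joints[name] = position

skel = Skeleton()
skel.update({"head": (1.0, 1.7, 2.0), "pelvis": (1.0, 1.0, 2.0)})
print(skel.joints["head"])
```

Calling `update` once per sensor frame is what keeps the wireframe a real-time copy of the participant's body.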
Body Tracking and Other Complex Movements
Once the skeleton is tracked, more complicated and abstract movements can be monitored. For instance, by simply comparing the positions of the elbows, one can tell whether the arms are crossed. Similarly, if the right thumb is facing to the right (from the camera's perspective), then the palm of that hand is facing forward; if the thumb is facing to the left, then the back of the hand is facing forward. These simple visual tests can combine to recognize complex physical actions, which can then be used as software controls: for example, the swing of a tennis racket or the firing of a "hand" gun.
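The two visual tests above reduce to single coordinate comparisons in camera space. One subtlety, reflected in the sketch below, is that the camera sees the participant mirrored: a person facing the camera has their left elbow on the camera's right. The joint names and coordinate convention (x increasing to the camera's right) are assumptions for illustration.

```python
# Sketch of the simple positional tests described above, in camera space.
# Convention (assumed): x increases to the camera's right, and the
# participant is facing the camera, so the view is mirrored.

def arms_crossed(left_elbow, right_elbow):
    """Facing the camera, the left elbow normally appears to the camera's
    right. If it is instead to the camera's LEFT of the right elbow, the
    arms have crossed over."""
    return left_elbow[0] < right_elbow[0]

def right_palm_forward(right_thumb, right_wrist):
    """As in the text: if the right thumb sits to the camera's right of the
    wrist, the palm of that hand is facing the camera; otherwise the back
    of the hand is."""
    return right_thumb[0] > right_wrist[0]

print(arms_crossed((-0.1, 1.2, 2.0), (0.1, 1.2, 2.0)))
print(right_palm_forward((0.45, 1.3, 2.0), (0.4, 1.3, 2.0)))
```

Each test is cheap enough to run on every frame, which is what makes it practical to layer them into recognizers for full gestures like a racket swing.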
The same type of analysis can be used to measure a participant's facial expressions. Read our post on Facial Engagement Tracking here.