In-game camera control¶
The Beam Eye Tracker SDK allows you to make the in-game camera react to real-time head and eye tracking movements. This has two significant use cases:
Immersion in Simulation Games: by having the in-game camera follow the user’s eye and head movements, the user can experience something close to virtual reality, but without any headgear.
Larger Display Effect: In most games, such as a 3rd person RPG, you can also pan/rotate the camera according to the on-screen eye gaze to emulate what is essentially a larger display for the user.
The integration process, explained in this section, is the same for both use cases.
Note
This feature requires the viewport geometry to be correctly set and updated during gameplay. See Viewport for more information.
Integration process¶
The integration process consists of three main work packages:
Control the in-game camera movement.
Implement the camera recentering.
Optionally, add in-game settings.
We will now describe each of these work packages in detail.
Control the in-game camera movement¶
To make the in-game camera move according to the real-time tracking, the process
is as follows: once you are receiving a stream of TrackingStateSet
objects (see Data Access Methods),
you can retrieve the SimGameCameraState from it.
The SimGameCameraState
contains camera pose parameters generated by the eye movements and, separately,
camera pose parameters generated by the head movements. Using the provided
compute_sim_game_camera_transform_parameters
function, the two components are combined into a single set of camera pose parameters, SimCameraTransform3D (yaw, pitch, roll, x, y, z).
Once you have a single set of camera pose parameters
you need to map them to your game engine’s coordinate system (see Mapping to the game engine coordinate system).
Finally, you can simply add them to the reference camera pose to obtain the final camera pose.
The in-game camera pose generation flow.¶
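The final step of the flow above, adding the tracking-driven parameters on top of the game's reference camera pose, can be sketched as follows. The CameraPose struct here is a local stand-in for the SDK's SimCameraTransform3D, and its field names are illustrative assumptions, not the SDK's actual definitions:

```cpp
#include <cassert>
#include <cmath>

// Local stand-in for the SDK's SimCameraTransform3D, as returned by
// compute_sim_game_camera_transform_parameters(). Field names are
// assumptions for the sake of this sketch.
struct CameraPose {
    float yaw = 0.f, pitch = 0.f, roll = 0.f; // radians
    float x = 0.f, y = 0.f, z = 0.f;          // meters
};

// Apply the tracking-driven offset on top of the game's reference camera
// pose. This assumes the parameters have already been mapped to the game
// engine's coordinate system (see Mapping to the game engine coordinate
// system).
CameraPose apply_tracking_offset(const CameraPose& reference,
                                 const CameraPose& tracking) {
    CameraPose out;
    out.yaw   = reference.yaw   + tracking.yaw;
    out.pitch = reference.pitch + tracking.pitch;
    out.roll  = reference.roll  + tracking.roll;
    out.x = reference.x + tracking.x;
    out.y = reference.y + tracking.y;
    out.z = reference.z + tracking.z;
    return out;
}
```

In practice you would call this once per frame, after retrieving the latest SimGameCameraState from the TrackingStateSet stream.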
Implement the camera recentering¶
Recentering is used to “recalibrate” the camera
so that it goes back to the reference position for the current head/eye pose parameters. It is
the main direct interaction the user has with the in-game camera control feature.
The API provides two functions to start
and complete the recentering process, which must be paired and run in sequence:
// Start recentering
api.recenter_sim_game_camera_start();
// ... usually, there is some time in between ...
// Complete recentering
api.recenter_sim_game_camera_end();
Recentering is critical for a pleasant user experience. Furthermore, it is important that the user can recenter easily and as frequently as they need. It is thus recommended to pair these function calls with a hotkey’s press and release events.
void on_recentering_hotkey_press() {
// Start recentering
api.recenter_sim_game_camera_start();
}
void on_recentering_hotkey_release() {
// Complete recentering
api.recenter_sim_game_camera_end();
}
Furthermore, this process may benefit from the user holding the hotkey for a moment, as doing so may also trigger a recalibration of the eye tracking algorithm in the Beam Eye Tracker application.
Note
We strongly recommend that you call these methods instead of implementing your own recentering logic. See Let the Beam Eye Tracker do the heavy lifting for more information.
Add in-game settings¶
Both the Control the in-game camera movement and the Implement the camera recentering processes can be slightly customized, which suggests exposing some settings to the user in your game’s user interface. These are described below.
Note
All of these settings are optional, as you don’t have to expose any in-game settings to achieve an integration.
Device selection¶
If your game already supports other head and/or eye tracking devices, you can let the user choose which device to use for in-game camera control. A priori, there is no practical use case for enabling two such devices at the same time, so you can let the user choose a single input for the in-game camera control.
ON/OFF Toggle¶
You can let the user activate or deactivate the in-game camera control. Alternatively, within the Device selection, you could offer the option of “No Device”/”None”, meaning that no device is currently used for the in-game camera control.
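Both options above can be modeled together. Here is a hypothetical sketch (the type and member names are ours, not the SDK's) where a "None" device entry doubles as the OFF state:

```cpp
#include <cassert>

// Hypothetical model of the two settings above: device selection and the
// ON/OFF toggle. Selecting "None" means no device drives the in-game
// camera, which removes the need for a separate boolean toggle.
enum class CameraControlDevice {
    None,            // in-game camera control disabled
    BeamEyeTracker,
    OtherHeadTracker // any other tracking device the game already supports
};

struct CameraControlSettings {
    CameraControlDevice device = CameraControlDevice::None;

    // The feature is active whenever a concrete device is selected.
    bool enabled() const { return device != CameraControlDevice::None; }
};
```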
Sensitivity/Range Sliders¶
As explained in the Control the in-game camera movement section, the eye and head tracking camera pose
components need to be combined based on weights for each component. The default weight values
are simply 1.0 and 1.0, which deliver the camera pose as “configured” from within the Beam Eye Tracker. See
compute_sim_game_camera_transform_parameters.
However, you can also include two sliders in the game settings, one for the eye tracking and one for the head tracking, to let the user decide the contribution of each component and thus quickly amplify or reduce the effect of each from within your game.
Each slider could be named “Range” or “Sensitivity”.
In the case of a “range” slider, the values can map 1:1 to the weights themselves, so
the user will immediately understand values such as an amplification of 0.5X, 1X or 2X, etc.
Alternatively, you can offer a 0% to 100% “sensitivity” slider that
maps to a range such as [0.0, 2.0]. In either case,
we strongly advise that your default values map to weights of 1.0.
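The 0%-100% sensitivity variant can be sketched as a one-line mapping; the function name is ours, and the [0.0, 2.0] target range is the example range mentioned above:

```cpp
#include <algorithm>
#include <cassert>

// Map a 0-100% "sensitivity" slider to a weight in [0.0, 2.0], so that
// the default slider position of 50% yields the neutral weight of 1.0
// (the weight then feeds compute_sim_game_camera_transform_parameters).
float sensitivity_to_weight(float percent) {
    percent = std::clamp(percent, 0.0f, 100.0f); // guard against bad input
    return (percent / 100.0f) * 2.0f;
}
```

With this mapping, shipping the slider at 50% by default satisfies the recommendation that defaults correspond to a weight of 1.0.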
Note
The assumption behind this design is that it gives the user a quick-access tweak inside the game settings. “Casual users” can quickly set the game settings to their liking through such sliders, while “power users” can keep the game settings at their defaults and access advanced settings inside the Beam Eye Tracker.
Recentering hotkey¶
As described in Implement the camera recentering, it is recommended to trigger the recentering process with a hotkey press and release. Thus, you may want to let the user configure the hotkey from within the game settings.
Mapping to the game engine coordinate system¶
Warning
Please note that game engines vary in their definitions of “roll”, “pitch” and “yaw”, not only regarding which axis is associated with each rotation or whether the rotation is right- or left-handed, but also in the order of rotations. A different rotation order can produce camera orientations other than intended. If this is not taken into account, it can lead to camera controls that don’t feel natural at best, or that cause motion sickness at worst. Please take into account how our coordinate system is defined when using it in your game engine.
The mapping of the camera pose parameters to your game engine’s coordinate system depends on
your specific engine’s coordinate system definition. To let you resolve the mapping,
we explain here the coordinate system as defined by the Beam Eye Tracker API.
In essence, it follows closely the Head Pose coordinate system, but it assumes a virtual camera object, instead of the head:
The z-axis points forward.
The x-axis points to the left.
The y-axis points up.
The rotation angles yaw, pitch, and roll define intrinsic rotations y-x’-z’’ of the virtual camera object, i.e., first yaw (about the y-axis), then pitch (about the now-rotated x-axis), and then roll (about the twice-rotated z-axis).
Individually, those angles can also be described as:
Yaw: Rotation around the y-axis (vertical axis pointing up): \(-\pi/2\) when looking fully to the right, \(0\) when looking forward, and \(+\pi/2\) when looking fully to the left. Right-handed.
Pitch: Rotation around the x-axis (horizontal axis pointing left): \(-\pi/2\) when looking fully up, \(0\) when looking forward, and \(+\pi/2\) when looking fully down. Right-handed.
Roll: Rotation around the z-axis (horizontal axis pointing forward): \(-\pi/2\) when tilting “left ear touching left shoulder”, \(0\) when upright, and \(+\pi/2\) when tilting “right ear touching right shoulder”. Right-handed.
The translation parameters are to be added POST-rotation, and are defined in meters.
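To help verify your mapping into the engine, the convention above can be written out as a rotation matrix. This is a self-contained sketch, not part of the SDK: it builds R = R_y(yaw) · R_x(pitch) · R_z(roll) (intrinsic y-x’-z’’, all right-handed) in the frame where x points left, y up, and z forward:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

// Rotation for the intrinsic y-x'-z'' convention described above:
// R = R_y(yaw) * R_x(pitch) * R_z(roll), right-handed throughout.
// Axes: x = left, y = up, z = forward. Sketch for verification only.
Mat3 beam_rotation(double yaw, double pitch, double roll) {
    const double cy = std::cos(yaw),   sy = std::sin(yaw);
    const double cp = std::cos(pitch), sp = std::sin(pitch);
    const double cr = std::cos(roll),  sr = std::sin(roll);
    const Mat3 Ry{{{cy, 0, sy}, {0, 1, 0}, {-sy, 0, cy}}};
    const Mat3 Rx{{{1, 0, 0}, {0, cp, -sp}, {0, sp, cp}}};
    const Mat3 Rz{{{cr, -sr, 0}, {sr, cr, 0}, {0, 0, 1}}};
    auto mul = [](const Mat3& a, const Mat3& b) {
        Mat3 m{}; // zero-initialized accumulator
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                for (int k = 0; k < 3; ++k) m[i][j] += a[i][k] * b[k][j];
        return m;
    };
    return mul(mul(Ry, Rx), Rz);
}

Vec3 rotate(const Mat3& m, const Vec3& v) {
    return {m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
            m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
            m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2]};
}
```

A quick sanity check of the signs: a yaw of \(+\pi/2\) takes the forward vector (z) to the left (x), a pitch of \(+\pi/2\) takes it to down (-y), and a roll of \(+\pi/2\) takes left (x) to up (y), matching the angle descriptions above.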
Let the Beam Eye Tracker do the heavy lifting¶
Notice that logic such as the mapping from real-time tracking to camera pose, or
the recentering logic, is performed by the Beam Eye Tracker itself
and not by the client code, nor internally by the API instance. Indeed,
the client code is only in charge of receiving mostly-ready camera pose parameters and making
requests to the API instance, such as recentering the camera.
Even though you could implement all the logic in the client code by first retrieving Real-time tracking and then applying your own mapping and behavior, this design decision is intentional and strongly recommended, for these reasons:
It keeps the game implementation and maintenance to a minimum;
Power users already have access to advanced settings in the Beam Eye Tracker app and it can be counterproductive to duplicate such settings in the game;
It is future-proof, as the Beam Eye Tracker can evolve and offer gamers more intuitive ways to control the in-game camera from the head and eye tracking;
Finally, at the beginning of this section, we explained two use cases for the in-game camera control. Keeping this model means that the implementation process for both use cases is the same, and the only difference is the mapping from the real-time tracking to the camera pose, which is done within the Beam Eye Tracker.