What is foveated rendering?
In this post, we explain foveated rendering in its two forms, dynamic and fixed, discuss what you can do with the technology, and show why it's crucial for XR.
Over the past two decades, Tobii has become known as the world’s leading eye tracking company, offering eye tracking and user sensing technologies for a range of devices and use cases. Today, Tobii’s eye tracking technology can be found in laptop computers, VR headsets, AR glasses, innovative healthcare solutions, assistive devices, and many other applications and products. Recently, Tobii has been working on a set of technologies and capabilities related to foveation — what we call Tobii Spotlight Technology. This post explores the current benefits made possible by foveation, with an emphasis on recent benchmarks conducted by Tobii.
Let’s start with a little background. Foveation is a computational process for image rendering that mimics the way human vision works. Dynamic foveated rendering (DFR) employs eye tracking to focus processing and bandwidth usage on areas of the image where high resolution is required — the foveal region where the subject is looking — and renders lower resolution toward the periphery. Optimizing the image in this way simulates human vision while reducing demands for bandwidth and processing, cutting latency and improving response times.
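As a concrete illustration of the idea, a renderer might map angular distance from the gaze point to a shading density roughly like the minimal sketch below. The thresholds and density values are our own illustrative choices, not values from any shipping pipeline:

```cpp
// Minimal sketch: shading density as a function of eccentricity, the angular
// distance (in degrees) from the tracked gaze point. Thresholds and density
// values are illustrative only.
double shadingDensity(double eccentricityDeg) {
    if (eccentricityDeg < 5.0)  return 1.0;        // foveal region: full resolution
    if (eccentricityDeg < 15.0) return 1.0 / 4.0;  // transition band: reduced detail
    return 1.0 / 16.0;                             // periphery: coarse shading
}
```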
Tobii Spotlight Technology is an advanced eye tracking solution, specialized for foveation. It delivers accurate, low-latency tracking of the user’s gaze in real time, enabling superior dynamic foveation that simulates realistic human vision. By easing the strain on the GPU and improving overall rendering speed, Tobii Spotlight Technology improves the overall experience of a VR application.
Today, many developers associate foveated rendering with NVIDIA variable rate shading (VRS), a rendering technique that allows finer control over shading density and true super-sampling. With VRS, developers can choose to improve visual quality, reduce GPU cost, or balance a little of both.
VRS applies varying amounts of processing power to different areas of the image. The technology works by altering the number of pixels that are processed by a single pixel-shader operation. These operations can now be applied to blocks of pixels, allowing applications to effectively vary the shading quality in different areas of the screen.
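To make the block-based idea concrete, here is a small CPU-side model of coarse shading. This is a conceptual sketch only — real VRS runs on the GPU through the graphics API — but it shows the core trick: one shader evaluation is reused for an entire block of pixels.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// A pixel shader modeled as a callable returning one shaded value per pixel.
using Shader = std::function<float(int, int)>;

// Shade a width x height image with one shader invocation per
// blockSize x blockSize group of pixels instead of one per pixel.
void shadeCoarse(std::vector<float>& image, int width, int height,
                 int blockSize, const Shader& shade) {
    for (int y = 0; y < height; y += blockSize) {
        for (int x = 0; x < width; x += blockSize) {
            float value = shade(x, y); // one invocation per block, not per pixel
            for (int by = y; by < std::min(y + blockSize, height); ++by)
                for (int bx = x; bx < std::min(x + blockSize, width); ++bx)
                    image[by * width + bx] = value; // replicate across the block
        }
    }
}
```

With blockSize = 4, this performs 1/16 of the shader invocations of full-rate shading over the same area, which is where the GPU savings come from.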
For maximum effect, VRS can also be paired with eye tracking to match optimal rendering quality to the user’s gaze. NVIDIA VRS custom patterns allow developers to optimize the shading density based on the foveal region. The smaller the foveal region, the larger the GPU savings. The foveal region size is determined primarily by the quality of the eye tracking signal, as discussed below.
DFR with VRS gives the most optimized custom pattern when combined with a low-latency eye tracking signal. This maximizes the benefit of enabling VRS in the application: the shading rate can be reduced considerably, which improves overall performance and can enable better image quality with super-sampling in the foveal region.
Not all eye tracking signals are created equal. Latency, frequency, accuracy, and noise are all obvious contributors to foveal region size. Perhaps less obvious is the impact of signal reliability, population coverage, angular accuracy falloff, and eye tracking signal artifacts.
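To see how these signal characteristics translate into foveal region size, consider a back-of-envelope sizing rule: the full-resolution region must cover everywhere the eye could plausibly be by the time the frame reaches the display. The function below is a hedged sketch of that idea; the additive model and parameter names are our own illustrative assumptions, not Tobii's actual sizing logic.

```cpp
// Back-of-envelope sizing sketch (illustrative, not Tobii's actual logic):
// the full-resolution region must cover the anatomical fovea plus every
// source of gaze uncertainty at the moment the frame is displayed.
double foveaRadiusDeg(double baseFoveaDeg,      // anatomical fovea, ~1-2 deg
                      double accuracyDeg,       // tracker accuracy error
                      double noiseDeg,          // tracker signal noise
                      double latencySec,        // end-to-end tracking latency
                      double eyeVelDegPerSec) { // eye velocity to budget for
    // Worse accuracy, more noise, or higher latency all force a larger
    // region, which directly shrinks the GPU savings from foveation.
    return baseFoveaDeg + accuracyDeg + noiseDeg + latencySec * eyeVelDegPerSec;
}
```

With illustrative values of 0.5° accuracy, 0.3° noise, 20 ms latency, and 100°/s of eye movement to budget for, the region grows by roughly 2.8° beyond the fovea itself.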
Additionally, the ability to track gaze varies across the population. Some people are easy to track, while others cannot be tracked at all. A user who is normally easy to track can become less so due to tiredness, dehydration, or illness. Effective foveation should account for this variability.
Tobii has invested significantly in foveation-specific signal research, including the development of specialized foveated rendering tracking signals that reduce or eliminate some of the damaging artifacts that can be present in non-specialized signals.
Recently, Tobii ran a variety of benchmarks comparing the performance benefits of both fixed and dynamic foveated rendering. Fixed foveated rendering (FFR) is a technique that assumes a forward viewing direction and limits the rendering cost of display areas that will not be clearly visible in the headset, mainly in the lens distortion region. DFR, on the other hand, moves the foveal region wherever the user is looking, which allows the foveal region to be made even smaller.
The results (when compared to no foveation) consistently show significant reductions in GPU shading load, with DFR delivering larger savings than FFR.
While running tests on the Vive Pro Eye, we optimized the foveation parameters for variable rate shading to achieve a shading rate of 16% for DFR. For FFR, we configured the shading density to 40%, which works best for the Vive Pro Eye headset parameters and is not noticeable to the user in the periphery. The screen is divided into regions: foveal (where the user is looking), mid (a transition from foveal to periphery), and periphery (optimized for maximum gain). In the figure below, the colored overlay shows the regions for both FFR and DFR with different size and shape parameters. The color coding is a gradient based on the log of density, where blue = 1 sample, purple = 1/4 samples, purple-red = 1/8 samples, red = 1/16 samples, and black = culled.
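A hedged sketch of how such a per-tile density map might be assembled is shown below. The tile-grid abstraction and region radii are our own illustrative choices, while the density steps (1, 1/4, 1/8, 1/16, culled) follow the figure's color coding. Passing the tracked gaze point yields a DFR pattern; passing the fixed screen center yields an FFR pattern.

```cpp
#include <cmath>
#include <vector>

// Density steps matching the figure's color coding:
// blue, purple, purple-red, red, black.
enum class Rate { Full, Quarter, Eighth, Sixteenth, Culled };

// Builds a per-tile shading-rate map. gazeX/gazeY are normalized [0,1]
// screen coordinates: the tracked gaze point for DFR, or the fixed screen
// center (0.5, 0.5) for FFR. The region radii are illustrative only.
std::vector<Rate> buildRateMap(int tilesX, int tilesY, float gazeX, float gazeY) {
    std::vector<Rate> map(tilesX * tilesY);
    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            float dx = (tx + 0.5f) / tilesX - gazeX;
            float dy = (ty + 0.5f) / tilesY - gazeY;
            float d = std::sqrt(dx * dx + dy * dy);
            map[ty * tilesX + tx] =
                  d < 0.10f ? Rate::Full       // foveal: full density
                : d < 0.20f ? Rate::Quarter    // mid: transition band
                : d < 0.35f ? Rate::Eighth
                : d < 0.55f ? Rate::Sixteenth  // periphery: maximum gain
                :             Rate::Culled;    // never clearly visible through the lens
        }
    }
    return map;
}
```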
Showdown VR is a cinematic experience with variable complexity across the scene. This allows us to sample the GPU shading load at its highest and lowest points and compare them across full, fixed, and dynamic rendering modes. In the figure below, we observe that for some parts of the scene the shading load increases sharply under full rendering (see the second explosion). The DFR shading load is relatively consistent, with fewer spikes even in the heaviest part of the scene, making for a much smoother experience. For the test below, we made a slight modification: the scene has been super-sampled 3x to increase the resolution for better image quality. Here, we observe a further reduction in GPU shading load with DFR of around 74.59%, even though the overall shading load for the scene has increased.
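For clarity, a headline number like the 74.59% above is simply the relative drop in sampled shading load versus full rendering, as this small helper (with our own variable names) illustrates:

```cpp
// Relative drop in sampled GPU shading load vs. full rendering,
// e.g. percentReduction(100.0, 25.41) returns ~74.59.
double percentReduction(double fullLoad, double dfrLoad) {
    return 100.0 * (fullLoad - dfrLoad) / fullLoad;
}
```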
Next-gen headsets are aiming for higher resolutions and larger fields of view, which demand more and more pixels to be rendered on screen. In the graph below, for a normal VR application, we observe that the number of rendered pixels grows rapidly (quadratically with linear resolution). With DFR, by contrast, there is a significant drop for higher-resolution headsets going toward 8K and beyond. This directly affects the GPU shading load, so the savings with DFR increase with headset resolution. The same applies to applications that super-sample on current-generation headsets.
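To put rough numbers on this scaling, the snippet below compares full-rate pixel counts against an effective 16% shading fraction (the DFR rate from our Vive Pro Eye configuration above). The per-eye resolutions are hypothetical headset classes chosen for illustration:

```cpp
#include <cstdio>

int main() {
    // Hypothetical per-eye resolutions for successive headset generations.
    struct Gen { const char* name; long w, h; };
    const Gen gens[] = {{"2K-class", 1440, 1600},
                        {"4K-class", 2160, 2160},
                        {"8K-class", 3840, 3840}};
    // 16% effective shading rate, from the DFR configuration described above.
    const double dfrShadedFraction = 0.16;
    for (const Gen& g : gens) {
        long fullPixels = g.w * g.h;
        double dfrPixels = fullPixels * dfrShadedFraction;
        std::printf("%s: %ld px full vs ~%.0f px shaded with DFR (%.0f px saved)\n",
                    g.name, fullPixels, dfrPixels, fullPixels - dfrPixels);
    }
    return 0;
}
```

The shaded fraction stays constant, so the absolute number of pixels saved grows with every resolution step, which is why DFR's benefit compounds on next-gen displays.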
Developers can also choose to use these performance-enhancing benefits to greatly increase the visual effects in a scene while retaining smooth frame rates. This lets developers and designers push the boundaries of visuals and complex shaders. In the figure below, we tested Showdown VR with modifications that improve shader and lighting effects at no additional cost in load.
For a VR application, consistently high performance is essential. DFR enables applications to maintain this performance while additionally supporting higher resolutions and better visual effects. When Tobii Spotlight Technology is combined with technologies such as NVIDIA’s variable rate shading, we get the maximum benefit from foveation through reduced GPU shading load. Going beyond rendering, Tobii Spotlight Technology also supports other dynamic foveation applications, such as foveated transport and streaming.
Learn more about Tobii Spotlight Technology and other dynamic foveation applications.
You can also find additional information in the SIGGRAPH panel discussion.
Download our eBook Eye tracking and dynamic foveated rendering — technologies that solve the VR challenge.
VR users demand amazing content, low-priced devices, and fantastic experiences. This e-book describes how dynamic foveated rendering helps to meet these demands and deliver the best possible experience with available system resources.