A unified super resolution upscaling library that abstracts multiple hardware-accelerated and software-based backends behind two interfaces: ISuperResolutionFactory (enumeration, configuration, and creation) and ISuperResolution (per-frame execution). The module automatically discovers available upscaler implementations at factory creation time based on the render device type.
| Variant | Type | Graphics API | Description |
|---|---|---|---|
| NVIDIA DLSS | Temporal | D3D11, D3D12, Vulkan | Deep-learning based temporal upscaler via NVIDIA NGX SDK |
| Microsoft DirectSR | Temporal | D3D12 | Windows built-in temporal upscaler via DirectSR API |
| AMD FSR | Spatial | All | Shader-based spatial upscaler (Edge Adaptive Upsampling + Contrast Adaptive Sharpening) |
| Apple MetalFX Spatial | Spatial | Metal | Hardware-accelerated spatial upscaler via MetalFX framework |
| Apple MetalFX Temporal | Temporal | Metal | Hardware-accelerated temporal upscaler via MetalFX framework |
Spatial upscaling operates on a single frame. It requires only the low-resolution color texture as input and produces an upscaled image using edge-aware filtering and optional sharpening. No motion vectors, depth buffer, or jitter pattern is needed.
Temporal upscaling accumulates information from multiple frames. In addition to the color texture it requires:
- Per-pixel 2D motion vectors
- A depth buffer
- The sub-pixel jitter offset applied to the projection matrix for the current frame

Optional temporal inputs include:
- An exposure texture, unless SUPER_RESOLUTION_FLAG_AUTO_EXPOSURE is set.

Temporal upscalers rely on sub-pixel jitter applied to the projection matrix each frame to reconstruct detail above the input resolution. The upscaler provides a recommended jitter pattern (Halton (2,3) sequence by default) via ISuperResolution::GetJitterOffset(). The returned values are in pixel space (typically in the (-0.5, 0.5) range) and must be converted to clip space before being added to the projection matrix:
The Y component is negated because the pixel-space Y axis points downward while the clip-space Y axis points upward. The same JitterX / JitterY values in pixel space must also be passed to ExecuteSuperResolutionAttribs so the upscaler can undo the jitter during reprojection.
For spatial upscaling, GetJitterOffset() returns zero for both components and jitter is not needed.
When rendering at a lower resolution for upscaling, the GPU selects coarser mipmap levels because screen-space derivatives are larger relative to the texture coordinate range. To preserve texture detail that the upscaler will reconstruct, apply a negative MIP LOD bias to texture samplers:
$$ \text{MipBias} = \log_2\left(\frac{\text{InputWidth}}{\text{OutputWidth}}\right) $$
The bias should be applied to all material texture samplers (albedo, normal, roughness, etc.) via SamplerDesc::MipLODBias. This compensates for the lower render resolution and prevents the upscaled image from looking blurry.
The API expects per-pixel 2D motion vectors in pixel space using the Previous − Current convention.
Use MotionVectorScaleX / MotionVectorScaleY to convert motion vectors from their native space and adjust the sign convention at execution time. For example, if the shader computes NDC_Current − NDC_Previous:
X is negative to flip direction; Y is positive because the direction flip and the NDC-to-pixel Y axis flip cancel out. If motion vectors are already in the Previous − Current convention, use +0.5 for X and -0.5 for Y.
Motion vectors must use the same resolution as the source color image.
SUPER_RESOLUTION_OPTIMIZATION_TYPE controls the quality/performance trade-off and determines the recommended input resolution relative to the output. Default scale factors used when the backend does not provide its own:
| Optimization Type | Scale Factor | Render Resolution (% of output) |
|---|---|---|
| MAX_QUALITY | 0.75 | 75% |
| HIGH_QUALITY | 0.69 | 69% |
| BALANCED | 0.56 | 56% |
| HIGH_PERFORMANCE | 0.50 | 50% |
| MAX_PERFORMANCE | 0.34 | 34% |
Use ISuperResolutionFactory::GetSourceSettings() to query the exact optimal input resolution for a given backend, output resolution, and optimization type.
The factory is created per render device. On Windows, the module can be loaded as a shared library:
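A sketch of the shared-library loading pattern. The DLL file name and the entry-point name and signature below are assumptions and must be checked against the module's actual headers:

```cpp
// Hypothetical sketch -- "SuperResolution.dll" and
// "CreateSuperResolutionFactory" are assumed names, not confirmed API.
#include <Windows.h>

typedef void (*CreateSuperResolutionFactoryFn)(ISuperResolutionFactory** ppFactory);

HMODULE hModule = LoadLibraryA("SuperResolution.dll");
auto CreateFactory = reinterpret_cast<CreateSuperResolutionFactoryFn>(
    GetProcAddress(hModule, "CreateSuperResolutionFactory"));

RefCntAutoPtr<ISuperResolutionFactory> pFactory;
CreateFactory(&pFactory);
```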
Query the list of upscaler variants supported by the current device:
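A hypothetical enumeration loop; the method and struct names below (GetSupportedVariantCount, GetVariantDesc, SuperResolutionVariantDesc) are placeholders for whatever enumeration API the factory actually exposes:

```cpp
// Placeholder names -- substitute the factory's real enumeration methods.
Uint32 NumVariants = pFactory->GetSupportedVariantCount();
for (Uint32 i = 0; i < NumVariants; ++i)
{
    SuperResolutionVariantDesc Desc;
    pFactory->GetVariantDesc(i, Desc);
    // Desc identifies the variant (DLSS, DirectSR, FSR, MetalFX, ...)
}
```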
Before creating the upscaler, query the recommended input resolution:
The upscaler must be recreated when the variant, input resolution, or output resolution changes:
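A creation sketch. GetSourceSettings() and SuperResolutionDesc are named in the text; the individual field names and the CreateSuperResolution method are assumptions:

```cpp
// Query the optimal input resolution for the chosen variant and quality mode
// (field names here are illustrative assumptions).
SuperResolutionSourceSettings Settings;
pFactory->GetSourceSettings(Variant, OutputWidth, OutputHeight,
                            SUPER_RESOLUTION_OPTIMIZATION_TYPE_BALANCED, Settings);

// Create (or recreate after a variant/resolution change) the upscaler.
SuperResolutionDesc Desc;
Desc.InputWidth   = Settings.OptimalWidth;
Desc.InputHeight  = Settings.OptimalHeight;
Desc.OutputWidth  = OutputWidth;
Desc.OutputHeight = OutputHeight;
Desc.DepthFormat  = TEX_FORMAT_R32_FLOAT; // SRV format, not D32_FLOAT

RefCntAutoPtr<ISuperResolution> pSR;
pFactory->CreateSuperResolution(Desc, &pSR);
```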
For temporal upscaling, apply the jitter offset to the projection matrix before rendering the scene (see Jitter for details), then execute the upscaler on the pre-tone-mapped HDR color buffer:
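A per-frame execution sketch using the ExecuteSuperResolutionAttribs name from the text; the individual field names and the Execute method are assumptions:

```cpp
// Temporal per-frame execution (field names are illustrative assumptions).
ExecuteSuperResolutionAttribs Attribs;
Attribs.pColor         = pHdrColorSRV;   // pre-tone-mapped HDR input
Attribs.pDepth         = pDepthSRV;      // R32_FLOAT SRV of the depth buffer
Attribs.pMotionVectors = pMotionSRV;     // same resolution as pColor
Attribs.pOutput        = pUpscaledUAV;

Attribs.JitterX = Jitter.x;              // same pixel-space values that were
Attribs.JitterY = Jitter.y;              // added to the projection matrix
Attribs.MotionVectorScaleX = -0.5f * InputWidth;  // Current - Previous NDC MVs
Attribs.MotionVectorScaleY = +0.5f * InputHeight;
Attribs.CameraNear = NearPlane;          // swap Near/Far for reverse Z
Attribs.CameraFar  = FarPlane;
Attribs.ResetHistory = CameraCutThisFrame;

pSR->Execute(pContext, Attribs);         // assumed method name
```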
ResetHistory should be set to True when temporal history is no longer valid, for example after a camera cut, a scene load, or a teleport.
When history is reset, the upscaler discards accumulated temporal data and produces output based solely on the current frame, which may temporarily reduce quality.
Depth and camera notes:
- DepthFormat in SuperResolutionDesc must be the SRV-compatible format (e.g. TEX_FORMAT_R32_FLOAT), not the depth-stencil format (e.g. TEX_FORMAT_D32_FLOAT). Use the format of the depth texture's shader resource view.
- CameraNear and CameraFar assume depth Z values go from 0 at the near plane to 1 at the far plane. If using reverse Z, swap the two values so that CameraNear contains the far plane distance and CameraFar contains the near plane distance.

For spatial upscaling, only the color texture and output are required. Execute after tone mapping:
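A spatial execution sketch; as above, field and method names are illustrative assumptions:

```cpp
// Spatial execution needs only color in, output out
// (field and method names are illustrative assumptions).
ExecuteSuperResolutionAttribs Attribs;
Attribs.pColor  = pLdrColorSRV;   // tone-mapped LDR input
Attribs.pOutput = pUpscaledUAV;
pSR->Execute(pContext, Attribs);
```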
The position of the super resolution pass in the rendering pipeline depends on the upscaling type:
Temporal upscaling (operates on HDR data, replaces TAA): render the scene with jitter at the reduced resolution, run the upscaler on the HDR color buffer in place of TAA, then apply tone mapping and post-processing at the output resolution.
Spatial upscaling (operates on LDR data, after tone mapping): render and tone map at the reduced resolution, run the upscaler on the LDR image, then draw UI and other full-resolution passes.