Depth of Field is an effect used in photography and cinematography to create a sense of depth by keeping certain objects in focus while others are blurred. Accurately simulating depth of field in real-time rendering is prohibitively expensive: rasterization uses a pinhole camera model, which makes all objects in the frame equally sharp. Therefore, real-time renderers approximate depth of field by applying a blur to the final image, using information from the depth buffer.
The following table enumerates all external inputs required by the Depth Of Field effect.
Name | Format | Notes |
---|---|---|
Color buffer | APPLICATION SPECIFIED (3x FLOAT) | The HDR render target of the current frame containing the scene radiance |
Depth buffer | APPLICATION SPECIFIED (1x FLOAT) | The depth buffer for the current frame provided by the application. The data should be provided as a single floating-point value, the precision of which is under the application's control |
The effect uses a number of parameters, organized into the `HLSL::DepthOfFieldAttribs` structure, to control quality and performance. The following table lists the parameters and their descriptions; a snippet illustrating how the structure might be filled follows the table.
Name | Notes |
---|---|
MaxCircleOfConfusion | The maximum size of CoC in texture coordinates for a pixel. |
TemporalStabilityFactor | Stability of the temporal accumulation of the CoC. |
BokehKernelRingCount | The number of rings in the Octaweb kernel. |
BokehKernelRingDensity | The number of samples within each ring of the Octaweb kernel. |
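As a rough illustration of how these parameters might be set (the member names come from the table above; the values are placeholders, not recommended defaults):

```cpp
// Fill the DoF settings; the values below are illustrative placeholders.
HLSL::DepthOfFieldAttribs DOFSettings{};
DOFSettings.MaxCircleOfConfusion    = 0.01f; // Max CoC size in texture coordinates
DOFSettings.TemporalStabilityFactor = 0.93f; // Higher values favor the accumulated CoC history
DOFSettings.BokehKernelRingCount    = 4;     // Rings in the Octaweb kernel
DOFSettings.BokehKernelRingDensity  = 8;     // Samples per ring
```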
To integrate Depth of Field into your project, include the following header files:
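For example (assuming the header names follow the same pattern as the other DiligentFX post-processing effects):

```cpp
#include "PostFXContext.hpp"
#include "DepthOfField.hpp"
```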
Next, create the necessary objects:
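A minimal sketch, assuming the objects are created from the render device as with the other post-processing effects (constructor arguments may differ between versions):

```cpp
m_PostFXContext = std::make_unique<PostFXContext>(m_pDevice);
m_DOF           = std::make_unique<DepthOfField>(m_pDevice);
```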
Next, call the methods to prepare resources for the `PostFXContext` and `DepthOfField` objects. This needs to be done every frame before starting the rendering process.
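This document does not name these methods; in other DiligentFX effects, resource preparation follows a pattern like the sketch below, where `PrepareResources`, `FrameDesc`, and the feature flags are assumptions and may differ in your version:

```cpp
// Per-frame resource preparation (assumed API, see note above)
PostFXContext::FrameDesc FrameDesc;
FrameDesc.Index  = m_FrameIndex;  // Monotonically increasing frame counter
FrameDesc.Width  = SCDesc.Width;  // Current render target dimensions
FrameDesc.Height = SCDesc.Height;
m_PostFXContext->PrepareResources(m_pDevice, FrameDesc, PostFXContext::FEATURE_FLAG_NONE);

m_DOF->PrepareResources(m_pDevice, m_pImmediateContext, m_PostFXContext.get(),
                        DepthOfField::FEATURE_FLAG_NONE);
```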
Call the `PostFXContext::Execute` method to prepare the intermediate resources needed by all post-processing objects that depend on `PostFXContext`. This method can take a constant buffer containing the current- and previous-frame cameras (refer to this code [0] and [1]). Alternatively, you can pass the corresponding pointers `const HLSL::CameraAttribs* pCurrCamera` and `const HLSL::CameraAttribs* pPrevCamera` for the current and previous cameras, respectively. You also need to pass the depth buffers of the current and previous frames, and a buffer with motion vectors in NDC space, via the corresponding `ITextureView* pCurrDepthBufferSRV`, `ITextureView* pPrevDepthBufferSRV`, and `ITextureView* pMotionVectorsSRV` parameters.
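A sketch of this call, assuming the parameters listed above are passed through a `PostFXContext::RenderAttributes` structure (the structure name and the device/context members are assumptions based on the other effects):

```cpp
PostFXContext::RenderAttributes PostFXAttribs{};
PostFXAttribs.pDevice             = m_pDevice;            // Assumed members
PostFXAttribs.pDeviceContext      = m_pImmediateContext;
PostFXAttribs.pCurrCamera         = &m_CurrCameraAttribs; // const HLSL::CameraAttribs*
PostFXAttribs.pPrevCamera         = &m_PrevCameraAttribs;
PostFXAttribs.pCurrDepthBufferSRV = m_pCurrDepthSRV;      // ITextureView* with current-frame depth
PostFXAttribs.pPrevDepthBufferSRV = m_pPrevDepthSRV;      // ITextureView* with previous-frame depth
PostFXAttribs.pMotionVectorsSRV   = m_pMotionVectorsSRV;  // Motion vectors in NDC space
m_PostFXContext->Execute(PostFXAttribs);
```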
To compute the depth of field effect, call the `DepthOfField::Execute` method. Before doing so, fill the `HLSL::DepthOfFieldAttribs` and `DepthOfField::RenderAttributes` structures with the necessary data. Refer to the input resources section for a description of the parameters.
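A hedged sketch of this call; `DOFSettings` is the `HLSL::DepthOfFieldAttribs` instance filled earlier, while the color/depth members of `DepthOfField::RenderAttributes` are assumed by analogy with the other effects:

```cpp
DepthOfField::RenderAttributes DOFRenderAttribs{};
DOFRenderAttribs.pDevice         = m_pDevice;              // Assumed members
DOFRenderAttribs.pDeviceContext  = m_pImmediateContext;
DOFRenderAttribs.pPostFXContext  = m_PostFXContext.get();
DOFRenderAttribs.pColorBufferSRV = m_pSceneColorSRV;       // HDR scene radiance (see the input table)
DOFRenderAttribs.pDepthBufferSRV = m_pCurrDepthSRV;        // Current-frame depth
DOFRenderAttribs.pDOFAttribs     = &DOFSettings;           // Effect parameters
m_DOF->Execute(DOFRenderAttribs);
```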
An `ITextureView` of the texture containing the depth of field result can be obtained by calling the `DepthOfField::GetDepthOfFieldTextureSRV` method.
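For example:

```cpp
// Retrieve the result and feed it to the next pass (e.g. tone mapping);
// how the result is consumed is application-specific.
ITextureView* pDOFResultSRV = m_DOF->GetDepthOfFieldTextureSRV();
```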
Our algorithm is based on the approach described in [Jasper Flick, 2018], but our version is significantly modified. Specifically, we handle the blurring of the near and far planes separately. For the near plane, we dilate and blur the CoC before computing the bokeh; this step is essential to avoid bleeding artifacts. We also added the approach from [Tiago Sousa, 2013] to eliminate undersampling artifacts.