2020 • Digital Arts and Entertainment Research
In 2020 I started an internal research group within DAE Research, focusing on real-time computer graphics. The group primarily implements state-of-the-art real-time rendering techniques in hybrid rendering pipelines, building on DirectX 12 and on DXR for the ray tracing module. Next to implementing and researching new technology, we also investigate several optimization techniques to maintain interactive frame rates. At this point in time we are working on several use cases specific to video games.
One of the most interesting use cases I've worked on, which was also part of my master thesis under the supervision of Dr. Ing. Jacco Bikker, is called Caching Shading Information using Implicit Progressive Low Discrepancy Point Sets. Below you can find the abstract of the project, a link to the publication, and some code snippets of the prototype used for the publication.
To create realistic computer-generated images, modern renderers often rely on Monte Carlo integration to approximate light transport. While offline rendering systems can execute multiple samples per pixel to better estimate the outcome of the light transport simulation, real-time systems are usually limited to one sample per pixel, per frame, to maintain interactive frame rates. Instead, real-time renderers accumulate the results of multiple frames, by caching the intermediate results, to produce a similar image. To be able to cache the intermediate results, a discrete representation of the continuous surfaces in the environment is required. This discretization is often performed using fixed offsets, which leads to visually intrusive discretization artifacts. To reduce these artifacts, our approach instead relies on a runtime discretization using an implicit progressive low discrepancy point set. Compared to other point set techniques, the implicitness of our method minimizes the required memory footprint, avoids expensive memory lookups, and allows for an adaptive subdivision scheme which guides the density of the implicit point set. To maximize the effectiveness, our method reconstructs the final image by filtering the cached shading information directly in world space. We evaluate the method by reviewing the quality of the reconstructed image, the memory usage of the data structure used to cache the shading information, and by analysing the performance.
The thesis is publicly available via the following DOI: 10.13140/RG.2.2.11099.69920
Below are some code snippets of the prototype used for the publication. Feel free to analyze the entire GitHub repository as well!
Implicit Point Generation: some snippets of the essence of the proposed method. Using 3D noise functions, implicit points are generated, and for every visible world position a closest point is randomly selected using barycentric-coordinate-based probability. This point is then hashed, and its key is used as the entry within a hash table, which caches the shading information.
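To give a flavour of the hashing step, here is a minimal sketch of how a world-space position could be quantized and turned into a hash-table key. The cell size, prime multipliers, and function name are illustrative assumptions, not the exact values from the thesis prototype (which additionally adapts the point density via its subdivision scheme):

```cpp
#include <cstdint>
#include <cmath>

// Illustrative cell size; the actual density is guided by the
// adaptive subdivision scheme described in the thesis.
constexpr float kCellSize = 0.25f;

// Hypothetical spatial hash: quantize a world position to an integer
// grid cell and mix the coordinates into a single 64-bit key. The
// prime multipliers are a common spatial-hashing choice, not
// necessarily the ones used in the prototype.
uint64_t HashWorldPosition(float x, float y, float z)
{
    const int64_t ix = static_cast<int64_t>(std::floor(x / kCellSize));
    const int64_t iy = static_cast<int64_t>(std::floor(y / kCellSize));
    const int64_t iz = static_cast<int64_t>(std::floor(z / kCellSize));
    return static_cast<uint64_t>(ix) * 73856093ull
         ^ static_cast<uint64_t>(iy) * 19349663ull
         ^ static_cast<uint64_t>(iz) * 83492791ull;
}
```

Positions that fall into the same cell map to the same key, so their shading samples accumulate in the same hash-table entry.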
Shading Information Merge: after accumulating shading information from several world-space positions, a final merge with the persistent data structure must be performed. To avoid data overflow, a constant rescaling is applied.
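The rescaling idea can be sketched as follows. The entry layout, the cap value, and the function name are assumptions for illustration; the key point is that rescaling the sum and the sample count by the same factor bounds their magnitudes while preserving the mean:

```cpp
// Hypothetical cache entry: a running radiance sum and a sample count.
struct CacheEntry {
    float radianceSum;
    float sampleCount;
};

// Illustrative cap; once the count exceeds it, both fields are
// rescaled so the mean (radianceSum / sampleCount) is unchanged
// while the stored magnitudes stay bounded.
constexpr float kMaxSamples = 1024.0f;

void MergeShading(CacheEntry& entry, float newRadianceSum, float newSamples)
{
    entry.radianceSum += newRadianceSum;
    entry.sampleCount += newSamples;
    if (entry.sampleCount > kMaxSamples) {
        const float scale = kMaxSamples / entry.sampleCount;
        entry.radianceSum *= scale;  // mean radiance is preserved
        entry.sampleCount  = kMaxSamples;
    }
}
```

A side effect of capping the count is an exponential moving average: older samples are gradually down-weighted, which also helps the cache respond to dynamic lighting.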
Image Reconstruction in World Space: after storing the shading information in the persistent data structure, the final image is reconstructed, in world space, using a modified Shepard interpolation. Multiple filtering modes are supported based on the implemented LOD techniques.
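For readers unfamiliar with Shepard interpolation, here is a minimal inverse-distance-weighted reconstruction over a set of cached points. The struct layout, power parameter, and epsilon guard are illustrative assumptions, not the modified filter from the thesis:

```cpp
#include <cmath>
#include <vector>

struct CachedPoint {
    float x, y, z;   // world-space position of the implicit point
    float radiance;  // cached shading value (scalar here for brevity)
};

// Basic Shepard interpolation: the shading at a world position is the
// inverse-distance-weighted average of the cached values of nearby
// points; closer points contribute more.
float ReconstructShading(float px, float py, float pz,
                         const std::vector<CachedPoint>& points,
                         float power = 2.0f)
{
    float weightedSum = 0.0f;
    float weightTotal = 0.0f;
    for (const CachedPoint& c : points) {
        const float dx = px - c.x, dy = py - c.y, dz = pz - c.z;
        const float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (dist < 1e-6f)            // exact hit: return the cached value
            return c.radiance;
        const float w = 1.0f / std::pow(dist, power);
        weightedSum += w * c.radiance;
        weightTotal += w;
    }
    return weightTotal > 0.0f ? weightedSum / weightTotal : 0.0f;
}
```

Filtering directly in world space, rather than in screen space, keeps the reconstruction stable under camera motion, since the cached points do not move with the viewport.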