Depth-aware upsampling experiments (Part 3.2: Improving the upsampling using normals to classify the samples)

This post is, once more, about improving the upsampling of the half-resolution SSAO render target used in the VKDF sponza demo written by Iago Toral. I am going to explain how I used the normals to determine whether the samples of each 2×2 neighborhood we check during the upsampling belong to the same surface, and how this classification helped improve the upsampling.
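To give a flavor of the idea, here is a minimal GLSL-style sketch of such a classification, with hypothetical names and threshold (the post explains the actual implementation):

```glsl
/* Hypothetical sketch: a low-resolution sample is considered part of
 * the same surface as the full-resolution pixel when its normal points
 * in almost the same direction (dot product close to 1). */
bool same_surface(vec3 n_full, vec3 n_low, float threshold)
{
    return dot(normalize(n_full), normalize(n_low)) > threshold;
}

int classify_neighborhood(vec3 n_full, vec3 n_low[4], float threshold)
{
    /* Count how many of the 2x2 low-resolution samples belong to the
     * same surface as the pixel we are upsampling. */
    int count = 0;
    for (int i = 0; i < 4; i++)
        if (same_surface(n_full, n_low[i], threshold))
            count++;
    return count;
}
```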

Continue reading Depth-aware upsampling experiments (Part 3.2: Improving the upsampling using normals to classify the samples)

Depth-aware upsampling experiments (Part 3.1: Improving the upsampling using depths to classify the samples)

In the previous posts of this series I analyzed the basic idea behind the depth-aware upsampling techniques. In the first post [1], I implemented the nearest depth sampling algorithm [3] from NVIDIA, and in the second one [2], I compared some methods that improve the quality of the downsampled z-buffer data I use with the nearest depth. The conclusion was that nearest depth sampling alone is not good enough to reduce the artifacts of Iago Toral’s SSAO implementation in VKDF [4] to an acceptable level. So, in this post, I am going to talk about my early experiments to further improve the upsampling and the logic behind each one. I named it Part 3.1 because, after starting the series, I found that combining these methods with others can give significantly better visual results, and as my experiments with the upsampling techniques cannot fit in one blog post, I am going to split the upsampling improvements (Part 3) into sub-parts.
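A minimal sketch of the depth-based classification idea, with hypothetical names and threshold (not the exact code from the post):

```glsl
/* Hypothetical sketch: classify a low-resolution sample as belonging
 * to the same surface as the full-resolution pixel when their
 * (linearized) depths are close enough. */
bool same_surface_depth(float d_full, float d_low, float threshold)
{
    return abs(d_full - d_low) < threshold;
}
```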

Continue reading Depth-aware upsampling experiments (Part 3.1: Improving the upsampling using depths to classify the samples)

Depth-aware upsampling experiments (Part 2: Improving the Z-buffer downsampling)

In the previous post of this series, I tried to explain the nearest depth algorithm [1] that I used to improve Iago Toral’s SSAO upscaling in the sponza demo of VKDF. Although the nearest depth improved the ambient occlusion at higher resolutions, the results were still not very good, so I decided to try more quality improvements. In this post, I am going to talk about my first experiments on improving the Z-buffer downsampling.
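For example, instead of letting the hardware pick an arbitrary texel when reducing the resolution, each 2×2 depth block can be reduced with an explicit operator. A minimal sketch with hypothetical names, using the max operator as one possible choice (the post compares several alternatives):

```glsl
/* Hypothetical sketch: downsample the depth buffer by explicitly
 * reducing each 2x2 texel block; max is used here, but min or an
 * alternating min/max pattern are other possible choices. */
float downsample_depth(sampler2D depth_tex, ivec2 low_res_coord)
{
    ivec2 base = low_res_coord * 2;
    float d0 = texelFetch(depth_tex, base, 0).r;
    float d1 = texelFetch(depth_tex, base + ivec2(1, 0), 0).r;
    float d2 = texelFetch(depth_tex, base + ivec2(0, 1), 0).r;
    float d3 = texelFetch(depth_tex, base + ivec2(1, 1), 0).r;
    return max(max(d0, d1), max(d2, d3));
}
```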

Continue reading Depth-aware upsampling experiments (Part 2: Improving the Z-buffer downsampling)

Depth-aware upsampling experiments (Part 1: Nearest depth)

This post is about different depth-aware techniques I tried in order to improve the upsampling of the low-resolution Screen Space Ambient Occlusion (SSAO) texture of a VKDF demo. VKDF is a library and collection of Vulkan demos, written by Iago Toral. In one of his demos (the sponza), Iago implemented SSAO among many other graphics algorithms [1]. As this technique is expensive, he decided to optimize it by rendering the occlusion to lower-resolution textures and a lower-resolution render target, which he then upsampled to create a full-resolution image that he blended with the original one to display the result. For the upsampling he used linear interpolation, and as expected he observed many artifacts, which increased as the SSAO texture resolution was lowered.

Some time ago, I started experimenting with methods to improve that upsampling in order to familiarize myself with Vulkan. The most promising ones seemed to be the depth-aware techniques.
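As a taste, here is a minimal GLSL-style sketch of the nearest depth idea, with hypothetical names (the post explains the actual implementation): instead of bilinearly filtering the low-resolution SSAO texture, we pick the sample of the 2×2 low-resolution neighborhood whose depth is nearest to the full-resolution depth of the pixel.

```glsl
/* Hypothetical sketch: pick the SSAO value of the 2x2 low-resolution
 * sample whose depth is nearest to the full-resolution depth. */
float nearest_depth_ssao(sampler2D ssao_low, sampler2D depth_low,
                         float d_full, vec2 uv)
{
    vec4 d = textureGather(depth_low, uv, 0);  /* 2x2 low-res depths */
    vec4 ao = textureGather(ssao_low, uv, 0);  /* matching SSAO values */

    int best = 0;
    float best_dist = abs(d[0] - d_full);
    for (int i = 1; i < 4; i++) {
        float dist = abs(d[i] - d_full);
        if (dist < best_dist) {
            best_dist = dist;
            best = i;
        }
    }
    return ao[best];
}
```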

Continue reading Depth-aware upsampling experiments (Part 1: Nearest depth)

A simple pixel shader viewer

In a previous post, I wrote about Vkrunner and how I used it to play with fragment shaders. While writing shaders for it, I had to save them, generate a PPM image and display it to see my changes. This render-to-image/display loop gave me the idea to write a minimal tool that automatically displays my changes every time I save the file with the shader code, which I can use when the complexity of the scene increases. And so, I’ve written sdrviewer, the minimal OpenGL viewer for pixel shaders shown in the video below:

Continue reading A simple pixel shader viewer

Vkrunner allows specifying the required Vulkan version

The required Vulkan implementation version for a Vkrunner shader test can now be specified in its [require] section. Tests that target a Vulkan version that isn’t supported by the device driver will be skipped.
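A minimal example of what such a test could look like, assuming the piglit-style shader_test syntax that Vkrunner uses (the exact directive spelling is best checked against the Vkrunner README):

```
[require]
vulkan 1.1

[vertex shader passthrough]

[fragment shader]
#version 450
layout(location = 0) out vec4 color;

void main()
{
    color = vec4(0.0, 1.0, 0.0, 1.0);
}

[test]
draw rect -1 -1 2 2
probe all rgba 0.0 1.0 0.0 1.0
```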

Continue reading Vkrunner allows specifying the required Vulkan version

Having fun with Vkrunner!

Vkrunner is a Vulkan shader testing tool similar to Piglit, written by Neil Roberts. It is mostly used by graphics driver developers, and was also part of the official Khronos conformance test suite repository (VK-GL-CTS) for some time [1]. There are already posts [2] about its use, but they are all written from a driver developer’s perspective and focus on Vkrunner’s debugging capabilities. In this post, I’m going to show you an alternative use I’ve found for it, in order to have fun with pixel shaders during my holidays! 🙂

Continue reading Having fun with Vkrunner!

i965: Improved support for the ETC/EAC formats on Intel Gen 7 and previous GPUs

This post is about a recent contribution I made to the i965 Mesa driver to improve the emulation of the ETC/EAC texture formats on Intel Gen 7 and older GPUs, as part of my work for Igalia’s graphics team.

Demo:

The video mostly shows the behavior of some GL calls and operations with and without the patches that improve the emulation of the ETC/EAC formats on Gen 7 GPUs. The same programs run first with the previous ETC/EAC emulation (upper terminal) and then with the new one (lower terminal).

Continue reading i965: Improved support for the ETC/EAC formats on Intel Gen 7 and previous GPUs

Hair simulation with a mass-spring system (punk’s not dead!)

Hair rendering and simulation can be challenging, especially in real time. There are many sophisticated algorithms for it (based on particle systems, hair mesh simulation, mass-spring systems and more) that can give very good results. But in this post, I will try to explain the simple and somewhat hacky approach I followed in my first attempt to simulate hair (the mohawk hair of the video below) using a mass-spring system.

The code can be found here: https://github.com/hikiko/mohawk
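To give a flavor of the idea, here is a minimal mass-spring update step written as GLSL-style vector math, with hypothetical names and a plain explicit Euler integrator (the repository above contains the actual implementation, which may differ):

```glsl
/* Hypothetical sketch: spring force between two point masses
 * following Hooke's law. */
vec3 spring_force(vec3 p, vec3 q, float rest_len, float k)
{
    vec3 d = q - p;
    float len = length(d);
    /* The force is proportional to the deviation from the rest length
     * and directed along the spring. */
    return k * (len - rest_len) * (d / len);
}

/* One explicit Euler step for a mass connected to one neighbor. */
void step_mass(inout vec3 pos, inout vec3 vel, vec3 neighbor,
               float rest_len, float k, float damping,
               float mass, float dt)
{
    vec3 f = spring_force(pos, neighbor, rest_len, k) - damping * vel;
    vel += (f / mass) * dt;
    pos += vel * dt;
}
```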

Continue reading Hair simulation with a mass-spring system (punk’s not dead!)