Depth-aware upsampling experiments (Part 1: Nearest depth)

This post is about different depth-aware techniques I tried in order to improve the upsampling of the low-resolution Screen Space Ambient Occlusion (SSAO) texture of a VKDF demo. VKDF is a library and collection of Vulkan demos written by Iago Toral. In one of his demos (the sponza), Iago implemented SSAO among many other graphics algorithms [1]. As this technique is expensive, he decided to optimize it by using a lower-resolution texture and render target, which he then upsampled to create a full-resolution image that he blended with the original one to display the result. For the upsampling he used linear interpolation, and as expected he observed many artifacts, which became more pronounced as the SSAO texture resolution was lowered.

Some time ago, I started experimenting with methods to improve that upsampling in order to familiarize myself with Vulkan. The most promising ones seemed to be the depth-aware techniques:

Depth-aware algorithms and improvements

A depth-aware technique is a technique where we use depth information from the image in order to gain some insight into the shape and the discontinuities of the surfaces before attempting the reconstruction. For that, we usually use a downsampled z-buffer (one that has the same resolution as the low-resolution image) from which we gather information that helps us select the best sample from the downscaled texture during the upsampling.

So, every depth-aware technique has two parts that can be improved:

  1. The downsampling of the original z-buffer: we have to make sure it contains the most valuable information about the scene.
  2. The upsampling of the texture, using information from this z-buffer (and possibly other resources) and some sort of interpolation.

Nearest depth sampling

The most common depth-aware algorithm to upsample the texture is the nearest depth algorithm which is explained very well in this paper from NVIDIA [2].

The idea is that in every 2×2 neighborhood of the downsampled z-buffer, we find the sample whose depth is closest to the original depth (from the high-resolution depth buffer; we need both z-buffers in that pass) and we use its uv coordinates to select a sample from the texture we would like to upsample.

So, my first experiment was to compare the linear upsampling with the nearest depth upsampling. For the depth-buffer downsampling, I used the maximum depth in each 2×2 neighborhood.

Comparison #1: Linear Sampling vs Nearest Depth Sampling

First of all, some information about the SSAO:

  • Number of samples: 24
  • Target resolution: 1/2 of the original
  • Resolution at which I took the following screenshots: 1/8 of the original
  • Parameters:

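As a rough illustration of the kind of values involved, a parameter set for this sort of SSAO could look like the following; the values are hypothetical (except for the sample count listed above), not the demo's actual settings:

```c
/* Hypothetical SSAO parameters, for illustration only -- these are
 * not the demo's actual values (except num_samples, listed above). */
struct ssao_params {
    uint32_t num_samples;  /* 24 */
    float    radius;       /* sampling hemisphere radius, e.g. 0.5 */
    float    bias;         /* depth bias against self-occlusion, e.g. 0.025 */
    float    intensity;    /* strength of the occlusion term, e.g. 1.0 */
};
```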

Let’s see some screenshots:

Nearest depth

Linear upsampling

If you carefully examine these screenshots (taken at 1/8 of the original resolution), the curve in the first one is slightly less pixelized, but not significantly better.

But when I lower the resolution that much, the overall scene (below) looks equally bad with both methods, and I can't really tell the difference between the two:

And the following are at 1/4 resolution:

Nearest depth


Linear

Here the images are almost identical. Nearest depth alone is hardly an improvement.

Note that the screenshots were taken at such a low resolution to make the artifacts clearly visible. At half resolution, for example (which is a reasonable resolution for the SSAO), the artifacts are significantly fewer with both sampling techniques, and the comparison is more difficult.

Target resolution (1/2 of the original)

The following video shows a comparison of linear interpolation (lerp) and the max/nearest-depth combination from different views. As we move the camera and examine different views of the scene, we can see more clearly that nearest depth has an advantage at the edges, the corners and wherever we have depth discontinuities (which means that not all the samples of the neighborhood lie on the same surface), but I think there are still too many artifacts for it to be acceptable:

The result is a little bit disappointing.

Vulkan and shaders details

Despite the disappointing results, I will share some implementation details, as they might also help in understanding the follow-up experiments (which I will probably analyze in some follow-up posts):

Downsampling:

On the Vulkan side, I needed a special pass that takes the original depth buffer (a depth attachment) as input and renders to a depth render target of the same size as the SSAO pass render target. I hardcoded the geometry of the quad to which I mapped the texture inside the vertex shader to keep things simple (a bad idea, as I had a bug there, but that's another story; without the bug it would have been a good idea… :p).

Some options I used for this pass that might be interesting were the following:

– render target image options:

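They were probably along these lines; the exact format and flags below are my assumptions (a depth image that can both be rendered to and sampled later):

```c
/* A sketch of the render target image creation; ssao_width/ssao_height
 * are assumed names for the SSAO render target dimensions. */
VkImageCreateInfo image_info = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
    .imageType = VK_IMAGE_TYPE_2D,
    .format = VK_FORMAT_D32_SFLOAT,
    .extent = { ssao_width, ssao_height, 1 },
    .mipLevels = 1,
    .arrayLayers = 1,
    .samples = VK_SAMPLE_COUNT_1_BIT,
    .tiling = VK_IMAGE_TILING_OPTIMAL,
    /* rendered to as a depth attachment, sampled in the lighting pass */
    .usage = VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT |
             VK_IMAGE_USAGE_SAMPLED_BIT,
    .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
    .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
};
```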

– render pass options for the depth attachment:

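Again a sketch under the same assumptions; the key points are storing the downsampled depth and leaving it in a shader-readable layout:

```c
/* A sketch of the depth attachment description (my assumptions). */
VkAttachmentDescription depth_att = {
    .format = VK_FORMAT_D32_SFLOAT,
    .samples = VK_SAMPLE_COUNT_1_BIT,
    .loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,  /* every texel is overwritten */
    .storeOp = VK_ATTACHMENT_STORE_OP_STORE,    /* we sample it later */
    .stencilLoadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
    .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
    .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
    .finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL,
};
```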

– pipeline

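The relevant part is the depth/stencil state; the settings below follow what is described right after (depth test and writes enabled, with VK_COMPARE_OP_ALWAYS), and the rest of the pipeline is omitted:

```c
/* Depth test enabled with VK_COMPARE_OP_ALWAYS so that the value
 * written to gl_FragDepth always lands in the render target. */
VkPipelineDepthStencilStateCreateInfo ds_info = {
    .sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO,
    .depthTestEnable = VK_TRUE,
    .depthWriteEnable = VK_TRUE,
    .depthCompareOp = VK_COMPARE_OP_ALWAYS,
};
```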

In order to override gl_FragDepth, both the depth writes and the depth test were enabled, and VK_COMPARE_OP_ALWAYS was set. Having an OpenGL background, I found this totally weird, as my first thought would have been to disable the depth test and enable the writes (at least in OpenGL, I wouldn't attempt to write to the z-buffer with the depth test enabled). But as VK_COMPARE_OP_ALWAYS makes the test always pass, the result is the same.

and finally, for the sampler, I used the layout:

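Presumably something along these lines; the names and the read-only depth layout are my assumptions, consistent with the attachment description above:

```c
/* Binding the downsampled depth for sampling: depth_sampler and
 * depth_view are assumed names; the read-only depth layout matches
 * the attachment's finalLayout above. */
VkDescriptorImageInfo image_info = {
    .sampler = depth_sampler,
    .imageView = depth_view,
    .imageLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL,
};
```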

Now that I mentioned the sampler…

One thing I like in Vulkan is that the texture and the sampler are separate objects, so you can reuse the same sampler with many textures. Modern OpenGL versions also allow this, in a (in my opinion) more complex way, and some years ago the texture data and the sampling state were part of the same texture object. Vulkan seems to be designed to allow reusing resources.

Anyway, let’s take a look at the shaders…

For the downsampling, the shaders were really short. The vertex shader creates a quad:
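A sketch of what it might look like, emitting a hardcoded triangle-strip quad (my reconstruction, not necessarily the demo's exact code):

```glsl
#version 450

layout(location = 0) out vec2 out_uv;

/* hardcoded full-screen quad (triangle strip); the exact
 * coordinates are my reconstruction */
const vec2 positions[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                                 vec2(-1.0,  1.0), vec2(1.0,  1.0));

void main()
{
    gl_Position = vec4(positions[gl_VertexIndex], 0.0, 1.0);
    out_uv = positions[gl_VertexIndex] * 0.5 + 0.5;
}
```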

and the fragment shader only selects the maximum in each 2×2 neighborhood:
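Something along these lines, assuming the target is half the size of the original depth buffer in each dimension; the texture name is my assumption:

```glsl
#version 450

/* the original full-resolution depth buffer (assumed name/binding) */
layout(set = 0, binding = 0) uniform sampler2D tex_depth;

void main()
{
    /* fetch the 2x2 neighborhood of the original depth buffer that
     * maps to this low-res fragment and keep the maximum depth */
    ivec2 coord = ivec2(gl_FragCoord.xy) * 2;
    float d0 = texelFetch(tex_depth, coord, 0).r;
    float d1 = texelFetch(tex_depth, coord + ivec2(1, 0), 0).r;
    float d2 = texelFetch(tex_depth, coord + ivec2(0, 1), 0).r;
    float d3 = texelFetch(tex_depth, coord + ivec2(1, 1), 0).r;
    gl_FragDepth = max(max(d0, d1), max(d2, d3));
}
```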

Now let’s see the upsampling:

First of all, I needed to pass my downsampled z-buffer to the shader of the lighting pass that calculates the ambient occlusion from the SSAO render target, in order to replace the linear interpolation (lerp) with nearest depth. This part was easy, but it made me realize one more time how careful one has to be with Vulkan, as initially I tried to add my texture to an already big descriptor set. Space, allocations and de-allocations are important here… 🙂

Shaders:
All the upsampling takes place in the fragment shader. I decided to use the built-in textureOffset, which requires the offsets to be compile-time constants, so the code here might look a bit ugly. But you can get the idea:

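A sketch of the selection function; the texture and function names are mine, not necessarily the demo's:

```glsl
/* set/binding are placeholders; tex_lowres_depth is an assumed name
 * for the downsampled depth buffer */
layout(set = 0, binding = 0) uniform sampler2D tex_lowres_depth;

/* Returns the uv offset of the low-res depth sample whose value is
 * closest to the full-res depth of the current fragment. */
vec2 nearest_depth_offset(vec2 uv, float frag_depth)
{
    /* textureOffset requires compile-time constant offsets,
     * hence the unrolled fetches */
    float d[4];
    d[0] = textureOffset(tex_lowres_depth, uv, ivec2(0, 0)).r;
    d[1] = textureOffset(tex_lowres_depth, uv, ivec2(1, 0)).r;
    d[2] = textureOffset(tex_lowres_depth, uv, ivec2(0, 1)).r;
    d[3] = textureOffset(tex_lowres_depth, uv, ivec2(1, 1)).r;

    const vec2 offsets[4] = vec2[](vec2(0.0, 0.0), vec2(1.0, 0.0),
                                   vec2(0.0, 1.0), vec2(1.0, 1.0));

    int best = 0;
    for (int i = 1; i < 4; i++) {
        if (abs(d[i] - frag_depth) < abs(d[best] - frag_depth))
            best = i;
    }

    /* convert the texel offset to uv units of the low-res texture */
    return offsets[best] / vec2(textureSize(tex_lowres_depth, 0));
}
```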

The function above selects the offset of the sample of the low-resolution depth buffer that is closest to the original depth.
In main (below), we use it to select a sample from the SSAO texture, instead of taking the result of the linear interpolation:
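A sketch of the relevant lines (tex_depth, tex_ssao and in_uv are assumed names):

```glsl
/* tex_depth, tex_ssao and in_uv: assumed names for the full-res depth
 * buffer, the low-res SSAO texture and the fragment's uv coordinates */
float frag_depth = texture(tex_depth, in_uv).r;
vec2 off = nearest_depth_offset(in_uv, frag_depth);
float occlusion = texture(tex_ssao, in_uv + off).r;
```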

and we continue with the light calculations.

And that was all. This first method I tried was the simplest one, and it didn't seem to improve the upsampling significantly. I tried some other suggestions to further improve the downsampling and the upsampling, but as this post is already too long, I will call it Part 1 and post about the downsampling and upsampling improvements in two follow-up posts (Part 2 and Part 3 respectively). I will refer back to this code, though, in order to explain the steps I followed later on.

So, closing, here are some things I’ve learned. First of all, Vulkan is… Vulkan! Every single detail is important, every single parameter is important, and one should be really careful with the allocations, deletions, options, bits, flags, everything… You have infinite control, but there’s also room for infinite bugs if you aren’t careful enough! Second: the validation layers can be life-saving. 😉

To be continued

[1]: https://blogs.igalia.com/itoral/2018/04/17/frame-analysis-of-a-rendering-of-the-sponza-model/

[2]: http://developer.download.nvidia.com/assets/gamedev/files/sdk/11/OpacityMappingSDKWhitePaper.pdf
