This post is about different depth-aware techniques I tried in order to improve the upsampling of the low-resolution Screen Space Ambient Occlusion (SSAO) texture of a VKDF demo. VKDF is a library and collection of Vulkan demos written by Iago Toral. In one of his demos (the Sponza demo), Iago implemented SSAO among many other graphics algorithms. As this technique is expensive, he decided to optimize it by using lower-resolution textures and a lower-resolution render target, which he then upsampled to create a full-resolution image that he blended with the original one to display the result. For the upsampling he used linear interpolation and, as expected, he observed many artifacts that became more pronounced as the SSAO texture resolution was lowered.
Some time ago, I started experimenting with methods to improve that upsampling in order to familiarize myself with Vulkan. The most promising ones seemed to be the depth-aware techniques.
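To illustrate the family of techniques, here is a minimal GLSL sketch of one of them, nearest-depth upsampling: for every full-resolution pixel it selects, among the four surrounding low-resolution texels, the AO sample whose depth best matches the full-resolution depth, instead of blindly interpolating across depth discontinuities. This is not the demo's shader; the bindings and sampler names are placeholders.

```glsl
#version 450

/* Nearest-depth upsampling sketch: pick the low-res AO sample whose
 * depth is closest to the full-resolution depth at this pixel. */

layout(set = 0, binding = 0) uniform sampler2D ssao_lowres;  /* low-res AO */
layout(set = 0, binding = 1) uniform sampler2D depth_lowres; /* low-res depth */
layout(set = 0, binding = 2) uniform sampler2D depth_full;   /* full-res depth */

layout(location = 0) in vec2 uv;
layout(location = 0) out vec4 out_color;

void main()
{
    /* Comparing raw depth values; a real implementation might
     * linearize them first. */
    float d_full = texture(depth_full, uv).r;

    /* The four low-res texel centers nearest to this pixel. */
    vec2 size = vec2(textureSize(depth_lowres, 0));
    vec2 base = (floor(uv * size - 0.5) + 0.5) / size;
    vec2 texel = 1.0 / size;
    vec2 offsets[4] = vec2[](vec2(0.0), vec2(texel.x, 0.0),
                             vec2(0.0, texel.y), texel);

    float best_ao = 0.0;
    float best_diff = 1e9;

    for (int i = 0; i < 4; i++) {
        vec2 suv = base + offsets[i];
        float diff = abs(texture(depth_lowres, suv).r - d_full);
        if (diff < best_diff) {
            best_diff = diff;
            best_ao = texture(ssao_lowres, suv).r;
        }
    }

    out_color = vec4(vec3(best_ao), 1.0);
}
```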
In a previous post, I wrote about Vkrunner and how I used it to play with fragment shaders. While I was writing the shaders for it, I had to save them, generate a PPM image, and display it to see the changes. This save/render/display repetition gave me the idea to write a minimal tool that automatically displays my changes every time I save the file with the shader code, and to use it when the complexity of the scene increases. And so, I've written sdrviewer, the minimal OpenGL viewer for pixel shaders shown in the video below.
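The core of such a tool is just a loop that blocks until the shader file is saved and then recompiles it. Here is a minimal C sketch of that loop using Linux inotify; this is an assumption about the mechanism (sdrviewer may detect changes differently), and reload_shader() is a hypothetical recompile hook:

```c
/* Watch a shader file and recompile it on every save.
 * Sketch only: editors that replace the file on save (atomic
 * rename) would need a watch on the directory instead. */
#include <sys/inotify.h>
#include <unistd.h>

extern void reload_shader(const char *path); /* hypothetical recompile hook */

int watch_shader(const char *path)
{
    char buf[4096];
    int fd = inotify_init();

    if (fd < 0 || inotify_add_watch(fd, path, IN_CLOSE_WRITE) < 0)
        return -1;

    for (;;) {
        /* One read may return several queued events; we only care
         * that the file was written, so the payload is ignored. */
        if (read(fd, buf, sizeof buf) <= 0)
            break;
        reload_shader(path);
    }

    close(fd);
    return 0;
}
```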
The required Vulkan implementation version for a Vkrunner shader test can now be specified in its [require] section. Tests that target a Vulkan version not supported by the device driver will be skipped.
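For illustration, a test targeting Vulkan 1.1 would begin with something like this; I'm quoting the [require] syntax from memory, so check the Vkrunner README for the exact form:

```
[require]
vulkan 1.1
```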
Vkrunner is a Vulkan shader testing tool similar to Piglit, written by Neil Roberts. It is mostly used by graphics driver developers, and it was also part of the official Khronos conformance test suite repository (VK-GL-CTS) for some time. There are already posts about its use, but they are all written from a driver developer's perspective and focus on Vkrunner's debugging capabilities. In this post, I'm going to show you an alternative use I've found for it, in order to have fun with pixel shaders during my holidays! 🙂
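To give a taste of that use, here is roughly what a minimal "pixel shader playground" test looks like in Vkrunner's Piglit-style shader_test format (written from memory of the format, so treat it as a sketch); the [fragment shader] section is where the doodling happens:

```
[vertex shader passthrough]

[fragment shader]
#version 450

layout(location = 0) out vec4 out_color;

void main()
{
    /* Replace this with any pixel-shader experiment. */
    out_color = vec4(0.2, 0.4, 1.0, 1.0);
}

[test]
draw rect -1 -1 2 2
```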
This post is about a system we devised and set up at home so that I can reject, from my laptop, all the annoying phone calls I receive during the day, without having to go pick up the phone. If you are also working from home like I do (yes, this is another cool option we have at Igalia!) you might find this hack useful. 😀
This post is about a recent contribution I've made to the i965 Mesa driver to improve the emulation of the ETC/EAC texture formats on Intel Gen 7 and older GPUs, as part of my work for Igalia's graphics team.
The video mostly shows the behavior of some GL calls and operations with and without the patches that improve the emulation of the ETC/EAC formats on Gen 7 GPUs. The same programs run first with the previous ETC/EAC emulation (upper terminal) and then with the new one (lower terminal).
Hair rendering and simulation can be challenging, especially in real time. There are many sophisticated algorithms for it (based on particle systems, hair mesh simulation, mass-spring systems and more) that can give very good results. But in this post, I will try to explain a simple and somewhat hacky approach I followed in my first attempt to simulate hair (the mohawk hair of the video below) using a mass-spring system.
The code can be found here: https://github.com/hikiko/mohawk
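For the gist without digging into the repo, here is a toy C sketch (not the repo's code) of one explicit-Euler update step for a single strand, modeled as a chain of point masses connected by springs; all constants are made up:

```c
/* Toy mass-spring strand: a hair is a chain of point masses
 * connected by springs. One explicit Euler step. */
#include <math.h>

#define NUM_POINTS 8

typedef struct { float x, y, z; } vec3;

static vec3 add(vec3 a, vec3 b) { return (vec3){a.x + b.x, a.y + b.y, a.z + b.z}; }
static vec3 sub(vec3 a, vec3 b) { return (vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static vec3 scale(vec3 a, float s) { return (vec3){a.x * s, a.y * s, a.z * s}; }
static float length(vec3 a) { return sqrtf(a.x * a.x + a.y * a.y + a.z * a.z); }

static const float K = 40.0f;      /* spring stiffness (made-up value) */
static const float DAMPING = 0.9f; /* velocity damping per step */
static const float REST_LEN = 0.1f;
static const float MASS = 0.05f;
static const vec3 GRAVITY = {0.0f, -9.81f, 0.0f};

void step(vec3 pos[NUM_POINTS], vec3 vel[NUM_POINTS], float dt)
{
    /* Point 0 is pinned to the scalp; the others accumulate spring
     * forces from both neighbours plus gravity. */
    for (int i = 1; i < NUM_POINTS; i++) {
        vec3 force = scale(GRAVITY, MASS);

        for (int j = i - 1; j <= i + 1; j += 2) {
            if (j < 0 || j >= NUM_POINTS) /* bounds guard */
                continue;
            vec3 d = sub(pos[j], pos[i]);
            float len = length(d);
            if (len > 0.0f) {
                /* Hooke's law: pull towards the neighbour in
                 * proportion to the spring's extension. */
                force = add(force, scale(d, K * (len - REST_LEN) / len));
            }
        }

        vel[i] = scale(add(vel[i], scale(force, dt / MASS)), DAMPING);
        pos[i] = add(pos[i], scale(vel[i], dt));
    }
}
```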