[OpenGL and Vulkan Interoperability on Linux] Part 3: Using OpenGL to display Vulkan allocated textures.

This is the third post of the OpenGL and Vulkan interoperability series, where I explain some EXT_external_objects and EXT_external_objects_fd use cases with examples taken from the Piglit tests I’ve written to test the extensions as part of my work for Igalia’s graphics team.

We are going to see a slightly more complex case of Vulkan/GL interoperability, where an image is allocated and filled using Vulkan and then displayed using OpenGL. This case is implemented in Piglit’s vk-image-display test for a 2D RGBA texture (one of the most commonly used texture types).
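
As a taste of what the GL side of the test does, here is a minimal sketch of importing the Vulkan-allocated memory and binding it as a 2D RGBA texture with GL_EXT_memory_object_fd; vk_fd and vk_size stand for the opaque fd and allocation size exported from Vulkan, and are placeholders rather than Piglit helper names:

```c
#include <epoxy/gl.h> /* Piglit uses libepoxy for the GL entry points */

/* Minimal sketch of the GL import path (GL_EXT_memory_object_fd).
 * vk_fd and vk_size come from the Vulkan side (see the export sketch
 * in the Part 2 entry below). */
GLuint
import_vk_texture(int vk_fd, GLuint64 vk_size, GLsizei width, GLsizei height)
{
    GLuint mem_obj, tex;

    glCreateMemoryObjectsEXT(1, &mem_obj);
    /* Ownership of the fd is transferred to GL by this call. */
    glImportMemoryFdEXT(mem_obj, vk_size, GL_HANDLE_TYPE_OPAQUE_FD_EXT, vk_fd);

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* The tiling must match the one used when the VkImage was created. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_TILING_EXT, GL_OPTIMAL_TILING_EXT);
    /* Bind the imported memory as the backing store of a 2D RGBA texture;
     * from here on it can be sampled and displayed like any GL texture. */
    glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_RGBA8, width, height, mem_obj, 0);

    return tex;
}
```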

Remember that the code for the test, the Vulkan helper/framework functions, and the interoperability functions lives in Piglit’s tests/spec/ext_external_objects/ directory.

Continue reading [OpenGL and Vulkan Interoperability on Linux] Part 3: Using OpenGL to display Vulkan allocated textures.

[OpenGL and Vulkan Interoperability on Linux]: The XDC 2020 presentation

XDC 2020 (co-sponsored by Igalia) was virtual this year due to COVID-19. Several presentations came from Igalians (see my XDC post), and I gave a talk about OpenGL and Vulkan interoperability and our work to support EXT_external_objects and EXT_external_objects_fd in different Mesa3D drivers.

These are my slides; pages 9-12 contain short descriptions of all the Piglit EXT_external_objects use cases I plan to cover in the upcoming OpenGL and Vulkan interoperability blog posts:

estea-xdc2020

Continue reading [OpenGL and Vulkan Interoperability on Linux]: The XDC 2020 presentation

[OpenGL and Vulkan Interoperability on Linux] Part 2: Using OpenGL to draw on Vulkan textures.

This is the second post of the OpenGL and Vulkan interoperability series, where I explain some EXT_external_objects and EXT_external_objects_fd use cases with examples taken from the Piglit tests I’ve written to test the extensions as part of my work for Igalia’s graphics team.

We are going to see a very simple case of Vulkan/GL interoperability where an image is allocated using Vulkan and filled using OpenGL. This case is implemented in Piglit’s vk-image-overwrite test for images of different formats.
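
For context, this is roughly what the Vulkan side of such a test has to do to make the image memory importable. This is a sketch assuming VK_KHR_external_memory_fd, with error handling omitted and the VkImage assumed to have been created with a VkExternalMemoryImageCreateInfo chained into its create info:

```c
#include <vulkan/vulkan.h>

/* Sketch of the Vulkan-side export (VK_KHR_external_memory_fd):
 * allocate the image memory as exportable and retrieve the opaque fd
 * that GL will later consume with glImportMemoryFdEXT. */
int
export_image_memory_fd(VkDevice device, VkImage image,
                       VkMemoryRequirements mem_reqs, uint32_t mem_type_idx)
{
    /* Mark the allocation as exportable via an opaque fd. */
    VkExportMemoryAllocateInfo export_info = {
        .sType = VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO,
        .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
    };
    VkMemoryAllocateInfo alloc_info = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
        .pNext = &export_info,
        .allocationSize = mem_reqs.size,
        .memoryTypeIndex = mem_type_idx,
    };
    VkDeviceMemory mem;
    vkAllocateMemory(device, &alloc_info, NULL, &mem);
    vkBindImageMemory(device, image, mem, 0);

    /* vkGetMemoryFdKHR is an extension function and must be resolved
     * through vkGetDeviceProcAddr. */
    PFN_vkGetMemoryFdKHR get_fd =
        (PFN_vkGetMemoryFdKHR)vkGetDeviceProcAddr(device, "vkGetMemoryFdKHR");

    VkMemoryGetFdInfoKHR fd_info = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
        .memory = mem,
        .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
    };
    int fd = -1;
    get_fd(device, &fd_info, &fd);
    return fd;
}
```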

Continue reading [OpenGL and Vulkan Interoperability on Linux] Part 2: Using OpenGL to draw on Vulkan textures.

[OpenGL and Vulkan Interoperability on Linux] Part 1: Introduction

Igalia’s graphics team has been working for a while now on the OpenGL extensions that provide the mechanisms for OpenGL and Vulkan interoperability in the Intel iris (gallium3d) driver that is part of Mesa.

As there were no conformance tests (CTS) for these extensions, and we needed to test them, we have written (and are still writing) small Piglit tests that exercise the exchange of resources such as buffers, textures, and depth or stencil buffers, as well as the synchronization of that exchange.
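
To give an idea of the GL half of that synchronization, here is a hedged sketch using GL_EXT_semaphore_fd; gl_ready_fd and gl_done_fd are placeholder fds exported from two Vulkan semaphores, and tex is a texture imported from Vulkan as in the posts above:

```c
#include <epoxy/gl.h>

/* Sketch of the GL half of the synchronization (GL_EXT_semaphore_fd):
 * two imported semaphores coordinate the hand-over of a shared texture
 * between Vulkan and GL. */
void
draw_synchronized(int gl_ready_fd, int gl_done_fd, GLuint tex)
{
    GLuint gl_ready, gl_done;
    GLenum layout = GL_LAYOUT_SHADER_READ_ONLY_EXT;

    glGenSemaphoresEXT(1, &gl_ready);
    glGenSemaphoresEXT(1, &gl_done);
    glImportSemaphoreFdEXT(gl_ready, GL_HANDLE_TYPE_OPAQUE_FD_EXT, gl_ready_fd);
    glImportSemaphoreFdEXT(gl_done, GL_HANDLE_TYPE_OPAQUE_FD_EXT, gl_done_fd);

    /* Wait until Vulkan signals that the texture is ready, declaring
     * the layout it was left in on the Vulkan side. */
    glWaitSemaphoreEXT(gl_ready, 0, NULL, 1, &tex, &layout);

    /* ... use the texture from GL here ... */

    /* Hand the resource back; Vulkan waits on the second semaphore. */
    glSignalSemaphoreEXT(gl_done, 0, NULL, 1, &tex, &layout);
    glFlush();
}
```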

Continue reading [OpenGL and Vulkan Interoperability on Linux] Part 1: Introduction

Depth-aware upsampling experiments (Part 6: A complete approach to upsample the half-resolution render target of a SSAO implementation)

This is the final post of the series where I explain the ideas I tried in order to improve the upsampling of the half-resolution SSAO render target of the VKDF sponza demo that was written by Iago Toral. In the previous posts, I performed experiments to explore different upsampling ideas and explained the logic behind adopting or rejecting each one. In the end, I managed to find a method that reduces the artifacts to an acceptable level, so in this post I’ll present it complete and in detail.

Continue reading Depth-aware upsampling experiments (Part 6: A complete approach to upsample the half-resolution render target of a SSAO implementation)

Depth-aware upsampling experiments (Part 5: Sample classification tweaks to improve the SSAO upsampling on surfaces)

This is another post of the series where I explain the ideas I tried in order to improve the upsampling of the half-resolution SSAO render target of the VKDF sponza demo that was written by Iago Toral. In a previous post (3.2), I classified the sample neighborhoods into surface neighborhoods and neighborhoods that contain depth discontinuities, using the normals. Having this information about the neighborhoods, in the last post I demonstrated how to further improve the nearest depth algorithm (explained in parts 1 and 2 of this series) and reduce the artifacts in the neighborhoods where we detect depth discontinuities. The result was good, but we saw that some imperfections remain in a few edge cases. So, in this post, I am going to talk about some ideas I had to further improve the SSAO, and about my final decisions.

Continue reading Depth-aware upsampling experiments (Part 5: Sample classification tweaks to improve the SSAO upsampling on surfaces)

Depth-aware upsampling experiments (Part 4: Improving the nearest depth where we detect discontinuities)

This is another post of the series where I explain some ideas I tried in order to improve the upsampling of the half-resolution SSAO render target of the VKDF sponza demo that was written by Iago Toral. In the previous post, I classified the sample neighborhoods into surface neighborhoods and neighborhoods that contain depth discontinuities, using the normals. Having this information about the neighborhoods, in this post I will try to further improve the nearest depth algorithm (see also parts 1 and 2) and reduce the artifacts in the neighborhoods where we detect depth discontinuities.
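
As a refresher, the core of the nearest depth selection that these improvements build on can be sketched in C as follows; this is a simplification of the demo's GLSL, not the exact shader code:

```c
#include <math.h>

/* C sketch of the basic nearest depth selection: among the 2x2
 * low-resolution depths around a full-resolution pixel, pick the
 * sample whose depth is closest to the full-resolution depth and use
 * its index to fetch the low-resolution SSAO value. */
static int
nearest_depth_index(const float lowres_depth[4], float fullres_depth)
{
    int best = 0;
    float best_dist = fabsf(lowres_depth[0] - fullres_depth);

    for (int i = 1; i < 4; i++) {
        float dist = fabsf(lowres_depth[i] - fullres_depth);
        if (dist < best_dist) {
            best_dist = dist;
            best = i;
        }
    }
    return best;
}
```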

Continue reading Depth-aware upsampling experiments (Part 4: Improving the nearest depth where we detect discontinuities)

Depth-aware upsampling experiments (Part 3.2: Improving the upsampling using normals to classify the samples)

This post is again about improving the upsampling of the half-resolution SSAO render target used in the VKDF sponza demo that was written by Iago Toral. I am going to explain how I used information from the normals to determine whether the samples of each 2×2 neighborhood we check during the upsampling belong to the same surface or not, and how this was useful in improving the upsampling.
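
The classification itself can be sketched like this; a minimal C illustration, where the threshold is illustrative rather than the demo's tuned value:

```c
/* C sketch of the normals-based classification: if every pair of the
 * 2x2 neighborhood normals points in nearly the same direction, the
 * samples are assumed to lie on the same surface; otherwise the
 * neighborhood is treated as containing a depth discontinuity. */
typedef struct { float x, y, z; } vec3;

static float
dot3(vec3 a, vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static int
neighborhood_is_surface(const vec3 n[4])
{
    /* Cosine of the maximum allowed angle between normals; the value
     * here is illustrative, not the one used in the demo. */
    const float threshold = 0.9f;

    for (int i = 0; i < 4; i++)
        for (int j = i + 1; j < 4; j++)
            if (dot3(n[i], n[j]) < threshold)
                return 0; /* contains a discontinuity */
    return 1; /* all samples on the same surface */
}
```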

Continue reading Depth-aware upsampling experiments (Part 3.2: Improving the upsampling using normals to classify the samples)

Depth-aware upsampling experiments (Part 3.1: Improving the upsampling using depths to classify the samples)

In my previous posts of this series, I analyzed the basic idea behind the depth-aware upsampling techniques. In the first post [1], I implemented the nearest depth sampling algorithm [3] from NVIDIA, and in the second one [2], I compared some methods that improve the quality of the downsampled z-buffer data that I use with the nearest depth. The conclusion was that nearest depth sampling alone is not good enough to reduce the artifacts of Iago Toral’s SSAO implementation in VKDF [4] to an acceptable level. So, in this post, I am going to talk about my early experiments to further improve the upsampling and the logic behind each one. I named it part 3.1 because, after starting the series, I found that some combinations of these methods with others can give significantly better visual results, and as my experiments with the upsampling techniques cannot fit in one blog post, I am going to split the upsampling improvements (part 3) into sub-parts.
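
One of the simplest depth-based classifiers of this kind can be sketched as follows; again a C illustration with a made-up threshold, not necessarily the exact method adopted in the demo:

```c
/* C sketch of a depths-based classification: if the depth spread
 * inside the 2x2 low-resolution neighborhood stays below a small
 * tolerance, the neighborhood is treated as a flat surface (where
 * plain bilinear filtering is safe); otherwise we fall back to the
 * nearest depth selection. */
static int
neighborhood_is_flat(const float d[4])
{
    /* Depth spread tolerance; the value is illustrative. */
    const float threshold = 0.01f;
    float dmin = d[0], dmax = d[0];

    for (int i = 1; i < 4; i++) {
        if (d[i] < dmin) dmin = d[i];
        if (d[i] > dmax) dmax = d[i];
    }
    return (dmax - dmin) < threshold;
}
```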

Continue reading Depth-aware upsampling experiments (Part 3.1: Improving the upsampling using depths to classify the samples)