[OpenGL and Vulkan Interoperability on Linux] Part 5: A Vulkan pixel buffer is reused from OpenGL

This is the 5th post of the OpenGL and Vulkan interoperability series, where I describe some use cases for the EXT_external_objects and EXT_external_objects_fd extensions. These use cases have been implemented inside Piglit as part of my work for Igalia’s graphics team, using a Vulkan framework I’ve written for this purpose.

In this 5th post, we are going to see a case where a pixel buffer is allocated and filled by Vulkan and its data is then used as the source for an OpenGL texture image.
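
To make the idea more concrete, here is a rough C sketch of the GL side, assuming the Vulkan buffer memory has already been imported into a GL memory object (as in the previous posts) and that an extension loader such as libepoxy exposes the GL_EXT_memory_object entry points. The helper name, sizes and formats are illustrative and not taken from the actual Piglit test.

```c
#include <epoxy/gl.h>  /* assumption: libepoxy provides the EXT entry points */

/* Back a GL buffer object with the imported Vulkan memory and use it as a
 * pixel unpack buffer (PBO) to fill a texture image. */
GLuint
texture_from_vk_pixel_buffer(GLuint mem_obj, GLuint64 size, int w, int h)
{
    GLuint pbo, tex;

    /* The buffer storage comes from the imported Vulkan allocation. */
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferStorageMemEXT(GL_PIXEL_UNPACK_BUFFER, size, mem_obj, 0);

    /* With the PBO bound, the texture upload reads from the buffer
     * (the last argument is an offset into the PBO, not a pointer). */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, w, h);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    return tex;
}
```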

Continue reading [OpenGL and Vulkan Interoperability on Linux] Part 5: A Vulkan pixel buffer is reused from OpenGL

[OpenGL and Vulkan Interoperability on Linux] Part 4: Using OpenGL to overwrite Vulkan allocated textures.

This is the 4th post on OpenGL and Vulkan Interoperability on Linux. The first one was an introduction to the EXT_external_objects and EXT_external_objects_fd extensions, the second described a simple interoperability use case where a Vulkan-allocated texture is filled by OpenGL, and the third was about a slightly more complex use case where a Vulkan texture was filled by Vulkan and displayed by OpenGL. In this 4th and last post about shared textures, we are going to see a use case where a Vulkan texture is filled by Vulkan, then overwritten by OpenGL, then read back by Vulkan, and finally displayed again using OpenGL. This more complex use case has also been written for Piglit using the small Vulkan framework I’ve written to test the external objects extensions. The source code can be found inside the tests/spec/ext_external_objects directory of the mesa/piglit master branch.
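
Since the interesting part here is the ordering, here is a hedged C sketch of the GL side of that round trip, using the GL_EXT_semaphore / GL_EXT_semaphore_fd entry points. The helpers gl_overwrite_texture() and gl_display_texture() are hypothetical placeholders for the real drawing code, and the chosen layout is illustrative; the actual test picks layouts that match what the Vulkan side expects.

```c
#include <epoxy/gl.h>  /* assumption: libepoxy provides the EXT entry points */

/* Placeholders for the real rendering code in the test. */
void gl_overwrite_texture(GLuint tex);
void gl_display_texture(GLuint tex);

/* GL side of: Vulkan fill -> GL overwrite -> Vulkan readback -> GL display.
 * `vk_done` and `gl_done` are semaphores shared with Vulkan, `tex` is the
 * shared texture. */
void
overwrite_and_display(GLuint vk_done, GLuint gl_done, GLuint tex)
{
    GLenum layout = GL_LAYOUT_COLOR_ATTACHMENT_EXT;  /* illustrative */

    /* 1. Make GL wait until Vulkan has finished filling the texture. */
    glWaitSemaphoreEXT(vk_done, 0, NULL, 1, &tex, &layout);

    /* 2. Overwrite the texture contents from GL. */
    gl_overwrite_texture(tex);

    /* 3. Signal Vulkan that it can now read the texture back. */
    glSignalSemaphoreEXT(gl_done, 0, NULL, 1, &tex, &layout);

    /* ... Vulkan reads the texture back and signals vk_done again ... */

    /* 4. Wait for the readback, then display the texture with GL. */
    glWaitSemaphoreEXT(vk_done, 0, NULL, 1, &tex, &layout);
    gl_display_texture(tex);
}
```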

Continue reading [OpenGL and Vulkan Interoperability on Linux] Part 4: Using OpenGL to overwrite Vulkan allocated textures.

[OpenGL and Vulkan Interoperability on Linux] Part 3: Using OpenGL to display Vulkan allocated textures.

This is the third post of the OpenGL and Vulkan interoperability series, where I explain some EXT_external_objects and EXT_external_objects_fd use cases with examples taken from the Piglit tests I’ve written to test the extensions as part of my work for Igalia’s graphics team.

We are going to see a slightly more complex case of Vulkan/GL interoperability where an image is allocated and filled using Vulkan and then it is displayed using OpenGL. This case is implemented in Piglit’s vk-image-display test for a 2D RGBA texture (which is one of the most commonly used texture types).
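
As a taste of what the GL side has to do before it can sample the Vulkan-rendered image, here is a minimal C sketch based on GL_EXT_semaphore / GL_EXT_semaphore_fd; the function name and the chosen layout are assumptions made for illustration, not code taken from vk-image-display.

```c
#include <epoxy/gl.h>  /* assumption: libepoxy provides the EXT entry points */

/* Import the semaphore that Vulkan signals when it has finished rendering,
 * and insert a wait in the GL command stream before the texture is sampled. */
GLuint
import_and_wait_semaphore(int sem_fd, GLuint tex)
{
    GLuint sem;
    GLenum src_layout = GL_LAYOUT_SHADER_READ_ONLY_EXT;  /* illustrative */

    glGenSemaphoresEXT(1, &sem);
    glImportSemaphoreFdEXT(sem, GL_HANDLE_TYPE_OPAQUE_FD_EXT, sem_fd);

    /* Subsequent GL commands will not execute until Vulkan signals the
     * semaphore; the call also handles the texture's layout transition. */
    glWaitSemaphoreEXT(sem, 0, NULL, 1, &tex, &src_layout);
    return sem;
}
```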

Remember that the code for the test, the Vulkan helper/framework functions, and the interoperability functions is in the tests/spec/ext_external_objects/ directory of Piglit.

Continue reading [OpenGL and Vulkan Interoperability on Linux] Part 3: Using OpenGL to display Vulkan allocated textures.

[OpenGL and Vulkan Interoperability on Linux]: The XDC 2020 presentation

At this year’s XDC I gave a talk about OpenGL and Vulkan interoperability and our work to support EXT_external_objects and EXT_external_objects_fd in different Mesa3D drivers.

These are my slides; pages 9-12 contain short descriptions of all the Piglit EXT_external_objects use cases I plan to describe in the upcoming OpenGL and Vulkan Interoperability blog posts:

estea-xdc2020

Continue reading [OpenGL and Vulkan Interoperability on Linux]: The XDC 2020 presentation

[OpenGL and Vulkan Interoperability on Linux] Part 2: Using OpenGL to draw on Vulkan textures.

This is the second post of the OpenGL and Vulkan interoperability series, where I explain some EXT_external_objects and EXT_external_objects_fd use cases with examples taken from the Piglit tests I’ve written to test the extensions as part of my work for Igalia’s graphics team.

We are going to see a very simple case of Vulkan/GL interoperability where an image is allocated using Vulkan and filled using OpenGL. This case is implemented in Piglit’s vk-image-overwrite test for images of different formats.
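
For illustration, here is a minimal C sketch of the import step on the GL side (GL_EXT_memory_object / GL_EXT_memory_object_fd), assuming libepoxy for the extension entry points; the names, size, and the RGBA8/optimal-tiling choice are illustrative rather than copied from vk-image-overwrite.

```c
#include <epoxy/gl.h>  /* assumption: libepoxy provides the EXT entry points */

/* Import the Vulkan allocation (received as an opaque fd) into a GL memory
 * object and create a texture backed by it. The fd, size, dimensions and
 * internal format must match the Vulkan image. */
GLuint
import_vk_image_as_gl_texture(int fd, GLuint64 size, int w, int h)
{
    GLuint mem_obj, tex;

    glCreateMemoryObjectsEXT(1, &mem_obj);
    /* Ownership of the fd is transferred to GL by this call. */
    glImportMemoryFdEXT(mem_obj, size, GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* The tiling must match the one used when creating the Vulkan image. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_TILING_EXT, GL_OPTIMAL_TILING_EXT);
    glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_RGBA8, w, h, mem_obj, 0);

    return tex;
}
```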

Continue reading [OpenGL and Vulkan Interoperability on Linux] Part 2: Using OpenGL to draw on Vulkan textures.

[OpenGL and Vulkan Interoperability on Linux] Part 1: Introduction

Igalia’s graphics team has been working for a while now on the OpenGL extensions that provide the mechanisms for OpenGL and Vulkan interoperability in the Intel iris (Gallium3D) driver, which is part of Mesa.

As there were no conformance tests (CTS) for these extensions and we needed to test them, we have written (and are still writing) small Piglit tests that exercise the exchange of resources such as buffers, textures, and depth or stencil buffers, as well as the synchronization of that exchange.
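
To give a flavour of the mechanism, here is a minimal sketch of the Vulkan side of such an exchange: exporting the opaque fd of an allocation with VK_KHR_external_memory_fd so that OpenGL can later import it. It assumes the memory was allocated with VkExportMemoryAllocateInfo chained in and that the extension is enabled on the device; error handling is omitted and the helper name is made up.

```c
#include <vulkan/vulkan.h>

/* Export the fd backing a Vulkan allocation so that it can be handed to GL. */
int
export_memory_fd(VkDevice dev, VkDeviceMemory mem)
{
    /* vkGetMemoryFdKHR comes from VK_KHR_external_memory_fd and is loaded
     * with vkGetDeviceProcAddr. */
    PFN_vkGetMemoryFdKHR get_fd =
        (PFN_vkGetMemoryFdKHR)vkGetDeviceProcAddr(dev, "vkGetMemoryFdKHR");

    VkMemoryGetFdInfoKHR fd_info = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
        .memory = mem,
        .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
    };

    int fd = -1;
    if (get_fd(dev, &fd_info, &fd) != VK_SUCCESS)
        return -1;
    return fd;
}
```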

Continue reading [OpenGL and Vulkan Interoperability on Linux] Part 1: Introduction

Depth-aware upsampling experiments (Part 6: A complete approach to upsample the half-resolution render target of a SSAO implementation)

This is the final post of the series where I explain the ideas I tried in order to improve the upsampling of the half-resolution SSAO render target of the VKDF sponza demo that was written by Iago Toral. In the previous posts, I performed experiments to explore different upsampling ideas and explained the logic behind adopting or rejecting each one. In the end, I managed to find a method that reduces the artifacts to an acceptable level, so in this post I’ll present it complete and in detail.

Continue reading Depth-aware upsampling experiments (Part 6: A complete approach to upsample the half-resolution render target of a SSAO implementation)

Depth-aware upsampling experiments (Part 5: Sample classification tweaks to improve the SSAO upsampling on surfaces)

This is another post of the series where I explain the ideas I tried in order to improve the upsampling of the half-resolution SSAO render target of the VKDF sponza demo that was written by Iago Toral. In a previous post (3.2), I had classified the sample neighborhoods into surface neighborhoods and neighborhoods that contain depth discontinuities, using the normals. Having this information about the neighborhoods, in the last post I demonstrated how to further improve the nearest depth algorithm (explained in parts 1 and 2 of this series) and reduce the artifacts in the neighborhoods where we detect depth discontinuities. The result was good, but we’ve seen that there are still some imperfections in a few edge cases. So, in this post, I am going to talk about some ideas I had to further improve the SSAO, and about my final decisions.
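
For readers who haven't seen post 3.2, here is a rough C sketch of the kind of normal-based classification I'm referring to: a 2x2 low-resolution neighborhood is treated as a surface when all its normals point in roughly the same direction, and as containing a discontinuity otherwise. The pairwise dot-product test and the threshold value are illustrative, not necessarily the exact criterion used in the demo (where this runs in a fragment shader).

```c
#include <stdbool.h>

/* n holds the four (normalized) low-res normals of the 2x2 neighborhood. */
bool
neighborhood_has_discontinuity(const float n[4][3], float threshold)
{
    for (int i = 0; i < 4; i++) {
        for (int j = i + 1; j < 4; j++) {
            float dot = n[i][0] * n[j][0] +
                        n[i][1] * n[j][1] +
                        n[i][2] * n[j][2];
            if (dot < threshold)  /* normals diverge too much */
                return true;
        }
    }
    return false;
}
```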

Continue reading Depth-aware upsampling experiments (Part 5: Sample classification tweaks to improve the SSAO upsampling on surfaces)

Depth-aware upsampling experiments (Part 4: Improving the nearest depth where we detect discontinuities)

This is another post of the series where I explain some ideas I tried in order to improve the upsampling of the half-resolution SSAO render target of the VKDF sponza demo that was written by Iago Toral. In the previous post, I had classified the sample neighborhoods into surface neighborhoods and neighborhoods that contain depth discontinuities, using the normals. Having this information about the neighborhoods, in this post I will try to further improve the nearest depth algorithm (see also parts 1 and 2) and reduce the artifacts in the neighborhoods where we detect depth discontinuities.
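
As a quick reminder of what "nearest depth" means here, a small C sketch of the selection step: among the four low-resolution samples of a 2x2 neighborhood, we pick the one whose depth is closest to the full-resolution depth of the pixel being upsampled. In the demo this logic lives in a fragment shader, so the code below is only an illustration.

```c
#include <math.h>

/* Return the index of the low-res sample whose depth is nearest to the
 * full-resolution depth; its SSAO value is then used for this pixel. */
int
nearest_depth_index(const float lowres_depth[4], float fullres_depth)
{
    int best = 0;
    float best_dist = fabsf(lowres_depth[0] - fullres_depth);

    for (int i = 1; i < 4; i++) {
        float d = fabsf(lowres_depth[i] - fullres_depth);
        if (d < best_dist) {
            best_dist = d;
            best = i;
        }
    }
    return best;
}
```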

Continue reading Depth-aware upsampling experiments (Part 4: Improving the nearest depth where we detect discontinuities)