[OpenGL and Vulkan Interoperability on Linux] Part 3: Using OpenGL to display Vulkan allocated textures.

This is the third post of the OpenGL and Vulkan interoperability series, where I explain some EXT_external_objects and EXT_external_objects_fd use cases with examples taken from the Piglit tests I've written to test the extensions as part of my work for Igalia's graphics team.

We are going to see a slightly more complex case of Vulkan/GL interoperability where an image is allocated and filled using Vulkan and then it is displayed using OpenGL. This case is implemented in Piglit’s vk-image-display test for a 2D RGBA texture (which is one of the most commonly used texture types).

Remember that the code for the test, the Vulkan helper/framework functions, and the interoperability functions is in Piglit's tests/spec/ext_external_objects/ directory.

The vk-image-display test:

  • Allocates an image using Vulkan.
  • Uses that image as the render target (color attachment) of a Vulkan render pass and fills it to contain a certain pattern with color bands.
  • Creates a GL memory object and a texture from the image and displays it using OpenGL.
  • Checks that the displayed image contains the striped pattern created by Vulkan.

Step 1: Image allocation (Vulkan)

We've already seen a similar step 1 in Part 2. The difference in this test is that we are going to need two images for the Vulkan renderpass, although only one of them will be used by OpenGL. This is because the renderpass of this test uses both a color and a depth attachment.

As the details of how vk_create_ext_image works were explained in Part 2, I am skipping the long explanation of the flags we need and of how we allocate external memory. Please refer to the previous post for more information. What is new is that now we need to set some image properties such as the usage, the tiling mode, the format, the dimensions, and the number of samples, levels, and layers. These properties are required when we create Vulkan pipelines, renderpasses, framebuffer objects, and more.
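Sketched with designated initializers, such a set of properties could look like the following. Note that the struct name and its fields are assumptions based on the description above, not the exact definitions from vk.h:

```c
/* Sketch only: a property block like the one the framework keeps per
 * external image; struct and field names are assumed, not from vk.h. */
struct vk_image_props props = {
    .w = 256, .h = 256, .depth = 1,           /* dimensions */
    .num_samples = VK_SAMPLE_COUNT_1_BIT,
    .num_levels = 1,
    .num_layers = 1,
    .format = VK_FORMAT_R8G8B8A8_UNORM,       /* 2D RGBA */
    .tiling = VK_IMAGE_TILING_OPTIMAL,
    .usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
             VK_IMAGE_USAGE_SAMPLED_BIT,      /* render target, then sampled */
};
```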

Step 2: Initialization of GL and Vulkan structs


In order to have a generic way to render using Vulkan for all tests (most tests I am going to use as demos in the follow-up posts need to render something using Vulkan), I've created a very basic and minimal Vulkan "framework" where the user can create a Vulkan renderer for different types of "scenes", which in our case are 2D images. The structs and the helper functions of this framework are in tests/spec/ext_external_objects/vk.[hc] and they are prefixed with "vk_".

I've also prefixed the test functions that perform OpenGL-only or Vulkan-only operations with gl_ and vk_ respectively, for example gl_init and vk_init.

In vk_init, which initializes the necessary Vulkan parts and creates the renderer, I use most of the vk_ functions of the Vulkan framework. This post would become huge if I started getting into all the Vulkan framework details, so here's briefly what I needed to do at initialization:

  • I've initialized the Vulkan context (created a device, a Vulkan instance, etc.) by calling vk_init_ctx_for_rendering from vk.[hc].
  • I've verified that the Vulkan and OpenGL drivers are compatible (vk_check_gl_compatibility, explained in Part 2).
  • I've allocated the external images, the shaders, and the other resources, and created the Vulkan renderer, which is an abstraction around the pipeline, the renderpass, and anything else Vulkan needs to draw one particular scene (see vk_create_renderer and vk_draw in vk.[hc]).
  • I've created the Vulkan semaphores by calling vk_create_semaphores; these are ordinary Vulkan semaphores that will be used on the Vulkan side to synchronize the rendering (again, see the VkSubmitInfo struct in the function vk_draw in vk.[hc]). These semaphores are stored in a vk_semaphores struct in vk.h.
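Put together, vk_init roughly follows this shape. The helper names are from the post, but the exact argument lists are assumptions (the real function in vk_image_display.c differs):

```c
/* Rough outline of vk_init; helper names from the post, signatures assumed. */
static bool
vk_init(void)
{
    if (!vk_init_ctx_for_rendering(&vk_core))
        return false;

    if (!vk_check_gl_compatibility(&vk_core))
        return false;

    /* ... allocate the external color/depth images and the shaders,
     * then build the renderer (pipeline + renderpass + framebuffer): */
    if (!vk_create_renderer(&vk_core, &vk_rnd))
        return false;

    return vk_create_semaphores(&vk_core, &vk_sem);
}
```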

The vk_semaphores struct contains the 2 Vulkan semaphores used in the Vulkan queue submission at rendering:
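As described, the struct holds the two semaphores. The field names come from the post; the sketch below assumes plain VkSemaphore members:

```c
/* vk.h (sketch): the two semaphores used at queue submission */
struct vk_semaphores {
    VkSemaphore vk_frame_ready; /* pSignalSemaphore: signaled by Vulkan
                                 * when the frame has been rendered */
    VkSemaphore gl_frame_done;  /* pWaitSemaphore: Vulkan waits on it
                                 * until OpenGL is done with the image */
};
```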

One more detail about the semaphores: vk_frame_ready is the semaphore that is signaled when the rendering to the Vulkan frame is completed, and it is the one used as pSignalSemaphore in the VkSubmitInfo for the rendering. The other one (gl_frame_done) is the one upon which Vulkan waits: in other words, the Vulkan rendering won't start until this pWaitSemaphore is signaled. I am going to explain these semaphores in more detail when we get to the rendering, but on the Vulkan side they are used in vk_draw in vk.[hc], as I said.

OpenGL and Vulkan “common” structs:

Before initializing OpenGL, I used the EXT_external_objects(_fd) extensions to import the Vulkan memory and create a GL memory object and a GL texture, similarly to Part 2, and then to import the semaphores:
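That import sequence looks roughly like this. The helper names are from the post, but the argument lists are assumptions:

```c
/* Sketch: import the Vulkan allocations into OpenGL */
GLuint gl_mem_obj;
GLuint gl_tex;
struct gl_ext_semaphores gl_sem;

/* Vulkan device memory -> GL memory object -> GL texture (as in Part 2) */
gl_create_mem_obj_from_vk_mem(&vk_core, &vk_color_img, &gl_mem_obj);
gl_gen_tex_from_mem_obj(gl_mem_obj, GL_RGBA8, w, h, 0, &gl_tex);

/* Vulkan semaphores -> GL semaphore objects */
gl_create_semaphores_from_vk(&vk_core, &vk_sem, &gl_sem);
```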

The functions gl_create_mem_obj_from_vk_mem and gl_gen_tex_from_mem_obj that create a GL texture corresponding to a Vulkan texture were analyzed in Part 2 of this series, so I am going to explain gl_create_semaphores_from_vk, which can be found in the interop.[hc] files: a collection of the functions that are used to exchange resources between OpenGL and Vulkan.

The function gl_create_semaphores_from_vk creates OpenGL semaphores that correspond to the Vulkan ones we've seen before. The struct gl_ext_semaphores of interop.h contains 2 OpenGL semaphore objects: vk_frame_done and gl_frame_ready. The OpenGL server will block and wait until vk_frame_done is signaled, and then it will take control over the shared texture. When the GL rendering is done, OpenGL will signal the gl_frame_ready semaphore to let Vulkan know that the texture is ready to be used again. We'll see more details later.
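In sketch form, with the struct field names taken from the text: the import itself uses EXT_semaphore and EXT_semaphore_fd entry points, which do exist with these signatures, but the surrounding code is an assumption:

```c
/* interop.h (sketch): the GL-side view of the two shared semaphores */
struct gl_ext_semaphores {
    GLuint vk_frame_done;   /* signaled by Vulkan, waited on by OpenGL */
    GLuint gl_frame_ready;  /* signaled by OpenGL, waited on by Vulkan */
};

/* Importing one semaphore from the FD exported with vkGetSemaphoreFdKHR */
GLuint sem;
glGenSemaphoresEXT(1, &sem);
glImportSemaphoreFdEXT(sem, GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);
if (!glIsSemaphoreEXT(sem))
    fprintf(stderr, "semaphore import failed\n");
```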

The file descriptor (FD) corresponding to each Vulkan semaphore is used to import the semaphore in OpenGL. That way we have 2 semaphores seen by both APIs: on the OpenGL side as a GLuint semaphore object and on the Vulkan side as a VkSemaphore object.

OpenGL name       Vulkan name       Signaled by   Waited on by
vk_frame_done     vk_frame_ready    Vulkan        OpenGL
gl_frame_ready    gl_frame_done     OpenGL        Vulkan


So, after having initialized the shared resources, I had to initialize some structs for the OpenGL side. OpenGL was going to display the texture, so I needed two shaders: one that renders a fullscreen quad and one that performs the texture mapping with the Vulkan texture. This is quite easy to do with Piglit, and so gl_init is quite small:
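Something along these lines (a sketch; see vk_image_display.c for the real function):

```c
/* Sketch of gl_init: compile and link the two shaders with Piglit's helper */
static GLuint gl_prog;

static void
gl_init(void)
{
    gl_prog = piglit_build_simple_program(vs, fs);
}
```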

where vs and fs are the GLSL vertex and fragment shaders respectively (see vk_image_display.c for their code).

Step 3: Filling the image memory with Vulkan (rendering)

So, after having all the structs initialized, it was time for the actual synchronized rendering. The rendering is done in piglit_display, which is similar to the FreeGLUT display callback:
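Its synchronization flow can be sketched like this. glSignalSemaphoreEXT and glWaitSemaphoreEXT are the real EXT_semaphore entry points, but the layouts, the vk_draw signature, and other details are assumptions:

```c
/* Sketch of piglit_display's synchronized rendering */
enum piglit_result
piglit_display(void)
{
    GLenum layout = GL_LAYOUT_COLOR_ATTACHMENT_EXT;

    /* hand the texture over to Vulkan: signal gl_frame_ready and set
     * the layout Vulkan expects for its color attachment */
    glSignalSemaphoreEXT(gl_sem.gl_frame_ready, 0, NULL, 1, &gl_tex, &layout);

    /* Vulkan waits on gl_frame_done, renders the bands,
     * and signals vk_frame_ready (see vk_draw in vk.[hc]) */
    vk_draw(&vk_core);

    /* take the texture back: wait on vk_frame_done before sampling it */
    layout = GL_LAYOUT_SHADER_READ_ONLY_EXT;
    glWaitSemaphoreEXT(gl_sem.vk_frame_done, 0, NULL, 1, &gl_tex, &layout);

    glUseProgram(gl_prog);
    glBindTexture(GL_TEXTURE_2D, gl_tex);
    piglit_draw_rect_tex(-1, -1, 2, 2, 0, 0, 1, 1);

    /* the pixel checks and piglit_present_results() follow */
    return PIGLIT_PASS;
}
```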

Before anything else, the gl_frame_ready semaphore must be signaled so that Vulkan can start the rendering, and the layout of the Vulkan texture must be set! Table 4.4 of the EXT_external_objects specification contains the details about the necessary layout transitions.

So, when Vulkan receives the OpenGL signal that the frame is ready, it starts operating on it and renders the striped pattern shown at the beginning of the post by calling vk_draw. When the Vulkan renderpass is over, vk_frame_ready is automatically signaled, as it's the pSignalSemaphore of the renderpass's VkSubmitInfo struct.

When the Vulkan rendering is done, OpenGL takes control, binds the texture, and renders a quad by calling piglit_draw_rect_tex, using the gl_prog (vs, fs shaders) we've seen before.

The striped pattern is rendered with the vk_bands.frag fragment shader, which simply selects the pixel color according to its position on the X axis:
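A shader doing that could look like the following sketch. The real vk_bands.frag (with its exact band count and colors) is in the Piglit tree; the uniform name and colors here are assumptions:

```glsl
#version 130

/* select one of 4 color bands from the fragment's x coordinate */
uniform float fb_w;  /* framebuffer width in pixels (assumed uniform name) */

const vec4 colors[4] = vec4[](vec4(1.0, 0.0, 0.0, 1.0),
                              vec4(0.0, 1.0, 0.0, 1.0),
                              vec4(0.0, 0.0, 1.0, 1.0),
                              vec4(1.0, 1.0, 0.0, 1.0));

void main()
{
    int band = int(gl_FragCoord.x * 4.0 / fb_w);
    gl_FragColor = colors[band];
}
```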

The final for loop reads the middle pixel of each of the image's bands to check that it matches the expected color, so that the test passes. This color check is done using Piglit's piglit_probe_pixel_rgba function.
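The check loop can be sketched like this. piglit_probe_pixel_rgba, piglit_width, and piglit_height are the real Piglit helpers and globals; the band count and colors are assumptions:

```c
/* Sketch: probe the middle pixel of each band against the expected color */
static const float expected[4][4] = {
    {1.0, 0.0, 0.0, 1.0},  /* assumed band colors */
    {0.0, 1.0, 0.0, 1.0},
    {0.0, 0.0, 1.0, 1.0},
    {1.0, 1.0, 0.0, 1.0},
};

bool pass = true;
int band_w = piglit_width / 4;

for (int i = 0; i < 4; i++)
    pass = piglit_probe_pixel_rgba(i * band_w + band_w / 2,
                                   piglit_height / 2,
                                   expected[i]) && pass;
```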

Step 4: Displaying the image memory data with OpenGL

Finally, piglit_present_results displays the image, which should show the initial striped pattern that was rendered by Vulkan.


[1]: Piglit code (tests are in tests/spec/ext_external_objects/)
[2]: The previous blog posts of this series (Part 1 and Part 2)

What comes next:

The next example will be about reusing an image several times from both APIs: the vk-image-display-overwrite test renders an image using Vulkan, then modifies it using OpenGL, then reads back the pixels using Vulkan and displays them using OpenGL's glTexImage2D.

See you next time!

Comments:

  1. Thank you for discussing semaphore importing 🙂

    If glIsSemaphoreEXT returns false, we would expect that glGetError() != GL_NO_ERROR right after glImportSemaphoreFdEXT, right?

    1. glGetError is used in every function to make sure there are no OpenGL errors coming from GL calls (any GL calls) before this check. glIsSemaphoreEXT is used to check if the import was successful. glGetError is not related to glImportSemaphoreFdEXT.

      If you read the extension spec (https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_external_objects_fd.txt) you will see that glImportSemaphoreFdEXT doesn't generate any error that glGetError could catch.

      I just use frequent glGetError checks because these examples were part of Piglit (drivers testing framework) and I wanted to be sure that there are no errors that go unnoticed.
