This is the third post of the OpenGL and Vulkan interoperability series, where I explain some EXT_external_objects and EXT_external_objects_fd use cases with examples taken from the Piglit tests I've written to test the extensions as part of my work for Igalia's graphics team.
We are going to see a slightly more complex case of Vulkan/GL interoperability where an image is allocated and filled using Vulkan and then displayed using OpenGL. This case is implemented in Piglit's vk-image-display test for a 2D RGBA texture (one of the most commonly used texture types).
Remember that the code for the test, the Vulkan helper/framework functions, and the interoperability functions lives in Piglit's tests/spec/ext_external_objects/ directory.
The vk-image-display test:
- Allocates an image using Vulkan.
- Uses that image as the render target (color attachment) of a Vulkan render pass and fills it to contain a certain pattern with color bands.
- Creates a GL memory object and a texture from the image and displays it using OpenGL.
- Checks that the displayed image contains the striped pattern created by Vulkan.
Step 1: Image allocation (Vulkan)
We've already seen a similar step 1 in Part 2. The difference in this test is that we are going to need two images for the Vulkan renderpass, although only one of them will be used by OpenGL: the renderpass needs both a color and a depth attachment.
```c
/* creating external images */

/* color image */
if (!vk_fill_ext_image_props(&vk_core,
                             w, h, d,
                             num_samples, num_levels, num_layers,
                             color_format, color_tiling,
                             color_usage, color_in_layout,
                             color_end_layout,
                             &vk_color_att.props)) {
    fprintf(stderr, "Unsupported color image properties.\n");
    return false;
}

if (!vk_create_ext_image(&vk_core, &vk_color_att.props, &vk_color_att.obj)) {
    fprintf(stderr, "Failed to create color image.\n");
    return false;
}

/* depth image */
if (!vk_fill_ext_image_props(&vk_core,
                             w, h, d,
                             num_samples, num_levels, num_layers,
                             depth_format, depth_tiling,
                             depth_usage, depth_in_layout,
                             depth_end_layout,
                             &vk_depth_att.props)) {
    fprintf(stderr, "Unsupported depth image properties.\n");
    return false;
}

if (!vk_create_ext_image(&vk_core, &vk_depth_att.props, &vk_depth_att.obj)) {
    fprintf(stderr, "Failed to create depth image.\n");
    goto fail;
}
```
As the details of how vk_create_ext_image works have been explained in Part 2, I am skipping the long explanation of the flags we need and of how we allocate external memory; please refer to the previous post for more information. What is new is that now we need to set some image properties such as the usage, the tiling mode, the format, the dimensions, and the number of samples, levels and layers. These properties are required when we create Vulkan pipelines, renderpasses, framebuffer objects and more.
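As a quick reminder of the "external" part: what makes the image exportable is a VkExternalMemoryImageCreateInfo struct chained to the VkImageCreateInfo. Below is a minimal sketch of roughly what vk_create_ext_image does; the function name and the props field names are assumptions based on the calls above, the real code lives in vk.c.

```c
/* Minimal sketch (assumed names): create a Vulkan image whose memory
 * can later be exported through an opaque file descriptor. */
static bool
create_exportable_image(VkDevice dev, const struct vk_image_props *props,
                        VkImage *img)
{
    VkExternalMemoryImageCreateInfo ext_img_info = {
        .sType = VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_IMAGE_CREATE_INFO,
        .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
    };

    VkImageCreateInfo img_info = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
        .pNext = &ext_img_info, /* this is what marks the image as external */
        .imageType = VK_IMAGE_TYPE_2D,
        .format = props->format,
        .extent = { props->w, props->h, props->depth },
        .mipLevels = props->num_levels,
        .arrayLayers = props->num_layers,
        .samples = props->num_samples,
        .tiling = props->tiling,
        .usage = props->usage,
        .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
        .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
    };

    return vkCreateImage(dev, &img_info, NULL, img) == VK_SUCCESS;
}
```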
Step 2: Initialization of GL and Vulkan structs
Vulkan:
In order to have a generic way to render using Vulkan for all tests (most tests I am going to use as demos in the follow-up posts need to render something using Vulkan), I've created a very basic and minimal Vulkan "framework" where the user can create a Vulkan renderer for different types of "scenes", which in our case are 2D images. The structs and the helper functions of this framework are in tests/spec/ext_external_objects/vk.[hc] and they are prefixed with "vk_".
I've also prefixed the functions in the tests that perform OpenGL-only or Vulkan-only operations with gl_ and vk_ respectively, for example gl_init and vk_init.
vk_init, which initializes the necessary Vulkan parts and creates the renderer, uses most of the vk_ functions from the Vulkan framework. This post would become huge if I started getting into all the Vulkan framework details, so here's briefly what I needed to do at initialization:
- I've initialized the Vulkan context (created a device, a Vulkan instance etc.) by calling vk_init_ctx_for_rendering from vk.[hc].
- I've verified that the Vulkan and OpenGL drivers are compatible (vk_check_gl_compatibility, explained in Part 2).
- I've allocated the external images, the shaders and the other resources, and created the Vulkan renderer, which is an abstraction around the pipeline, the renderpass and anything else Vulkan needs to draw one particular scene (see vk_create_renderer and vk_draw of vk.[hc]).
- I've created the Vulkan semaphores by calling vk_create_semaphores. These are ordinary Vulkan semaphores that will be used on the Vulkan side to synchronize the rendering (again, see the VkSubmitInfo struct of the function vk_draw in vk.[hc]). They are stored in a vk_semaphores struct in vk.h.
```c
if (!vk_init_ctx_for_rendering(&vk_core)) {
    fprintf(stderr, "Failed to create Vulkan context.\n");
    return false;
}

if (!vk_check_gl_compatibility(&vk_core)) {
    fprintf(stderr, "Mismatch in driver/device UUID\n");
    return false;
}

/* creating external images */
[...]

/* load shaders */
if (!(vs_src = load_shader(VK_BANDS_VERT, &vs_sz)))
    goto fail;

if (!(fs_src = load_shader(VK_BANDS_FRAG, &fs_sz)))
    goto fail;

/* create Vulkan renderer */
if (!vk_create_renderer(&vk_core, vs_src, vs_sz, fs_src, fs_sz,
                        false, false,
                        &vk_color_att, &vk_depth_att, &vk_rnd)) {
    fprintf(stderr, "Failed to create Vulkan renderer.\n");
    goto fail;
}

if (!vk_create_semaphores(&vk_core, &vk_sem)) {
    fprintf(stderr, "Failed to create semaphores.\n");
    goto fail;
}

free(vs_src);
free(fs_src);
```
The vk_sem struct contains the 2 Vulkan semaphores used in the Vulkan queue submission at rendering:
```c
struct vk_semaphores {
    VkSemaphore vk_frame_ready;
    VkSemaphore gl_frame_done;
};
```
One more detail about the semaphores: vk_frame_ready is the semaphore that will be signaled when the rendering to the Vulkan frame is completed, and it is the one used as the pSignalSemaphore in the VkSubmitInfo for the rendering. The other one (gl_frame_done) is the one upon which Vulkan will be waiting: in other words, the Vulkan rendering won't start until this pWaitSemaphore is signaled. I am going to explain these semaphores further when we get to the rendering but, on the Vulkan side, they are used in vk_draw of vk.[hc], as I said.
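For OpenGL to be able to import them later, these semaphores have to be created as exportable. Here's a minimal sketch of what vk_create_semaphores roughly looks like, assuming the struct names we've already seen (the real code is in vk.c):

```c
/* Sketch: create two exportable semaphores whose payload can be handed
 * to OpenGL as an opaque file descriptor. */
static bool
create_exportable_semaphore(VkDevice dev, VkSemaphore *sem)
{
    VkExportSemaphoreCreateInfo exp_info = {
        .sType = VK_STRUCTURE_TYPE_EXPORT_SEMAPHORE_CREATE_INFO,
        .handleTypes = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT,
    };
    VkSemaphoreCreateInfo sem_info = {
        .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
        .pNext = &exp_info, /* mark the semaphore as exportable */
    };

    return vkCreateSemaphore(dev, &sem_info, NULL, sem) == VK_SUCCESS;
}

bool
vk_create_semaphores(struct vk_ctx *ctx, struct vk_semaphores *smps)
{
    if (!create_exportable_semaphore(ctx->dev, &smps->vk_frame_ready))
        return false;
    if (!create_exportable_semaphore(ctx->dev, &smps->gl_frame_done))
        return false;
    return true;
}
```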
OpenGL and Vulkan “common” structs:
Before initializing OpenGL, I used the EXT_external_objects(_fd) extensions to import the Vulkan memory and create a GL memory object and a GL texture, similarly to Part 2, and then to import the semaphores:
```c
if (!gl_create_mem_obj_from_vk_mem(&vk_core, &vk_color_att.obj.mobj,
                                   &gl_mem_obj)) {
    fprintf(stderr, "Failed to create GL memory object from Vulkan memory.\n");
    piglit_report_result(PIGLIT_FAIL);
}

if (!gl_gen_tex_from_mem_obj(&vk_color_att.props, gl_tex_storage_format,
                             gl_mem_obj, 0, &gl_tex)) {
    fprintf(stderr, "Failed to create texture from GL memory object.\n");
    piglit_report_result(PIGLIT_FAIL);
}

if (!gl_create_semaphores_from_vk(&vk_core, &vk_sem, &gl_sem)) {
    fprintf(stderr, "Failed to import semaphores from Vulkan.\n");
    piglit_report_result(PIGLIT_FAIL);
}
```
The functions gl_create_mem_obj_from_vk_mem and gl_gen_tex_from_mem_obj, which create a GL texture corresponding to a Vulkan image, have been analyzed in Part 2 of this series, so here I am going to explain gl_create_semaphores_from_vk. It can be found in the interop.[hc] files, which are a collection of the functions used to exchange resources between OpenGL and Vulkan.
The function gl_create_semaphores_from_vk creates OpenGL semaphores that correspond to the Vulkan ones we've seen before. The struct gl_ext_semaphores of interop.h:
```c
struct gl_ext_semaphores {
    GLuint vk_frame_done;
    GLuint gl_frame_ready;
};
```
contains 2 OpenGL objects: vk_frame_done and gl_frame_ready. The OpenGL server will block and wait until vk_frame_done is signaled, and then it will take control over the shared texture. When the GL rendering is done, OpenGL will signal the gl_frame_ready semaphore to let Vulkan know that the texture is ready to be used. We'll see more details later.
```c
bool
gl_create_semaphores_from_vk(const struct vk_ctx *ctx,
                             const struct vk_semaphores *vk_smps,
                             struct gl_ext_semaphores *gl_smps)
{
    VkSemaphoreGetFdInfoKHR sem_fd_info;
    int fd_gl_ready;
    int fd_vk_done;
    PFN_vkGetSemaphoreFdKHR _vkGetSemaphoreFdKHR;

    glGenSemaphoresEXT(1, &gl_smps->vk_frame_done);
    glGenSemaphoresEXT(1, &gl_smps->gl_frame_ready);

    _vkGetSemaphoreFdKHR =
        (PFN_vkGetSemaphoreFdKHR)vkGetDeviceProcAddr(ctx->dev,
                                                     "vkGetSemaphoreFdKHR");
    if (!_vkGetSemaphoreFdKHR) {
        fprintf(stderr, "vkGetSemaphoreFdKHR not found\n");
        return false;
    }

    memset(&sem_fd_info, 0, sizeof sem_fd_info);
    sem_fd_info.sType = VK_STRUCTURE_TYPE_SEMAPHORE_GET_FD_INFO_KHR;
    sem_fd_info.semaphore = vk_smps->vk_frame_ready;
    sem_fd_info.handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT_KHR;

    if (_vkGetSemaphoreFdKHR(ctx->dev, &sem_fd_info, &fd_vk_done) != VK_SUCCESS) {
        fprintf(stderr, "Failed to get the Vulkan memory FD");
        return false;
    }

    sem_fd_info.semaphore = vk_smps->gl_frame_done;
    if (_vkGetSemaphoreFdKHR(ctx->dev, &sem_fd_info, &fd_gl_ready) != VK_SUCCESS) {
        fprintf(stderr, "Failed to get the Vulkan memory FD");
        return false;
    }

    glImportSemaphoreFdEXT(gl_smps->vk_frame_done,
                           GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd_vk_done);
    glImportSemaphoreFdEXT(gl_smps->gl_frame_ready,
                           GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd_gl_ready);

    if (!glIsSemaphoreEXT(gl_smps->vk_frame_done))
        return false;
    if (!glIsSemaphoreEXT(gl_smps->gl_frame_ready))
        return false;

    return glGetError() == GL_NO_ERROR;
}
```
In the snippet above, the file descriptor (FD) corresponding to each Vulkan semaphore was used to import that semaphore into OpenGL. That way we have two semaphores seen by both APIs: on the OpenGL side as a GLuint semaphore object and on the Vulkan side as a VkSemaphore object.
| OpenGL name | Vulkan name | Signaled by | Waited on by |
|---|---|---|---|
| vk_frame_done | vk_frame_ready | Vulkan | OpenGL |
| gl_frame_ready | gl_frame_done | OpenGL | Vulkan |
OpenGL:
So, after having initialized the shared resources, I had to initialize some structs for the OpenGL side. OpenGL was going to display the texture, so I needed two shaders: one that renders a fullscreen quad and one that performs the texture mapping with the Vulkan texture. This is quite easy to do with Piglit, so gl_init is quite small:
```c
static bool
gl_init()
{
    gl_prog = piglit_build_simple_program(vs, fs);

    glUseProgram(gl_prog);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    return glGetError() == GL_NO_ERROR;
}
```
where vs and fs are the GLSL vertex and fragment shaders respectively (see vk_image_display.c for their code).
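To give an idea of what such a pair of shaders could look like, here is a hypothetical sketch of a pass-through vertex shader plus a texture-mapping fragment shader; the actual shaders are in vk_image_display.c, so don't take this as the test's code.

```c
/* Hypothetical example: GLSL compatibility-profile shaders that draw a
 * fullscreen quad textured with the imported Vulkan image. */
static const char sketch_vs[] =
    "void main()\n"
    "{\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_Position = gl_Vertex;\n"
    "}\n";

static const char sketch_fs[] =
    "uniform sampler2D tex;\n" /* sampler uniforms default to texture unit 0 */
    "void main()\n"
    "{\n"
    "    gl_FragColor = texture2D(tex, gl_TexCoord[0].st);\n"
    "}\n";
```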
Step 3: Filling the image memory with Vulkan (rendering)
So, after having initialized all the structs, it was time for the actual synchronized rendering. The rendering is done in piglit_display, which is similar to the freeglut display callback:
```c
enum piglit_result
piglit_display(void)
{
    enum piglit_result res = PIGLIT_PASS;
    int i;
    bool vk_sem_has_wait = true;
    bool vk_sem_has_signal = true;

    float colors[6][4] = {
        {1.0, 0.0, 0.0, 1.0},
        {0.0, 1.0, 0.0, 1.0},
        {0.0, 0.0, 1.0, 1.0},
        {1.0, 1.0, 0.0, 1.0},
        {1.0, 0.0, 1.0, 1.0},
        {0.0, 1.0, 1.0, 1.0}
    };

    GLuint layout = gl_get_layout_from_vk(color_in_layout);

    if (vk_sem_has_wait) {
        glSignalSemaphoreEXT(gl_sem.gl_frame_ready, 0, 0, 1,
                             &gl_tex, &layout);
        glFlush();
    }

    struct vk_image_att images[] = { vk_color_att, vk_depth_att };
    vk_draw(&vk_core, 0, &vk_rnd, vk_fb_color, 4, &vk_sem,
            vk_sem_has_wait, vk_sem_has_signal,
            images, ARRAY_SIZE(images), 0, 0, w, h);

    layout = gl_get_layout_from_vk(color_end_layout);

    if (vk_sem_has_signal) {
        glWaitSemaphoreEXT(gl_sem.vk_frame_done, 0, 0, 1,
                           &gl_tex, &layout);
        glFlush();
    }

    /* OpenGL rendering */
    glBindTexture(gl_target, gl_tex);
    piglit_draw_rect_tex(-1, -1, 2, 2, 0, 0, 1, 1);

    for (i = 0; i < 6; i++) {
        float x = i * (float)piglit_width / 6.0 + (float)piglit_width / 12.0;
        float y = (float)piglit_height / 2.0;

        if (!piglit_probe_pixel_rgba(x, y, colors[i]))
            res = PIGLIT_FAIL;
    }

    piglit_present_results();

    return res;
}
```
Before anything else, the gl_frame_ready semaphore must be signaled so that Vulkan can start rendering, and the layout of the Vulkan texture must be set! Table 4.4 of the EXT_external_objects specification contains the details about the necessary layout transitions.
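The helper gl_get_layout_from_vk (in interop.[hc]) translates the Vulkan layout into the GL_LAYOUT_*_EXT token that glSignalSemaphoreEXT/glWaitSemaphoreEXT expect. Here's a sketch of the kind of mapping it performs, following Table 4.4; the function name and cases below are written from the spec, not copied from interop.c:

```c
/* Sketch: translate a VkImageLayout into the equivalent GL layout token
 * from EXT_semaphore (see Table 4.4 of EXT_external_objects). */
static GLenum
layout_from_vk(VkImageLayout vk_layout)
{
    switch (vk_layout) {
    case VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL:
        return GL_LAYOUT_COLOR_ATTACHMENT_EXT;
    case VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL:
        return GL_LAYOUT_SHADER_READ_ONLY_EXT;
    case VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL:
        return GL_LAYOUT_TRANSFER_SRC_EXT;
    case VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL:
        return GL_LAYOUT_TRANSFER_DST_EXT;
    case VK_IMAGE_LAYOUT_GENERAL:
        return GL_LAYOUT_GENERAL_EXT;
    default:
        return GL_NONE; /* no layout specified / no transition */
    }
}
```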
So, when Vulkan receives the OpenGL signal that the frame is ready, it starts operating on it and renders the striped pattern shown at the beginning of the post by calling vk_draw. When the Vulkan renderpass is over, vk_frame_ready is automatically signaled, as it's the pSignalSemaphore of the renderpass's VkSubmitInfo struct.
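For reference, the queue submission inside vk_draw is expected to look roughly like the sketch below: it waits on gl_frame_done before executing the command buffer and signals vk_frame_ready when the rendering is finished. The cmd_buf and queue variables here are assumptions; the exact code is in vk.c.

```c
/* Sketch of the submission in vk_draw: synchronization with OpenGL
 * relies entirely on the two external semaphores, so there is no fence
 * and no vkQueueWaitIdle. */
VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

VkSubmitInfo submit_info = {
    .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .waitSemaphoreCount = 1,
    .pWaitSemaphores = &vk_sem.gl_frame_done,    /* wait for OpenGL */
    .pWaitDstStageMask = &wait_stage,
    .commandBufferCount = 1,
    .pCommandBuffers = &cmd_buf,                 /* assumed command buffer */
    .signalSemaphoreCount = 1,
    .pSignalSemaphores = &vk_sem.vk_frame_ready, /* tell OpenGL we're done */
};

if (vkQueueSubmit(queue, 1, &submit_info, VK_NULL_HANDLE) != VK_SUCCESS)
    fprintf(stderr, "Failed to submit the Vulkan rendering commands.\n");
```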
When the Vulkan rendering is done, OpenGL takes control, binds the texture and renders a quad by calling piglit_draw_rect_tex, using the gl_prog (vs, fs shaders) we've seen before.
The striped pattern is rendered by the vk_bands.frag fragment shader, which simply selects the pixel color according to its position on the X axis:
```glsl
layout(push_constant) uniform img_dims {
    float w;
    float h;
} IMG_DIMS;

layout(location = 0) out vec4 f_color;

const vec4 colors[] = vec4[] (
    vec4(1.0, 0.0, 0.0, 1.0),
    vec4(0.0, 1.0, 0.0, 1.0),
    vec4(0.0, 0.0, 1.0, 1.0),
    vec4(1.0, 1.0, 0.0, 1.0),
    vec4(1.0, 0.0, 1.0, 1.0),
    vec4(0.0, 1.0, 1.0, 1.0));

void main()
{
    int band = int(gl_FragCoord.x * 6.0 / IMG_DIMS.w);
    f_color = colors[band];
}
```
The final for loop reads the middle pixel of each of the image's bands and checks that it matches the expected color, so that the test passes. This color check is done using Piglit's piglit_probe_pixel_rgba function.
Step 4: Displaying the image memory data with OpenGL
Finally, piglit_present_results displays the Vulkan image, which should show the striped pattern that was originally rendered by Vulkan.
Links
[1]: Piglit code (Tests are in tests/spec/ext_external_objects/)
[2]: Previous blog posts of this series:
- [OpenGL and Vulkan Interoperability on Linux] Part 1: Introduction
- [OpenGL and Vulkan Interoperability on Linux] Part 2: Using OpenGL to draw on Vulkan textures.
- [OpenGL and Vulkan Interoperability on Linux]: The XDC 2020 presentation
What comes next:
The next example will be about reusing an image several times from both APIs. The vk-image-display-overwrite test renders an image using Vulkan, then modifies it using OpenGL, then reads back the pixels using Vulkan and displays them using OpenGL glTexImage2D.
See you next time!
Thank you for discussing semaphore importing!
If glIsSemaphoreEXT returns false, we would expect that glGetError() != GL_NO_ERROR right after glImportSemaphoreFdEXT, right?
glGetError is used in every function to make sure there are no OpenGL errors coming from GL calls (any GL calls) before this check. glIsSemaphoreEXT is used to check if the import was successful. glGetError is not related to glImportSemaphoreFdEXT.
If you read the extension spec: https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_external_objects_fd.txt you will see that glImportSemaphoreFdEXT doesn't generate any error that glGetError could catch.
I just use frequent glGetError checks because these examples are part of Piglit (a driver testing framework) and I wanted to be sure that no errors go unnoticed.
Why the glFlush after signal and wait?
https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glFlush.xml
We want to be sure that there are no remaining GL commands left to be executed when we hand control over to Vulkan. On the Vulkan side we do something similar by not calling vkQueueWaitIdle when we use semaphores.