Some random videos and screenshots from things I've done for Igalia,
either for work and training or for fun. It's mostly a selection of posts I've written
on my blog over the years, which can be found under the
Igalia category.
My method for improving the upsampling of the SSAO render target in Iago Toral's Vulkan
vkdf demo.
I used the values of a downscaled normal buffer to classify the low-resolution image's pixel neighborhoods into two categories: neighborhoods where
all the pixels are part of the same surface, and neighborhoods where there's a surface discontinuity. Then I used linear interpolation to upsample
the surface neighborhoods and a depth-aware algorithm to upsample the neighborhoods that contain discontinuities. I used my own downsampling method
to select the most representative pixel of each neighborhood, and normals instead of depths to classify the pixels. The algorithm seemed to significantly
improve the visual results compared to three other depth-aware downsampling
algorithms I tried (max depth, min depth, and min/max depth selected alternately following a checkerboard pattern), which are used with depth-based classification.
A step-by-step analysis of all the experiments I've done, how I came up with the idea of the final algorithm, as well as
metrics and comparisons can be found in the series of blog posts below. Part 6 is a
complete description of the final method I used.
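The classification and the mixed upsampling can be sketched in Python (a simplified CPU version for illustration only; the real implementation runs in shaders, and the dot-product threshold and the degenerate depth-aware weights below are assumptions, not values from the posts):

```python
def classify_neighborhood(normals, threshold=0.95):
    """Return True when all normals in a low-res neighborhood point in
    roughly the same direction (same surface), False when there is a
    surface discontinuity. The dot-product threshold is an assumption."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    ref = normals[0]
    return all(dot(ref, n) >= threshold for n in normals[1:])

def upsample_pixel(values, weights_linear, weights_depth_aware, same_surface):
    """Blend the four low-res neighbors: plain bilinear-style weights on a
    flat surface, depth-aware weights across a discontinuity."""
    w = weights_linear if same_surface else weights_depth_aware
    return sum(v * wi for v, wi in zip(values, w)) / sum(w)

# A flat wall: all normals equal -> the linear interpolation path.
flat = [(0.0, 0.0, 1.0)] * 4
# An edge between two surfaces -> the depth-aware path.
edge = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
assert classify_neighborhood(flat)
assert not classify_neighborhood(edge)

ssao = [0.2, 0.4, 0.9, 0.8]
bilinear = [0.25, 0.25, 0.25, 0.25]
nearest_only = [1.0, 0.0, 0.0, 0.0]  # degenerate depth-aware weights (assumption)
print(upsample_pixel(ssao, bilinear, nearest_only, classify_neighborhood(flat)))
```

The key point the sketch captures is that the branch is driven by normals, while depth only enters through the discontinuity weights.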
That was an OpenGL demo I started when I joined Igalia. I created a
procedural terrain with a number of cows on it, where the user can set the number of polygons, the terrain size, and the number of cows.
I experimented with graphics concepts like environment mapping, image-based lighting, morphing, surface subdivision, and more, and I
used a Poisson-distribution-based trick to spread the cows uniformly on the terrain.
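As an illustration, Poisson-disk-style placement can be done with simple dart throwing, rejecting candidates that land too close to an already accepted point (a hypothetical sketch; the demo's actual method and distances are not shown here):

```python
import random

def poisson_disk_dart_throwing(width, height, min_dist, max_attempts=3000, seed=0):
    """Place points so that no two are closer than min_dist: keep throwing
    random candidates and reject any that lands too close to an accepted
    one. Simple O(n^2) rejection, good enough for a few hundred cows."""
    rng = random.Random(seed)
    points = []
    for _ in range(max_attempts):
        candidate = (rng.uniform(0, width), rng.uniform(0, height))
        if all((candidate[0] - p[0]) ** 2 + (candidate[1] - p[1]) ** 2
               >= min_dist ** 2 for p in points):
            points.append(candidate)
    return points

cows = poisson_disk_dart_throwing(100.0, 100.0, 8.0)
# Every accepted pair respects the minimum distance, so no two cows overlap.
assert all((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 >= 64.0
           for i, a in enumerate(cows) for b in cows[i + 1:])
print(len(cows), "cow positions generated")
```

The minimum-distance test is what gives the even, blue-noise-like spread that plain uniform random placement lacks.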
Posts related to contributions to graphics drivers and APIs as part of my work
for Igalia's graphics team.
OpenGL and Vulkan interoperability on Linux related posts. These are posts related to the implementation of
the EXT_external_objects(_fd) extensions, which allow OpenGL and Vulkan to access the same resources. As part of my work for
Igalia I've contributed to this extension for iris (patches), radeonsi (some fixes to the already implemented feature), and
i965 (review), and I've written a Vulkan framework for piglit along with tests that check several extension use cases in different
drivers (intel, amd, freedreno). The posts are mostly about these use cases:
VK_EXT_sample_locations: This
extension allows an application to modify the locations of samples within a pixel used in rasterization.
Additionally, it allows applications to specify different sample locations for each pixel in a group of adjacent
pixels, which can increase antialiasing quality (particularly if a custom resolve shader is used that takes
advantage of these different locations). I've implemented it for the anv Vulkan driver.
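To illustrate the per-pixel-grid idea, here is a Python sketch that builds a flat list of normalized in-pixel sample locations for a 2x2 pixel group, giving each pixel a differently rotated base pattern (the base pattern and the per-pixel rotation angles are illustrative assumptions, not what anv or the extension mandates):

```python
import math

def per_pixel_sample_locations(samples_per_pixel=4, grid_w=2, grid_h=2):
    """Build a flat list of [0,1) in-pixel sample locations, one pattern
    per pixel of a grid_w x grid_h group. Each pixel's pattern is the base
    pattern rotated about the pixel center by a per-pixel angle, so adjacent
    pixels sample different positions (the angles are an assumption)."""
    # A rotated-grid base pattern similar to common 4x MSAA layouts.
    base = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]
    base = base[:samples_per_pixel]
    locations = []
    for pixel in range(grid_w * grid_h):
        angle = pixel * math.pi / 8.0  # 22.5 degrees per pixel (assumption)
        c, s = math.cos(angle), math.sin(angle)
        for (x, y) in base:
            dx, dy = x - 0.5, y - 0.5  # rotate about the pixel center
            locations.append((0.5 + c * dx - s * dy, 0.5 + s * dx + c * dy))
    return locations

locs = per_pixel_sample_locations()
assert len(locs) == 16  # 4 samples x 2x2 pixels
assert all(0.0 <= x < 1.0 and 0.0 <= y < 1.0 for x, y in locs)
```

A group of pixels set up this way exposes more distinct sampling positions than one repeated pattern, which is exactly where a custom resolve shader can win extra quality.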
The OES_copy_image extension could be enabled on Gen7.
The GPUs became conformant with the OpenGL 4.3 standard
(148 previously failing conformance tests now pass).
Cubemaps and mipmaps of compressed formats could be copied and displayed properly.
glGetCompressedTexImage, glGetCompressedTextureSubImage,
glCopyImageSubData, and other GetCompressed* functions have the
expected behavior.
One thing that was impossible to support was automatic mipmap
generation, for obvious reasons (also explained in the blog post).
Read the
blog post
for more details.
Some posts on vkrunner (a tool for debugging Vulkan drivers by writing shader tests):
other additions to vkrunner: a blog post about modifications that allow users to select the device on which the Vulkan tests will be executed, and a change to its option handling.
an OpenGL shader viewer that can instantly show the changes (hobby project)
This was a project I started to save myself time when writing shader tests.
When I started adding tests to piglit and vkrunner, I found it
a bit frustrating to save the file, compile, then run the test and
check the result for errors. In the case of vkrunner, the process was even
slower, as I had to generate the SPIR-V, and the result was rendered to a PPM
image that
had to be checked with an external viewer.
To speed this process up,
I've written a minimal pixel shader viewer that can instantly show the changes
in a shader file when the file is saved. The viewer only supports pixel shaders, but it can read both GLSL pixel shaders
and vkrunner/piglit
formatted shader tests that contain a pixel shader (the parser is quite minimal, but it does the job).
I now use this viewer when I need to write a pixel shader quickly, and as soon as
everything looks fine to me I add it to piglit or vkrunner. In the case of vkrunner, I sometimes have to take into
account the differences in the coordinate system, but I still feel that development is faster this
way.
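A parser of that kind can be sketched in a few lines, since piglit/vkrunner shader tests delimit their parts with [bracketed] section headers (a hypothetical minimal version, not the viewer's actual code):

```python
def extract_section(test_source, section="fragment shader"):
    """Pull one section out of a piglit/vkrunner-style shader test, where
    sections are delimited by [bracketed] headers. Everything outside the
    requested section is ignored, which is all a live viewer needs."""
    collected = []
    current = None
    for line in test_source.splitlines():
        stripped = line.strip()
        if stripped.startswith("[") and stripped.endswith("]"):
            current = stripped[1:-1]  # enter a new section
            continue
        if current == section:
            collected.append(line)
    return "\n".join(collected).strip()

example = """\
[require]
GLSL >= 1.30

[fragment shader]
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

[test]
draw rect -1 -1 2 2
"""
frag = extract_section(example)
assert frag.startswith("void main()")
assert "draw rect" not in frag  # the [test] commands are left out
```

The extracted GLSL can then be compiled and rendered on every file save, which is the whole trick behind the instant feedback.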
an OpenGL/SPIRV example: At Igalia, I've been working on GL and Vulkan extensions (e.g. VK_EXT_sample_locations, GL_EXT_external_objects, GL_EXT_external_objects_fd).
This post is from when I wanted to start contributing tests and bug fixes to
my colleagues' work on the ARB_gl_spirv extension. I wrote a small GL/SPIRV
example to understand how I should use the extension and
to familiarize myself with the team's work on it.
A hack to display the Vulkan CTS tests' output: an additional script/configuration I use to quickly debug the output of the Vulkan CTS (Khronos's conformance test suite).
It can instantly display the test result as if it were running in a window.
My first program in Rust. I wrote it while attending a Rust/GTK workshop at GUADEC 2019, which I attended with Igalia (a sponsor of the conference).
I used OpenGL and SDL2 for the graphics, though. :)
XDC 2018 was organized and sponsored by Igalia and took place in A Coruña.
This is a short animated intro inspired by the lighthouse of A Coruña, which was used in the
logo.
Adoption of ANGLE in WebKit on Linux (slides)
This talk was about the experiments I performed to find an optimal way to share texture data between the main WebKit
rendering process (which uses the native Linux OpenGL driver) and WebGL, which uses ANGLE, without copying the data from one render
target to the other. I first tried using a shared EGL/OpenGL context between the main rendering process and the ANGLE EGL
backend, and then Linux dma buffers. The second method had some advantages: it could work with multiple processes, and
it didn't require changes in the ANGLE backend, only some EGL extensions to be used in the drivers.