Hair simulation with a mass-spring system (punk’s not dead!)

Hair rendering and simulation can be challenging, especially in real-time. There are many sophisticated algorithms for it (based on particle systems, hair mesh simulation, mass-spring systems and more) that can give very good results. But in this post, I will try to explain a simple and somewhat hacky approach I followed in my first attempt to simulate hair (the mohawk hair of the video below) using a mass-spring system.



The code can be found here: https://github.com/hikiko/mohawk

(Before I start, a WARNING: this post was all an experiment… I started writing it at the same time I started the code, and I was editing it at every massive commit. I found it a little frustrating to explain an idea or a hack I used in one commit and then, a few commits later, realize that I didn't really need it anymore, remove it from the code and change the post too. So, after giving it some thought, I decided not to do that, and instead just explain why I rejected my initial ideas and how I realized that some hacks could be replaced by better ones… But this way, I ended up describing everything I've done and undone in the code in chronological order. Like a long, verbose log of a version control system. Read it at your own risk!)

Step 1 (Prerequisite): Modelling the head.

First of all, I needed a 3D head and a region on it that would be covered by the hair. I used two modelling tools for this job: Makehuman and Blender.

Makehuman is a tool that creates human 3D models by modifying an existing 3D mesh. I used it to create a human body, and then I used Blender to delete the polygons of the body that were not part of the head, to name each object (the head, the eyes, the eyebrows and the eyelids were separate 3D objects) and to correct its pivot point position. Then, I used vertex painting to mark the regions that I wanted to be covered by hair. I exported the head with the colored vertices to an FBX file and loaded it into my program using assimp. I chose the FBX format instead of an open one (like the OBJ format I usually use) because it can store the vertex colors and is compatible with assimp (OBJ doesn't support colored vertices).

Vertex painting in Blender: here, I marked the head region that will be covered by hair.

Then, I loaded the head 3D model into my program using assimp.

You can check the mesh.h, mesh.cc and main.cc files for the details of the loading.
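For illustration, here is a minimal sketch of loading the model and reading the painted vertex colors with assimp's C API (the file name and the exact import flags are placeholders; the real loading code is in mesh.cc and main.cc):

    #include <assimp/cimport.h>
    #include <assimp/scene.h>
    #include <assimp/postprocess.h>

    const aiScene *scene = aiImportFile("head.fbx", aiProcess_Triangulate);
    const aiMesh *mesh = scene->mMeshes[0];

    /* mColors[0] is the first vertex color set; it's null when the mesh
     * has no vertex colors */
    if (mesh->mColors[0]) {
        for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
            aiColor4D col = mesh->mColors[0][i];
            /* painted vertices (e.g. col.r > 0.5) mark the mohawk region */
        }
    }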

Step 2: Selecting the appropriate simulation system.

After having the head and the painted vertices, I had to decide what type of hair I wanted to create and what type of hair system I could use for that purpose (a points-and-springs system, particles, a mesh?).

Although I was initially thinking of using particles (easier), I finally decided to use a points-and-springs system, where each hair strand would be a spring animated by applying forces to it. The reason was that I wanted the hair to look mostly like a mohawk and less like a traditional haircut, and with such a system I could sufficiently control the hair direction, as I will explain later. Particles are more suitable for long hair.

Step 3: Distributing the root points.

In order to distribute the roots on the “mohawk” part of the head I had colored previously, I decided to use a variation of the Poisson disk sampling method (an algorithm that places samples “randomly” but checks that no two samples are too close to each other, in order to generate an even but random-looking distribution). I also used a kd-tree to speed up the distance checks.

Coding details:

I iterated through the painted vertices and found the triangles that belong to the region I wanted to be covered by the mohawk (see get_spawn_triangle in hair.cc). Then I repeatedly picked a random triangle of the mohawk, selected a random point on it using random barycentric coordinates, and rejected the point if it was too close to an already selected point (I made the distance check using an arbitrary minimum distance as the Poisson radius). I used a kd-tree to speed up these comparisons and reject the points faster (see Hair::init in hair.cc). I ended up with the mohawk covered by uniformly distributed points that would be my hair roots.
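As a rough illustration, here is a self-contained sketch of the idea (not the actual hair.cc code: Vec3 and Triangle are stand-in types, and the brute-force distance scan below stands in for the kd-tree):

    #include <cstdlib>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 v[3]; };

    static Vec3 rand_point_on_triangle(const Triangle &t)
    {
        /* random barycentric coordinates, folded back into the triangle */
        float u = (float)rand() / RAND_MAX;
        float v = (float)rand() / RAND_MAX;
        if (u + v > 1.0f) {
            u = 1.0f - u;
            v = 1.0f - v;
        }
        float w = 1.0f - u - v;
        Vec3 p;
        p.x = u * t.v[0].x + v * t.v[1].x + w * t.v[2].x;
        p.y = u * t.v[0].y + v * t.v[1].y + w * t.v[2].y;
        p.z = u * t.v[0].z + v * t.v[1].z + w * t.v[2].z;
        return p;
    }

    static float dist_sq(const Vec3 &a, const Vec3 &b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    /* keep generating candidates until we have num_roots points that respect
     * the Poisson radius; in practice you'd also cap the number of attempts,
     * since a too-big min_dist would loop forever */
    std::vector<Vec3> place_roots(const std::vector<Triangle> &tris,
                                  int num_roots, float min_dist)
    {
        std::vector<Vec3> roots;
        while ((int)roots.size() < num_roots) {
            Vec3 p = rand_point_on_triangle(tris[rand() % tris.size()]);
            bool too_close = false;
            for (size_t i = 0; i < roots.size(); i++) {
                if (dist_sq(p, roots[i]) < min_dist * min_dist) {
                    too_close = true;
                    break;
                }
            }
            if (!too_close)
                roots.push_back(p);
        }
        return roots;
    }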

And this is how the roots of ~400 hair strands looked at that point:

Hair roots distributed in the mohawk using the Poisson disk sampling.

Another view of the hair roots distribution.

Step 4: Creating the strands and their anchor points.

At this step, I knew the position of each hair root, so I could easily render the hair strands using the normal direction at each point. But I needed more information to be able to animate the hair:
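The strand struct at this stage looked roughly like this (a sketch reconstructed from the description below; Vec3 is the math library's vector type, and the exact field layout is an assumption):

    struct HairStrand {
        Vec3 pos;            /* hair end position */
        Vec3 velocity;       /* hair end velocity (zero while at rest) */
        Vec3 spawn_pt;       /* root position on the scalp */
        Vec3 spawn_dir;      /* rest direction of the strand */
        Vec3 anchor_dirs[3]; /* anchor directions, relative to the root */
    };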

In the struct above, spawn_pt is the root position and spawn_dir is the direction of the hair strand. To be able to animate the strand later, I also kept three anchor directions that are used to apply forces to each hair (more details in the animation step). In the image below, the anchor points of each purple hair can be seen in green around it. Their directions are relative to the spawn point. To calculate them, I had to compute an orthonormal basis using the spawn direction, a vector normal to it and their cross product as the i, j, k vectors (again, see Hair::init in hair.cc).

The green anchor points could apply forces from 3 directions to the hair strand (green line segments). Their directions relative to the root (spawn point) are denoted in blue, and could be calculated by multiplying the basis matrix with the vectors:

  • v1 = (cos(ω), sin(ω), 0)
  • v2 = (cos(ψ), sin(ψ), 0)
  • v3 = (cos(φ), sin(φ), 0)

where ψ = ω + 2π/3 and φ = ω + 4π/3, so that the three anchor directions are evenly spaced (120° apart) around the strand.
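A sketch of how the basis and the anchor directions can be computed (assuming a Vec3 type with constructors, operators and cross/normalize helpers; the actual code is in Hair::init):

    /* build an orthonormal basis (i, j, k) around the spawn direction */
    Vec3 k = normalize(spawn_dir);
    /* any direction not parallel to k works as a starting "up" vector */
    Vec3 up = fabs(k.y) < 0.99f ? Vec3(0, 1, 0) : Vec3(1, 0, 0);
    Vec3 i = normalize(cross(up, k));
    Vec3 j = cross(k, i);

    for (int n = 0; n < 3; n++) {
        float angle = omega + n * (2.0f * M_PI / 3.0f); /* 120 degrees apart */
        /* same as multiplying the basis matrix with (cos(angle), sin(angle), 0) */
        anchor_dirs[n] = i * cos(angle) + j * sin(angle);
    }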

The pos field in the struct is the hair strand end position (spawn_pt + spawn_dir * hair_length) and the velocity vector is 0 at this point (static hair).

In the image below you can see the anchor points and the hair strands of this step visualized (I reduced the number of hairs from 400 to 4 for the screenshot).

Some hair strands with their anchor points.

Step 5: Forces!

So far, I had a head and some static hair surrounded by anchor points. In order to animate the hair, I needed to decide how to apply the forces. One simple thing I could do, that wouldn't involve many difficult calculations and factors to control, was to make the hair follow the motion of the head.

So, before anything else, I needed to add some motion to the head. I used the mouse motion while a key is pressed (‘H’ or ‘h’) to calculate a rotation around the x and z axes. I then created a rotation matrix that I multiplied with the modelview matrix before rendering the head, and that I also used for the hair motion simulation. The code was quite simple (check the keydown and keyup functions, as well as the motion and display ones, in main.cc).

Rotation of the head around the z axis.
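A rough sketch of that kind of input handling with GLUT (the variable names and the sensitivity factor are made up; see main.cc for the real thing):

    static bool hair_rotation;
    static int prev_x, prev_y;
    static float head_rot_x, head_rot_z; /* degrees */

    void keydown(unsigned char key, int x, int y)
    {
        if (key == 'h' || key == 'H')
            hair_rotation = true;
    }

    void keyup(unsigned char key, int x, int y)
    {
        if (key == 'h' || key == 'H')
            hair_rotation = false;
    }

    void motion(int x, int y)
    {
        if (hair_rotation) {
            head_rot_z += (x - prev_x) * 0.5f; /* horizontal drag: z axis */
            head_rot_x += (y - prev_y) * 0.5f; /* vertical drag: x axis */
            glutPostRedisplay();
        }
        prev_x = x;
        prev_y = y;
    }

    /* in display(): build the head transform from these angles and multiply
     * it with the modelview before rendering the head */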

At this point, I realized that my 3 anchor points could be reduced to only one, simplifying the calculations a lot. What I really wanted to achieve was to make each hair strand follow the motion of the head and then apply a force towards its resting position. So, instead of having 3 anchor points surrounding the hair, I could apply a single force from each displaced hair end towards the resting hair end position.

In a simple case (no optimisations), the hair movement would be something like this:

The force (yellow line segment) moves the hair strand (purple line segment) towards the anchor point (green point) until the hair goes back to its resting position. The velocity of the hair motion is progressively reduced to prevent each hair spring from oscillating forever.

The force direction is shown with the yellow line segment, and the new anchor point that replaces the previous 3 is the green point. Note that the velocity must decrease over time, otherwise the animation will never stop after the head stops moving.

The mohawk simulation of the ~400 hair strands looked quite realistic with this change, so I kept it:

Animated hair strands.

Coding details:

First of all, I changed my HairStrand struct to the much simpler:
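(Again, a reconstruction based on the description that follows; the exact definition is in the repo:)

    struct HairStrand {
        Vec3 pos;       /* current hair end position */
        Vec3 velocity;  /* hair end velocity vector */
        Vec3 spawn_pt;  /* root position, in local coordinates */
        Vec3 spawn_dir; /* hair direction in the resting position */
    };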

In this struct, pos is the hair end position, velocity is the hair strand velocity vector, spawn_pt is the root in local coordinates and spawn_dir is the direction of the hair in the resting position (when no forces are applied).

To render a hair strand line at each moment, I needed to calculate its end and root positions and update its pos value.

Update function:
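A sketch of it (head_xform, transform_point, hair_length, K_ANC and the Vec3 operators are assumed helpers; the velocity and position update is discussed right below):

    void Hair::update(float dt)
    {
        for (size_t i = 0; i < strands.size(); i++) {
            /* rest position of the hair end in local space... */
            Vec3 hair_end = strands[i].spawn_pt + strands[i].spawn_dir * hair_length;
            /* ...brought to world space with the head transform: the anchor */
            Vec3 anchor = transform_point(head_xform, hair_end);

            /* spring force pulling the current end towards the anchor;
             * with a hair mass of 1, the acceleration equals the force */
            Vec3 force = (anchor - strands[i].pos) * K_ANC;

            /* velocity and position update: see below */
        }
    }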

Here, I calculated the hair_end in local space and multiplied it with the head transform to find the anchor point position in world space.

As I had the previous position, I could calculate the force as (anchor point - pos) multiplied by K_ANC, the spring constant, which was chosen by trial and error. I assumed that the hair mass is 1 to keep things simple, so the acceleration vector is equal to the force. I then calculated the velocity from the physics formula v = a * dt (where v is the velocity, a the acceleration and dt the time step), but with some modifications to satisfy the following restrictions:

  • As each hair's motion should change continuously to appear more natural, I needed to add the new velocity to the previous velocity.
  • But I also had to reduce the velocity by a damping factor (again selected by trial and error), otherwise the spring would never come to rest (hairs are not perfect springs).

So my velocity formula became:
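Roughly this (DAMPING being the hand-tuned factor slightly below 1):

    /* accumulate the acceleration into the previous velocity, damp it,
     * then move the hair end */
    strands[i].velocity = (strands[i].velocity + force * dt) * DAMPING;
    strands[i].pos = strands[i].pos + strands[i].velocity * dt;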

Rendering the hair strands was quite simple (mainly because I used all my favorite “legacy” OpenGL functions to draw the lines with glBegin and glEnd):
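Something like this sketch (transform_point and the Vec3 helpers are again assumptions):

    glBegin(GL_LINES);
    for (size_t i = 0; i < strands.size(); i++) {
        /* root in world space */
        Vec3 root = transform_point(head_xform, strands[i].spawn_pt);
        /* normalizing the direction and scaling it by the hair length
         * keeps the strand length constant */
        Vec3 dir = normalize(strands[i].pos - root) * hair_length;
        glVertex3f(root.x, root.y, root.z);
        glVertex3f(root.x + dir.x, root.y + dir.y, root.z + dir.z);
    }
    glEnd();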

In the snippet above, each hair strand is a line segment (GL_LINES) from the transformed spawn point (the hair strand root in world space) to an end calculated as root + hair direction, where the hair direction is normalize(end position - root) * hair length. The reason I normalized the direction was that I wanted the hair strands to have the same length all the time (and springs don't).

Step 7: Optimizations!

In the GIF above, I intentionally used relatively small motions and forces. The reason was that moving the head more aggressively would cause the hair strands to move so much that some of them would end up being rendered inside the head mesh… like in the following image:

Applying big forces affects the hair strand rendering.

And that wasn’t exactly what I wanted for my simulation…

What I needed to fix the artifact was some sort of collision detection with the head, in order to correct the motion of the hair strands that would “enter” it before they do so. As it would be difficult to calculate collisions with the actual head 3D model, I decided to use a hacky approximation.

My first thought (spoiler alert: a bad idea!) was to calculate the collision with a sphere that had dimensions similar to the head's.

For that, I needed to find a convenient center and diameter to construct it. Although I selected these values by trial and error, I did some quick checks first to get an idea of what a “good” size would be:

I exported the head 3D model in OBJ format from Blender (see step 1) and looked at the vertices of the head, which were in this form:
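That is, the standard OBJ vertex lines (the coordinates here are placeholders):

    v -0.0412 1.6934 0.1273
    v 0.0581 1.7102 0.1150
    v -0.0023 1.6412 0.0986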

I then quickly calculated the maximum extents along the x, y and z axes with a script and used them as a reference for the sphere diameter (they were in the same coordinate system).

Here’s an example of a “convenient” sphere:

Sphere that approximates the head and can be used for collision detection.

The collision detection at this point was quite simple to calculate. I wrote a CollSphere class in the file object.h (intending to make it inherit from a generic collider class later, so that I could also perform collision detection with other objects for other effects).

And here is the “trick” explained:

After calculating the hair position in the update function, the handle_collision function was called. This function only updates the hair position if the hair end does not fall inside the head.

Note that the comparison had to take place with all the vectors being in the same coordinate system.

Then I checked if the point is inside the sphere:
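A sketch of both parts (the actual CollSphere lives in object.h; dist_sq is the squared-distance helper from before):

    class CollSphere {
    public:
        Vec3 center;
        float radius;

        /* true when pt (in the same coordinate system as the sphere)
         * falls inside */
        bool contains(const Vec3 &pt) const
        {
            return dist_sq(pt, center) <= radius * radius;
        }
    };

    /* in handle_collision: accept the new hair end position only if it
     * doesn't fall inside the head sphere */
    if (!coll_sphere.contains(new_pos))
        strands[i].pos = new_pos;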

As had happened before with the anchor points, I soon realized that this trick would never work well with the mohawk: the hair strands were too long and I was only using the hair end to check for collisions, and in most cases the end was not colliding with the head while other parts of the hair strand did. Checking another point of the hair strand closer to the root would only minimize the problem, as the only accurate check would be a check with the root itself, which would test nothing… 😀 Using more points across the hair could probably work better, but it would be complex and not worth the effort. So, I decided to keep the collision detection mechanism for future effects (e.g. to check for collisions with flying objects) but not use it in this particular case.

The following screenshot is an attempt to show how problematic this idea was. I rendered some points in random positions, painted in yellow those that (if they were hair ends) would fall inside the head and in blue those that would fall outside, and then rendered the head in wireframe. It became more obvious why it would be almost impossible to fix the artifact with this hack: the check is only accurate for points that lie very close to the head surface:

This hack wouldn’t work…

So, at this point, I had to abandon the idea of using the sphere for collisions and think of something better. I soon realized that I could use just the hair direction to test whether each hair is about to enter the head in the next frame. And to avoid the collision of the hair strand with the head, I could simply project the hair strand onto the head surface…

How?

With a few simple tests:

In the update function, I already had the code that calculates the new hair position in world space:
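Something along these lines (a sketch with the same assumed helpers as before):

    /* candidate hair end position for this frame, in world space */
    Vec3 new_pos = strands[i].pos + strands[i].velocity * dt;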

I also had the spawn direction in local coordinates (hair[i]->spawn_dir). In world space, this direction plays the role of the surface normal, as it is the direction of the hair in the rest position. To bring it to world space, I only needed to apply the current head transform without the translation:
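For example (rotate_dir is a hypothetical helper that applies only the upper 3×3, rotation part of the matrix):

    /* rotate the rest direction with the rotation part of the head
     * transform: in world space it acts as the surface normal at the root */
    Vec3 normal = normalize(rotate_dir(head_xform, strands[i].spawn_dir));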

I also needed the hair root position and the hair direction to be in the same space (when I transformed the position I had to keep the translation, so I used the whole 4×4 matrix):
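(transform_point, as before, applies the full 4×4 matrix, translation included:)

    /* root position in world space, using the whole 4x4 matrix */
    Vec3 root = transform_point(head_xform, strands[i].spawn_pt);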

Then I performed the check for collisions as follows:
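A sketch of that check (why the dot product works is explained right below):

    /* signed length of the projection of (new_pos - root) onto the normal:
     * negative means the hair end would sink below the scalp */
    float d = dot(new_pos - root, normal);
    if (d < 0.0f) {
        /* project the end back onto the head surface */
        new_pos = new_pos - normal * d;
    }
    strands[i].pos = new_pos;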

The dot product could show if there was a collision because:

  • The dot product between the vector from the root to the new hair end position and the normal (with the normal normalized to length 1) is the signed length of that vector's projection onto the normal (see the blue line segments in the image below).
  • When the hair collides with the head, the dot product is negative. When the hair end lies above the surface, it is positive.

Checking if a hair collides with the head.

After this step the hair motion was wayyyy better:

The hair strands didn’t enter the head anymore.

Step 7: Textures, Materials, Head Surroundings…

After the hair simulation was fixed, the remaining things to do were to load some textures and materials (and fix a few million bugs) to make the hair look nice, and to create some sort of environment around the head to make the camera rotations easier to follow. The former was mainly work with assimp (see the files mesh.h/cc and the display function in main.cc) but the latter was again quite tricky…

I’ll try to explain how I achieved this “environment map”-like gradient around the head:

First of all, by the time I reached this point, I realized that most of the hair simulation was done and I hadn't loaded a single shader or texture (apart from those I added later for the head, the eyebrows etc. through assimp). So, I thought it would be quite interesting and challenging to continue like that, and to try to create a surrounding environment without using cube maps or any procedural effect that would require shaders.

Instead, I used a glutSolidSphere as a “sphere map”, which I rendered around the scene. Then, I generated a one-dimensional texture with the gradient (by calling the following function with green and blue as colors):
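A sketch of such a function (the name and signature are mine, not necessarily the repo's; colors are Vec3s with components in [0, 1]):

    #include <vector>
    #include <GL/gl.h>

    unsigned int create_gradient_tex1d(const Vec3 &c0, const Vec3 &c1, int width)
    {
        /* interpolate the two colors across the width of the texture */
        std::vector<unsigned char> pixels(width * 3);
        for (int i = 0; i < width; i++) {
            float t = (float)i / (float)(width - 1);
            pixels[i * 3] = (unsigned char)((c0.x + (c1.x - c0.x) * t) * 255.0f);
            pixels[i * 3 + 1] = (unsigned char)((c0.y + (c1.y - c0.y) * t) * 255.0f);
            pixels[i * 3 + 2] = (unsigned char)((c0.z + (c1.z - c0.z) * t) * 255.0f);
        }

        unsigned int tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_1D, tex);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, width, 0, GL_RGB,
                     GL_UNSIGNED_BYTE, pixels.data());
        return tex;
    }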

and I applied the 1D texture to the sphere.

And that was basically the tricky part:

To begin with, the glutSolidSphere function does not generate texture coordinates, so I had to create some myself. After that, I needed a way to map the 1D gradient to the 3D sphere.

The whole idea is implemented in these lines:
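A sketch of those lines (the plane values are explained right below; gradient_tex is the texture created above):

    /* generate the s texture coordinate from each vertex's distance to a
     * plane given in object space */
    static const float plane[] = {0.0f, 0.0f, 0.5f / 350.0f, 0.5f};

    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGenfv(GL_S, GL_OBJECT_PLANE, plane);
    glEnable(GL_TEXTURE_GEN_S);

    glBindTexture(GL_TEXTURE_1D, gradient_tex);
    glEnable(GL_TEXTURE_1D);
    glutSolidSphere(350.0, 32, 16); /* the 350 radius matches the plane scale */
    glDisable(GL_TEXTURE_1D);
    glDisable(GL_TEXTURE_GEN_S);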

The glTexGen functions generate the texture coordinates for the current vertices (in our case the sphere's vertices) by using the distance of each vertex from a plane (GL_OBJECT_PLANE), measured in object space, as the texture coordinate. As the name of the variable plane indicates, it holds the 4 values that define the plane equation used for this purpose:

Eq: ax + by + cz + d = 0

where the values a, b and c define the normal (a, b, c) of the plane and d is its distance from the origin.

And here’s how I selected the values of the plane equation in order to use a good plane for the mapping:

When I rendered the sphere, its orientation was such that its poles fell on the z axis, so I knew that the normal of the plane should have the form (0, 0, value) (aligned with the z axis), because I needed to apply the 1D gradient from the north pole to the south pole of the sphere. If the plane equation were (0, 0, 1, 0) and the sphere had a radius of 1 unit, the generated coordinates would range in [-1, 1] and the gradient would be repeated twice (once from the north pole to the equator, and once from the equator to the south pole). To fix that, I needed to scale and translate the plane so that the whole sphere mapped to a single copy of the gradient.

And so my plane equation became (0, 0, 0.5/350, 0.5): the 0.5/350 scales the [-350, 350] z range of the sphere (it had radius 350, as you can see in the glutSolidSphere call) down to [-0.5, 0.5], and the d = 0.5 shifts the texture coordinates so that the final range becomes [0, 1].

With this trick, I mapped the 1D gradient texture onto the solid sphere. Then I rendered a wire sphere (glutWireSphere) to make the camera rotations more obvious.

The two poles of the sphere:

And I think, that was all…

Epilogue: Blah blah blah and some acknowledgments.

So far, I only had theoretical knowledge of some hair rendering algorithms, and I hadn't written any such program myself. I started this little hack in August, as an experiment inspired by the hair simulations I'd seen at the Computer Animation Festival of SIGGRAPH 2018, where I went with Igalia.

Since then, I abandoned it around 1 million times, as I always had something else to do, and I suddenly remembered it a few weeks ago, while I had to stay away from my computer and spend my time productively by looking at the ceiling in a philosophical mood about life, the universe and the use of spring-like hair strands to create mohawks for hairless 3D heads… :p Anyway, I decided to finish it during the Christmas holidays (because it's never too late) and see if I could finally achieve a mohawk-like result.

I have some ideas on how to further improve it and on effects I could add, which I might implement and explain in a follow-up post (or I might not, as I usually abandon almost-done hobby projects to start new ones that I'll also leave unfinished… :p).

Anyway, in closing,

  • I'd like to thank Nuclear for his libraries (I borrowed his math library, his image loading library and his kd-tree code from his GitHub) and for his help with the modelling tools and the (millions of) questions I had,
  • as well as Igalia, which sends us to the coolest conferences to get ideas :).

And some good news:
you have finally reached THE END of this post!

Happy new year!
