Saturday, March 24, 2018

Sand without grains - Abelian Sandpile Model - PART 1

About a month ago I stumbled upon a project with an interesting FX : Moving Sand Dunes.
Yes ... like in the movie 'Dune' but without the worms.
And with 3 weeks dev time.

During the first day I kept watching and re-watching this YouTube video.


The part I found really fascinating is around the 16-second mark, where you can see the leading edge of the falling sand moving up, while the pile of sand underneath accumulates, trying to chase it. Strangely enough, the result is something that visually moves up, the opposite of what we would expect from sand falling down.

What causes this behavior in real life is the combined effect of millions and millions of tiny sand grains. Each little grain has no choice but to fall when there's nothing left supporting it, roll down colliding with other grains, and eventually stop as soon as the slope is no longer steep enough to overcome the friction with its neighbors.
All these little grains then accumulate until the whole system becomes stable.

If I had to recreate this in Houdini exactly as I just described, it would take a huge amount of CPU power, RAM and weeks of simulation. Not to mention disk space. In other words, a lot of resources. And the result would probably never be remotely close, because even with a terrific render farm, I could never have THAT many particles.

Actually, how many particles are we talking about ?

Well, given that average medium-grained sand is about 0.375 mm in diameter, how many grains of sand can I fit ...
Houdini Sand Solver is NOT an option in this case.

Let's make a few considerations:
  • looking at the video, we cannot see the individual sand grains! All we see is a large surface deforming. So maybe we can skip the particle approach entirely and focus on the behavior of this large surface.
  • naively put, the behavior of the sand dune looks 'simple', hence there must be a 'simple' mechanism to reproduce it.
  • we don't have to be physically correct! (unless you're animating a cube: proof).
After a few minutes on YouTube I found this video :





Is it really that simple? Well ... it seems too good to be true.
The good news is that since the algorithm is so simple, it will take no time to figure out whether we're going in the right direction!

Before we start, let me show you the kind of result we are aiming for.


What you're looking at is a flipbook generated in Houdini and the surface is a HeightField.
The sand starts from a veeery unstable configuration and slowly rearranges into a stable one. Note how the little mountain in the foreground becomes stable early, because it is lower. Note how the sand accumulates at the base, and how the walls slowly flatten while growing and covering the underlying elements.


Sand Pile Algorithm in 2D

After watching that video, I Googled "sandpile algorithm" and found this : Abelian Sandpile Model

I started reading and after a few lines I got to this point :


I stopped reading here; I was already in Houdini.

NOTE:
Houdini has a HeightField Erode solver which is super mega cool, but in the long 5 minutes I tried, I couldn't reproduce the sandpile behaviour described below. Regardless, I still wanted to experiment with this idea, primarily because it's really quick to implement, and once I build it myself I can control every single aspect of it.

Cool, let's get started.

As always, let's work in 2D first. It'll be easy to switch to 3D later.
Let's pretend this is our initial sand distribution:



Each column is a stack of sand grains.
For instance, stack E has 2 grains piled up, stack G has 3, and so on.

The idea is that if a stack of sand is too tall compared to an adjacent stack (say by 2 grains), one grain of sand falls down towards that adjacent lower stack.
When this happens, the taller stack becomes one grain shorter, while the shorter stack becomes one grain taller! Well, believe it or not, this is pretty much it.


Iteration - 1
Let's analyze the stacks one by one to identify the 'unstable' stacks:
stack C is 2 grains taller than B
stack D is 2 grains taller than E
stack G is 2 grains taller than F
stack H is 3 grains taller than I

So ...

C = C - 1 and B = B +1 (one grain of sand moves from C to B)
D = D - 1 and E = E +1 (one grain of sand moves from D to E)
G = G - 1 and F = F +1 (one grain of sand moves from G to F)
H = H - 1 and I = I +1  (one grain of sand moves from H to I)

Which brings us to the next state:


Iteration - 2
Let's analyze the stacks one by one again:

stack B is now 2 grains taller than A

So ...

B = B - 1 and A = A +1 (one grain of sand moves from B to A)


Iteration - 3
If we analyze the stacks again we notice there's nothing left to do. This means our sand distribution has reached a stable state and we're done !

What is interesting is that if you sum all the heights at each of the steps above, you always get the same number.

0+1+3+4+2+1+3+3+0 = 17
0+2+2+3+3+2+2+2+1 = 17
1+1+2+3+3+2+2+2+1 = 17

This is good. It means that sand is not magically appearing or disappearing (the mass of the system is conserved, like in reality). We just managed to rearrange the stacks into a stable distribution using a simple iterative algorithm.
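Since the rule is that simple, here is a quick Python sketch (nothing Houdini-specific, just a sanity check of the logic) that runs the exact example above. Each pass first collects the moves from the current state, then applies them all at once. In this particular example every unstable stack has exactly one lower neighbor to fall towards, so picking the leftmost candidate is enough to reproduce the steps shown:

```python
def relax_1d(h, T=2, M=1):
    """One two-pass sandpile iteration on a 1-D list of stack heights.
    Returns True while the distribution is still unstable."""
    moves = []  # (source, destination) pairs, decided from the OLD state
    for i, hi in enumerate(h):
        # neighbors lower than this stack by at least T grains
        lower = [j for j in (i - 1, i + 1)
                 if 0 <= j < len(h) and hi - h[j] >= T]
        if lower:
            moves.append((i, lower[0]))  # pick one (here: the leftmost)
    for src, dst in moves:
        h[src] -= M  # the tall stack loses one grain
        h[dst] += M  # the low stack gains one grain
    return len(moves) > 0

heights = [0, 1, 3, 4, 2, 1, 3, 3, 0]  # stacks A..I from the example
while relax_1d(heights):
    pass
print(heights)       # → [1, 1, 2, 3, 3, 2, 2, 2, 1], the stable state above
print(sum(heights))  # → 17, mass is conserved
```

Running it reproduces the three states listed above, including the constant total of 17.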

Assumptions about our initial sand distribution:
  • if there is sand, there is sand underneath it.

This distribution would work with the algorithm ! Sand is supported by more sand underneath
This distribution would NOT work with the algorithm ! Sand in the red area is not supported.

  • our sand grains are arranged into a grid.
Given the two assumptions above, we can simplify the system by considering only the grains at the surface and ignoring the sand below. In other words, we'll only consider the visible surface and ignore whatever happens underneath.

The ideal data structure for this approach turns out to be a simple grid. Each cell of the grid corresponds to one virtual stack of sand (seen from above), and the number in it corresponds to the height of that stack.

Sand Pile Algorithm in 3D


Let's set this initial 'sand' distribution.



Remember that each square is a stack of sand, and the number is the height of that stack.

Before proceeding we need to set two 'global' variables:
  • Mass transfer (M) : how much sand will fall from an unstable stack to one of the adjacent stacks.
  • Threshold (T) : the minimum height difference between the current stack and its adjacent stacks that will mark that stack as unstable (and cause an amount M of sand to be transferred to some adjacent lower stack).
For this example let's set T = 2 and M = 0.5
Now, let's identify the stacks whose adjacent stacks are lower by T.


We can identify 3! For instance, both '2's are surrounded by many '0's (2-0 = T!), and '4' is obviously taller than its neighbors by more than T.
'1' instead is not selected, because the difference between its height and its adjacent stacks' heights is smaller than T.

The next step is to transfer some sand (M) from those 3 stacks to some adjacent lower stack.

Let's check our options:



I marked the possible 'falling' directions with arrows. For instance '4' can fall in any direction, because it's literally surrounded by lower stacks. So ... which direction shall we choose? Let's pick one randomly.
In the pic above I randomly chose one arrow (the red one) as the 'falling' direction.

Pic A

Cool. So the first '2' will fall left, '4' will fall down, and the last '2' will fall right.
What does 'fall' mean ?
It means that we'll have to remove M=0.5 from the tall stack (see pic below) ...


... and transfer it to the selected adjacent lower stack (see pic below).


More specifically, in this last step, for each stack we need to add M as many times as there are incoming arrows.

At this point we have a new state, and we need to run the same steps again and again, until all the stacks are stable.

Remember that the orange stacks will become smaller (M will be subtracted) and the green stacks will become bigger (M will be added).

Step 2 (unstable)
  

Step 3 (unstable)

Step 4 (unstable)

Step 5 (stable !!)

Why is the last state stable? Because no stack is taller than its surrounding stacks by T or more.
In other words, the sand has no reason to fall anywhere, because every stack is supported by solid neighbors.

Cool.
There's an important consideration to make.
You might be tempted to do the following:

For each stack A ...
  • get the 4 neighbors
  • if any neighbor is lower than A by (say) 2 then
    • reduce A by (say) 0.5
    • increase one of those 'low' neighbors by 0.5

Well, this would NOT work, because during one iteration you're modifying a neighbor, and later in the same pass that neighbor might become the current stack, so it would be processed with an already-modified height.
So we need to do this in 2 steps, as shown above in Sand Pile Algorithm in 3D.

STEP 1 - identifying the unstable stacks only

For each stack A ...
  • if it's surrounded by any stack lower by at least T, choose one of them randomly ...
For instance, if the stacks Up and Left of A are 'low', we can randomly choose one, say Up. So we'll mark A with 'U' (so the choices are U, D, L, R for an unstable stack A).
If A is NOT surrounded by any 'low' stack, then we mark A with 'S' (for stable).

Pic B: This corresponds to Pic A (scroll up a few pics).

STEP 2 - sand transfer

Now we can loop over each stack again and actually perform the transfer.
More specifically.
For every stack A...
  • if A is NOT marked with S, reduce A by M (remember we set M = 0.5).
    Or, simply put, if A is unstable, it has to drop some sand.

  • if the up stack is marked D (down), it means the north stack wants to transfer some sand to the stack to its south, which is A. So increase A by M.
  • if the down stack is marked U (up), it means the south stack wants to transfer some sand to the stack to its north, which is A. So increase A by M.
  • if the left stack is marked R (right), it means the left stack wants to transfer some sand to the stack to its right, which is A. So increase A by M.
  • if the right stack is marked L (left), it means the right stack wants to transfer some sand to the stack to its left, which is A. So increase A by M.
Pic C

Note that in both Step 1 and Step 2 we modify only the stack currently being looped over (A). The neighbors are inspected, never modified. Which happens to be exactly how Houdini VEX works! What an incredible coincidence (it's not, I did it on purpose from the beginning, ha!).

Cool.
Let's summarize the whole thing in pseudo-code.

STEP 1 (pseudo code)

  1. set T = 2 (T is the height threshold)
  2. for each stack A
    1. h = height of A
    2. hU = A's UP neighbor height
    3. hD = A's DOWN neighbor height
    4. hL = A's LEFT neighbor height
    5. hR = A's RIGHT neighbor height
    6. fU = 1 if hU is smaller than h by at least T, 0 otherwise.  (or ... h-hU>=T) 
    7. fD = 1 if hD is smaller than h by at least T, 0 otherwise.  (or ... h-hD>=T)
    8. fL = 1 if hL is smaller than h by at least T, 0 otherwise.  (or ... h-hL>=T)
    9. fR = 1 if hR is smaller than h by at least T, 0 otherwise.  (or ... h-hR>=T)
    10. if (any of fU or fD or fL or fR is 1) 
      1. then pick one randomly and tag A with that label (example if we choose fU, A is marked with 'U').
      2. else, mark A with 'S' (as stable)
At this stage all the stacks should have a tag that can be 'U', 'D', 'L', 'R', 'S' (like in Pic B).

STEP 2 (pseudo code)

  1. set M = 0.5 (M is the mass transfer between sand stacks)
  2. for each stack A
    1. h = A's height
    2. u = label of A's UP neighbor (could be 'U', 'D', 'L', 'R', 'S')
    3. d = label of A's DOWN neighbor (could be 'U', 'D', 'L', 'R', 'S')
    4. l = label of A's LEFT neighbor (could be 'U', 'D', 'L', 'R', 'S')
    5. r = label of A's RIGHT neighbor (could be 'U', 'D', 'L', 'R', 'S')
    6. if (u is 'D') then h = h + M
    7. if (d is 'U') then h = h + M
    8. if (l is 'R') then h = h + M
    9. if (r is 'L') then h = h + M
    10. if (A's label is different than 'S') then h = h - M
Cool ! This is pretty much the whole idea.
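To double-check the two passes before moving to Houdini, here is the same pseudo-code as a small Python sketch (plain nested lists instead of a HeightField; treating out-of-bounds neighbors as walls is an assumption of mine, since the examples above never spill sand off the grid):

```python
import random

# label -> grid offset of the stack it points to (U = one row up, etc.)
DIRS = {"U": (0, -1), "D": (0, 1), "L": (-1, 0), "R": (1, 0)}

def step(grid, T=2.0, M=0.5, rng=random):
    """One iteration of the two-pass update. Returns True if sand moved."""
    h, w = len(grid), len(grid[0])

    def height(x, y):
        # assumption: out-of-bounds neighbors act as walls (never lower)
        return grid[y][x] if 0 <= x < w and 0 <= y < h else float("inf")

    # STEP 1 - tag each stack with a falling direction, or 'S' for stable
    labels = [["S"] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            low = [d for d, (dx, dy) in DIRS.items()
                   if grid[y][x] - height(x + dx, y + dy) >= T]
            if low:
                labels[y][x] = rng.choice(low)

    # STEP 2 - apply the transfers; each cell modifies only itself
    moved = False
    new = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if labels[y][x] != "S":
                new[y][x] -= M      # unstable: drop M
                moved = True
            for d, (dx, dy) in DIRS.items():
                nx, ny = x - dx, y - dy   # neighbor whose arrow 'd' points here
                if 0 <= nx < w and 0 <= ny < h and labels[ny][nx] == d:
                    new[y][x] += M  # receive M once per incoming arrow
    grid[:] = new
    return moved
```

Run step() in a loop until it returns False: the grid relaxes into a stable state, and the total amount of sand never changes, exactly like in the 2D example.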

PART 2 - Implementing a HeightField Sandpile Solver in Houdini with VEX (coming soon)
In Part 2 I'll explain how to implement this in Houdini using HeightFields and a little Vex.
If you're impatient, curious and familiar with Houdini, please give it a go and try to implement this algorithm in Houdini yourself!
... else ... Part 2 will be ready very soon.

PART 3 - Implementing a HeightField Sandpile Solver in Houdini with OpenCL (coming a little later than soon)
This algorithm is already fast because it involves very simple math. Nevertheless, if you want production quality (don't we all?) you might want to use a very large grid with a ton of detail. In that case, why not speed up the calculation by, say, 10 or 20 times by converting the code to OpenCL?




Sunday, February 12, 2017

HUG L.A. Presentation - Nov 2016

A few months ago I had the honor of presenting at the L.A. HUG in Nov 2016, where I explained in detail the workflow I used to produce this tornado fx.

The event was hosted at Framestore L.A.
You'll find the details Here

And below you'll find the presentation PDF and Hip files (H15.5) for both the Tornado and the Ink effect.

HUG-Nov2016 Presentation and Files

Enjoy, and if you have questions please don't hesitate to contact me.

Friday, April 22, 2016

Cigarette Smoke - Part 1

Before you start reading, be aware there is a Part 2, and you'll find it here :
LINK

But don't read it before reading Part 1. So pretend I didn't say anything.

Now for real.

I don't think there is a single person on earth who hasn't spent at least 10 minutes of their life contemplating in awe the beautiful shapes drawn in mid air by an incense stick or cigarette smoke. If I added up all the time I've spent observing smoke to this day, I'm sure it wouldn't be less than a week.

I found this beautiful reference video on Vimeo.
Let's watch it together for a little while.

Cigarette Smoke Reference

Many years ago I tried to create this fx in Maya. I recall the attempt, but I can't remember the outcome. 

Let's try with Houdini. 

The first solution that crossed my mind was the same I would apply to generate ink in water: lots and lots of points advected by the velocity field generated by a Pyro/Smoke sim.
The only difference would simply be the distribution of the points source.
Let's try this approach first.

Here I created a low-res smoke sim with a little bit of turbulence.



Let's cache this sim (only "vel" and "density" volume fields are really required for this tutorial).

OPTION 1 : Advect (a lot of) Points

Now let's advect points along the velocity field generated by this sim.

In the pic below, in the right branch, import your smoke sim, convert it to VDB and make sure to use a VDB Vector Merge to merge vel.x, vel.y, vel.z in a single VDB vector field "vel".
Connect this branch to the right input of the Solver SOP.

In the left branch I just imported the same geometry I used to emit the smoke, made it really small and scattered points (make sure to randomize the Scatter seed per frame). This will be the points emitter.
Connect this branch to the left input of the Solver SOP.


Note 1 :
The reason I made the points emitter very small is that I want to keep the advected points as close together as possible, for as long as possible during the simulation, to mimic the behavior of this kind of smoke. The velocity field will be in charge of moving those points, so the closer the points, the higher the chance that they will live in contiguous velocity voxels, or maybe even in the same voxel, and consequently move together, at least for that specific time step!

Note 2:
After the Scatter node I added a Transform node, with an expression that animates the scale using trigonometric functions (see pic below for details, but feel free to experiment with different values of course). Why? Well, if you look at the reference video at the beginning of this article, the width of the stream of smoke taking off from the source changes a lot. This is important to add variety to the evolution of the smoke, and to mimic nature as best as we can, of course!



The content of the Solver is the usual feedback loop with an injection of new points every frame.
If we assume that the first frame of the range is 1, at frame 1 the switch node lets through the first set of scattered points.
At frame 2 the switch node lets through the previous state, merged with the second set of scattered points. And so on for every frame after 2.
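In plain Python terms (purely illustrative, nothing Houdini-specific), the switch/merge feedback loop amounts to this:

```python
def run_solver(n_frames, scatter):
    """Feedback loop: frame 1 passes the first scatter through;
    every later frame merges the previous state with a fresh scatter."""
    state = []
    for frame in range(1, n_frames + 1):
        new_points = scatter(frame)  # stand-in for the per-frame Scatter SOP
        state = new_points if frame == 1 else state + new_points
    return state
```

So after N frames the state contains every batch of points ever scattered, which is exactly what the Solver SOP accumulates.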


Note :
You don't have to use a Solver for this setup. You can achieve the same result using a POP Network and a POP Points Advect. I prefer to use a Solver because of my masochistic tendency to create everything from scratch as an exercise.
And because VDB Points Advect is much faster than POP Points Advect.


This is the result scattering 100 points per frame.
Let's try with 1000 points per frame.


Better, but there are still sparse, disconnected points.
Furthermore, I don't like the stepped pattern in the lower part.


Let's "smear" the initial position of the source points along "vel":

Add a Point Wrangle SOP (red node below) and connect the input points to input 1, and the volumes to input 2.


In the Point Wrangle node enter this VEX code:
vector vel = volumesamplev(@OpInput2, "vel", v@P);              // sample the "vel" volume from input 2
v@P += float(random(i@ptnum * 3.432)) * normalize(vel) * 0.025; // offset each point along vel by a random amount
I used 0.025 in my sim because it worked for me. If you still see 'steps' in the lower part of the sim, feel free to change this value to whatever fits your sim.

If you play the sim now the lower part should look roughly like this.


I guess this solves the stepping issue...


...but we still got the sparse / disconnected points in the upper part of the sim.

Let's try to reduce the point sparseness using the gradient vector field generated by the density field. A quick analogy that helped me understand the gradient: the gradient vector field is to a density scalar field what normals are to a surface. In other words, think of the gradient as the normal vector of the density field. What we want to do is a sort of Peak SOP in Volume-land: we want to "shrink" the points along the normal vector (which for volumes is the gradient), just a little bit, every time step.

What we want to do is advect the points along the gradient vector field, on top of advecting them along "vel". Double advection!! (It'll be slightly slower, but it's worth it.)
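Outside Houdini, the double advection can be sketched in a few lines of NumPy. This uses nearest-voxel sampling as a crude stand-in for the VDB's interpolated lookup, and the grid spacing and step sizes are assumptions of mine:

```python
import numpy as np

def double_advect(points, vel, density, h=1.0, dt=1.0, grad_dt=0.01):
    """Move each point along 'vel', plus a tiny step up the density gradient."""
    # per-axis derivative of density: the volume equivalent of a surface normal
    grad = np.stack(np.gradient(density, h), axis=-1)
    out = []
    for p in points:
        idx = tuple(np.clip((p / h).astype(int), 0,
                            np.array(density.shape) - 1))  # nearest voxel
        g = grad[idx]
        n = g / (np.linalg.norm(g) + 1e-9)   # normalized gradient direction
        out.append(p + dt * vel[idx] + grad_dt * n)  # vel step + gradient nudge
    return np.array(out)
```

The important part is the second term: a small, fixed-size nudge towards higher density every step, which is what pulls the stray points back onto the smoke.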

Separate "vel" from the "density", apply a VDB Analysis set to "gradient", and make sure to explicitly set the name for the resulting volume to "gradient", or "grad". Then merge it back to the original branch with "vel" and "density".


Now, dive into Solver1 and add an extra VDB Advect Points, and make sure to scale the advection timestep way down (I used 0.01; feel free to play with this number).


The reason we need to rescale the timestep for the gradient advection is that we want to push each point towards the zone of higher density only a tiny bit, every frame. If we keep this number at 1, the points will overshoot.


This is working much better. Notice how the points tend to converge into curves, and we have far fewer sparse points now.
This is a possible solution and if you emit enough points you can get decent results.
Personally I prefer the next solution.

OPTION 2 : Advect Lines

The problem we have with Option 1, apparently, is the sparse points. We are trying every possible workaround to keep those points arranged in lines.
So ... why don't we advect lines instead of points?

Let's try.
First off, we need to remove the expression from the Scatter SOP seed. This way, we'll have a consistent point source where each point maintains its own id (@ptnum).



Next, we need to assign some kind of id to the source points, so that later we can use that id to create lines using an Add SOP. Since we are no longer randomizing the point scattering per frame, @ptnum will work perfectly as the id.

Add a Point Wrangle to the source branch and enter this VEX code:

i@id=@ptnum;
Your network should look more or less like this ...



Good. Now the plan is to do the following every time step (in the Solver):
  • advect the points as usual (we already have this part)
  • create lines connecting all the points with the same i@id
  • resample the lines
  • smooth the lines (this is optional but highly recommended)
  • delete the lines and keep only the points
Why do we create lines ? 
So we can resample them.

Why do we need to resample them? 
Because after the advection step we don't want to lose detail. So we resample the lines into fixed-size segments; this way we're sure that every line will always be nice and smooth.

Why do we smooth ? 
To add extra niceness and smoothness (as per beautiful video reference).

Why (on earth) do we delete the lines that we just created?
Because new points are introduced at every time step. We could extend the lines with the new points, but it's easier to just delete the lines at the end of the time step, and re-create them at the beginning of the next one, after injecting the new source points.
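For intuition, here is roughly what the resample and smooth steps do to a single line, sketched with NumPy. A fixed segment length and a simple Laplacian smooth are my assumptions; the Houdini Resample and Smooth SOPs are more refined, so treat this as an approximation of the idea:

```python
import numpy as np

def resample(line, seg=0.05):
    """Resample a polyline (N x 3 array) into segments of fixed length 'seg'."""
    t = np.concatenate([[0.0], np.cumsum(
        np.linalg.norm(np.diff(line, axis=0), axis=1))])  # arc length per point
    s = np.arange(0.0, t[-1] + 1e-9, seg)                 # new sample positions
    return np.stack([np.interp(s, t, line[:, k]) for k in range(3)], axis=1)

def smooth(line, iters=5):
    """Laplacian smoothing with pinned endpoints: each interior point
    is pulled halfway towards the average of its two neighbors."""
    line = line.copy()
    for _ in range(iters):
        line[1:-1] = 0.5 * line[1:-1] + 0.25 * (line[:-2] + line[2:])
    return line
```

Resampling keeps the point spacing constant no matter how much the advection stretches the line; smoothing then irons out the kinks.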


Let's do that.

In the Solver append the following nodes and parameters.


And immediately after the Solver, plug in an extra Add SOP with the same settings as the first Add SOP in the image above.


Set the number of Scattered points to 10 and run the Sim.


Finally we got rid of the sparse points, and honestly this version, with only 10 lines, looks much, much better than Option 1.
Of course, instead of sparse points, we now have sparse lines! But somehow this works much better, because the characteristic feature of this kind of smoke is curved lines in space. Which is exactly what we got.
Furthermore, sparse points are sparse in every dimension. Sparse lines are at least continuous in one dimension, the one that counts. Plus, remember that we scattered only 10 points so far.

This is the main gist of this technique and if you're interested HERE you will find the Project File.

Now, let's render this thing.
In the following iteration I worked a bit on the Render side.
  • added age and life to the points (if you use the POP Network approach age and life are created automatically)
  • scattered 1000 points (= 1000 lines)
  • created a "density" attribute mapped to the normalized age
  • Rasterized a VDB from the curves using Volume Rasterize and sampled the above mentioned density attribute into a Volume Density Attribute
  • assigned a simple Pyro shader
  • rendered in Mantra
  • added a bit of glow and bluish tint in Nuke


If you're interested in Part 2, you'll find it HERE



Thank you for reading and let me know if you come up with some idea to improve this technique.





Saturday, August 15, 2015

How to create a Magnetic Vector Field in Houdini - Tutorial

A few days ago I was chatting with a fellow colleague (Jon Parker, great Houdini Artist) about magnetic fields. So I decided to create a little OTL that generates a vector field from a cloud of pole points, and to share the workflow, since it's really simple to implement and the results are great. I'm not sure (actually, I highly doubt) that the following implementation is physically correct in any way; I just wanted a way to quickly create a magnetic-looking vector field.

I generally use Wrangle nodes for pretty much everything, since they make it very quick to create and initialize attributes and modify their values with just a few lines of code. Feel free to use VOP nodes if you prefer a node-based approach.

As for volumes, I generally prefer VDBs (when I'm not forced to use Houdini Volumes). VDBs have an optimized memory structure that allows very fast access to voxel data; plus, they are sparse, so they have a lower memory footprint than Houdini Volumes. They are slightly trickier to use, but the reward is huge in terms of speed and memory.

Let's start simple. Two points with a "f@pole" attribute that can be either +1 or -1. 


For mere visualization purpose I created a little setup where a sphere is copied (copy SOP) to each point ...


.. the sphere is RED if the pole is negative, and GREEN if the pole is positive (using a copy SOP and the stamp expression function).


Using a wrangle node and the stamp function in one of the parameters I can select the color of each sphere based on f@pole attribute.


EDIT: I was completely unaware of the chv( ... ) VEX function, which allows you to read a vector parameter directly from the UI (thank you, anonymous poster!!). This means the lines above can be written, much more elegantly, this way:
vector negcol = chv ("negativecol");
vector poscol = chv ("positivecol");

... your setup should now look more or less like this.


It's now time to create the Volume that will contain our vector field.
Let's create a VDB node, make sure to give the field a name (I used "magneticfield"), and set it to be a vector field.

This will create an empty VDB volume. By default a VDB volume is dimensionless. Why? Because VDB is a sparse volume, meaning it exists only where we want it to. Consequently, we need to "activate" the VDB in the regions of space where we need it, before actually 'filling' it.

In order to do that, we use the VDB Activate SOP, which lets us use geometry to define the active region.

In this case I am using a bounding box, with some padding, generated directly by my point cloud.
This is what the node graph should look like roughly.
NOTE: don't forget to enable "Pack Geometry before Copying" in the Copy SOP, under Stamp section (later, when we'll use thousands of points, this option will make a huge difference).


And the viewport (note the empty VDB defined by the bounding box surrounding the poles):


There's one more step to follow before we start writing our Volume Wrangle node. At the moment we wouldn't be able to visualize the content of the vector VDB, because of the nature of a vector field. For this purpose we can use the Volume Trail SOP. This node samples the vector field at the points fed into its first input, and draws curves following the vector volume connected to its second input.

I used the previously created bounding box to generate a Fog Volume, and scattered points within it. These will be the sample points used by the Volume Trail SOP.
And this is what the graph should look like. Note that the sample points go into input 1 of the Volume Trail SOP.
Now, we can't see any trails because the vector field is empty. Let's test that everything works by filling the volume with a constant vector.

Your viewport should now look like this. 
Horizontal lines, matching the horizontal vector {1.0 , 0.0 , 0.0 }.


Now we have a good environment that will allow us to visualize and debug our code.

Let's now discuss how to calculate the magnetic field resulting from the 2 poles, for each voxel in the VDB grid.

Let's calculate the magnetic field generated by the poles p1 and p2 at position v@P (the dark grey voxel visible in the picture above).
That voxel will feel the negative influence of p1 (meaning it will be repelled from p1) and the positive influence of p2 (meaning it will be attracted towards p2). If we calculate the vector from the voxel to each pole position, we obtain a vector that is very small when the voxel is close to the pole, and very big when it's far; we want the opposite. That's why, in our calculation, we use the inverse of the distance from each pole as a multiplier of the (normalized) direction vector. This way, the closer we are to the pole, the stronger the attraction or repulsion, as you can see from the function plot below.


d is red (no good, because it gets bigger and bigger with the distance).
1/d is green (good, because it is big close to the pole, and smaller and smaller far from the pole).

This was the trickiest part. The rest is pretty simple.
All we need to do is store in each voxel the sum, over all poles, of the normalized direction towards each pole, multiplied by the pole attribute (to make the contribution repel or attract depending on the pole) and multiplied again by the inverse of the distance, as explained before.

The pseudo code is the following:
  • For each voxel position v@P
    • initialize a vector called VectorField = { 0, 0, 0 }
    • iterate through all the points (pi) within a certain radius of v@P
      • import the pole attribute of the point, pole
      • find the vector d from v@P to pi
      • compute the normalized direction (nd) and the magnitude (md) of d
      • add nd, multiplied by pole and by the inverse of the distance md, to VectorField
    • set the magnetic field v@magneticfield of the voxel v@P to VectorField
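Before the VEX version, here is the same pseudo-code as a Python/NumPy sketch, evaluated for one voxel at a time. The sign convention (positive poles attract, negative repel) follows the explanation above, and I exposed the falloff exponent so you can also try the 1/d² variant suggested in the EDIT at the bottom of this post:

```python
import numpy as np

def field_at(p, poles, falloff=1):
    """Evaluate the 'magnetic' vector at position p (one voxel's v@P).
    poles: iterable of (position, pole) pairs with pole = +1 or -1."""
    total = np.zeros(3)
    for pos, pole in poles:
        d = np.asarray(pos, float) - np.asarray(p, float)  # voxel -> pole
        md = np.linalg.norm(d)                             # distance
        if md < 1e-9:
            continue                      # ignore a pole sitting on the voxel
        nd = d / md                       # normalized direction
        total += nd * pole / md**falloff  # nd * pole * (1/md), per pseudo-code
    return total
```

With a +1 pole on one side and a -1 pole on the other, the field everywhere between them points from the negative pole towards the positive one, which is exactly the dipole look we're after.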
Let's create a Volume Wrangle, and connect the Input 1 to the VDB volume , and the input 2 to the merge node containing the 2 points with the "pole" attribute.



Converting the pseudo code in VEX is probably easier than writing the pseudo code itself.


Now this is what the view port should look like:


Now we can replace the boring 2 poles setup with something more attractive.
How about ... simulating the magnetic field on the surface of the sun ? 

This picture I downloaded from Google Image is a good reference.

In order to recreate that look, all we need to do is scatter points on a sphere (about 2k), randomly assign f@pole to -1 or +1, and feed them into the simple setup we just created.


I find this pretty cool ! :)
Ok, I guess that's it.
If you like this tutorial, or found a better way to achieve the same result, please don't hesitate to comment.
Thank you for reading !

Since you read the whole article you definitely deserve the hip file.

EDIT:
It's better to use the inverse of the square of the distance instead of the inverse of the distance. This gives better results and less interference in the voxels far from the poles.
Thanks to Jon Parker for this suggestion.

Alessandro