Reflections and Refractions

When I had made a raymarched sphere I was quite happy. But I wanted to make a glass one. So the first experiment was to make a sphere which only worked like a lens, just to see what’s behind it through slightly refracted rays.

I shoot a ray from the eye, adjusting it for the current pixel with
vec3 ray_dir = normalize(up * uv.y + right * uv.x + forward);
where right and forward are just static vectors and uv is my current pixel. Like this:

vec3 up = vec3(0.0, 1.0, 0.0);
vec3 forward = vec3(0.0, 0.0, 1.0);
vec3 right = cross(up, forward);

So now once my ray hits the ball, I don’t take the pixel from a texture and display it; I use a cubemap instead. As for the refraction, I am happy GLSL handles it internally with refract(), just like reflect(): I only need an incident vector, the normal I am going to refract against, and of course a coefficient. For the cubemap there is textureCube() instead of texture2D(). So the whole magic happens here:

vec3 refracted_ray = refract(ray_dir, -point_normal, refract_koef);
vec4 refracted_color = textureCube(iChannel2, refracted_ray);

I use -point_normal because I refract the ray going away from me instead of into me, that’s why I invert it, and refract_koef is 1.02. The refraction coefficient only needs to be above 1 for a convex lens and below 1 for a concave one. Now for the rays that don’t hit anything I need to draw a normal pixel from the cubemap:
gl_FragColor = textureCube(iChannel2, ray_dir);

The pseudocode is this:
...
float dist = raymarch(...);
if (dist < max_distance) {
  vec3 refracted_ray = refract(ray_dir, -point_normal, refract_koef);
  gl_FragColor = textureCube(iChannel2, refracted_ray);
} else {
  gl_FragColor = textureCube(iChannel2, ray_dir);
}
...

And the result looks good:

I wanted to push this lens a bit further and make it look like a sphere, so reflections too. For this I simply need to reflect the ray, get the pixel from the cubemap and mix the reflection and refraction colors. Reflection is even easier. Now point_normal is positive because it points toward the viewer’s eye.

vec3 reflected_ray = reflect(ray_dir, point_normal);    
vec4 reflected_color = textureCube(iChannel2, reflected_ray);

And I mix the colors with gl_FragColor = mix(reflected_color, refracted_color, 0.5);. This shows up:

It reflects rays and also refracts them, but it isn’t realistic: I have hardcoded a constant 0.5, so refraction and reflection are combined as equals, while in reality the more straight-on I look into the object, the more refraction I see and the less reflection; when I look at the object from the side, almost all I can see are reflections. To fix this I found the Fresnel approximation in GPU Gems 2. So we simply need a coefficient to blend reflection and refraction, and it looks OK when written like this:

float fresnelBias = 0.00;
float fresnelScale = 0.25;
float fresnelPower = 1.97;
float mix_coef = fresnelBias + fresnelScale*pow(1.0 + dot(ray_dir, point_normal), fresnelPower);

The constants are hand-tweaked because I didn’t find any suitable ones, so consider this an experiment. Now my color is calculated with this:
vec4 mixed = mix(reflected_color, refracted_color, mix_coef);
And the result looks more realistic:

The last thing I wanted to do was a rainbow on a bubble. It appears because each color has a different refraction coefficient, so to implement it I refract the R, G and B components of a texel differently. Again, the coefficients are hand-tweaked:

vec3 etaRatioRGB = vec3(1.03, 1.06, 1.09);

vec3 TRed   = refract(ray_dir, -point_normal, etaRatioRGB.r);
vec3 TGreen = refract(ray_dir, -point_normal, etaRatioRGB.g);
vec3 TBlue  = refract(ray_dir, -point_normal, etaRatioRGB.b);

vec4 refracted_color;
refracted_color.r = textureCube(iChannel2, TRed).r;
refracted_color.g = textureCube(iChannel2, TGreen).g;
refracted_color.b = textureCube(iChannel2, TBlue).b;

And the result looks bubbly!:

In conclusion, it was really fun and the Fresnel formula really makes sense. From my understanding, as ray_dir and point_normal are normalized, the dot product just gives me the cosine of the angle between those vectors, so looking straight into the bubble maximizes the refraction I see, and looking at it from the very side leaves almost only reflections. The pow is there because of the inverse square law, or so I believe.

As a bonus, here is the concave lens with refract_koef = 0.98:

You can see the code and tweak it also online here:
Lens on Shadertoy
Bubble on Shadertoy

Voronoi Diagram

Voronoi

A Voronoi diagram is a method for dividing space into a set of regions. We have a set of seed points, and each region contains the points/pixels that are closer to its seed than to any other.

A quick play on Shadertoy shows me that it’s as easy as it sounds. Of course it’s inefficient: for every pixel the loop goes through all the points searching for the closest one. This is the result:

I found this algorithm very promising for procedural generation, for example for shattered glass or these random examples. I will of course try to build something nice with it.

Glsl Raymarching

My second post about GLSL is about raymarching, a simple method to draw figures by shooting “rays” all over the field of view; if a ray collides with an object, we draw something, like a plain pixel or a lit pixel from a texture.

Tricks

The first trick of ray marching is that when we shoot a ray, we advance it iteratively, and to squeeze out better performance we do not advance it by a fixed step but by a dynamic one. The question is: how dynamic? The trick is to get the distance to the nearest object and move forward by that amount, because we are sure the ray will not collide with anything along that path.

My approach

My approach is a very basic and simple one. I simply want a ball in the middle of the screen defined only by its radius and nothing more. Making a ray collide with a circle is child’s play.

Let’s begin

So of course I have my circle radius, let’s call it R. #define R 0.3
Then I want to normalize my pixel position into [-1..1] range, so:

vec2 uv = ((2.0 * gl_FragCoord.xy) - iResolution.xy) / min(iResolution.x, iResolution.y);

The next thing we need, I believe, is the distance function to our circle. The distance to a circle is probably the easiest task :)
float get_distance(vec3 point) {
  return length(point) - R;
}

Here we accept our 3D point vector, and we also assume the circle sits at the center of the screen. length is a GLSL built-in, no need to write it ourselves. Actually, we are now almost ready to draw a circle :). We have our circle defined and the distance function; we only need the camera/eye vector and the rays.

Define our eye

Let’s say I want to be looking directly at the center but from some distance. This means I only need to modify the Z value.
vec3 eye_pos = vec3(0.0, 0.0, -3.0);

Eye position alone is not enough, so I have to define where my eye is looking:
vec3 forward = vec3(0.0, 0.0, 1.0);

My eye is looking forward. Actually, the exact Z value doesn’t matter; it can be anything, but the vector needs to be normalized in order to get correct results. We could simply pass in 3.5 instead of 1.0, but then we should wrap it in normalize().

Turn on the lights!

Let’s define the light.
vec3 light = vec3(0.0, 0.0, -3.0);

Our light will be at the same place as our eye.

Rays

We now need to shoot rays all over the scene to detect the object. We define the up vector; it will be used as a helper by taking the cross product of it and the forward vector.
vec3 up = vec3(0.0, 1.0, 0.0);

And we already have the forward vector (the direction our eye is looking). We can’t easily calculate each ray’s vector directly, so we defined the up vector in order to get the perpendicular vector via a cross product.
vec3 right = cross(up, forward);

Now this right vector points perpendicularly to our eye direction and the up vector. We simply adjust it a bit to get a vector to shoot (so we don’t always shoot at the same spot).
vec3 ray_dir = normalize(up * uv.y + right*uv.x + forward);

And here we go. For each pixel going from -1.0 to 1.0 we calculate the ray vector. Let me explain the reasoning: we take the up vector and multiply it by the y coordinate of the pixel to adjust the vertical shift of our ray, then we multiply the right vector by the pixel’s x value to adjust horizontally. Then we add those two terms together along with the forward vector. Why the forward vector? Just to point the ray in the direction we are looking.

Ray march it

Finally, the last Lego piece we need is the ray marching function. This function returns the distance if an object was hit. If it fails to hit any object it returns the maximum distance, so we can draw something like a black background, a sky or a wall.
#define MAX_STEPS 100
#define epsilon 0.001        // hit tolerance (hand-picked)
#define max_distance 100.0   // give up beyond this distance (hand-picked)

float raymarch(vec3 ray_origin, vec3 ray_direction) {
  float d = 0.0;

  for (int i = 0; i < MAX_STEPS; i++) {
    vec3 new_point = ray_origin + ray_direction*d;
    float s = get_distance(new_point);
    if (s < epsilon) return d;
    d += s;
    if (d > max_distance) return max_distance;
  }
  return max_distance;
}

Let’s break things down. First we define the maximum number of raymarching steps: the bigger the number, the better the scene quality, at the price of performance. Our function accepts a ray origin point and its direction. Since we project the image into an “eye”, that eye is our ray_origin, and the direction is the ray_dir we calculated just above.

So now the drawing goes like this:
...
float d = raymarch(eye_pos, ray_dir);

if (d < max_distance) {
  gl_FragColor = vec4(1.0);
} else {
  gl_FragColor = vec4(0.0);
}
...

Now we simply get a white circle in the middle of the screen with a black background.

Lighting

It’s pretty easy to create a feeling of 3D since we have the distance returned. The lighting works simply by taking the cosine between the light vector and the surface normal (the vector perpendicular to the surface at the point). For this operation we need to get the normal vector of our point.

vec3 get_normal(vec3 point) {
  float d0 = get_distance(point);
  float dX = get_distance(point - vec3(epsilon, 0.0, 0.0));
  float dY = get_distance(point - vec3(0.0, epsilon, 0.0));
  float dZ = get_distance(point - vec3(0.0, 0.0, epsilon));
  return normalize(vec3(dX - d0, dY - d0, dZ - d0));
}

The trick to get the point’s normal in our case is to calculate distances to points shifted backwards by a tiny bit in each direction. Then we take the differences and construct the normal from them. And so:
...
vec3 ray_dir = normalize(up * uv.y + right * uv.x + forward);

float d = raymarch(eye_pos, ray_dir);
if (d < max_distance) {
  vec3 point = eye_pos + ray_dir*d;
  vec3 point_normal = get_normal(point);
  vec3 light_dir = -normalize(light - point);
  float dotp_diffuse = max(0.0, dot(light_dir, point_normal));
  gl_FragColor = vec4(dotp_diffuse);
...

The dotp_diffuse is the diffuse dot product (the direct light contribution on the circle). As the vectors are normalized, it’s simply the cosine between the light vector and the surface normal: if the light hits the surface straight on it’s 1.0, and if they are perpendicular it’s 0.

To paint the circle in a color I simply multiply a color vector by dotp_diffuse, like so: vec4(0.0, 0.0, 1.0, 1.0) * dotp_diffuse. This gives us a blue shaded circle:

More fun

We can have more fun with this circle, like bump mapping it. In our case we can simply adjust get_distance() to return slightly distorted distances using noise (from a noise texture on iChannel1; it’s a Shadertoy thing).

float get_distance(vec3 point) {
  float bump = 0.0;
  if ( length(point) < R + bump_factor) {
    bump = bump_factor * texture2D(iChannel1, point.xy).r;
  }
  return length(point) - R + bump;
}

Here’s cheap bump mapping:

Now note that I chose an incorrect method for getting a texel: it’s a sphere, yet I access the texture directly without any transform. For this reason I borrowed this triplanar mapping technique from Inigo Quilez. Here it is:

vec4 texture3d(sampler2D t, vec3 p, vec3 n, float scale) {
  return
    texture2D(t, p.yz * scale) * abs(n.x) +
    texture2D(t, p.xz * scale) * abs(n.y) +
    texture2D(t, p.xy * scale) * abs(n.z);
}

I pass in the texture, the point, the point normal and a scale. I will show its usage later.

Attenuation

Another thing to fix is color attenuation: the farther the light, the more dimmed the color.
float attenuation = 1.0 / (1.0 + K*pow(length(light - point), 2.0)); // K is a hand-tuned constant

It works by the inverse square law: light intensity falls off with the square of the distance.

Specular lighting

We have diffuse lighting, and there’s also specular. We need the vector of the reflected light, and GLSL has a built-in called reflect(). So we can simply do:
vec3 reflected_light_dir = reflect(-light_dir, point_normal);

Let’s not forget the ambient light. If the scene is not pitch black we should have some ambient color. I am not going to calculate it; I will simply hardcode it as #define ambient 0.4 and add it to the final color vector.

Here’s the textured sphere with very small, sparse bumps plus diffuse, specular and ambient lighting.

Plane

I would like to have a shadow. But to cast a shadow I need a surface to cast it on, so let’s assume an infinite plane. A shorthand for the signed distance from a point to a plane is dot(point, plane_normal) - elevation. As I want a floor below the ball, I take elevation = -1.0, so it becomes dot(point, vec3(0.0, 1.0, 0.0)) + 1.0. If the elevation were zero, the plane would be invisible to our eye.

I am not going to separate it into a new function; I will simply adjust get_distance() to calculate whether the ray hits the ball or the plane.

float get_distance(vec3 point) {
  float bump = 0.0;
  float elevation = -1.0;

  if ( length(point) < R + bump_factor) {
    bump = bump_factor * texture3d(iChannel1, point, normalize(-point), 0.5).r;
  }
  return min(
    length(point) - R + bump,
    dot(point, vec3(0.0, 1.0, 0.0)) - elevation
  );
}

Here we can also see a usage example of texture3d(). get_distance() now returns the minimum of the distance to the circle or the plane: whichever is closer is the one that gets hit, of course.

Shadow

To cast a shadow I do it like this: once I know the ray has hit a point (on the plane or the ball, since a shadow can fall on both), I take that point and raymarch again, using the collision point as the origin and directing the ray at the light. If this second ray collides with something along the way, we should cast a shadow. For example, with a plane, a ball, and a light just above the ball: the area on the plane underneath the ball shoots rays toward the light, hits the ball, and so we have a shadow.

We will make a soft shadow without sharp edges.
float shadow_sample(vec3 org, vec3 dir) {
  float res = 1.0;
  float t = epsilon * 20.0;
  for (int i = 0; i < 100; i++) {
    float h = get_distance(org + dir*t);
    if (h <= epsilon) {
      return 0.0;
    }
    res = min(res, 32.0*h/t);
    t += h;
    if (t >= max_distance) {
      return res;
    }
  }
  return res;
}

Let’s break this thing down. We have t, which starts at some constant offset; that’s because get_distance() might immediately return the very point we’re shooting from, since the starting point lies on the ball/plane. A value of 20.0 works for me; lower values cause visual artefacts I can’t explain :-).

Within the loop we get the distance h. If the ray hits something (checked approximately with epsilon), the point is in pitch-black shadow. Otherwise we set res to the minimum of itself (since we don’t want to lose the accumulated value) and the distance divided by the distance travelled, the “shadow step”. The 32.0 multiplier controls shadow softness: the higher the value, the harder the shadow.

We put the shadow into the equation by multiplying everything by shadow_sample(point, -light_dir). We end up with a rather huge and ugly expression which begs for refactoring, but for experimenting purposes it does its job. The final result can be viewed and tweaked here: link.

Glsl Basics

I’ve started to experiment with GLSL (OpenGL Shading Language) and done a few easy examples. I play with them on www.shadertoy.com.

The hardest task was to grasp the concept that while writing shaders there’s no state. Of course I can keep state by passing uniform values (kind of global variables) which I set up in the host code and pass to the shader; all the pixels (in the case of a fragment shader) get the same value.

As there’s no state (well, except for the time; in Shadertoy’s case it’s iGlobalTime), I need to calculate what I want to draw, like a circle, then grab the current pixel, see if it’s approximately near my circle and draw the pixel.

Let’s have a small demo which will draw a horizontal line crossing vertical line in the middle.

First, GLSL doesn’t have a PI value defined. Why? I don’t know, so let’s define it:
const float PI = 3.14159265359;

Our current pixel position is inside gl_FragCoord. We can access its x and y values, but I like my data normalized, so I normalize it like this:

float x = gl_FragCoord.x / iResolution.x;
float y = gl_FragCoord.y / iResolution.y;

iResolution inside Shadertoy has the width/height set as x and y respectively. Now our x and y are in the range 0..1 and we no longer care about the resolution, width or height.

The center in the 0..1 range is of course 0.5. Now we need some tolerance which defines how approximately our pixels match horizontally and vertically:

float thicknessH = 0.01;
float thicknessV = 0.01;

Note that we don’t operate on pixels here, so 0.01 is not exactly 1 pixel or so; it depends on the resolution.

So now check if we’re near the center:

float diffY = abs(0.5 - y);
float diffX = abs(0.5 - x);

How do we check? Get the difference between the center (0.5) and our x and y values, then check if it’s smaller than our tolerance and draw the pixel.

The full source is this:

void main(void) {
  const float PI = 3.14159265359;
  float thicknessH = 0.01;
  float thicknessV = 0.01;

  float y = gl_FragCoord.y / iResolution.y;
  float x = gl_FragCoord.x / iResolution.x;

  float diffY = abs(0.5 - y);
  float gradY = diffY;

  float diffX = abs(0.5 - x);
  float gradX = diffX;

  if ( diffY < thicknessH || diffX < thicknessV) {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0 );
  } else {
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0 );
  }
}

And the result can be played with here.

Minilang a Simple Parser in C. Part 2

I have refactored my code. As my first more serious C code it was quite ugly: it broke DRY several times, the naming was off and overall the code was a big mess, so I dedicated half of my Sunday to writing it again from scratch, and I can call it a success.

What’s new

It now supports parentheses and solves them correctly, handles whitespace without a problem and understands unary operators, so we can solve 10*-2 and get -20. Unfortunately, unary support is not complete yet, as it only covers the case above; in other cases it doesn’t work because of a simple logic flaw.

Why unary doesn’t work

In cases like -2+2, the current code yields 4 instead of 0. Because the operator precedence is parentheses, multiplication/division, addition/subtraction, unary, the 2+2 is matched and solved first, leaving the AST as SUB(ADD(2;2)). I am still deciding what to do: implement an additional template, inject the unary operator inside the ADD node itself and handle it in the solver, or, the most elegant of these, add implicit parentheses around the unary term, so the parser would convert -2+2 into (-2)+2, yielding VAL(-2)+2.

Refactoring

I also got rid of checking whether there’s a dot in the expression, as I do not care about integers versus floats: I cast all numerics I find into doubles and don’t think about them twice. I have tried my best to give variables and functions meaningful names, and the code is full of printf calls at each step of processing to catch bugs as early as possible. What I am ashamed of: memory leaks. There’s not a single call to free, and there should be.

Minilang can be found on GitHub.