Posts Tagged raytracing

RayRay: raytracing in Haxe

SEE IT IN ACTION!

RayRay is a tiny ray tracer in Haxe. It uses brianin3d-intersect to handle the ray-to-sphere intersection test. It randomly traces 8 circles and 2 lights with reflections, shadows and subpixel sampling.

It is pretty slow, but performance could probly be substantially improved by optimizing primary rays and not allocating so many objects for each ray.

Once the scene is rendered, you can click it to have it render a new one.

Raytracing a sphere : more rays per pixel

So ya notice anything different between these two images?

versus

How about now?

versus

Golly! The reason for the “jaggies” is because I only used one ray per pixel. As I step pixel by pixel and row by row (image space), I’m also stepping in world space.

The image space goes from (0,0) to (w,h) and my world space goes from (-0.5,-0.5) to (+0.5, +0.5). In this case w=h=size cuz I wanted to make square images. Udderwize, I’d use an x_inc and a y_inc…

Here is what the main loop looks like:

    public RenderedImage traceBall( Settings settings, BufferedImage image ) {
        double size = settings.getSize();

        // image space runs from (0,0) to (size,size);
        // world space runs from (-0.5,-0.5) to (+0.5,+0.5)
        double x = -0.5;
        double y = -0.5;
        double inc = 1 / size;

        Pt start = new Pt( -0.5, -0.5, 0 );
        Pt stop = new Pt();

        for ( int i = 0 ; i < size ; i++, y+= inc ) {
            x = -0.5;
            start.setY( y );
            for ( int j = 0 ; j < size ; j++, x+= inc ) {
                start.setX( x );
                stop.copy( start );
                stop.setZ( -1 );    // cast the ray from (x,y,0) toward (x,y,-1)
                image.setRGB( 
                      j
                    , i
                    , this.traceRay( start, stop, settings ).getRGB()
                );
            }
        }
        return image;
    }

The traceRay routine uses one of three implementations depending on the settings: simple (1 ray, no light), light (1 ray, 1 light) or accumulate (n^2 rays, 1 light).
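
Just to give a feel for the plumbing, the dispatch might look roughly like this. It is only a sketch: getMode() and simple_traceRay are made-up names here, since only light_traceRay and accumulate_traceRay actually appear in this post.

    // rough sketch of the dispatch, not the actual RayBall.java code
    public Rgbank traceRay( Pt start, Pt stop, Settings settings ) {
        String mode = settings.getMode();   // hypothetical accessor
        if ( "accumulate".equals( mode ) ) {
            return this.accumulate_traceRay( start, stop, settings );
        }
        if ( "light".equals( mode ) ) {
            return this.light_traceRay( start, stop, settings );
        }
        return this.simple_traceRay( start, stop, settings );   // hypothetical name
    }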

Accumulate just steps from (x,y) to (x+inc,y+inc) in (inc/n) steps:

    public Rgbank accumulate_traceRay( Pt start, Pt stop, Settings settings ) {
        Rgbank accum = new Rgbank( 0, 0, 0, 0 );
        double size = settings.getSize();
        double inc = 1 / size;                  // width of one pixel in world space
        double sub_div = settings.getRays();    // sub-samples per axis
        double sub_inc = inc / sub_div;         // step between sub-pixel rays

        double xend = start.getX() + inc;
        double yend = start.getY() + inc;

        Pt start_sub = new Pt( start );
        Pt stop_sub = new Pt( stop );

        // walk a sub_div x sub_div grid of rays across the pixel, accumulating the colors
        for ( double y = start.getY() ; y < yend; y+= sub_inc ) {
            start_sub.setY( y );
             stop_sub.setY( y );

            for ( double x = start.getX() ; x < xend ; x+= sub_inc ) {
                start_sub.setX( x );
                 stop_sub.setX( x );

                Rgbank color = this.light_traceRay( start_sub, stop_sub, settings );
                accum.add( color );
            }
        }

        // average over the number of sub-pixel rays
        Rgbank color = new Rgbank( accum );
        color.divide( sub_div * sub_div );
        return color;
    }

And that’s all there is to it! At this point RayBall.java contains the complete listing, but no more spoilers…

Raytracing a sphere : a simple point light

In the last exciting episode! We used a primary ray intersection and the distance from the collision to the viewing plane as the color.

The result was round, which is about the best you can say for it:

Once we have the distance along the ray, we also have the point where the intersection took place: start + ( stop - start ) * distance.
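
In Pt terms that is roughly the following. It is only a sketch: the two-argument Pt constructor subtracts (as in the primary ray post), but getZ() is assumed to exist alongside getX() and getY().

        // collision = start + ( stop - start ) * distance   (sketch only)
        Pt direction = new Pt( stop, start );   // stop - start
        Pt collision = new Pt(
              start.getX() + ( direction.getX() * distance )
            , start.getY() + ( direction.getY() * distance )
            , start.getZ() + ( direction.getZ() * distance )
        );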

Now I’m going to introduce a point as a light source at (-1,-1,-1).

So now we have our sphere, our point of collision and this notion of a light. Before ya start whippin’ out the big words and fancy physics about l/d^2, lez me tells ye how this here model is gunna work!

We’ll determine the strength of the light at the point of collision based on the angular difference between the normal on the surface of the sphere at the point of collision and the vector from the point of collision to the light.

I know that may seem nutty, so here is a really bad drawing to make things just that much worse:


When the angle between the vector from the collision to the light source and the normal at the point is smaller (i.e., they are closer), the light is more intense.
I tried to draw the normals like they are coming straight from the center of the sphere to the point of impact, because they are.

I know… I know… they are really appalling illustrations and yes, I do feel like I really let myself down…

This raises a couple of pretty obvious questions… First off… how do we calculate the normal from the surface of the sphere? Happily, this ends up being pretty easy to write and even easier to rip off:

        public Pt normalToPoint( Pt on_sphere ) {
            // normal = ( on_sphere - center_of_sphere ), scaled to length 1
            return ( new Pt( on_sphere, center_of_sphere ) ).normalize();
        }

You may notice that I also normalize the vector. So now it has a length of 1. The reason I did that is because we are about to come to the next question: how do we calculate the angle between two vectors?

The answer is perhaps the coolest trick ever! We are going to take their dot product!

I know! Like OMG! So easy! I love dot product! It is the coolest! By taking the sum of the products of the i,j,k coefficients of two unit vectors, we get the cosine of the angle between them!

WOW! What an incredible factoid!

Deep breath… here it is: dot(u,v) = (u.x * v.x) + (u.y * v.y) + (u.z * v.z) = cos(angle between u and v), so acos(dot(u,v)) gives the angle itself (as long as u and v are unit vectors).
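
In Java terms, with plain doubles standing in for the Pt coefficients (just a sketch, not the Pt class itself):

        // sketch: the dot product of two unit vectors is the cosine of the angle between them
        static double dot( double ux, double uy, double uz,
                           double vx, double vy, double vz ) {
            return ( ux * vx ) + ( uy * vy ) + ( uz * vz );
        }
        // and the angle itself, if you ever actually need it:
        // double angle = Math.acos( dot( ux, uy, uz, vx, vy, vz ) );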

So what? So what! Are you some kinda math-hater! This is math I can understand: adding stuff and multiplying stuff! Come on!

So what can we use it for in this context? Right, sure… Well… for one thing, the range of values is from -1 to +1. If the value is less than zero, the angular difference is greater than 90 degrees, so we don’t brighten the pixel since there’s no way the light could hit this spot on the sphere.

Now when the value is between 0 and 1, 1 means a lot of light and 0 means basically no light.

Before I used the distance from the source of the ray to the collision to determine how much of the color of the sphere we used in that pixel.

In this modified version, I just added the value from the dot product to the distance (making sure the sum is capped at 1) and used that.
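
Put together, the core of that shading step looks roughly like this. It is only a sketch of what light_traceRay does, not the real code: sphere, light, collision and distance are stand-ins for values computed earlier in the trace.

        // sketch of the shading described above, not the real light_traceRay
        Pt normal = sphere.normalToPoint( collision );
        Pt to_light = ( new Pt( light, collision ) ).normalize();   // light - collision
        double diffuse = normal.dot( to_light );                    // cos of the angle
        if ( diffuse < 0 ) diffuse = 0;                             // facing away from the light
        double brightness = Math.min( 1.0, distance + diffuse );    // capped at 1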

The code can be found in the light_traceRay method of RayBall.java though I have to warn you, it still contains spoilers.

Here is the difference: ==>

Neato!

———————
IE must be destroyed.

Raytracing a sphere : primary rays

So I was thinking about writing a little bejeweled style game the other day and thought it’d be good enuff to have some shaded balls for the jewels.

Naturally, I decided to write a small raytracer.

In case you missed the excellent “Raytracing Topics & Techniques” by Jacco Bikker, here is a quick breakdown of how a raytracer works: for each pixel at (x,y) in our image, we cast a ray, e.g. from (x,y,0) to (x,y,-1).

Casting a ray involves checking to see which object it would hit / intersect first.
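
“First” just means the smallest non-negative hit distance, so with the per-sphere test shown below it boils down to something like this (a sketch: this post only ever has the one sphere, and the Sphere type and List are assumptions):

        // sketch: find the nearest intersection across a list of spheres
        public double nearestHit( List<Sphere> spheres, Pt start, Pt stop ) {
            double nearest = -1;
            for ( Sphere sphere : spheres ) {
                double t = sphere.rayIntersection( start, stop );
                if ( t >= 0 && ( nearest < 0 || t < nearest ) ) {
                    nearest = t;
                }
            }
            return nearest;   // -1 means nothing was hit
        }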

In the case of spheres, there is a pretty straightforward bit of black magic we can hijack from povray:

        public double rayIntersection( Pt start, Pt stop ) {
            // start_to_sphere = start - center_of_sphere
            Pt start_to_sphere = new Pt( start, center_of_sphere );

            double radius2 = radius_of_sphere * radius_of_sphere;

            double dv = stop.dot( start_to_sphere );

            double stop_length2 = stop.lengthSquared();
            double start_to_sphere_length2 = start_to_sphere.lengthSquared();
            double start_to_surface_of_sphere = start_to_sphere_length2 - radius2;

            double determinant = (
                ( dv * dv )
                -
                ( stop_length2 * start_to_surface_of_sphere )
            );

            double result = -1;
            if( determinant >= 0 ) {
                determinant = Math.sqrt( determinant );
                double t1 = ( -dv + determinant ) / stop_length2;
                double t2 = ( -dv - determinant ) / stop_length2;
                // prefer the nearer root t2, falling back to t1 when t2 is behind the ray start
                result = ( ( t1 < 0 ) || ( t2 < 0 ) ) ? t1 : t2;
            }
            return result;
        }

I know that looks cryptic and nasty, but the bottom line is that this will return a number from 0 to 1 to indicate how far along the ray the hit occurred, or -1 if no hit occurred.

Using a single ray per pixel and using the distance value to weight the red, green and blue values for the sphere produced a result like this:

in about 0.320 seconds.

Looks pretty crappy, huh? Bet you could do better in the gimp in about that amount of time! 😛

Whatever… that was just the primary ray. The primary ray is neat, it tells us where the ray hit on what (player hater?), but why doesn’t it look that neat?

The reason is cuz there is no lighting model…

SPOILER WARNING: RayBall.java

BTW, if someone can give me a more descriptive (and accurate) name for that “dv” variable, I’ll happily change it. The changes from the povray example seem to work, cuz after all pt.dot( pt ) = pt.distanceSquared(), right?

———————
IE must be destroyed.
