Raytracer and Radiosity Renderer

[Story So Far] [Background] [RAY Implementation] [Images] [Bibliography] [Downloading] [Authors]

Story So Far

RAY was born as a project in the Image Synthesis course and consisted of a conventional recursive raytracer.
A year later it was extended to include the radiosity technique for image rendering. This work was done as a semester project in the Intelligent Systems Laboratory under the supervision of Dr. Craig (Chaim) Gotsman.


Background

Raytracing is a powerful rendering technique, but has a number of major drawbacks.

  • It is view-dependent: rendering from a different viewpoint must be redone entirely.
  • Its simulation of diffuse reflection effects is very poor.
  • Simulation of area light sources is very difficult.

    Radiosity addresses these problems by rendering according to principles of radiative heat transfer. As in finite-element calculations, the scene surfaces are discretized into n elements (patches; polygons in RAY's case). A radiosity b_i is associated with each patch P_i: the amount of energy emitted by a unit area of the element. The energy transfer between two elements is fully characterized by their relative positions and orientations, their physical characteristics and any occluding objects. The proportion of the total energy emitted by element P_i that is received by element P_j is the form factor F_ij. When calculating form factors, we assume that surfaces are Lambertian, or ideally diffuse, i.e. each point on the surface emits energy uniformly in all directions.

    The input for the radiosity calculation is the specification of a set of n patches. The radiosities are found by computing n^2 form factors and solving a set of n linear equations.

    After obtaining radiosity values for the patches, the scene can be rendered from an arbitrary viewpoint by simple shading methods.
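    The classic formulation above amounts to solving the linear system b_i = e_i + rho_i * sum_j F_ij * b_j, where e_i is a patch's own emission and rho_i its diffuse reflectivity (notation assumed here). As a minimal illustration, not RAY's code, the system for a hypothetical three-patch scene can be solved by plain Jacobi iteration; all the numbers below are made up:

```python
# Sketch of the classic radiosity system for a hypothetical 3-patch
# scene, solved by Jacobi iteration:
#   b_i = e_i + rho_i * sum_j F_ij * b_j

e   = [10.0, 0.0, 0.0]          # emission of each patch (made up)
rho = [0.0, 0.5, 0.8]           # diffuse reflectivity of each patch
F   = [[0.0, 0.4, 0.4],         # form factors F[i][j] (made up)
       [0.3, 0.0, 0.5],
       [0.3, 0.5, 0.0]]

n = len(e)
b = e[:]                        # initial guess: emitted light only
for _ in range(100):            # iterate to (near) convergence
    b = [e[i] + rho[i] * sum(F[i][j] * b[j] for j in range(n))
         for i in range(n)]
# b now holds the radiosity of each patch
```

    A direct linear solve gives the same answer; the iterative form is shown because it is the ancestor of the progressive refinement described below.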

    This classic radiosity method suffers from two major drawbacks:

    • It is inefficient in space and time because form-factors are computed and stored even for elements that are not illuminated.
    • The radiosity value is assumed to be uniform over the patch, which is generally incorrect.
    It is hard to demonstrate the inefficiency, but the inaccuracy is easily demonstrated on the picture of the famous Cornell Box generated by the radiosity method with a single radiosity value for each of the initial polygons.


    Common methods for attacking these problems are progressive refinement and adaptive subdivision. Both techniques are implemented in RAY.

    Progressive Refinement

    This is an iterative technique which produces the solution of the radiosity equations in steps. After each step, the current solution may be used as an approximation of the final result, and it is refined further at the next step.

    Adaptive Subdivision

    This is a method which addresses the problem of radiosity varying over the area of a patch. The method involves some heuristic criteria which determine whether the radiosity value over the patch is constant enough. If it is not, the patch is divided into smaller subpatches until the criteria are satisfied.

    RAY Implementation

    A detailed description of RAY's algorithms and data structures is given in the RAY documentation package. Here we give a very general description and some results.

    RAY accepts input scenes in IRIT data format, using the IRIT library to parse these files. Click here for a summary of RAY options and input file format.

    At the heart of the radiosity algorithm lies the form-factor computation. RAY approximates the area integral by dividing both the sending and receiving patches into small subelements of known size and treating these subelements as differential. The visibility between two subelements is determined by raycasting.
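    As an illustration of this subelement summation (a sketch under simplifying assumptions, not RAY's code), the following computes the form factor between two parallel, directly opposed unit squares one unit apart. Each subelement pair is treated as differential with kernel cos(theta_i)*cos(theta_j)/(pi*r^2); the `visible` predicate is a stand-in for RAY's raycast and always returns true here:

```python
import math

def form_factor(n=8, visible=lambda p, q: True):
    # Sender lies in the z = 0 plane, receiver in the z = 1 plane;
    # both are unit squares, so the sender's area A_i is 1.
    dA = (1.0 / n) ** 2                  # subelement area on each patch
    total = 0.0
    for i in range(n):
        for j in range(n):
            p = ((i + 0.5) / n, (j + 0.5) / n, 0.0)   # sender subelement centre
            for k in range(n):
                for l in range(n):
                    q = ((k + 0.5) / n, (l + 0.5) / n, 1.0)   # receiver centre
                    if not visible(p, q):             # occlusion (raycast in RAY)
                        continue
                    dx, dy, dz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
                    r2 = dx * dx + dy * dy + dz * dz
                    cos_i = dz / math.sqrt(r2)        # sender normal is +z
                    cos_j = dz / math.sqrt(r2)        # receiver faces the sender
                    total += cos_i * cos_j / (math.pi * r2) * dA * dA
    return total                          # already divided by A_i = 1

F01 = form_factor()
# the catalogue value for this configuration is about 0.2
```

    Passing a real `visible` predicate that casts a ray between p and q turns the same loop into the occluded case.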

    For computing radiosity values, RAY uses the "shooting" progressive refinement algorithm. This algorithm uses the notion of "unshot", or residual, radiosity: the amount of energy a patch holds, either emitted by the patch itself or received as illumination from other patches, that has not yet been distributed to the other patches. In the initial state, the only patches with radiosity are those emitting light, and all of it is unshot. The 'brightest' patch is then selected and its unshot radiosity is distributed among the other patches, increasing their unshot radiosity values. This continues until the total unshot radiosity falls below a threshold. The iterative process is illustrated in the following images.

    Iteration 1.
    Light source shoots
    Iteration 4.
    Green wall shoots
    Iteration 7.
    Red wall shoots
    Iteration 8.
    Top of large box shoots
    Iteration 16.
    Iteration 24.
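    The shooting loop described above can be sketched as follows. The patch data and form factors are hypothetical, and equal patch areas are assumed so that the reciprocity factor A_i/A_j drops out of the update; RAY's real code works on polygon geometry instead:

```python
# Sketch of the "shooting" progressive refinement loop with residual
# ("unshot") radiosity. Equal patch areas are assumed for simplicity.

def shoot(e, rho, F, threshold=1e-4):
    n = len(e)
    b      = e[:]                # accumulated radiosity
    unshot = e[:]                # residual radiosity: all of it unshot at start
    while max(unshot) > threshold:
        i = max(range(n), key=lambda k: unshot[k])    # 'brightest' patch
        dB, unshot[i] = unshot[i], 0.0
        for j in range(n):       # distribute i's residual to the receivers
            gain = rho[j] * F[i][j] * dB
            b[j]      += gain
            unshot[j] += gain    # received energy is itself not yet shot
    return b

# hypothetical scene: one light source and two reflecting patches
b = shoot(e=[10.0, 0.0, 0.0],
          rho=[0.0, 0.5, 0.8],
          F=[[0.0, 0.4, 0.4], [0.3, 0.0, 0.5], [0.3, 0.5, 0.0]])
```

    Stopping the loop early yields exactly the intermediate images shown above: a valid, if dark, approximation of the scene.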

    It is possible, at any stage of the progressive refinement, to get a crude estimate of the solution. This estimate will, however, usually be darker than the true solution. To compensate, a technique called ambient illumination is used. It affects only the image display. The idea is to add some amount of radiosity (proportional to the average unshot radiosity of the scene) to every patch before displaying it. Thus, the closer the solution is to convergence, the smaller the impact of the ambient illumination. Click here for the same sequence of Cornell Box images as before, displayed with ambient illumination.
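    One common formulation of such an ambient term (after Cohen et al.; assumed here, RAY's exact formula may differ) scales the area-weighted average unshot radiosity by an interreflection factor and applies it at display time only:

```python
# Display-time ambient correction: an area-weighted average of the
# unshot radiosity, amplified by the average interreflection. All the
# patch data below are made up for illustration.

def ambient(unshot, rho, area):
    total_area = sum(area)
    avg_rho    = sum(r * a for r, a in zip(rho, area)) / total_area
    avg_unshot = sum(u * a for u, a in zip(unshot, area)) / total_area
    R = 1.0 / (1.0 - avg_rho)      # sums the geometric series of bounces
    return R * avg_unshot

def display_radiosity(b_i, rho_i, amb):
    return b_i + rho_i * amb       # applied only when displaying a patch

amb = ambient(unshot=[2.0, 0.0], rho=[0.5, 0.5], area=[1.0, 1.0])
```

    As the solution converges the average unshot radiosity, and with it the correction, goes to zero, which matches the behaviour described above.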

    Adaptive subdivision takes place while calculating the amount of energy transferred between a source and a destination patch. First, the transfer is computed assuming the radiosity is distributed uniformly over the destination patch; the patch is then subdivided into smaller patches and the transfer recomputed for more accurate results. By comparing the two results, a decision is made about the necessity of further subdivision. The test is applied recursively. The subdivision process is illustrated in the following images:

    Iteration 1.
    Left light source shoots.
    Finest division on the floor is
    around the border of the shadow
    Iteration 2.
    Right light source shoots.
    Division of the floor is refined
    around second shadow
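    The subdivision test can be sketched on a toy scene: a square patch lit by a hypothetical illumination function with a sharp shadow edge. The flat geometry, the transfer estimate and the tolerance below are all stand-ins for RAY's real geometry code:

```python
# Sketch of recursive adaptive subdivision: the energy received by a
# square patch is estimated with a single constant value, then
# re-estimated on four subpatches; where the estimates disagree, the
# subdivision recurses. illum() is a made-up incoming light with a
# sharp shadow edge along x + y = 1.

def illum(x, y):
    return 1.0 if x + y < 1.0 else 0.1       # lit vs. shadowed

def transfer(patch):
    # constant-radiosity estimate: centre value times patch area
    x0, y0, x1, y1 = patch
    return illum((x0 + x1) / 2, (y0 + y1) / 2) * (x1 - x0) * (y1 - y0)

def split(patch):
    x0, y0, x1, y1 = patch
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    return [(x0, y0, mx, my), (mx, y0, x1, my),
            (x0, my, mx, y1), (mx, my, x1, y1)]

def subdivide(patch, tol=1e-3, depth=6):
    whole = transfer(patch)
    if depth == 0:
        return [(patch, whole)]
    parts = split(patch)
    fine  = sum(transfer(p) for p in parts)
    if abs(fine - whole) <= tol * max(abs(whole), 1e-9):
        return [(patch, whole)]              # constant enough: stop here
    out = []
    for p in parts:                          # otherwise recurse on each part
        out += subdivide(p, tol, depth - 1)
    return out

patches = subdivide((0.0, 0.0, 1.0, 1.0))
# the finest patches cluster around the shadow edge x + y = 1
```

    This mirrors the behaviour in the images above: the mesh stays coarse in evenly lit regions and refines only around shadow boundaries.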

    The process of rendering the image, although fairly straightforward, poses some problems. One of the most difficult was the so-called T-Vertex problem (see Documentation).

    RAY generates images in 24-bit binary PPM format. The images displayed here were converted to JPEG for Netscape's sake.

    Images

    Here are some images produced by RAY on some non-trivial scenes.

    droom.scr office.scr kitchen.scr

    The input scenes were converted from VRML by wrl2irit written by Oded Sudarsky.


    Bibliography

    1. Sillion and Puech, Radiosity and Global Illumination, Morgan Kaufmann, 1994
    2. Cohen and Wallace, Radiosity and Realistic Image Synthesis, Academic Press, 1993
    3. Watt and Watt, Advanced Animation and Rendering Techniques, Addison-Wesley, 1992


    Downloading

    Linked below are different parts of the RAY distribution. All tar files, when opened, build a directory tree having RAY as root.


    ray.src.tar.gz RAY sources for SunOS 5.5 and Irix 5.2 systems. Hopefully, they will compile with little or no trouble on other UNIX systems as well. To compile them you'll need the IRIT includes and libraries built for your system. For more details, see the BUILD file in the RAY/ directory.


    ray.doc.tar.gz RAY documentation in PostScript and MS Word 6.0 document format.

    Reproducing Images on this Page

    ray.rep.tar.gz RAY binary for SunOS 5.5 and all the data files and scripts needed to reproduce the images on this page. Detailed instructions can be found in the REPRODUCING_IMAGES file in the RAY/ directory.


    Authors

    Gregory Bershansky (c0595662@cs.technion.ac.il)
    Alexey Efron (c0895871@cs.technion.ac.il)