
CHAPTER 1

At the Beginning Was the Light



Overview

This chapter introduces the reader to the practical use of the Lightflow Rendering Tools. The best way to do this is with a tutorial, commented in detail.


Tutorial

The following example is one of the simplest possible scenes that can be rendered: a sphere in empty space.
Now start Python and type the following:


from lightflow import *

s = scene()

s.lightOn( s.newLight( "point", [ "position", vector3( 5.0, -5.0, 4.0 ), "color", vector3( 300.0, 300.0, 300.0 ) ] ) )


plastic = s.newMaterial( "standard", [ "ka", vector3( 0, 0, 0.5 ), "kc", vector3( 1, 0.5, 0.5 ), "kd", 0.5, "km", 0.1 ] )


s.materialBegin( plastic )

s.addObject( s.newObject( "sphere", [ "radius", 1.0 ] ) )

s.materialEnd()


saver = s.newImager( "tga-saver", [ "file", "ball1.tga" ] )

s.imagerBegin( saver )

camera = s.newCamera( "pinhole", [ "eye", vector3( 0, -4, 0 ), "aim", vector3( 0, 0, 0 ) ] )

s.imagerEnd()

s.render( camera, 300, 300 )
(rendered image: ball1.tga)
The first statement after the import creates a new scene, the main block of a Lightflow program, and stores it in a variable.
The next statement calls two scene methods, newLight and lightOn.
newLight is executed first: it constructs a new light object belonging to the scene. It takes two arguments: a string indicating the type of light to be created, in this case a "point" light, and a list containing the parameters used in the construction process. Here the parameters are "position", which specifies that the next token should be interpreted as the position of the point light, and "color", which says that the color of the light follows. Both the position and the color are specified with a three-dimensional vector, vector3, which in the first case represents a point in space and in the second an RGB triple. Note that this string-value pairing is common to all the parameter lists passed to scene methods that start with "new", such as newLight, newObject, newMaterial, and so on. This mechanism distinguishes the meaning of the different parameters. It also allows you to specify only a subset of all the parameters that each type accepts, while the others keep their default values. For example, "point" lights also have a field named "shadows" that accepts a boolean value specifying whether shadows are cast; its default is true.
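The pairing mechanism can be pictured in plain Python. The parse_params helper below is illustrative only, not part of the Lightflow API, and it shows only the simple one-value-per-name case:

```python
# Illustrative sketch of how a flat [ "name", value, ... ] parameter
# list maps onto named parameters with defaults (not Lightflow code).
def parse_params(params, defaults):
    """Pair each string key with the value that follows it."""
    result = dict(defaults)       # unspecified parameters keep their defaults
    it = iter(params)
    for key in it:
        result[key] = next(it)    # the token after the name is its value
    return result

# A "point" light: only "position" and "color" are given,
# so "shadows" keeps its default value of true.
light = parse_params(
    [ "position", (5.0, -5.0, 4.0), "color", (300.0, 300.0, 300.0) ],
    defaults={ "shadows": True } )
```

After this call, light["shadows"] is still True, because the parameter list never mentioned it.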
The second method call, lightOn, is very simple: it accepts a single argument, the handle to a light source created with newLight. This handle is an integer, and it could also be assigned to a variable for later use, though here there was no need to. The purpose of this method is to turn on the light source passed to it, which otherwise would not illuminate anything. This is useful because we may want a light to illuminate only some parts of the scene. Note that this on/off mechanism has no temporal significance, since we are rendering a single image at a time: its meaning is purely spatial.
The next method call is to newMaterial, a procedure that creates materials in the same way as lights. The first argument is again the type, in this case the "standard" material, and the second is the usual parameter list. It consists of "ka", the ambient color, "kc", the surface color, "kd", the amount of diffusion, and "km", a smoothness coefficient for the specular highlights. We won't give a detailed explanation of these coefficients here, which can be found in the "standard" material documentation, but you may have noticed that they resemble the ones used by common RenderMan shaders, hence the name "standard".
After storing the material handle in a variable named plastic, it is passed to the method materialBegin. This method performs a stacking operation, described below, but for now you should only know that its purpose is to declare the passed material as the current one, from which the following objects will be made.
The next operation involves two methods, addObject and newObject. Again the second to appear is the first to be executed, so we will explain it before the other. newObject creates an object, that is, a geometrical entity, here a sphere with radius 1. As mentioned before, this sphere inherits the current material, so it will be made of plastic. addObject then adds the object to the scene, putting it into the rendering set. You may wonder why this operation is not automatic; the answer is simple, even if not obvious: objects can be combined before use, for example with a boolean operation, so only the final result needs to be added to the set, while the intermediate objects are left out.
The call to materialEnd marks the end of the previously declared material block. This means that an object declared after this point will no longer be made of plastic. This simple concept has some implications you should know about, since blocks are used extensively throughout the rendering interface. The two methods materialBegin and materialEnd define a block, opened by the first and closed by the second. Note that these methods always come in pairs: you cannot call materialBegin without calling materialEnd afterwards, nor can you call materialEnd without having already opened a block with materialBegin. These methods also maintain a stack, so material blocks may be nested, provided the pairing rules are never broken. An example of this feature is the following:

s.materialBegin( plastic1 )
... # do something
s.materialBegin( plastic2 ) # nested material
...
s.materialEnd()
...
s.materialEnd()

Incorrect uses, by contrast, look like this:

s.materialBegin( plastic )
...
s.materialEnd()
...
s.materialEnd() # unmatched material end
    
s.materialBegin( plastic ) ... # not closed material block
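The stacking discipline behind these rules can be sketched in plain Python. The MaterialStack class below is an illustrative model, not part of the Lightflow API:

```python
# Illustrative sketch of the Begin/End stacking discipline (not Lightflow code).
class MaterialStack:
    def __init__(self):
        self._stack = []

    def begin(self, material):
        # Push: 'material' becomes the current one for the objects that follow.
        self._stack.append(material)

    def end(self):
        # Pop: an unmatched end is an error, mirroring the pairing rules.
        if not self._stack:
            raise RuntimeError("materialEnd without matching materialBegin")
        self._stack.pop()

    def current(self):
        # The innermost open block determines the current material.
        return self._stack[-1] if self._stack else None

stack = MaterialStack()
stack.begin("plastic1")
stack.begin("plastic2")      # nested block: plastic2 is now current
inner = stack.current()
stack.end()                  # closing the inner block restores plastic1
outer = stack.current()
stack.end()
```

Nesting therefore never loses the outer material: closing the inner block simply restores it, and one more end() with an empty stack raises an error, just as an unmatched materialEnd is invalid.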

The next entity to be created is an imager, the output device used to handle rendering results. Here a "tga-saver" is created that writes the final image to a Targa file.
Then a new block is introduced, as for materials: the imager passed to imagerBegin will be the current one used by the following cameras, until imagerEnd is called.
Before closing this block, a "pinhole" camera is constructed with newCamera, passing to it the position of the eye and that of the aim point.
After closing the imager block, the rendering is performed by calling render, which takes as input the camera and the resolution of the image as width and height.

Once you have run and understood the first example, you are ready to modify it to obtain something more than a simple sphere. Try typing this:


from lightflow import *

s = scene()

s.lightOn( s.newLight( "point", [ "position", vector3( 5.0, -5.0, 4.0 ), "color", vector3( 300.0, 300.0, 300.0 ) ] ) )


bump = s.newPattern( "multifractal", [ "basis", "sin", "scale", 0.6, "depth", 0.2, "turbulence.omega", 0.5, 0.7, "turbulence.octaves", 6] )

plastic = s.newMaterial( "standard", [ "ka", vector3( 0, 0, 0.05 ), "kc", vector3( 1, 1, 1 ), "kd", 0.5, "km", 0.1, "displacement", bump ] )


s.materialBegin( plastic )

sphere = s.newObject( "sphere", [ "radius", 1.0 ] )

s.addObject( s.newObject( "gsurface", [ "surface", sphere, "subdivision", 4, 4, "sampling", 8, 4, "tolerance", 0.01, 0.05 ] ) )

s.materialEnd()


saver = s.newImager( "tga-saver", [ "file", "ball2.tga" ] )

s.imagerBegin( saver )

camera = s.newCamera( "pinhole", [ "eye", vector3( 0, -4, 0 ), "aim", vector3( 0, 0, 0 ) ] )

s.imagerEnd()

s.render( camera, 300, 300 )
(rendered image: ball2.tga)
You will notice that the sphere has been deformed in a strange and subtle way. This is the effect of the "multifractal" pattern named bump, which has been inserted into the "displacement" channel of the plastic material. The geometry itself has also been modified by passing the sphere through a "gsurface" object, an entity that accurately computes the visualization of generic surfaces, independently of their type. Here it is needed to handle the actual deformation of the sphere's surface, which otherwise would have looked flat, as if it were merely bump-mapped.
Now that you have seen the simple elegance and power of Lightflow, you can start digging under the surface by reading the second chapter. In the meantime, if you like, you can take a pause and imagine how your projects will look in the future...
