Here begins Lightflow.
So far you have only seen part of the standard interface, which
allows Lightflow to be an open and extensible
product. What sets it apart from other renderers,
however, is the default set of tools that comes with this
interface, that is to say the set of built-in classes.
This chapter will show you possible combinations of some of
these classes (there are almost a hundred of them!), so do not expect to learn
all about Lightflow here; try instead to comprehend how things work,
and then you will be able to build your projects by yourself.
We will start by combining multiple patterns to obtain more complex constructions.
from lightflow import *

s = scene()

s.lightOn( s.newLight( "directional",
    [ "direction", vector3( 8.0, 5.0, -6.0 ),
      "color", vector3( 1.0, 1.0, 1.0 ) ] ) )

cloudpattern1 = s.newPattern( "granite",
    [ "value", 0.3, 1.0, 1.0,
               0.5, 0.0, 0.0,
      "scale", 1.0,
      "turbulence.omega", 0.55,
      "turbulence.lambda", 1.8,
      "turbulence.octaves", 6 ] )

# Make a granite-like volumetric pattern.
# Note the keyword "value", followed by two rows of three parameters each.
# This is an example of a value-gradient. A gradient is a function that
# interpolates many values, which may be numbers, colors, or entire
# patterns and materials.
# By convention numeric gradients are called value-gradients, color ones
# are called color-gradients, and so on.
# As a function, a gradient associates a value to a real variable.
# You can picture it in Cartesian coordinates, putting the variable on
# the abscissa and the associated value on the ordinate.
# Here the first expected argument is the abscissa at which the value
# of the function is specified, then the value follows. The value is
# specified both at the right and at the left of the abscissa in order
# to model discontinuities, so it is composed of two numbers.
# You can specify as many abscissas (and values) as you want.
# This gradient is used to shape the output of our fractal pattern,
# which we will use to describe the spatial density of the cloud.
# Here the gradient smoothly blends from the value of 1 at the point 0.3
# to the value of 0 at the point 0.5.
# Normally the output would be a value between 0 and 1, but we make it
# drop off rapidly from 1 to 0 to model masses of clouds that are
# denser at their center and that disappear at their boundaries.

cloudpattern2 = s.newPattern( "radial",
    [ "value", 0.8, 1.0, 1.0,
               1.0, 0.0, 0.0,
      "scale", 1.2 ] )

# Make a radial pattern, that is to say a spherical figure that is
# dense at its center and that has zero density at its border.
# This sphere has a radius of 1.2.
cloudpattern = s.newPattern( "compose",
    [ "patterns", cloudpattern1, cloudpattern2 ] )

# Compose the two patterns. Here the output of cloudpattern1 is used
# as the input of cloudpattern2. Since the radial pattern uses its
# input to scale its output, the result will be a sphere containing a
# granitic texture whose intensity diminishes near the border.

cloudinterior = s.newInterior( "cloud",
    [ "kr", vector3( 0.7, 0.85, 1.0 ),
      "kaf", vector3( 0.4 * 0.5, 0.7 * 0.5, 1.0 * 0.5 ),
      "density", 1.0,
      "density", cloudpattern,
      "sampling", 20.0,
      "shadow-caching", vector3( -1.2, -1.2, -1.2 ), vector3( 1.2, 1.2, 1.2 ),
      "density-caching", 2048, vector3( -1.2, -1.2, -1.2 ), vector3( 1.2, 1.2, 1.2 ) ] )

# Here you should note how we described the density.
# It has been specified with a single value of 1 and then with the
# pattern we modeled before. The value of 1 will be used as an input
# to cloudpattern1, which will scale its output by this factor. In
# this case there will be no scaling, and the output will go from 1 to
# 0, as we stated above.
# The "shadow-caching" and "density-caching" attributes enable two
# different caching mechanisms that speed up computations. Both
# require a bounding box to work in, and the density cache also
# requires the maximum allowed memory occupancy, expressed in Kb
# (here 2048, i.e. 2 Mb).
# The "sampling" value specifies how many samples will be taken along
# a segment of unit length.

s.interiorBegin( cloudinterior )
cloud = s.newMaterial( "transparent", [] )
s.interiorEnd()

s.materialBegin( cloud )
s.addObject( s.newObject( "sphere", [ "radius", 1.2 ] ) )
s.materialEnd()

saver = s.newImager( "tga-saver", [ "file", "cloud.tga" ] )

s.imagerBegin( saver )
camera = s.newCamera( "pinhole",
    [ "eye", vector3( 0, -4, 0 ), "aim", vector3( 0, 0, 0 ) ] )
s.imagerEnd()

s.render( camera, 300, 300 )
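To make the value-gradient idea more concrete, here is a plain-Python sketch of how such a gradient can be evaluated. This is purely illustrative and not part of the Lightflow API: the function name `gradient_value` and the knot representation are our own, and we interpolate linearly between knots where Lightflow blends smoothly.

```python
def gradient_value(knots, x):
    """Evaluate a value-gradient at abscissa x.

    knots is a list of (abscissa, left_value, right_value) triples,
    sorted by abscissa. Between two knots the value is interpolated
    from the right value of the lower knot to the left value of the
    upper knot; the left/right pair allows discontinuities at a knot.
    """
    if x <= knots[0][0]:
        return knots[0][1]          # clamp below the first knot
    if x >= knots[-1][0]:
        return knots[-1][2]         # clamp above the last knot
    for (x0, l0, r0), (x1, l1, r1) in zip(knots, knots[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return r0 + t * (l1 - r0)

# The cloud gradient from the example above: 1 at 0.3, 0 at 0.5.
cloud_knots = [(0.3, 1.0, 1.0), (0.5, 0.0, 0.0)]
print(gradient_value(cloud_knots, 0.2))  # below the first knot: 1.0
print(gradient_value(cloud_knots, 0.4))  # halfway between the knots
print(gradient_value(cloud_knots, 0.9))  # above the last knot: 0.0
```

Fractal noise fed through this shape stays at full density below 0.3 and fades to nothing above 0.5, which is exactly the dense-core, soft-boundary behavior the cloud example relies on.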
In my opinion the ocean is a beautiful subject. It is fascinating, amusing, romantic and full of memories... but what really matters are its wonderful reflections, which the average computer graphics person appreciates a lot. So let's go.
from lightflow import *

s = scene()

# Specify some rendering settings. These values are used by all the
# built-in types, so they are grouped under a unique interface, named
# "default".
s.newInterface( "default", [ "trace-depth", 5, "lighting-depth", 3 ] )

# Make a bright, bright light to model the sun.
s.lightOn( s.newLight( "point",
    [ "position", vector3( 0, 900, 100 ),
      "color", vector3( 60000.0, 60000.0, 60000.0 ) ] ) )

# Model the water waves with a fractal pattern.
s.transformBegin( transform().scaling( vector3( 10, 5, 5 ) ) )
waterbumps = s.newPattern( "multifractal",
    [ "basis", "sin",
      "scale", 2.0,
      "depth", 0.1,
      "turbulence.distortion", 1.0,
      "turbulence.omega", 0.1, 0.3,
      "turbulence.lambda", 2.0,
      "turbulence.octaves", 5 ] )
# The "basis" value describes the fractal basis, here set to "sin",
# as waves have a sinusoidal origin.
# "scale" represents the width of the waves. This factor is then
# multiplied by the scaling transform we set up before.
# "depth" represents the depth of the waves when they are used to
# displace a surface.
# The "turbulence." keywords are turbulence parameters, which are
# explained in the "multifractal" documentation.
s.transformEnd()

# Model water with a material.
water = s.newMaterial( "physical",
    [ "fresnel", 1, 0.3, 0.0,
      "IOR", 1.33,
      "kr", vector3( 1, 1, 1 ),
      "kt", vector3( 1, 1, 1 ),
      "kd", 0.0,
      "km", 0.03,
      "shinyness", 1.0,
      "displacement", waterbumps ] )
# The chosen type is "physical", because it provides physical
# parameters, very well suited to model liquids, glasses and metals.
# Note the use of the waterbumps pattern as a displacement.

# Model the ocean floor in a similar way.
earthbumps = s.newPattern( "multifractal",
    [ "basis", "sin",
      "scale", 10.0,
      "depth", 0.5,
      "turbulence.distortion", 1.0,
      "turbulence.omega", 0.1, 0.3,
      "turbulence.lambda", 2.0,
      "turbulence.octaves", 5 ] )
earth = s.newMaterial( "diffuse",
    [ "kr", vector3( 0.8, 0.75, 0.7 ),
      "displacement", earthbumps ] )

# Here starts the sky.
# It contains a lot of volumetric clouds, especially at the horizon,
# so it is a bit complex.
# Volumetric clouds are modeled with an interior and a pattern that
# modulates its density.
s.transformBegin( transform().scaling( vector3( 100, 100, 100 ) ) )
s.transformBegin( transform().translation( vector3( 0, 0, -0.5 ) ) )
cloudpattern = s.newPattern( "multifractal",
    [ "value", 0.0, 1.0, 1.0,
               0.3, 0.1, 0.1,
      "scale", 1.0,
      "turbulence.distortion", 1.0,
      "turbulence.omega", 0.5, 0.8,
      "turbulence.lambda", 2.0,
      "turbulence.octaves", 3 ] )
# Make the output distribution go from 1 to 0.1 to model clouds.
s.transformEnd()
s.transformEnd()

cloudinterior = s.newInterior( "cloud",
    [ "kr", vector3( 2.0, 0.8, 0.4 ),
      "kaf", vector3( 0.6 * 0.035, 0.8 * 0.035, 0.4 * 0.035 ),
      "density", 1.0,
      "density", cloudpattern,
      "sampling", 0.07,
      "shadow-caching", vector3( -1000, -1000, -1 ), vector3( 1000, 1000, 100 ),
      "density-caching", 2048, vector3( -1000, -1000, -1 ), vector3( 1000, 1000, 100 ) ] )
# Note that our scene will have a radius of 1000 units, and our camera
# will be at its center, so with a sampling factor of 0.07 we allow 70
# samples per ray: an enormous quantity! This will make our scene
# render slowly, but the caches will help. Obviously, if you remove
# these volumetric clouds the rendering will be much faster.
# Here we don't care about speed, however, since we only want to
# produce a single, astonishing image...

# Now model the backdrop, a blue sky made of flat clouds that will be
# wrapped onto a sphere.
skypattern1 = s.newPattern( "linear-v",
    [ "color", 0.2, vector3( 0.8, 0.9, 1.0 ), vector3( 0.8, 0.9, 1.0 ),
               1.0, vector3( 0.2, 0.4, 0.9 ), vector3( 0.2, 0.4, 0.9 ) ] )

s.transformBegin( transform().scaling( vector3( 80, 80, 10 ) ) )
skypattern2 = s.newPattern( "multifractal",
    [ "color", 0.2, vector3( 0.2, 0.4, 0.9 ), vector3( 0.2, 0.4, 0.9 ),
               1.0, vector3( 0.2, 0.3, 0.7 ), vector3( 0.2, 0.3, 0.7 ),
      "scale", 1.1,
      "turbulence.distortion", 1.0,
      "turbulence.omega", 0.5, 0.8 ] )
s.transformEnd()

skypattern = s.newPattern( "blend",
    [ "pattern", skypattern1,
      "gradient", 0.3, skypattern1, skypattern1,
                  1.0, skypattern2, skypattern2 ] )

# Note that we actually created two patterns, and merged them together
# with a "blend".
# The first pattern is a linear gradient that spans the V direction of
# the parametric surface it is attached to.
# In this case we will use it on a sphere, where the V direction is
# the one connecting the north and south poles (that is to say, a line
# of constant U and varying V is a meridian).
# This gradient associates a color to each parallel of the sphere,
# going from bright blue near the horizon to dark blue near the
# zenith.
# The second pattern is again a "multifractal", which we use to model
# very distant clouds. Note that we stretch it with a scaling, so that
# the clouds will look wide and thin.
# The "blend" pattern finally blends the two. It uses the pattern
# specified with "pattern" to mix several patterns. In particular, it
# defines a pattern-gradient that smoothly interpolates between
# different patterns. In this case we use the linear pattern both to
# drive the distribution of the gradient and to model part of the
# gradient itself.
# The numeric value of the linear pattern has not been specified (with
# a value-gradient), so it goes from 0 to 1 by default.
# Thus the "blend" will make a transition that goes from a clear and
# bright sky near the horizon (at the V value of 0.3) to a cloudy sky
# near the zenith (at the V value of 1.0).

# Create the sky material.
s.interiorBegin( cloudinterior )
sky = s.newMaterial( "matte",
    [ "kc", vector3( 1, 1, 1 ),
      "kc", skypattern,
      "shadowing", 0.0 ] )
s.interiorEnd()
# "matte" is a material meant to create matte objects, which are not
# shaded by light but emit a uniform color. In this case this color is
# first set to white (1, 1, 1) and then scaled by the skypattern.
# The result will be the skypattern itself.

# Create a water disc.
s.materialBegin( water )
s.addObject( s.newObject( "disc", [ "radius", 1000.0 ] ) )
s.materialEnd()

# Create an under-water disc made of earth.
s.transformBegin( transform().translation( vector3( 0, 0, -10 ) ) )
s.materialBegin( earth )
s.addObject( s.newObject( "disc", [ "radius", 1000.0 ] ) )
s.materialEnd()
s.transformEnd()

# Create the sky sphere.
s.transformBegin( transform().scaling( vector3( 1, 1, 0.1 ) ) )
s.materialBegin( sky )
s.addObject( s.newObject( "sphere", [ "radius", 1000.0 ] ) )
s.materialEnd()
s.transformEnd()

saver = s.newImager( "tga-saver", [ "file", "ocean.tga" ] )

s.imagerBegin( saver )
camera = s.newCamera( "pinhole",
    [ "eye", vector3( 0, -8, 4 ), "aim", vector3( 0, 0, 4 ) ] )
s.imagerEnd()

s.render( camera, 400, 300 )
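The pattern-gradient mechanism behind "blend" can be sketched in plain Python. Again this is only an analogue, not the Lightflow API: the real class mixes full 3D patterns and interpolates smoothly, while here we mix scalar functions of the V coordinate with linear weights, and all names are our own.

```python
def blend(control, gradient, v):
    """Mix the patterns listed in gradient, driven by control(v).

    gradient is a list of (abscissa, pattern) pairs. The value of the
    control pattern picks a position along the gradient, and the two
    bracketing patterns are interpolated there.
    """
    c = control(v)
    if c <= gradient[0][0]:
        return gradient[0][1](v)      # only the first pattern
    if c >= gradient[-1][0]:
        return gradient[-1][1](v)     # only the last pattern
    for (a0, p0), (a1, p1) in zip(gradient, gradient[1:]):
        if a0 <= c <= a1:
            t = (c - a0) / (a1 - a0)
            return (1 - t) * p0(v) + t * p1(v)

# Scalar stand-ins for the two sky patterns:
clear_sky = lambda v: v        # grows with V, like the "linear-v" pattern
cloudy_sky = lambda v: 0.5     # a constant stand-in for the fractal clouds

sky_gradient = [(0.3, clear_sky), (1.0, cloudy_sky)]

# Below V = 0.3 only clear_sky contributes; at V = 1.0 only cloudy_sky.
print(blend(clear_sky, sky_gradient, 0.2))  # 0.2
print(blend(clear_sky, sky_gradient, 1.0))  # 0.5
```

Note how `clear_sky` appears twice, just as skypattern1 does above: once as the control that drives the blend, and once as an entry of the gradient being blended.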
Now we come to the most astonishing feature of the
Lightflow Rendering Tools, the one that will make your
images real and their colors alive: global illumination.
We'll demonstrate it with the famous Cornell Box, an experimental
scene that was devised at Cornell University to test different
rendering algorithms.
In the last five years many renderers have integrated special
techniques to handle light transport phenomena, but they
almost always provided slow algorithms that would also fail in
many situations, producing both incorrect results and
unaesthetic artifacts.
Lightflow's Global Illumination Engine does
produce perfect images, which are as physically accurate as
they are good-looking to the human eye. And it does so in the fastest way...
from lightflow import *

s = scene()

s.newInterface( "default",
    [ "trace-depth", 6,
      "radiosity-depth", 6,
      "radiosity-samples", 350,
      "radiosity-threshold", 0.1,
      "radiosity-reuse-distance", 0.4, 0.01,
      "photon-count", 300000 ] )

# The "trace-depth" attribute controls the maximum number of
# ray-traced light bounces.
# The "radiosity-depth" attribute controls the maximum number of
# radiosity iterations, that is to say the number of bounces of the
# indirect illumination.
# The "radiosity-samples" attribute sets the number of rays that are
# used to sample the light space at every surface location. Normally
# values between 200 and 500 produce good results. Note that this
# parameter has a strong influence on the rendering time, since light
# sampling is one of the most time-consuming tasks.
# "radiosity-threshold" sets the maximum error bound in the radiosity
# estimation. A value of 0.1 means that the error is allowed to be
# 10% of the real value.
# "radiosity-reuse-distance" sets the maximum and minimum distance
# between different sampling locations. This parameter is the only
# one that must be set according to the scene size. The smaller these
# values are, the better the result will be, but usually a good value
# for the maximum distance is about one fifth of the length of the
# visible surfaces. In this case we are modeling a room with sides 2
# units long, hence a value of 0.4 will prove good enough. The
# minimum distance should be an order of magnitude smaller.
# The "photon-count" parameter controls the number of photons that
# are spread into the scene to compute the global illumination.
# Obviously more photons mean better approximations and longer times.

s.lightOn( s.newLight( "soft-conic",
    [ "position", vector3( 0, 0, 0.98 ),
      "direction", vector3( 0, 0, -1 ),
      "angle", 0.0, 3.141592 / 2.0,
      "radius", 0.05,
      "samples", 7,
      "color", vector3( 18, 18, 18 ) ] ) )

# We simulate an area light with a conic light that produces soft
# shadows.
# The spreading angle of the light is set to 90 degrees (PI / 2 in
# radians) to obtain the same light distribution as a patch light
# source. We could also have used a real "patch" light, with a
# well-defined surface, but the computation times would have been
# longer. Check the class documentation to see how area lights work,
# and how fake soft shadows may be obtained with the "soft" and
# "soft-conic" types.

neon = s.newMaterial( "matte",
    [ "kc", vector3( 1, 1, 1 ), "shadowing", 0.0 ] )
whitewash = s.newMaterial( "diffuse",
    [ "kr", vector3( 0.8, 0.8, 0.8 ), "radiosity", 1 ] )
redwash = s.newMaterial( "diffuse",
    [ "kr", vector3( 0.8, 0.1, 0.1 ), "radiosity", 1 ] )
bluewash = s.newMaterial( "diffuse",
    [ "kr", vector3( 0.1, 0.1, 0.8 ), "radiosity", 1 ] )
metal = s.newMaterial( "physical",
    [ "fresnel", 1,
      "IOR", 6.0,
      "kr", vector3( 0.95, 0.95, 1 ),
      "kd", 0.1,
      "km", 0.1,
      "shinyness", 0.8,
      "radiosity", 0,
      "caustics", 1, 1 ] )
glass = s.newMaterial( "general",
    [ "fresnel", 1,
      "IOR", 1.57,
      "kdr", vector3( 0, 0, 0 ),
      "kdt", vector3( 0, 0, 0 ),
      "ksr", vector3( 1, 1, 1 ), vector3( 0.5, 0.8, 1 ),
      "kst", vector3( 1, 1, 1 ), vector3( 1, 0.6, 0.2 ),
      "kr", vector3( 1, 1, 1 ),
      "kt", vector3( 1, 1, 1 ),
      "kd", 0.0,
      "km", 0.03,
      "shinyness", 1.0,
      "transmission", 0,
      "radiosity", 0,
      "caustics", 1, 1 ] )

# The light fixture: a recessed white housing with a "neon" panel.
s.materialBegin( whitewash )
s.addObject( s.newObject( "patch",
    [ "points", vector3( -0.25, -0.25, 0.995 ), vector3( 0.25, -0.25, 0.995 ),
                vector3( -0.25, 0.25, 0.995 ), vector3( 0.25, 0.25, 0.995 ) ] ) )
s.addObject( s.newObject( "patch",
    [ "points", vector3( -0.25, 0.25, 0.99 ), vector3( 0.25, 0.25, 0.99 ),
                vector3( -0.25, 0.25, 1.00 ), vector3( 0.25, 0.25, 1.00 ) ] ) )
s.addObject( s.newObject( "patch",
    [ "points", vector3( -0.25, -0.25, 0.99 ), vector3( -0.25, -0.25, 1.00 ),
                vector3( 0.25, -0.25, 0.99 ), vector3( 0.25, -0.25, 1.00 ) ] ) )
s.addObject( s.newObject( "patch",
    [ "points", vector3( 0.25, -0.25, 0.99 ), vector3( 0.25, -0.25, 1.00 ),
                vector3( 0.25, 0.25, 0.99 ), vector3( 0.25, 0.25, 1.00 ) ] ) )
s.addObject( s.newObject( "patch",
    [ "points", vector3( -0.25, -0.25, 0.99 ), vector3( -0.25, 0.25, 0.99 ),
                vector3( -0.25, -0.25, 1.00 ), vector3( -0.25, 0.25, 1.00 ) ] ) )
s.materialEnd()

s.materialBegin( neon )
s.addObject( s.newObject( "patch",
    [ "points", vector3( -0.25, -0.25, 0.99 ), vector3( 0.25, -0.25, 0.99 ),
                vector3( -0.25, 0.25, 0.99 ), vector3( 0.25, 0.25, 0.99 ) ] ) )
s.materialEnd()

# The walls of the box: white floor, ceiling and back wall,
# a red left wall and a blue right wall.
s.materialBegin( whitewash )
s.addObject( s.newObject( "patch",
    [ "points", vector3( -1, -1, -1 ), vector3( -1, 1, -1 ),
                vector3( 1, -1, -1 ), vector3( 1, 1, -1 ) ] ) )
s.addObject( s.newObject( "patch",
    [ "points", vector3( -1, -1, 1 ), vector3( 1, -1, 1 ),
                vector3( -1, 1, 1 ), vector3( 1, 1, 1 ) ] ) )
s.addObject( s.newObject( "patch",
    [ "points", vector3( -1, 1, -1 ), vector3( -1, 1, 1 ),
                vector3( 1, 1, -1 ), vector3( 1, 1, 1 ) ] ) )
s.materialEnd()

s.materialBegin( redwash )
s.addObject( s.newObject( "patch",
    [ "points", vector3( -1, -1, -1 ), vector3( -1, -1, 1 ),
                vector3( -1, 1, -1 ), vector3( -1, 1, 1 ) ] ) )
s.materialEnd()

s.materialBegin( bluewash )
s.addObject( s.newObject( "patch",
    [ "points", vector3( 1, -1, -1 ), vector3( 1, 1, -1 ),
                vector3( 1, -1, 1 ), vector3( 1, 1, 1 ) ] ) )
s.materialEnd()

# The two spheres: one of glass, one of metal.
s.materialBegin( glass )
s.transformBegin( transform().translation( vector3( -0.45, 0, -0.1 ) ) )
s.addObject( s.newObject( "sphere", [ "radius", 0.35 ] ) )
s.transformEnd()
s.materialEnd()

s.materialBegin( metal )
s.transformBegin( transform().translation( vector3( 0.45, 0.4, -0.65 ) ) )
s.addObject( s.newObject( "sphere", [ "radius", 0.35 ] ) )
s.transformEnd()
s.materialEnd()

saver = s.newImager( "tga-saver", [ "file", "cornell.tga" ] )

s.imagerBegin( saver )
camera = s.newCamera( "pinhole",
    [ "eye", vector3( 0, -2.99, 0 ), "aim", vector3( 0, 0, 0 ) ] )
s.imagerEnd()

s.radiosity()
s.render( camera, 300, 300 )
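Since "radiosity-reuse-distance" is the one parameter that must track the scene size, the rule of thumb given in the comments (maximum distance about one fifth of the length of the visible surfaces, minimum an order of magnitude smaller) can be captured in a tiny helper. The function is our own convenience, not part of the Lightflow API; note that the example above actually tightens the minimum further, to 0.01.

```python
def radiosity_reuse_distance(surface_length):
    """Suggest (maximum, minimum) values for "radiosity-reuse-distance".

    surface_length is the typical length of the visible surfaces, in
    scene units. Rule of thumb: maximum = length / 5, minimum = one
    order of magnitude below the maximum.
    """
    d_max = surface_length / 5.0
    d_min = d_max / 10.0
    return d_max, d_min

# The Cornell Box walls are 2 units long:
print(radiosity_reuse_distance(2.0))  # about (0.4, 0.04)
```

For a scene ten times larger, the same call with `surface_length=20.0` would suggest roughly (4.0, 0.4); scaling these two values with the scene is what keeps the radiosity sampling density sensible.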
Here this manual ends. By now you should have at least understood the
main mechanisms upon which Lightflow is based; otherwise
you had better reread the first chapters. If you cannot
comprehend some of the examples because the effect of some
classes is not very clear, try checking the relevant class documentation.
If something still remains obscure after that, feel free to contact
me via email, using the subject Lightflow Explanations.
I will also appreciate any suggestions (in that case, please use the
subject Lightflow Suggestions).
For now, enjoy Lightflow and have a good time!