PovRay – for beginners

Work out all the exercises in a separate Word file and add it to your logbook.
PovRay is a program with which you can create very realistic-looking images by means of scripts. This is done by building objects and capturing them with a virtual camera.
Your first scene
The coordinate system of PovRay
To understand how PovRay works, you first need to know where the camera is and in which direction it is looking. We can describe this with a three-dimensional coordinate system.
In maths class you usually use the following system:
the positive x-axis points to the right, the positive z-axis points up and the positive y-axis points backwards.
In PovRay it looks slightly different. The positive x-axis points to the right, the positive y-axis points up and the positive z-axis points backwards, through the screen.
        ^ +Y
        |      +Z
        |     /
        |    /
        |   /
        |  /
        | /
        |/
        +------------>  +X
For negative values the axes point the other way:
                 ^ +Y
                 |      +Z
                 |     /
                 |    /
                 |   /
                 |  /
     -X          | /          +X
    <------------+------------->
                /|
               / |
              /  |
             /   |
           -Z    |
                 v -Y
To create a scene you need nothing more than an editor such as Notepad.
Exercise 1
Start Notepad and save an empty file under the name picture1.pov.
Then enter the following (note: PovRay makes a distinction between upper case and lower case letters):
#include "colors.inc"
#include "shapes.inc"
#include "textures.inc"
camera {
location
look_at
}
// The include files contain
// pre-defined scene elements
<0, 2, -3>
<0, 1, 2>
The first line reads in handy definitions for colors.
The second line makes sure that PovRay knows the commands for shapes (sphere, box, etc.). The third line imports handy textures into PovRay (a texture is a description of the material of an object).
Exercise 2
Open the three files (colors.inc, shapes.inc, textures.inc) in Notepad.
Give the names of three defined colors, shapes and textures.
Besides these three you can also include other files.
Sometimes included files include other files in turn. You may nest includes up to ten levels deep.
PovRay looks for these files first in the home directory (that is, the current directory) and then in any folders given with the +L (Library Path) option.
It therefore makes sense to put all include files (.inc) in a folder named include.
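For example (a sketch; the folder name include and the file picture1.pov are simply the ones used in this lesson), you could point PovRay at that folder when rendering:

POVRAY +Linclude -Ipicture1.pov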
Positioning the camera
After the three includes, the following lines describe where the camera stands and in which direction it is looking:
camera {
  location <0, 2, -3>
  look_at  <0, 1, 2>
}
The numbers after location are the X, Y and Z coordinates of the position. You may have noticed that the coordinates are written between angle brackets (<>). In mathematics you use round brackets to write the coordinates of a point. Angle brackets are used here because the position is treated as a so-called vector. That is (in your mind) an arrow which, in this case, points from the origin to the intended point.
After look_at you give the coordinates of the point the camera is looking at. This point becomes the centre of the image of your scene.
Exercise 3
Describe the position of the point the camera is looking at relative to the camera position (for example like this: this point lies 3 units higher, 2 units lower and 1 unit to the left of the camera).
Describing an object
Now that the camera is in place, we can start placing objects in your virtual world.
We choose a yellow sphere. To do so, type the following lines at the end of your file (picture1.pov) and save it.
sphere {
  <0, 1, 2>, 2
  texture {
    pigment {color Yellow}    // Yellow is pre-defined in COLORS.INC
  }
}
First you give a vector indicating where the centre of the sphere lies. After that comes the radius of the sphere.
In this case the sphere lies, seen from the origin, 1 unit up and 2 units back.
Exercise 4
Describe the position of the centre of the sphere as seen from the camera (again use a description like: this point lies 3 units higher, 2 units lower and 1 unit to the left of the camera).
More about texture
Here the texture only states that the sphere should be yellow. The name Yellow (note the capital letter) is defined in colors.inc.
texture {
  pigment {color Yellow}    // Yellow is pre-defined in COLORS.INC
}
Besides the color, the roughness, the reflectivity and the kind of material can also be specified here.
The color is the minimum you have to specify in a texture. The color means the color the object gets when it is fully lit.
If you were to draw a colored sphere yourself, you would give the part of the sphere that lies in shadow darker colors and the lit parts lighter ones.
The strength of PovRay is that you do not have to worry about that: the program calculates the correct color for every point.
So pigment states what color the object HAS, not how it LOOKS.
With pigment you can also specify color patterns. That is not done here. With the keyword color you indicate that the object gets one uniform color.
If the color you want is not available under a particular name (Yellow, White), you can define a color yourself by stating how much red, green and blue have to be mixed to get the desired color. Pink can be described as follows:
color red 1.0 green 0.8 blue 0.8
If you use a drawing program or a color picker, values from 0 to 255 are often used for the three primary colors R, G and B. You can easily convert these RGB values to the fractional values that PovRay uses:
PovRayRBG (x256)
RGBpovRay (:256)
Exercise 5
Check that the value of this pink color in RGB is (255, 204, 204).
Exercise 6
Convert the RGB value (233, 100, 6) to PovRay color codes.
Describing a light source
If you were to make an image of your scene right now, it would be very disappointing: your scene would come out as a completely black picture. Why is that? There is no light source in your virtual world yet. So let us quickly bring some light into the darkness!
Add the following line to (the end of) your scene file:
light_source { <2, 4, -3> color White}
Exercise 7
Based on the earlier exercises you can probably work out what this statement means. Write it down precisely.
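For reference, a sketch of what the complete picture1.pov should now roughly look like (assembled from the fragments above):

#include "colors.inc"
#include "shapes.inc"
#include "textures.inc"

camera {
  location <0, 2, -3>
  look_at  <0, 1, 2>
}

sphere {
  <0, 1, 2>, 2
  texture {
    pigment {color Yellow}
  }
}

light_source { <2, 4, -3> color White}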
Exercise 8
Now close Notepad and open a command prompt (Start > All Programs > Accessories > Command Prompt).
In it, browse to the folder in which you installed PovRay.
Use the following commands to do so:
- You switch to drive D: with: D:
- You switch to the root of a drive with: CD \
- You switch to a folder with: CD foldername
- You can easily jump back one level (for example if you made a mistake) with: CD ..
Now make the first image of your scene with the following command:
POVRAY +W160 +H120 +P +X +D0 -V -Ipicture1.pov
Making an image like this is called rendering.
Exercise 9
Under which name is your image saved? Open the image in Paint or another drawing program. Include the image in your Word file.
Find out how you can have the image saved as a .GIF file. You will find the answer in the PovRay manual (or on the internet). In your Word file, give the command line that does this.
Exercise 10
You can save yourself some typing by creating a so-called batch file containing exactly this command. It works like this:
- Open Notepad
- Type again: POVRAY +W160 +H120 +P +X +D0 -V -Ipicture1.pov
- Save the file as Picture1.BAT
If you now type picture1.bat, its contents are executed as a single command.
Nicer scenes
The result is probably a bit disappointing. The sphere does look very plainly yellow. We are going to improve that a little in this second part.
Reflectivity
An object that reflects light will have a bright spot somewhere on its surface. We can achieve this by giving the sphere one extra property. Add the line with finish to your scene.
sphere {
  <0, 1, 2>, 2
  texture {
    pigment {color Yellow}    // Yellow is pre-defined in COLORS.INC
    finish {phong 1}
  }
}
Exercise 11
Look at the result by rendering your scene again and include the image in your write-up. Make two more images in which you replace phong 1 by phong 0.5 and phong 0. Include these two images in your write-up as well.
Roughness
Everything you have seen so far does not, of course, really exist. The reflections were calculated by the computer by absorbing a smaller or larger part of the light (all inside the virtual world).
In the same way the computer can also simulate roughness.
On a smooth surface the normal vector is always perpendicular to the surface. On a rough surface the normal vector is of course also perpendicular to the surface, but the surface keeps facing a slightly different direction, so the normal vectors end up pointing criss-cross through one another.
PovRay simulates this by changing the direction of the normal vectors a little everywhere on the surface of an object.
Exercise 12
Change your scene as follows and render your scene again.
sphere {
  <0, 1, 2>, 2
  texture {
    pigment {color Yellow}
    normal {bumps 0.4 scale 0.2}
    finish {phong 1}
  }
}
Color patterns
So far we have simply given the sphere a single color, but it can be made much more complex.
Take a look at the example below.
sphere {
  <0, 1, 2>, 2
  texture {
    pigment {
      wood
      color_map {
        [0.0 color DarkTan]
        [0.9 color DarkBrown]
        [1.0 color VeryDarkBrown]
      }
      turbulence 0.05
      scale <0.2, 0.3, 1>
    }
    finish {phong 1}
  }
}
With the keyword 'wood' you add a pattern to the pigment that resembles the concentric (circular) rings you see in wood.
With color_map you indicate between which three colors the pattern should vary. In this example 90% of the range (from 0 to 0.9) is used to go from DarkTan to DarkBrown, and in the remaining 10% the color gradually changes from DarkBrown to VeryDarkBrown.
Turbulence mixes the colors up a little, so that they do not form perfect circles. With scale you can adjust the dimensions of the pattern so that it fits the size of the object nicely. Patterns like this are set up by default to fit exactly once on a sphere of radius 1. With the vector <0.2, 0.3, 1> we shrink the color pattern in the x and y directions (the z direction stays unchanged). If you use numbers larger than 1 in the vector, the color pattern is stretched.
Exercise 12
Work a color pattern into your scene and render it again.
Ready-made textures
PovRay has sophisticated ready-made textures in the include files textures.inc and stones.inc. Sometimes a texture only affects the pigment, but sometimes it also affects the finish.
Exercise 13
Try the following definition on your sphere.
sphere {
  <0, 1, 2>, 2
  texture {
    pigment {
      DMFWood4    // Pre-defined from textures.inc
      scale 4     // Scale by the same amount in all directions
    }
    finish {Shiny}    // This finish defined in textures.inc
  }
}
You can see that in this case the scale of the wood texture DMFWood4 is stretched by a factor of 4. Here 4 is shorthand for <4,4,4>, so the texture is stretched by a factor of 4 in all three directions.
Exercise 14
Browse through the file textures.inc and find out which textures are defined there.
Try at least 3 textures (write them in the place where DMFWood4 is now) in combination with a different finish (instead of Shiny).
Copy the source code and the rendered images into your logbook.
Example of a texture:
sphere {
  <0, 1, 2>, 2
  texture { PinkAlabaster }
}
Other objects
So far you have only used a sphere in your scene. There are of course many other objects you can use.
Planes
An object you will come across often is a flat plane.
Exercise 15
Add the following plane to your scene:
plane {
  <0, 1, 0>, 0
  pigment {
    checker
    color Red
    color Blue
  }
}
You can see that checker stands for the checkerboard pattern, here with the colors Red and Blue.
You can see that the ball casts a shadow on the floor. The shadow is calculated very precisely and has sharp edges. In reality a shadow is also surrounded by a penumbra. Later you will see how you can get such a softer shadow edge by using light sources that have a size of their own (the light source you currently have in your scene is a so-called point source; think of a star in the evening sky).
The plane is infinitely large in both directions. The vector <0, 1, 0> is the vector that is perpendicular to the surface: the normal vector (if you were standing on the surface, the normal would point straight up). The number after it is the distance by which the plane is shifted from the origin along the normal; in this case the floor is placed at Y=0.
You do not see a texture statement here. PovRay adds the texture statement for you as soon as you use pigment on its own. So instead of typing "texture {pigment {...}}" you can make do with just pigment. You may not leave texture out when you use a texture identifier as in exercise 14 ( texture { PinkAlabaster } ), or when you use layered textures (these will come up later).
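A small sketch of the difference, reusing the examples from above:

// pigment on its own: PovRay quietly wraps it in a texture { } for you
plane {
  <0, 1, 0>, 0
  pigment { checker color Red color Blue }
}

// a texture identifier still needs the texture keyword
sphere {
  <0, 1, 2>, 2
  texture { PinkAlabaster }
}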
Unit vectors
Because the vectors <1,0,0>, <0,1,0> and <0,0,1> are used so often, PovRay has 3 built-in identifiers for these unit vectors. They are called "x", "y" and "z" respectively.
The flat plane from exercise 15 can therefore also be written like this:
plane {
  y, 0
  pigment {... etc.
There are no brackets around these identifiers.
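These identifiers can also be multiplied by a number; as a sketch (anticipating the box, rotate and translate statements used further on):

box {
  <-1, 0, -1>, <1, 0.5, 3>
  pigment { color Red }
  rotate y*20        // the same as rotate <0, 20, 0>
  translate 2*x      // the same as translate <2, 0, 0>
}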
Box
The next object is the box.
Exercise 16
Now put the following box in your scene instead of the sphere and render again.
box {
  <-1, 0, -1>,     // Near lower left corner
  < 1, 0.5, 3>     // Far upper right corner
  pigment {
    DMFWood4       // Pre-defined from textures.inc
    scale 4        // Scale by the same amount in all directions
  }
  rotate y*20      // Equivalent to "rotate <0,20,0>"
}
The box is described by the three-dimensional coordinates of two opposite corner points. In the first vector you put the lowest x, y and z coordinates. In the second vector you put the highest x, y and z coordinates.
The sides of the box run parallel to the x, y and z axes. You can still rotate it arbitrarily afterwards. That happens in the rotate line. There you see y*20, which is shorthand for <0,1,0>*20 = <0,20,0>.
Cone
Here is the next object: the cone.
Exercise 17
Now put the following cone in your scene instead of the box and make the rendering.
cone {
  <0,1,0>, 0.3     // Center and radius of one end
  <1,2,3>, 1.0     // Center and radius of other end
  pigment {DMFWood4 scale 4 }
  finish {Shiny}
}
The shape is specified by giving the centre and the radius of both ends.
For a normal cone, one of the two ends must have a radius of 0. That end is then not a circle but a point. In the example of exercise 17 we speak of a truncated cone.
The ends are perpendicular to the imaginary axis that runs through the cone.
It is also possible to make an open cone. The ends are then see-through.
You do this by writing the keyword open after the second end:
cone {
  <0,1,0>, 0.3     // Center and radius of one end
  <1,2,3>, 1.0     // Center and radius of other end
  open             // Removes end caps
  pigment {DMFWood4 scale 4 }
  finish {Shiny}
}
Cylinder
A cylinder is defined in the following way:
cylinder {
  <0,1,0>,     // Center of one end
  <1,2,3>,     // Center of other end
  0.5          // Radius
  open         // Remove end caps
  pigment {DMFWood4 scale 4 }
  finish {Shiny}
}
In the include file "shapes.inc" you will come across more ready-made objects. They are roughly the size of a sphere with radius 1.
You can put them in your scene in the following way:
object {
  UnitBox
  pigment {DMFWood4 scale 4 }
  finish {Shiny}
  scale 0.75
  rotate <-20,25,0>
  translate y
}
1. What is ray-tracing?
2. How does ray-tracing work?
3. What is POV-Ray?
4. How do I set up POV-Ray?
What is ray-tracing?
Ray tracing is a way of making visual art in which a description of a scene is converted into an image in a mathematical way.
This is done by following (tracing) the path of rays of light, hence the name.
With ray tracing, very beautiful and complex scenes can be made. You do not need to be able to draw, paint or sculpt, which also means you can often get to a result quickly.
Usually a very large number of calculations (think of millions or billions) are needed to turn the description of the scene into an image. This process is called rendering. Because it is so labour-intensive, even a fast computer cannot show the result immediately.
Of course ray tracing is not a trick that suddenly makes creating art easy.
Some subjects are difficult or impossible to make with ray tracing software. If the scene contains mathematical objects such as spheres, cones and cubes, ray tracing software has no trouble with it. But something like a human face is very hard to describe in ray tracing software.
Pure mathematical shapes have a perfection that hardly ever occurs in reality: the objects are rendered razor-sharp and the shadows have sharp edges.
When did you last see a mathematically perfectly shaped pear?
Because of that perfection, images made with ray tracing sometimes have a somewhat strange, surrealistic look. That, by the way, is precisely a property that artists like to exploit! You are only a good raytracer once you are able to make more realistic, less perfect scenes.
How does ray-tracing work?
We won't go into all the gory details, but having a general understanding of what's going on
behind the scenes (so to speak) can be helpful when you start ray-tracing your own images.
Although there are several methods of ray-tracing, one of the most common (and the one the
POV-Ray software package uses) works something like this:
First, an internal model of the scene is generated, with your computer screen included as the
receiving "eye" in the model. Then, the software traces imaginary light rays backwards from
where their endpoint lies (a pixel on your computer screen) to their initial point (some light
source in the scene). This step is repeated, pixel by pixel, until the entire image has been
created.
The reason the software traces the light rays backwards, instead of starting at the light source,
is for efficiency's sake -- if a light ray doesn't end up on your screen, then you, as the user,
don't care about it (because you'll never see it). By tracing the light rays backwards, beginning
at the computer screen, the software can assure that every light ray it calculates is one you
care about, because it knows that it will end up on your screen.
In their journey, the light rays can be reflected by mirrors, refracted by glass, or undergo
various other contortions, all of which result in a single pixel of the final image. Because the
ray-tracing software must trace one ray of light for each pixel in the output image, and
because the light rays can undergo so many contortions, the process of ray-tracing can take a
very long time, depending on the size and complexity of the image and the processing power
of your computer. Unless you have an extraordinarily fast computer, you will most likely find
yourself waiting around impatiently for your scene to finish rendering once you've begun to
ray-trace in earnest.
What is POV-Ray?
PovRay is an easy-to-use, free software package of high quality. There are versions for the PC, Apple and Linux. The source code of the program is also freely available.
PovRay is a so-called rendering engine. You first create a text file in which the scene is described. The rendering engine then turns that into an image. PovRay itself has no visual user interface in which you can immediately see what you are making.
There are modelling programs available for PovRay that give you a rough impression of the end result while you are building the scene. However, we advise you to start with the basics, so that you properly understand how the modelling language of PovRay works.
How do I set up POV-Ray?
The latest version of POV-Ray can always be found at ftp://ftp.povray.org/. You will have to
download the correct version for your computer (there are versions available for most
operating systems) and to set it up.
Once you have POV-Ray, how you set it up is highly dependent on your operating system.
We're not about to teach you how to use your own computer; if you can't set it up yourself,
ask a local computer guru to help.
As we mentioned above, POV-Ray doesn't have much of an interface; on most operating
systems, you will give POV-Ray the name of your input file, the name of your output file, and
a whole bunch of other options via the command line. You will also need some form of image
viewer and/or converter in order to display the output files that POV-Ray creates; again, this
is highly operating-system-dependent.
POV-Ray also comes with documentation and example scenes; these make excellent
references if you're stuck or need to know more.
Ok, we're ready to start learning the real stuff now!
The Online POV-Ray Tutorial © 1996 The Online POV-Ray Tutorial ThinkQuest Team
--- lesson 2
The Online POV-Ray Tutorial
POV-Ray Basics
Before you can start creating scenes in POV-Ray, you need to know a few things: how to
describe objects in three dimensions, some of POV-Ray's basic notation, and other stuff.
This section will give you the background knowledge you'll need to get started.
Quick reference:
1. POV-Ray's Coordinate System
2. Vectors in POV-Ray
3. How to describe color: RGB and RGBF Vectors
4. Normal Vectors
5. POV-Ray Source Code
6. Comments in POV-Ray Source Code
7. Including files
POV-Ray's Coordinate System
The source code file, the file POV-Ray takes as input, is really one big list of descriptions of
objects. The very first thing we need in order to describe objects is a way of telling POV-Ray
where things go. Such a method is called a coordinate system. If you have taken elementary
algebra, you will already have experience with a simple coordinate system: a two-dimensional
(or 2D) Cartesian coordinate system. A quick graph of a 2D Cartesian plane looks something
like this:
Any position on this graph can be specified by a set of coordinates, usually written in the
form (x,y). The x coordinate corresponds to its position along the horizontal, or x,
axis, and the y coordinate corresponds to its position along the vertical, or y axis. For
example, (0,0) corresponds to the point in the middle of the graph, or the origin. The
point (1,3) corresponds to the point on the graph one unit right from the origin, and
three units up from the origin. Negative numbers can also be used: (-6,4) corresponds
to the point 6 units left from the origin, and four units up. You get the idea.
Now this is all well and good, but when we look at things other than our computer
screen, we notice we can observe three dimensions, not two: in other words, we describe
objects not just by how far to the right (or left) and how high (or low) they are, but also
how close they are in front (or in back) of you. In other words, to be able to describe a
real scene to POV-Ray, we need, in addition to the x and y coordinates, a third
coordinate. This coordinate is called (surprisingly enough) the z coordinate.
The coordinate system that POV-Ray uses, then, is called a three-dimensional (or 3D)
Cartesian coordinate system. A quick graph looks like this:
(You have to use your imagination somewhat: that third axis is not a diagonal but is
perpendicular to your computer screen -- imagine it shooting out at your face). As you
can see, it looks similar to the 2D graph, except that one additional axis has been added:
the z axis. Because of the additional possible direction, points in this coordinate system
must be described in the form (x,y,z). The point (0,0,0) corresponds to the origin,
or center of the graph, and (1,-2,7) corresponds to the point one unit to the right of,
two units below, and seven units behind the origin.
If you have experience with mathematical 3D coordinate systems, you will notice that
the axes are labelled slightly differently than the system most commonly used in
mathematical terms. The axis we have drawn above is not fixed in POV-Ray -- the way
the axis looks (in terms of which axes are which) really depends on where you place your
camera in POV-Ray. We'll get to explaining the camera soon. For now, just understand
that the labels on the axes may change, depending on how you position your camera.
The 3D graph above represents a coordinate system that POV-Ray can use. Visualizing
objects and scenes in three dimensions can be tricky. Often, a pad of paper and a pencil
can be extremely valuable tools, especially in more complex scenes. Alternatively, you
can take a look at the Resource Library for some graphical tools that may help.
Vectors in POV-Ray
POV-Ray calls the number triples that define positions position vectors. The term vector
refers to any group of numbers describing a certain thing -- there are color vectors and normal
vectors, for example, in addition to position vectors.
In POV-Ray, vectors are surrounded by angle brackets (that's < and >). For example, to
specify the origin in terms that POV-Ray understands, we would say <0,0,0>.
The magnitude of a vector can be thought of as the "length" of the vector. Imagine a line
from the origin to the point in the coordinate system represented by your vector. The
magnitude is the length of this line. (If you really care about the math, the magnitude can be computed as the square root of the sum of the squares of the elements of the vector -- but don't worry, you probably won't have to know that).
An important thing to know about is a POV-Ray feature called vector promotion. Vector
promotion is when a single number is substituted in place of a vector. The single number
is then promoted to a vector, one with all elements equal to that number. For example,
promoting the number 2 to a three-dimensional vector would result in the vector
<2,2,2>. Vector promotion is done automatically for you by POV-Ray in most cases -- just put in a single number instead of a vector. This feature allows you to quickly specify
similar vectors.
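As a small sketch of vector promotion at work (this particular sphere is only an illustration; the tutorial itself notes further on that scale 2 is the same as scale <2,2,2>):

sphere {
  <0, 0, 0>, 5
  pigment { color rgb <1, 0, 0> }
  scale 2        // promoted to scale <2, 2, 2>: twice as big in every direction
}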
How to describe color: RGB and RGBF Vectors
Much as any position within the scene can be specified by a three-element vector, so can any
color. In describing a position, each coordinate in the vector corresponds to the position along
a particular axis. In describing a color, each element of the vector corresponds to the amount
of a primary color -- red, green and blue. Such a vector is called a RGB vector (for red green
blue vector).
In a position vector, the individual elements can be any real number at all (actually, this
isn't quite true -- there are upper and lower limits set by the hardware constraints of
your computer). In a RGB vector, the numbers should be between 0.0 and 1.0. You
can have values higher than 1.0, but they don't correspond to any physical property
(what's greener than green?). A value of 1.0 means 100% of that color. For example,
the color black, which is actually the absence of all color, is described by the color vector
<0,0,0>. The color white, a complete combination of all three primary colors, is
specified by the color vector <1,1,1>. Try experimenting with the Color Tool to find
the color vectors for particular colors -- it will help you get a good "feel" for describing
colors in terms of POV-Ray color vectors.
In addition to RGB vectors, you can specify a color in POV-Ray with an RGBF vector.
As you might guess, a RGBF vector is like a RGB vector, but with one extra element -- the F, for filter. The filter value specifies how transparent the pigment is, ranging from
0.0 (not transparent at all) to 1.0 (100% transparent). RGB vectors have an implied
filter value of 0.0 -- in other words, a color specified by a RGB vector will be perfectly
opaque. A filter value of 1.0 means that all light will be let through, but the light will
still be filtered. For example, the RGBF vector <1,0,0,1> acts like red cellophane -- 100% of light is passed through, but it is filtered by the red pigment. RGBF vectors can
be a little confusing at first, but they aren't too difficult once you get the hang of it.
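For instance, a sketch of a partly transparent blue pigment (the exact values are only an illustration):

sphere {
  <0, 0, 0>, 5
  pigment { color rgbf <0, 0, 1, 0.7> }   // blue that lets most of the light through, filtered blue
}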
These are the most commonly-used ways of specifying color. There are a few more ways
to do it; if you want to read about them, look at the color section of the Language
Reference.
Normal Vectors
Occasionally you will be called upon to specify a normal vector in POV-Ray. Simply put, a
normal vector is a vector perpendicular to a given plane in three dimensions. Imagine a flat sheet of
paper. If you were to poke a pencil all the way through it so that the end of the pencil was
touching the paper and the pencil was standing straight up (with respect to the paper), the
pencil would represent the normal vector to the paper. In the picture below, the normal vector
is in red and the plane is in blue.
Note that the magnitude of normal vectors is not important (as long as it is non-zero).
This is because normal vectors are used to specify an orientation, not a distance.
POV-Ray is kind enough to automatically define three normal vectors for you: x
(corresponding to <1,0,0>), the normal vector for a plane lying along the y and z axes,
y (corresponding to <0, 1, 0>), the normal vector for a plane lying along the x and z axes,
and z (corresponding to <0, 0, 1>), the normal vector for a plane lying along the x and y
axes. Any time you are asked for a normal vector (or any vector, really) you can
substitute those letters.
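As a small sketch, using one of those built-in identifiers as the normal vector of a plane (the same form appears again later in this tutorial):

plane {
  y, 0    // y is the normal vector <0, 1, 0>: a horizontal floor through the origin
  pigment { color rgb <1, 1, 1> }
}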
POV-Ray Source Code
Source code is the name for the text you give to POV-Ray. POV-Ray reads the source code
file and outputs an image. There are three things you need to know about POV-Ray source
code:
1. POV-Ray source code is case sensitive
2. POV-Ray ignores whitespace
3. Ordering is unimportant
Case sensitive means that upper and lower-case letters are not treated as the same by POV-Ray. For example, to POV-Ray, sphere is not the same as Sphere and is not the same as
SpHeRe. Whitespace is the common name for any characters you can't directly see on screen
-- spaces, tab characters (the invisible characters put there when you press the Tab key),
carriage returns and line feeds (the invisible characters put there when you hit the Enter key).
Between any two words or symbols in your source code, POV-Ray doesn't care whether you
put one space, two spaces, one hundred spaces, a new line, or any other whitespace.
For example, the phrase:
one two
the phrase
one      two
and the phrase
one
two
are all treated the same by POV-Ray.
Ordering means the order in which you declare objects. POV-Ray does not care where in
the file the objects are -- it makes no difference to the final scene. (VRML programmers
will note that this is a very different approach than VRML's "virtual pen" concept).
This does not hold entirely true for some attributes and CSG operations (both of which
we will describe in detail later), but in the outer-most level in POV-Ray (the one in
which you list the objects in your scene) it doesn't matter.
Comments in POV-Ray Source Code
Comments are another useful part of POV-Ray source code. A comment is a portion of text
that POV-Ray will ignore. It is used to add information to the source code, usually to make
things clearer to the human reader. Comments can be enclosed in /* and */, or, for single-line comments, can be prefixed with a //. For example:
// this is a single-line comment
/* this is
another comment. it can be as long as you want it to be */
C and C++ programmers will recognize this comment style. For a detailed description of
comments, see the comments section of the Language Reference.
Including files
Including files is a feature of many languages that makes re-using code easier. If you have, for
example, many red objects in your scene, you will find it cumbersome (and not very readable)
to type the correct RGB vector for red every time. POV-Ray comes to the rescue with a file
full of pre-defined colors, which you can use and re-use in your source code. (POV-Ray also
comes with files of textures and even objects; we'll get to those later). You can take advantage
of these files by adding the string #include "filename" to the beginning of your file. For
example, to use the pre-defined colors, you would add the string
#include "colors.inc"
to the beginning of your file. Technically, the statement does not have to occur at the
beginning of the file, but the convention is such, and it makes for readability.
The example statement above tells POV-Ray to look for the file called colors.inc and
to read it before continuing to the rest of your file. colors.inc defines many colors,
such as Red, that you can use in your file any time you need a color, in place of a RGB
(or RGBF) vector. This makes your source file much easier to read. Later in the tutorial,
you will learn how to define your own colors (and objects, and textures, and so on) and
how to put them in your own text files. For now, know how to use the provided ones and
be happy.
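A short sketch of the pay-off: after the include, a named color can stand in for an RGB vector.

#include "colors.inc"

sphere {
  <0, 0, 0>, 5
  pigment { color Red }   // Red is defined in colors.inc as rgb <1, 0, 0>
}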
Now that you've got that out of the way, you're ready to start creating your first scene...
almost.
The Online POV-Ray Tutorial © 1996 The Online POV-Ray Tutorial ThinkQuest Team
The Online POV-Ray Tutorial
Creating Simple Scenes
The POV-Ray language is fairly easy to use, once you understand it. In fact, if you have any
experience with programming, you will find POV-Ray very easy -- there are no variables,
conditionals, loops, or anything else that can make programming tricky. Basically, a POV-Ray source file (the file you make and give to POV-Ray) is just a list of objects and their
descriptions. Of course, describing the scene you have in your mind to POV-Ray is the tricky
part, because you have to speak POV-Ray's language.
Quick index:
1. Creating simple objects
2. The Camera
3. Let there be light! (Light sources)
4. The first example scene
5. Transformations
6. Texture
7. Pigment
8. Finish
9. Normal
10. Including Textures
Creating simple objects
The building blocks of all POV-Ray objects and scenes are called primitives. Primitives are
objects that POV-Ray already knows about, and all you have to do is describe a few
attributes. POV-Ray primitives are usually simple geometric shapes such as spheres, cubes,
and cones.
Describing primitives, in general, takes this form in POV-Ray:
Object_Name {
Object_Parameters
Some_Simple_Attribute
Some_Other_Simple_Attribute
Some_Complex_Attribute {
Some_Attribute
Some_Other_Attribute
}
}
This isn't very enlightening. Let's take a look at a short example:
sphere {
<0, 0, 0>, 5
pigment {
color rgb <1, 0, 0>
}
}
Deciphering what this does isn't too tricky. The code defines a sphere with its center at the
origin (that's <0,0,0>, remember?) and with a radius of 5 (in other words, the distance from
the center of the sphere to any point on the edge of the sphere is exactly 5 units). The phrase
pigment { color rgb <1,0,0> } simply means that the sphere's pigment (or color)
attribute is described by the rgb vector <1,0,0>, which is the color red. You could have just
as well used color Red, if you had #included the correct file. The pigment attribute, by
the way, is a complex attribute, of which color is just one of the many attributes that can go
inside it.
There are two types of primitives in POV-Ray: finite primitives and infinite primitives. Finite
primitives have well-defined limits. Examples of finite primitives include spheres, cones,
tori, and blobs. Infinite primitives have components that can potentially stretch to infinity -- for example, a plane is both infinitely thin and infinitely wide. Examples of infinite objects
include planes, quadrics and cubics. At any rate, describing primitives in POV-Ray is only a
matter of knowing the syntax for the particular primitive you want to describe. You can find a
complete syntax reference in the finite object and infinite object language references.
By now, you're probably itching to make your first scene. Before you can do that, however,
you need to learn about two things: the camera and light sources.
The Camera
Before POV-Ray can generate the scene, it needs to know from where you are looking. If you
imagine your computer screen as the camera taking a snapshot of the scene you're describing,
you'll see POV-Ray needs to know a) where the camera is in the scene, and b) which direction
it's pointing. Such data is given to POV-Ray through the camera object. As you might
imagine, the camera object is a rather important one: in fact, POV-Ray requires that there be
one and only one in each scene.
There are many attributes that the camera object can have; of these, we will only concentrate
on the two most useful: the location and the look_at attributes. A complete reference of all the
camera attributes can be found in the Camera Reference.
A simple camera in POV-Ray looks like this:
camera {
location <2,5,-10>
look_at <0,0,0>
}
This example defines a camera located at <2,5,-10> and pointing at the origin. This means
that anything with a z coordinate less than -10 will definitely be invisible -- it will be behind
the camera!
You can put the camera anywhere you want in the scene, including inside of objects (although
you may not see very much), with one exception: you may not place the camera directly over
the origin and have it looking straight down. For complex mathematical reasons, this will
cause POV-Ray to generate an error. If you need that type of setup, position the camera a little
to the left or the right -- your problem will be solved, and your scene will look (almost)
exactly the same.
Anyways, now that we have a way of receiving light, we need to have a way of providing
light.
Let there be light! (Light sources)
If you gave POV-Ray a file containing the camera definition above and the sphere definition
before that, the output image would be a lovely blank picture. This would happen because
you'd have no light in your scene. To add light (thereby enabling you to actually see
something), you need to add a light source.
There are a few different types of light sources in POV-Ray. We will concentrate here on the
most simple (and useful): the point light source. A point light source can be thought of as an
infinitely small object that emits light. Because they are infinitely small, point light sources
cannot be directly seen (so you don't have to worry about them appearing in your scene).
However, their effects can certainly be seen: your scene lights up!
Point light sources are known as non-attenuating light sources: the emitted light does not get
weaker with distance. This means that you can illuminate your entire scene with one point
light source placed far away from the scene. You can have as many light sources as you want,
but they are computationally expensive -- the more you have, the longer POV-Ray will take to
trace your scene.
An example of a simple point light source definition in POV-Ray looks like this:
light_source {
<0,10,-10>
color rgb <1,1,1>
}
The first vector is a position vector specifying the location of the light source. The second
vector specifies the color (and brightness) of the light. It is generally a good idea to use white
or gray light, as using colored light can have side effects that are not immediately obvious (for
example, green objects will not show up when exposed to pure red light). Complete
information for light sources can be found in the lights section of the Language Reference.
Anyways, now that we can add light, we're ready to construct our first full scene.
The first example scene
Putting together all we have learned so far, we get a complete POV-Ray source code file that
looks like this:
// This is a simple red sphere
// first, the camera position
camera {
location <2,5,-10>
look_at <0,0,0>
}
// now, some light
light_source {
<0,-10,0>
color rgb <1,1,1>
}
// the sphere
sphere {
<0,0,0>, 5
pigment { color rgb <1,0,0> }
}
After running POV-Ray, the output image looks like this:
Finally! Your first image! Of course, this one is a little boring -- but don't worry, we'll get to
some fun stuff soon. For now, experiment! It's the best way to learn. Try replacing the sphere
with other objects and seeing what happens. The objects that you should easily be able to use
are boxes, cones, cylinders, spheres, tori and planes.
Transformations
So now we can create some simple objects. But wait! Some of these objects can only be
created around the origin (like the torus). What if we want to put them somewhere else? What
if we want to move them around? POV-Ray provides answers to all these questions in the
form of transformations. Transformations, in ray-tracing terms, are attributes that change the
position, size or orientation of objects (and of the various attributes of the objects). The most
common types of transformations, and the ones that POV-Ray supports, are translations,
rotations and scalings.
A translation is a transformation that moves an object relative to its current position. It is
specified in POV-Ray by the phrase translate <x,y,z>. Translations are easy to visualize.
Consider a cube sitting on the origin, like this:
Our camera is positioned so that the x axis increases to the right, the y axis increases upwards
and the z axis increases towards us. A translation of <-1,4,2> results in the cube being
moved left one unit, up four, and back two, like this:
A rotation is a transformation that changes the orientation of an object (the way that it's
facing). Rotations are the most complex of the transformations. They are specified to POV-Ray by the string rotate <x,y,z>, where x, y, and z are the number of degrees (not radians)
around the respective axis. Consider the original cube up above. A rotation of <0,0,45>
rotates the cube 45 degrees around the z axis, leaving us with a cube looking like this:
A quick way to remember which way the objects are going to rotate is by using the so-called
"left hand rule." Hold out your left hand, fingers clenched and thumb out. Point your thumb in
the positive direction of the axis you are rotating about (if you're rotating about more than one
axis at a time, this won't help you -- unless you have more than one thumb!) The direction that
your fingers curl is the direction an object will rotate when the number of degrees is positive.
(Negative degrees rotate the opposite direction).
Another important thing to remember about rotations is that they are always with respect to
the coordinate axes -- in other words, unless your object is located at the origin, it will orbit
around the axis (or axes) you are rotating it about. For example, this is what would happen if
we translated the cube first, and then rotated it:
To get around this, make sure you rotate your object when it's centered at the origin, and
then translate it. Your picture will end up like this:
Transformations are one of the few aspects of POV-Ray in which the order matters, simply
because transformations are always made with respect to the object's current orientation.
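A minimal sketch of why the order matters (the box coordinates and colors here are just an illustration):

// rotate first, then translate: the box turns in place and then moves to the right
box {
  <-1, -1, -1>, <1, 1, 1>
  pigment { color rgb <1, 0, 0> }
  rotate <0, 0, 45>
  translate <3, 0, 0>
}

// translate first, then rotate: the box is swung around the z axis, away from where you put it
box {
  <-1, -1, -1>, <1, 1, 1>
  pigment { color rgb <0, 0, 1> }
  translate <3, 0, 0>
  rotate <0, 0, 45>
}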
The last transformation you need to know about is scaling. Simply enough, scaling changes the size of the object with respect to its current size. Scaling is specified in POV-Ray via the string scale <x,y,z>. The elements of the vector specify how much to scale the shape with respect to the coordinate axes: a scale of 1.0 leaves the object the same, and a scale of
0.0 or less is invalid. Going back to our original cube, if we scaled the object with the string
scale <1,4,1>, we would get a result like this:
Because of vector promotion (if you don't remember what that is, you can re-read about it),
scaling can also take a single number rather than a vector. This causes the object to be scaled
in every direction by that number. For example, the phrase scale 2 is the same as the
phrase scale <2,2,2>.
Transformations are placed like any other attribute. For example:
torus {
3, 11
pigment { color Yellow }
scale <1.5,1,1>
rotate <-45,0,0>
translate <0,2,0>
}
This code makes a yellow torus, slightly widened around the x axis, rotated -45 degrees
around the x axis and with its center at <0,2,0>, like this:
Note that torus objects are created around the origin, so you are in fact forced to use
transformations to get them where you want... luckily for you, you now know how. And to
quote G. I. Joe, knowing is half the battle.
Texture
We admit it -- we lied to you. The pigment attribute is actually a part of a bigger attribute
called the texture attribute. Every time you used pigment, it should have really looked
like this:
texture {
pigment { color Red }
}
The reason that POV-Ray is a little loose about the pigment attribute and lets you use it
outside of texture is because pigment is so frequently used by itself that it becomes a
pain to type out the whole texture statement. In fact, you can do the same thing with most other parts of the texture { } block. Either way, they have the same effect.
The texture attribute contains attributes describing the outward appearance of the object:
pigment, finish and normal. The pigment attribute, as you know, describes the color
of the object (although it's a lot more complicated than what we've shown you so far). The
finish attribute describes how the object "interacts with light" -- highlighting, metallic
luster, shininess, reflectivity, etc. The normal attribute describes some three-dimensional
features of objects, such as bumps, waves, and ripples. We'll cover these one by one.
Pigment
You've seen the use of the color attribute within the pigment attribute (for example,
pigment { color Blue }). A more complete description than what we've given you so far can be found in the Color section of the Language Reference. A more flexible attribute, however, is color_map. color_maps are used to do a wide variety of things. Basically, a color_map defines bands of color on a "map" ranging from 0.0 to 1.0. Let's look at a
color_map defines bands of color on a "map" ranging from 0.0 to 1.0 Let's look at a
simple example:
color_map {
[0.0 color Red]
[0.25 color Blue]
[0.9 color Green]
}
This defines three bands of color: red from 0.0 to 0.25, blue from 0.25 to 0.9, and green
from 0.9 to 1.0. The other commonly used format looks like this:
color_map {
[0.0 0.25 color Red]
[0.25 0.9 color Blue]
[0.9 1.0 color Green]
}
They both do the same thing; the second one just contains information about where you want
the bands to stop as well as start.
The next step is to tell POV-Ray what to do with this. This is done by using one of the many pigment
types. A simple pigment type is called gradient. Gradient creates bands of color based on
the color map. Using the source code from the first scene we created, and replacing the color
Red with our color map and pigment type, we get this:
sphere {
<0,0,0>, 5
pigment {
gradient <0, 1, 0>
color_map {
[0.0 color Red]
[0.25 color Blue]
[1.0 color Green]
}
scale 3
}
}
This source code requires a bit of explaining. The vector following the gradient keyword
is the normal vector to the orientation of the bands of color (you remember normal vectors,
don't you? Or did you think we were wasting our time telling you stuff you didn't need to
know? Admit it! You skipped over that section! Well, we're forgiving; you can go back and
read about it again). The scale statement applies to the pigment, not to the object (look
carefully at where it's placed -- inside the pigment { } block).
Our sphere now looks like this:
A careful examination of this image yields some interesting facts. Starting from the top down,
you can see a slight bit of green (the rest of it was cut off), which fades into the large blue
band, which in turn fades into the small red band. The red band is abruptly cut off and the
cycle repeats itself again. However, the next time, the pattern has reversed! The red band is on
the top. This is because gradient patterns reverse themselves at the origin. To get around
this, you can translate the texture away from the origin (you can apply all transformations to
textures, remember?). More information on gradients can be found in the gradient section
of the Language Reference.
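A minimal sketch of that workaround (the offset of 100 units is just an arbitrarily large value):

sphere {
  <0,0,0>, 5
  pigment {
    gradient <0, 1, 0>
    color_map {
      [0.0 color Red]
      [0.25 color Blue]
      [1.0 color Green]
    }
    scale 3
    translate 100*y   // push the pattern's mirror point far away from the visible object
  }
}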
Ok, now let's try something else. Add the phrase turbulence 0.5 after the gradient
statement. The resulting picture looks like this:
Whoah! The turbulence keyword, as you may have guessed, "mixes stuff up." With this
color map, we get a freakish plasma-like sphere. Values for turbulence range from 0.0 to
1.0. A complete description can be found in the turbulence section of the Language
Reference.
There are many other pigment types than gradient. For example, there is a pigment type
called marble. By itself, rather boring and un-marble-like. However, with a high turbulence,
it can create some very realistic marble pigments. Here's some sample source code:
sphere {
<0,0,0>,5
pigment {
marble
turbulence 1 // full turbulence
color_map {
[0.0 color Gray90] // 90% gray
[0.8 color Gray60] // 60% gray
[1.0 color Gray20] // 20% gray
}
}
}
This high-turbulence marble pigment generates some very nice-looking marble:
Not too shabby, huh? Other pigment types include wood, agate, bozo, and a host of others that
can be found in the pigment section of the Language Reference. And although technically not
pigment types per se, you may want to check out the checker and hexagon pigment patterns,
as well as the image map pattern (which lets you map an external image to an object), all
found in the same section as above. And remember, the best way to learn is to experiment!
Finish
Finish describes how the objects interact with light: how much they reflect, how they shine,
how metallic they are, etc. All finish attributes are enclosed in a finish { } block.
Perhaps the most used of the finish attributes is the phong attribute. A phong is a highlight,
or glare. It is specified, strangely enough, by the phong attribute, followed by a number
between 0.0 and 1.0 that specifies how bright the phong is. There is also a phong_size that controls how "tight" the phong is -- in other words, the higher this number, the smaller in size the phong is (this is a little misleading, yes). Here we have a yellow sphere with a phong highlight of 0.8:
sphere {
<0,0,0>, 5
pigment { color rgb <1,1,0> }
finish { phong 0.8 }
}
When lit by two light sources, the sphere looks like this:
As you can see, the phong adds a nice bit of realistic "shine" whenever a light source directly
hits part of the object. A more complete description of phong can be found in the phong
section of the Language Reference.
Another finish attribute that can produce stunning effects is the reflection keyword. This
causes objects to reflect their surroundings to a certain degree. Reflection takes one number,
ranging from 0.0 to 1.0, that specifies how reflective the object is. Let's take a look at a more
complex scene with a reflective object.
#include "colors.inc"
camera {
location <-2, 3, -10>
look_at <0, 5, 0>
}
plane { // the floor
y, 0 // along the x-z plane (y is the normal vector)
pigment { checker color Black color White } // checkered pattern
}
sphere {
<0, 5, 0>, 2
pigment { color White }
finish {
reflection 0.9
phong 1
}
}
light_source { <10, 10, -10> color White }
light_source { <-10, 5, -15> color White }
The image this produces is:
As you can see, this generates a yellowish mirrored sphere floating above an infinite
checkerboard -- a variant of one of the standard ray-tracing scenes. A more in-depth
description of reflectivity can be found in the reflection section of the Reference manual.
The final attribute of the finish keyword we will describe here is the refraction
keyword. Refraction is what happens when light rays passing through a translucent object get
bent, causing a distortion of everything seen through the object. For example, if you look
through a crystal ball, you will see a distorted view of whatever is behind it.
The refraction keyword takes one value. This value should either be 0.0 or 1.0, for
refraction off and on, respectively. Although you can specify values in between, it is not
recommended as it does not correspond to any known physical property. How noticeably it
refracts is controlled by the ior keyword (for index of refraction), which takes a number
greater than 0. The default ior of "empty space" is defined as 1.0. So, if we wanted to
create the crystal ball described above, we would use something like this:
sphere {
<0,5,0>,2
pigment { color rgbf <1,1,1,.8> }
finish {
reflection 0.1
refraction 1.0
ior 1.5
phong 1.0
}
}
Remember your RGBF vectors? A filter value of 1.0 would mean this was an invisible
sphere, certainly not what we want. Our filter value of 0.8 gives the sphere enough definition
to be visible. The image generated looks like this:
Now we start seeing some of the true power of ray-tracing. The warped look of the checkerboard pattern is due to the refraction, the bright highlighting is due to a phong, and a bit of reflection makes this all the more realistic. Tinting the glass would be easy: just change the color of the sphere from <1,1,1> (or white) to whatever color you want it tinted. Modify the filter value to make the ball more or less translucent. It's fun!
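For example, a sketch of a green-tinted, slightly less translucent version of the same crystal ball (the exact tint is only an illustration):

sphere {
  <0,5,0>, 2
  pigment { color rgbf <0.8, 1, 0.8, 0.6> }   // a faint green tint, a bit less transparent than before
  finish {
    reflection 0.1
    refraction 1.0
    ior 1.5
    phong 1.0
  }
}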
There are many other finish attributes that you can play with, including metallic, ambient,
and crand. We've touched on a few; for a complete reference, read the finish section of the
Language Reference. To get a good feel for most of the finish attributes, you can experiment
with the Finish Tool.
Normal
The normal attribute creates some simple 3D features on your objects: bumps, ripples,
waves, and the like. It does not actually change the object; instead, it changes slightly the way
light bounces off the object and essentially fools the eye into believing the object is a little
different than it really is. As such, the effects are not 100% true to real life, but they are much,
much faster than actually describing the changes individually would be.
Let's try a bumps example. Bumps are created with (oddly enough) the bumps keyword,
followed by a single number, generally between 0.0 and 1.0, that specifies the relative size
of the bumps. Here's some source code:
cone {
<0,-3,0>,1
<0,3,0>,0.1
texture {
normal {
bumps 1/2
scale 1/6
}
pigment { color rgb <.5,.7,.2> }
}
}
This creates a green cone with a slightly bumpy appearance, like this:
Not too difficult, eh? Imagine how difficult it would be to model all those bumps yourself.
Now, here's a fun one to try -- ripples:
plane {
y, -2
texture {
pigment { color rgb <.1,.9,.9> }
normal {
ripples 0.5
}
}
}
The number following the ripples keyword specifies, again, the relative size of the ripples.
The image this produces is:
Pretty nifty! The ripples keyword and its close relative, the waves keyword, can take a
few modifiers that give a little more control than we've shown you. A complete reference can
be found in the ripples section of the Language Reference. More normal attributes, such as
dents and wrinkles, can be found in the normal section of the same document. You can also
experiment with the Normal Tool to get a feeling for the various attributes.
Including Textures
Much like you learned how to include colors beforehand, you can also include textures. POV-Ray comes with a file full of some very good textures, called textures.inc. Including
this is the same as before:
#include "colors.inc"
#include "textures.inc"
Note that you must include colors.inc before you include textures.inc, because
textures.inc uses colors from colors.inc.
Using an included texture is easy. To make a sphere that uses the Jade textures, for example,
you would say:
sphere {
<-2, 4, 6>, 5.6
texture { Jade }
}
Look through the file textures.inc for a list of the textures included. You can also look
through colors.inc for a list of the colors in there.
Well, if you've managed this far, you're in good shape. Keep it up! The next section gets in to
the really fun stuff.
The Online POV-Ray Tutorial © 1996 The Online POV-Ray Tutorial ThinkQuest Team
The Online POV-Ray Tutorial
Advanced POV-Ray Features
If you've made it this far, you're in good shape! This section covers the features of POV-Ray that are the most complex, but also the most powerful. Once you complete this section, you'll be a certified ray-tracing master.
Quick Index:
1. #declare
2. CSG
   1. Union
   2. Difference
   3. Intersection
   4. Merge
   5. Inverse
3. Advanced Objects
#declare
Up until now, creating large numbers of similar objects has been an exercise in cut-and-paste
editor features. POV-Ray provides a very powerful, very flexible way to create many similar
objects with a statement called #declare. #declare essentially creates a new type of object
(or pigment, or texture, or almost anything) that you can use and re-use whenever you like.
Take a look at the following code:
#declare my_sphere =
  sphere {
    <0, 0, 0>, 5
    pigment { color rgbf <.5, .2, .4, .667> }
  }
What this does, essentially, is declare a new type of object called "my_sphere" which you can
now use later on in your source code, like this:
object {
my_sphere
translate <-2, 0, 0>
}
The object statement tells POV-Ray to create an object of type "my_sphere." Theoretically,
you can put object statements around every object you use (including primitives, like
spheres) but POV-Ray only requires it for #declared objects.
Note that any attributes you place inside the object statement override those in the
#declare statement -- in this example, our sphere is moved from its original location at
<0,0,0> to <-2,0,0>. This holds true for pigments, finishes, etc.
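As a quick illustration (re-using the my_sphere declared above), both objects below come from the same #declare, but the second one ends up red and shifted:
object { my_sphere }                  // uses the pigment from the #declare

object {
  my_sphere
  pigment { color rgb <1, 0, 0> }     // overrides the declared pigment
  translate <4, 0, 0>                 // and moves this copy only
}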
VRML programmers should take note that this #declare differs somewhat from VRML's
DEF keyword. #declare does not create an instance of the object (which DEF does), only the
definition. In other words, the above #declare statement would not add any objects to your
scene on its own. You need to instantiate the objects (with the object keyword) to do that.
Now, why would you want to use #declare? Say, for example, you're making a Greek
temple. You would want many pillars in your object, so you would create a pillar object with
#declare, like this:
#declare pillar =
  cylinder {
    <0, -5, 0>, <0, 5, 0>, 1     // base point, cap point, radius
    texture { White_Marble }
  }
Then, you would create however many of these you needed, translating to your heart's
content. Say, however, that you decide the columns in your temple should be made out of red
marble, not white. All you have to do is change the one #declare statement, and all the
pillars change! If you had created those pillars without #declare, you'd have to change
each one by hand -- a major hassle, especially if you had 40 pillars in your temple.
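A sketch of how that might look (Red_Marble is one of the textures defined in textures.inc; the pillar spacing is arbitrary):
#declare pillar =
  cylinder {
    <0, -5, 0>, <0, 5, 0>, 1       // base point, cap point, radius
    texture { Red_Marble }         // change this one line and every pillar follows
  }

object { pillar translate <-10, 0, 0> }
object { pillar }
object { pillar translate <10, 0, 0> }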
So you can see one immediate benefit to #declare -- updating your scene becomes a lot
easier. But wait, there's more! You can also use #declare to create your own colors and
textures. In fact, the colors.inc and textures.inc files are basically long lists of
#declared colors and textures, respectively. The syntax is intuitive:
#declare blue_green_filter = color rgbf <0, .5, .5, .5>

#declare red_glass =
  texture {
    finish {
      refraction 1.0
      reflection 0.1
      ior 1.5
    }
    pigment {
      color rgbf <1, .7, .7, .7>
    }
  }
As you can most likely guess, these define a new color, called "blue_green_filter" and
a new texture, called "red_glass". You would use these like this:
sphere {
  <0, 0, 0>, 1
  pigment { blue_green_filter }
}

cone {
  <-2, -4, 16>, 5
  <0, -3, 1>, 1
  texture { red_glass }
}
Not too difficult! You can use #declare to create custom finish, normal, and pigment
statements... you can even use it with vectors and single numbers, like this:
#declare PI = 3.14159265358979323846
This will save you a bit of typing if you reference PI frequently in your scene file! (Please
remember that you don't need to put an object statement around anything you #declare
other than objects).
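For instance, declared numbers and vectors can then be dropped in anywhere POV-Ray expects one; this snippet is purely illustrative:
#declare Ball_Radius = 1.5
#declare Ball_Center = <0, Ball_Radius, 0>   // vectors may use declared numbers

sphere {
  Ball_Center, Ball_Radius
  pigment { color rgb <1, 1, 0> }
}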
C and C++ programmers should not be misled by #declare's superficial similarity to the
C/C++ pre-processor macro #define. Their behavior is quite different. #define actually
changes the source code before it gets compiled (which is why it's called a pre-processor
macro). POV-Ray does not have a pre-processor, and so #declare, although misleadingly
labeled, will not do source-code substitution.
At any rate, you can get the complete syntax for object and #declare in the Language
Reference. They are both powerful tools, and if you create anything other than very simple
scenes, you will find them invaluable.
CSG
CSG stands for Constructive Solid Geometry, a powerful technique in POV-Ray for creating
new objects from combinations of other objects. So far, you have been limited to POV-Ray's
primitives, which, while nice, aren't always what you need. POV-Ray lets you use the
primitives in a much more constructive (har har) way with CSG: you can carve away parts of
objects, you can stick objects together, and other exciting stuff.
There are five operators in CSG: union, intersection, merge, difference, and inverse. The
syntax of all the operators (except inverse) is very simple: it's the operator, followed by a
list of two or more objects enclosed by braces, like this:
CSG_operator {
  object_1
  object_2
  etc.
}
You actually don't have to put any objects at all between the braces, but it doesn't make sense
to have fewer than two objects (remember, CSG creates new objects from combinations of other
objects), and POV-Ray will warn you when you trace the file. The syntax for inverse is
even easier: it's just the word "inverse."
We'll go over these operators one by one, because they're all important. A complete reference
can be found in the CSG Section of the Language Reference.
Union
A union is the easiest CSG operator to understand. It simply takes a bunch of objects, and
"sticks them together." It doesn't actually move the objects at all, but it creates a common
bond between the objects, kind of like they've joined a special club for important primitives.
(We'll politely ignore the similarities to certain political parties). The source code to a sample
union looks like this:
union {
  sphere { <0, 1, 2>, 3 }
  box { <-77, 6, 5>, <2, 3, 55> }
  sphere { <-2, -3, -4>, 5 }
}
Now rendering the scene doesn't look any different whether you have the union keyword
there or not. So why bother? Two reasons: first, you can assign attributes to the entire union
of objects very easily:
union {
  sphere { <0, 1, 2>, 3 }
  box { <-77, 6, 5>, <2, 3, 55> }
  sphere { <-2, -3, -4>, 5 }
  pigment { color Blue }    // applies to the entire union
}
In this case, the attribute pigment { color Blue } is applied to every object in the
union. As always, this works with any attribute you care to try: pigment, translations, normal,
etc.
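For example, you can recolor, spin, and move the whole group in one go (the values here are arbitrary):
union {
  sphere { <0, 1, 2>, 3 }
  box { <-2, -2, -2>, <2, 3, 5> }
  pigment { color Blue }      // colors both objects
  rotate <0, 45, 0>           // spins the whole union around the y-axis
  translate <0, 0, 10>        // then pushes the whole thing back
}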
The second, and perhaps even more useful reason for using unions, is when you combine
CSG and the #declare keyword, like this:
#declare two_spheres =
  union {
    sphere { <0, 0, 0>, 3 }
    sphere { <-1, -5, -1>, 3 }
  }
From now on, you can reference the object two_spheres (which is, amazingly enough,
two separate spheres) just as you would any other #declared object:
object {
  two_spheres
  pigment { color Pink }
  rotate <0, 180, 0>
}
Let's go through one more example, to make sure you understand -- this is a very important
concept. Say you wanted to ray-trace a car. You'd create the wheels, then an axle, and then
use union to stick them together. You could then re-use this wheel and axle combination
however many times you wanted (depending on how many sets of wheels your car has). Your
code might look something like this:
#declare wheels_n_axle =
  union {
    object {                 // left wheel
      wheel                  // assuming we have already created a wheel object
      translate <-3, 0, 0>
    }
    object {                 // axle
      axle                   // assuming we have already created an axle object
    }
    object {                 // right wheel
      wheel                  // assuming we have already created a wheel object
      translate <3, 0, 0>
    }
  }
#declare car =
  union {
    object {                 // front wheels and axle
      wheels_n_axle
      translate <0, 0, 5>
    }
    object {                 // rear wheels and axle
      wheels_n_axle
      translate <0, 0, -5>
    }
    // other car parts go here
  }
Note that the order you place objects in a union is unimportant -- objects within a union
don't really care about the other objects. This is different from the objects in a difference
-- they are very caring, almost loving, objects, as you will see in the next section.
A complete description of the union operator can be found in the CSG Section of the
Language Reference.
Difference
A CSG difference is much like a mathematical difference -- it subtracts objects from one
another. More specifically, it takes chunks out of the first object, each chunk being defined by
the other objects in the difference statement. For example, say we wanted to make a wall
that we would add a door to. The simplest way to do this is with a difference:
#declare wall =
  difference {
    box { <0, 0, 0>, <10, 10, 1> }   // 10x10x1 wall
    box { <2, 0, -1>, <6, 8, 2> }    // minus a doorway
    texture { Wall_Texture }         // assuming we have already created a Wall_Texture
  }
The first cube serves as the wall, and the second cube describes what, exactly, we want to take
out from the wall. The two objects without the difference statement look like this:
When we add the difference statement, we get:
Note that we made the doorway cube thicker than the wall. Why? This is because,
occasionally, POV-Ray will get confused when two object surfaces occupy exactly the
same space. So, we made the doorway cube a little thicker, avoiding a potentially weird
image, and at no loss to anything else.
One important thing to remember about differences is that all objects are subtracted from the
first one. If, for example, we wanted to add a few window holes to the wall above, we could
just add a few more cubes at the very end, and voila! Once again, any attributes placed at the
end of the difference statement will apply to the entire object.
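A sketch of that idea, with two made-up window holes added after the doorway:
#declare wall =
  difference {
    box { <0, 0, 0>, <10, 10, 1> }      // the wall itself
    box { <2, 0, -1>, <6, 8, 2> }       // minus the doorway
    box { <7, 5, -1>, <9, 7, 2> }       // minus one window hole
    box { <0.5, 5, -1>, <1.5, 7, 2> }   // minus another window hole
    texture { Wall_Texture }            // still applies to the whole object
  }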
A complete reference for the difference keyword is located in the CSG Section of the
Language Reference.
Intersection
Much as a difference removes the insides of objects, an intersection removes the outsides
of objects. The result of using the intersection operator is that the only parts remaining
are those that all the objects within the operator had in common. Let's say that you want
to make a single, colored, sugar-coated chocolate candy that won't melt in your hands (not
naming any names). Furthermore, it must be a mathematically perfect candy. The easiest
way to do this in POV-Ray is with an intersection, like this:
#include "colors.inc"

camera {
  location <0, 0, -5>
  look_at <0, 0, 0>
}

light_source { <10, 10, -10> color White }

intersection {
  sphere { <0, -1, 0>, 2 }
  sphere { <0, 1, 0>, 2 }
  pigment { color Yellow }
}
This code takes two spheres that overlap, like this:
Then, it uses the intersection operator to remove everything that isn't overlapping,
leaving a remarkably sweet-looking goody, like this:
Although intersections are a little more difficult to imagine than some of the other CSG
operators, they can be a very powerful tool. You can find a complete reference in the
intersection section of the Language Reference.
Merge
Merge is very similar to union. In fact, the only difference between the two is that, if the
objects actually overlap, merge will make the interior a smooth, continuous unit. Now,
obviously, this won't make a difference to you if your objects aren't opaque. But if you have
transparent, overlapping objects in your scene, the original object boundaries will be shown if
you use a union (or no CSG at all); to get around this, you must use merge.
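For example, using the red_glass texture declared earlier in this section, two overlapping spheres would show an internal wall where they intersect if you used union, but render as one continuous piece of glass with merge:
merge {
  sphere { <-1, 0, 0>, 2 }
  sphere { < 1, 0, 0>, 2 }
  texture { red_glass }       // transparent, so the shared interior matters
}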
A complete reference for the merge operator can be found in the CSG Section of the
Language Reference.
Inverse
Inverse is not used very often, but there are times when it must be used. Inverse will take your
object and reverse what POV-Ray considers to be its "inside" and "outside." This will make
no difference to the way your object looks, but it makes a great deal of difference to the
way your object acts when you use it in CSG. Consider the intersection of a sphere and a box.
If the sphere is inverted (by placing the keyword inverse in its definition), then POV-Ray
will take the intersection of the box and an object defined as "the entire universe except this
sphere." If you think about it for a while (probably a long while), you'll realize that that's the
same as a difference. In other words, this:
intersection {
  box { <0, 0, 0>, <1, 1, 1> }
  sphere {
    <1, 1, 1>, 1
    inverse
  }
}
is the same as this:
difference {
  box { <0, 0, 0>, <1, 1, 1> }
  sphere {
    <1, 1, 1>, 1
  }
}
In fact, POV-Ray calculates differences using this same method. A complete reference to
the inverse keyword can be found in the CSG Section of the Language Reference.
Advanced Objects
There are times when POV-Ray's geometric primitives aren't going to be enough for you.
Face it, if you want to ray-trace something as complex as a human being, even CSG won't
help you. In this case, there are two options left to you:
The first is to specify your object in mathematical terms. Obviously, this will only work if:
1. Your object can be described by an nth-order polynomial in 3-space;
2. You know what the heck I'm talking about; and
3. You like pain.
What we're trying to say here is that we're not about to teach you the math necessary to
specify these objects, and, furthermore, we recommend against it, unless you really know
what you're doing. Of course, if you'd like to read about the objects involved (namely,
quadrics, cubics, quartics, and polys), then go right ahead. And if you can use them, so much
the better. But if you don't have the math behind it, then don't worry about it; you'll probably
sleep better at night. We have found these objects to be of limited use.
The second option you have is to use a modelling program. What a modelling program can do
is generate extremely complex objects in POV-Ray by specifying them as a whole bunch of
really simple objects, normally blobs, triangles, smooth triangles, or bicubic patches. These
objects, much like the mathematical ones above, are not generally meant for human
consumption -- in other words, don't bother trying to create objects with these by hand,
because unless you really know what you're doing, you'll probably just waste a lot of time.
Instead, find a good modelling program (there are many free and shareware ones out there; try
the Resource Library), create the complex object in there (usually the modelling programs
will have a very nice, graphical interface) and run POV-Ray on the file it creates. You will
save a lot of time and effort.
Conclusion
Well, well, well. You've come far from the beginning of the Path of Learning. We'd just like
to take a minute to congratulate you on your newfound skills. Congratulations. You now
know the POV-Ray language. Anything that can be done in POV-Ray is now within your
grasp. However, this is just the beginning of your full journey through POV-Ray. This tutorial
has helped as much as it can. Now all you need is experience. The main thing that
distinguishes a good POV-Ray artist from an awesome POV-Ray artist is experience with ray
tracing (and possibly better hardware, but that's not as important). There are a great many ray
tracing libraries on the internet, and many of them include source. Check 'em out and see how
the masters do things. Then you will be well on your way to becoming a POV-Ray master
yourself.
The Online POV-Ray Tutorial © 1996 The Online POV-Ray Tutorial ThinkQuest Team