Monday 30 April 2012

Adding Lighting: Normals

Building on the previous post, we can use our lighting setup to light a cube of our own making, rather than an object provided by GLUT. Here is the definition of our new cube:

glPushMatrix();
 glBegin(GL_TRIANGLES);

 /*      This is the bottom face*/
 glVertex3f(1.0f, -1.0f, -1.0f);
 glVertex3f(1.0f, -1.0f, 1.0f);
 glVertex3f(-1.0f, -1.0f, 1.0f);
 glVertex3f(1.0f, -1.0f, -1.0f);
 glVertex3f(-1.0f, -1.0f, 1.0f);
 glVertex3f(-1.0f, -1.0f, -1.0f);

 //top face
 glVertex3f(1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, 1.0f, 1.0f);
 glVertex3f(1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, 1.0f, 1.0f);
 glVertex3f(1.0f, 1.0f, 1.0f);
 
 //right face
 glVertex3f(1.0f, -1.0f, -1.0f);
 glVertex3f(1.0f, 1.0f, -1.0f);
 glVertex3f(1.0f, 1.0f, 1.0f);
 glVertex3f(1.0f, -1.0f, -1.0f);
 glVertex3f(1.0f, 1.0f, 1.0f);
 glVertex3f(1.0f, -1.0f, 1.0f);
 
 //front face
 glVertex3f(1.0f, -1.0f, 1.0f);
 glVertex3f(1.0f, 1.0f, 1.0f);
 glVertex3f(-1.0f, 1.0f, 1.0f);
 glVertex3f(1.0f, -1.0f, 1.0f);
 glVertex3f(-1.0f, 1.0f, 1.0f);
 glVertex3f(-1.0f, -1.0f, 1.0f);
 
 //left face
 glVertex3f(-1.0f, -1.0f, 1.0f);
 glVertex3f(-1.0f, 1.0f, 1.0f);
 glVertex3f(-1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, -1.0f, 1.0f);
 glVertex3f(-1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, -1.0f, -1.0f);
 
 //back face
 glVertex3f(1.0f, 1.0f, -1.0f);
 glVertex3f(1.0f, -1.0f, -1.0f);
 glVertex3f(-1.0f, -1.0f, -1.0f);
 glVertex3f(1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, -1.0f, -1.0f);
 glVertex3f(-1.0f, 1.0f, -1.0f);

 glEnd();
 glPopMatrix();

The following pictures compare the result of drawing this cube with the result of using the cube provided by GLUT, each modelled using our previous lighting system.


Our cube

GLUT cube

Both cubes are drawn in the exact same environment, using the exact same lighting, yet here we see completely different lighting results.

Surface Normals

The GLUT cube is lit correctly whilst our cube is not. This is because the GLUT cube defines normals along with the vertices. A normal is a vector that is perpendicular to a plane. A surface normal is a normal that is perpendicular to the plane that the surface lies on. For example, each surface of our cube is made up of two triangles and each of these has its own surface normal. To illustrate this, here is our cube shown in Blender with surface normals showing.


The surface normals tell us the direction that each surface is facing. OpenGL uses this information to calculate how much of the light from the light source the surface is receiving. It does this by taking the direction of the light and the surface normal and calculating the angle between them using the dot product operation. The smaller the angle between the two vectors, the more light the surface receives. Because our cube provides no normal information, OpenGL has nothing to work with and so we get incorrect results.
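As a rough illustration of this dot-product step (a sketch, not OpenGL's actual internals, and assuming both vectors are already unit length), the diffuse contribution at a surface might look like this:

```cpp
// Sketch of the diffuse term: the intensity is the cosine of the angle
// between the unit surface normal (nx, ny, nz) and the unit vector
// pointing from the surface towards the light (lx, ly, lz), clamped so
// that surfaces facing away from the light receive nothing.
float diffuseIntensity(float nx, float ny, float nz,
                       float lx, float ly, float lz){
    float cosAngle = nx * lx + ny * ly + nz * lz; // dot product
    return cosAngle > 0.0f ? cosAngle : 0.0f;
}
```

A surface facing the light head-on (angle 0, cosine 1) receives full intensity; one at 90 degrees or more to the light receives none.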

So how do we go about calculating the surface normals? I covered this in a previous post, but in short, we take two edge vectors from the three vertices of each face and perform the vector operation called the cross product, which returns a vector perpendicular to the surface. The following code accomplishes this.

Vector3 crossProduct(Vector3 v1, Vector3 v2){
 
 Vector3 cross = {v1.y * v2.z - v1.z * v2.y,
     v1.z * v2.x - v1.x * v2.z,
     v1.x * v2.y - v1.y * v2.x};
 
 return cross;
 
}

Vector3 getSurfaceNormal(Vector3 v1, Vector3 v2, Vector3 v3){
 
 /*
  * Obtain the two edge vectors between
  * the coordinates of the triangle.
  */
 Vector3 polyVector1 = {v2.x - v1.x, v2.y - v1.y, v2.z - v1.z};
 Vector3 polyVector2 = {v3.x - v1.x, v3.y - v1.y, v3.z - v1.z};
 
 Vector3 cross = crossProduct(polyVector1, polyVector2);
 
 normalize(cross);
 
 return cross;
 
}

You will notice that after calculating this perpendicular vector, the normalize function is called on it before it is returned, which produces a unit vector. A unit vector is simply a vector whose magnitude is 1. The process of normalising a vector is described here. (If these concepts are confusing, reading up on the basics of vector mathematics is strongly recommended before continuing to pursue learning OpenGL.) When providing normals, we always provide unit vectors, as OpenGL expects them; this simplifies its calculations. Failing to do so will result in strange lighting!
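The normalize function itself is not shown above; a minimal sketch of one possible form (taking the vector by reference, and assuming a non-zero vector) is:

```cpp
#include <cmath>

struct Vector3 { float x, y, z; };

// Divide each component by the vector's magnitude so that the
// resulting vector points the same way but has a length of 1.
void normalize(Vector3 &v){
    float magnitude = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    v.x /= magnitude;
    v.y /= magnitude;
    v.z /= magnitude;
}
```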

Because our cube is very simple, we can actually just mentally calculate the normals and write them directly into the code. The following example now attaches normals to our cube.

void drawCube(){

 glPushMatrix();
 glBegin(GL_TRIANGLES);

 /*      This is the bottom face*/
 glNormal3f(0.0f, -1.0f, 0.0f);
 glVertex3f(1.0f, -1.0f, -1.0f);
 glVertex3f(1.0f, -1.0f, 1.0f);
 glVertex3f(-1.0f, -1.0f, 1.0f);
 glVertex3f(1.0f, -1.0f, -1.0f);
 glVertex3f(-1.0f, -1.0f, 1.0f);
 glVertex3f(-1.0f, -1.0f, -1.0f);

 //top face
 glNormal3f(0.0f, 1.0f, 0.0f);
 glVertex3f(1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, 1.0f, 1.0f);
 glVertex3f(1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, 1.0f, 1.0f);
 glVertex3f(1.0f, 1.0f, 1.0f);
 
 //right face
 glNormal3f(1.0f, 0.0f, 0.0f);
 glVertex3f(1.0f, -1.0f, -1.0f);
 glVertex3f(1.0f, 1.0f, -1.0f);
 glVertex3f(1.0f, 1.0f, 1.0f);
 glVertex3f(1.0f, -1.0f, -1.0f);
 glVertex3f(1.0f, 1.0f, 1.0f);
 glVertex3f(1.0f, -1.0f, 1.0f);
 
 //front face
 glNormal3f(0.0f, 0.0f, 1.0f);
 glVertex3f(1.0f, -1.0f, 1.0f);
 glVertex3f(1.0f, 1.0f, 1.0f);
 glVertex3f(-1.0f, 1.0f, 1.0f);
 glVertex3f(1.0f, -1.0f, 1.0f);
 glVertex3f(-1.0f, 1.0f, 1.0f);
 glVertex3f(-1.0f, -1.0f, 1.0f);
 
 //left face
 glNormal3f(-1.0f, 0.0f, 0.0f);
 glVertex3f(-1.0f, -1.0f, 1.0f);
 glVertex3f(-1.0f, 1.0f, 1.0f);
 glVertex3f(-1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, -1.0f, 1.0f);
 glVertex3f(-1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, -1.0f, -1.0f);
 
 //back face
 glNormal3f(0.0f, 0.0f, -1.0f);
 glVertex3f(1.0f, 1.0f, -1.0f);
 glVertex3f(1.0f, -1.0f, -1.0f);
 glVertex3f(-1.0f, -1.0f, -1.0f);
 glVertex3f(1.0f, 1.0f, -1.0f);
 glVertex3f(-1.0f, -1.0f, -1.0f);
 glVertex3f(-1.0f, 1.0f, -1.0f);

 glEnd();
 glPopMatrix();

}

Normals are added by making a call to glNormal3f and stating the vector coordinates. OpenGL then uses this normal value in calculations for any following vertices declared with glVertex3f until either a new normal is declared or the drawing ends. Because OpenGL now knows the correct normals, we achieve the correct lighting conditions, exactly the same as the GLUT cube.


Vertex Normals


Vertex normals are similar to surface (face) normals. Whereas a surface normal is a vector perpendicular to the polygon's surface, a vertex normal has no single geometric definition: it is an average of the surface normals of the surfaces that share the vertex. Vertex normals are used to calculate more realistic, less blocky lighting.

One problem with using surface normals is that the lighting for each surface is calculated from its single normal and then applied to every pixel of the surface. Each surface therefore has a uniform colour, which makes the edges between surfaces appear distinct and the model look blocky.


This is due to the way OpenGL calculates lighting: using the Gouraud shading model. The Gouraud method first requires that each vertex has its own normal. A vertex normal is calculated by averaging the surface normals of every surface that the vertex is a member of.


Once it has these normals, OpenGL uses the Blinn-Phong reflection model to calculate how much light is being received at each vertex. Gouraud shading takes these values and interpolates them along the edges of the polygon, weighting the lighting calculated at each end vertex by how far the current point is from it. Once the lighting values have been interpolated along the edges, the process is repeated for the pixels within the polygon, this time interpolating between the values just calculated at the edges, with each pixel's value depending on how close it is to each edge. Hopefully the following diagram, showing the interpolation of interior points, makes it a little clearer (I represents the light intensity value at a point).
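The interpolation itself is just a linear blend. As a toy illustration (not OpenGL's internal code), the intensity at a point a fraction t of the way between two vertices with intensities i1 and i2 is:

```cpp
// Linear interpolation of light intensity, as performed by Gouraud
// shading first along polygon edges and then across the interior.
// t = 0 gives the first vertex's intensity, t = 1 gives the second's,
// and values in between blend the two proportionally.
float interpolateIntensity(float i1, float i2, float t){
    return i1 * (1.0f - t) + i2 * t;
}
```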

This is the cause of the blocky lighting seen above. When we set only a surface normal for a polygon, every vertex normal of that polygon takes on the surface normal. The lighting value at each vertex is therefore the same, and thus the interpolated values across the entire surface are the same, so there are no smooth transitions between polygons (smooth shading).

One way around this is to define, within our program, a normal for each vertex in the model. To do this, we look at the normals of each face that uses the vertex, called its adjacent faces. The normal for the vertex is then calculated as the average of the surface normals of those adjacent faces, as described above. Averaging the lighting across the surfaces in this way stops the edges between them from standing out.


Cube shown with face normals

   Cube shown with vertex normals

Hopefully it is clear from the pictures that at the edges of the cube, the vertex normals point out at a 45 degree angle. This is because each is a combination of the face normal pointing upwards and the face normal pointing outwards.
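We can verify that 45-degree figure with a quick calculation (this is just an illustration, not part of the loader code): averaging the top face normal {0, 1, 0} with the right face normal {1, 0, 0} and normalizing gives a vector exactly halfway between the two.

```cpp
#include <cmath>

struct Vector3 { float x, y, z; };

// Average two unit normals and re-normalize the result, as is done
// when building a vertex normal from two adjacent face normals.
Vector3 averageNormals(Vector3 a, Vector3 b){
    Vector3 sum = {a.x + b.x, a.y + b.y, a.z + b.z};
    float m = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    Vector3 result = {sum.x / m, sum.y / m, sum.z / m};
    return result;
}
```

Averaging {0, 1, 0} and {1, 0, 0} gives roughly {0.707, 0.707, 0}, which sits at 45 degrees to both face normals.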

In order to create the vertex normals, we run through each face of the model and for each vertex making up that face, mark the face as an adjacent face. After this initial pass, we run through each vertex and work out the average normal from each of the surface normals of each adjacent face marked. Example code is presented below.

void WavefrontOBJModel::calculateNormals(){

 for(int i = 0; i < numberOfFaces; i++){

  WavefrontOBJFace * face = &modelFaces.at(i);
  
  int vertexIndex1 = face->getVertexIndices().at(0);
  int vertexIndex2 = face->getVertexIndices().at(1);
  int vertexIndex3 = face->getVertexIndices().at(2);
  
  WavefrontOBJVertex * vertex1 = &modelVertices.at(vertexIndex1);
  WavefrontOBJVertex * vertex2 = &modelVertices.at(vertexIndex2);
  WavefrontOBJVertex * vertex3 = &modelVertices.at(vertexIndex3);
  
  vertex1->addAdjacentFace(i);
  vertex2->addAdjacentFace(i);
  vertex3->addAdjacentFace(i);
  
  CVector3 v1 = *vertex1->getCoords();
  CVector3 v2 = *vertex2->getCoords();
  CVector3 v3 = *vertex3->getCoords();

  CVector3 cross1 = v2 - v1;
  CVector3 cross2 = v3 - v1;

  float normalX = cross1.y * cross2.z - cross1.z * cross2.y;
  float normalY = cross1.z * cross2.x - cross1.x * cross2.z;
  float normalZ = cross1.x * cross2.y - cross1.y * cross2.x;

  CVector3 normal (normalX, normalY, normalZ);

  normal.normalize();

  face->setSurfaceNormal(normal);

 }
 
 for(int i = 0; i < numberOfVertices; i++){
  
  WavefrontOBJVertex * v = &modelVertices.at(i);
  
  int numAdjacentFaces = v->getAdjacentFaces().size();
  
  float xNormalTotal = 0.0f;
  float yNormalTotal = 0.0f;
  float zNormalTotal = 0.0f;
  
  for(int j = 0; j < numAdjacentFaces; j++){
   
   int faceIndex = v->getAdjacentFaces().at(j);
   
   CVector3 faceNormal = *modelFaces.at(faceIndex).getSurfaceNormal();
   
   xNormalTotal += faceNormal.x;
   yNormalTotal += faceNormal.y;
   zNormalTotal += faceNormal.z;
   
  }
  
  CVector3 newVertexNormal (xNormalTotal, yNormalTotal, zNormalTotal);
  
  newVertexNormal.normalize();
  
  v->setNormal(newVertexNormal);
  
 }

}

Now that we have a normal for each vertex, we simply make a call to glNormal3f for every glVertex3f call that we make, rather than setting it once for each face.

Strange Lighting: A Caveat


We're nearly at our goal of achieving smooth shading for our models. There is however one problem: if we run our cube example with calculated vertex normals included this time, this is what we get:



What gives!? We went through all the trouble of calculating and defining our vertex normals and it's worse than it was before!

The reason we get this cool but incorrect bevel-type effect at the edges is that Gouraud shading is intended for shading curved surfaces; it does not handle sharp edges well. If we have a light shining directly down on the top of the cube, the vertex normals at the edges are an average of the surface normals on the top and the surface normals at the sides. Since the light falls entirely on the top surface, the top should be in light and the sides in darkness, because the angle between the face normals is 90 degrees. Instead, we get a smooth transition from light to dark, which is not correct.

After racking my brains a while and asking a few questions on the interwebs, the solution is that because Gouraud only works on smooth surfaces, we need to break the connection between the edge vertices - we need to stop them averaging surface normals that are separated by a sharp edge. The easy way to do this is to simply duplicate your vertices at every sharp edge in the model. The faces on either side of a sharp edge then no longer share vertices, so they are not treated as adjacent and their normals are not averaged together when calculating vertex normals. I'm sure this can be done programmatically; however, it is much easier to ask (read: demand) that your artist provide you with models where vertices are duplicated at sharp edges. I'm sure this is standard practice now, but I'm no artist so I'm not sure! Besides, this can be easily accomplished in Blender using the edge split modifier.

Improve Lighting: Add Polygons


Another common problem with lighting is that the models being lit simply don't have enough polygons. A cube with only twelve triangles will not look very good when put under the lights. This is because there are few vertices, so the lighting values calculated at a particular vertex have to be interpolated across too large a distance, causing poor results. Of course, increasing the number of vertices means poorer performance, so balancing these two things is a bit of an art. To illustrate the effect of having too few polygons, compare the lighting results of using a spotlight to light our cube with 12 polygons and our cube with 46343 polygons (too many, but it will demonstrate the effect). Thankfully Blender provides the ability to subdivide our object very easily.



12 polygons

   46343 polygons


As we can see, the low polygon cube is not lit at all. This is because the spotlight doesn't even reach one of the 24 vertices!

Next time we'll improve the appearance of our cube by applying materials and textures. This also means that we'll be able to bring in the specular lighting that I so conveniently skipped over ;).


*Gouraud shading description and images inspired by the book 'Computer Graphics' by Hearn & Baker, Third international edition*

Sunday 29 April 2012

Adding Lighting: Basics

In order to add some basic lighting to the 3D model that I'm now able to load in, the lighting must first be initialised and then applied.

There are three main types of lights that can be simulated with OpenGL: directional, point and spotlight. We will talk about these and how to include them in the program later.

Initialising Lighting

Thankfully OpenGL does a lot of the lighting calculations for us. For every lit vertex in the scene, a lighting model known as the Blinn-Phong reflection model is used to calculate how much light the vertex is receiving from any light sources in the scene, and so OpenGL is able to decide on the colour at that vertex. A shading model called Gouraud shading is then applied to interpolate the lighting across the face, meaning that every pixel across the face is efficiently coloured based on the colour of the nearest vertices and its distance from them. This is how our 2D faces are lit and coloured. OpenGL does all this for us so we don't have to worry about it (not until shaders, I think, anyway!).

So how do we utilise this functionality? A simple call to glEnable(GL_LIGHTING) will turn on lighting for us and the corresponding glDisable(GL_LIGHTING), will turn it off. Every vertex sent to the hardware between these calls will be lit using the method described above. That's it!

There are a number of other options that can alter the appearance of the lighting but for now, this will do nicely.

Creating Lights

So now that we've told OpenGL we want to use lighting, how do we actually include this in our scene? After all, a lit scene with no lights isn't much use at all...

OpenGL allows us the use of 8 lights, with the symbolic names GL_LIGHT0, GL_LIGHT1, ..., GL_LIGHT7. In order to use these lights, we have to call glEnable like before. By default, all lights apart from light 0 are dark and their colours need to be set. Light 0, however, has a bright white light (1.0, 1.0, 1.0, 1.0) by default. We could just enable light 0 and use that, but it's much more fun to customise it to our needs...

The colour of a light in OpenGL is mainly defined by three components:

  • Ambient: In real life, light rays bounce around and illuminate every object they collide with, gradually losing energy; this causes a room, for example, to be filled with light. Modelling this efficiently in software, however, proves to be a very hard problem. To simplify matters, OpenGL simply applies a small amount of light to everything in the scene, somewhat simulating all the light rays bouncing around. This can be thought of as the "background lighting".


  • Diffuse: Many objects spread the light they receive in a uniform manner. When light interacts with such an object, it is scattered so that secondary rays appear to be reflected in all directions away from the surface, providing a nice even light. To approximate this effect, if an object is in range of a light, the calculations used to colour the surface rely heavily on the diffuse property of the light. Diffuse lighting can be thought of as the "colour" of the light.


  • Specular: Some objects do not reflect light in a uniform manner; they reflect light preferentially in a particular direction. This type of reflection is called specular reflection and occurs with "shiny" objects such as metals and mirrors. Typically we see this as a small white highlight. For objects in our scene that are shiny (more on this when we get to texturing the model), we need to set the colour of the light that will appear reflected in this shiny section of the material. This can be thought of as the colour of the "shininess" of the interaction between the light and the object.


Each of these colour properties is set using three values representing a mix of Red, Green and Blue, plus a fourth alpha component which comes into play when blending is used. All properties of a particular light are set using the glLight[f/i] and glLight[f/i]v functions (the f means that floats are supplied, the i integers). A typical light setting is applied in the example below:

 float ambientColour[4] = {0.2f, 0.2f, 0.2f, 1.0f};
 float diffuseColour[4] = {1.0f, 1.0f, 1.0f, 1.0f};
 float specularColour[4] = {1.0f, 1.0f, 1.0f, 1.0f};
 
 glLightfv(GL_LIGHT0, GL_AMBIENT, ambientColour);
 glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuseColour);
 glLightfv(GL_LIGHT0, GL_SPECULAR, specularColour);

The 'v' form of the glLight function means you are providing an array argument. This should be a float or integer array of size four, describing the RGBA values. This is enough to configure the colouring of our simple light!

Positioning the Light

The final step in creating this light is to specify where in the scene the light is placed. This step also states the type of light that we want to use. In order to position the light, we call glLight again, providing it with GL_POSITION this time. This again takes an array of size four, with the first three floats representing the x, y and z coordinates of a point in the world. What, then, is the fourth float used for?

The fourth float is where we finally get to specify the type of light that we want OpenGL to provide us with. As we said before, there are three types of light that we can utilise: directional, point and spotlight.

A directional light is a light that originates from an infinitely distant point. You supply a vector defining the direction from which the light shines towards the world origin. All rays from the light can then be thought of as being parallel to this direction vector - basically a wall of light. This light source approximates a large uniform light source, such as the sun.

As an example, we can define a directional light source with the following coordinates.

float position[4] = {-5.0f, 0.0f, 0.0f, 0.0f};

glLightfv(GL_LIGHT0, GL_POSITION, position);

While the coordinates {-5, 0, 0} define a direction, it is easier to think of them as the position of the light, with the vector extended into infinity. The following figure hopefully demonstrates the concept a little better. A sphere is located at the world origin {0, 0, 0}, another at {-10, 5, 0}. The light direction is defined as {-5, 0, 0}, denoted by the small red cube. The light then shines toward the origin parallel to this vector, from infinitely far away.



The second light type that we can include in our scene is the point light. Rather than being a global light like the directional light, the point light is a local light - a light-bulb, for example. Like the directional light, you define a point light using an array of size four; this time, however, the final member is 1 instead of 0, making it a point light. The other difference is that rather than defining a light direction, the first three members of the array define the point in the world where the light is placed. Think of it as positioning any other object in your scene.

float position[4] = {-5.0f, 0.0f, 0.0f, 1.0f};

glLightfv(GL_LIGHT0, GL_POSITION, position);

In this example, we can see that the lighting effect is different. Rather than emanating from an infinite point towards the world origin, the light emanates from the position of the light outwards in all directions.


The final type of light is the spotlight. The spotlight is a specialized form of the point light in that the arc from which light emanates is limited to a cone. We see this type of light in lamps and torches (flash-lights). Rather than modifying the final member of the position array, we leave that as 1 and instead call additional glLight functions.

We will call three additional functions: one specifying the direction in which the spotlight points and from which the light will emanate; one specifying the angle of the arc through which light will come; and a final one setting whether the light within the arc is concentrated in the middle of the beam or spread out uniformly.

 float spotDirection[3] = {1.0f, 0.0f, 0.0f};
 float spotArc = 45.0f;
 float spotCoefficient = 0.0f; // Ranges from 0 to 128.
 
 glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spotDirection);
 glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, spotArc);
 glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, spotCoefficient);

Here we have a spotlight that emanates light in the direction of the vector {1.0f, 0.0f, 0.0f}. We also define the arc to be 45 degrees. This value corresponds to the angle between the centre line of the cone and one of the edges of the cone. This means we have a cone of light, totalling 90 degrees. The final function defines a spot coefficient of 0. This value ranges from 0 to 128 with 0 being a uniform distribution of light and 128 being most of the light focused in the centre of the cone. By adding this spot light to the previous example, we get the following result.


As we can see, the other sphere is not lit, as it is outside the cone from which the spotlight emanates light.
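Conceptually, the cut-off test that decides this looks something like the following sketch (assuming both vectors are unit length; this is an illustration, not the driver's actual code):

```cpp
#include <cmath>

// A point is inside the spotlight's cone if the angle between the spot
// direction and the (unit) vector from the light to the point is no
// greater than the cut-off angle. Comparing cosines avoids an acos
// call: a larger cosine means a smaller angle.
bool insideSpotCone(float dirX, float dirY, float dirZ,
                    float toX, float toY, float toZ,
                    float cutoffDegrees){
    float cosAngle = dirX * toX + dirY * toY + dirZ * toZ;
    float cosCutoff = std::cos(cutoffDegrees * 3.14159265f / 180.0f);
    return cosAngle >= cosCutoff;
}
```

With a 45 degree cut-off, a point straight ahead of the light is inside the cone, while a point at 90 degrees to the spot direction is outside it and receives no light, exactly like the second sphere above.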

Organising the Code

With these elements, we can accomplish simple lighting. We can also utilise the position values or even use OpenGL's transformation functions (glTranslate, glRotate) to transform the lights and create interesting lighting within our scene. Some caution however, has to be taken as to when these calls are made.

Some of the function calls that set up the light can be made once at the beginning of the program and left until the end, or until you need to change them. These include enabling the light; setting the ambient, diffuse and specular colours; setting the spotlight arc angle and coefficient if a spotlight is to be used; and, provided lighting is to be used for every element in the scene, enabling the lighting itself. We can bundle this up into an initialise lighting function, called once at the beginning of the program.

Two calls must be made every time the scene is rendered, however. These are the call to glLight to position the light and, if a spotlight is used, the call to glLight to define the direction from which the light emanates. This is because these functions use the current model-view matrix to transform the light. The light will therefore be placed at different points in the scene depending on whether you make the calls before setting up your model-view matrix (e.g. using gluLookAt()) or after. (This is the difference between the light being positioned in world coordinates or eye coordinates. Read this if this is not clear. I may write about the coordinate system myself sometime.) For most purposes, you will probably want to make these calls after setting up the model-view matrix, so the light stays in a fixed position in the world rather than moving with the camera.

I have provided some example code of a program that introduces simple lighting into a scene.

#include <GL/gl.h>
#include <GL/glut.h>

float playerX = 0.0f;
float playerY = 0.0f;
float playerZ = 15.0f;

/*
* Initialise the data used
* for creating our light.
*/

float ambient[4] = {0.2f, 0.2f, 0.2f, 1.0f};
float diffuse[4] = {1.0f, 1.0f, 1.0f, 1.0f};
float specular[4] = {1.0f, 1.0f, 1.0f, 1.0f};

float position[4] = {-5.0f, 0.0f, 0.0f, 1.0f};

float spotDirection[3] = {1.0f, 0.0f, 0.0f};
float spotArc = 45.0f;
float spotCoefficient = 0.0f; // Ranges from 0 to 128.

void positionLight(){
 
 /*
  * Tell OpenGL where we want our light
  * placed and since we're creating a spotlight,
  * we need to set the direction from which
  * light emanates.
  */
 glLightfv(GL_LIGHT0, GL_POSITION, position);
 glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spotDirection);
 
}

void display(){

 glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

 glLoadIdentity();

 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

 gluLookAt(playerX, playerY, playerZ, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);
 
 /*
  * Tell OpenGL we want all the following
  * objects in our scene to have lighting
  * applied to them.
  */
 glEnable(GL_LIGHTING);
 
 /*
  * Position the lights AFTER the model View matrix
  * has been set up.
  */
 positionLight();
 
 glutSolidSphere(1.0f, 50, 50);
 
 glPushMatrix();
 
 glTranslatef(-7.0f, 5.0f, 0.0f);
 
 glutSolidSphere(1.0f, 50, 50);
 
 glPopMatrix();
 
 /*
  * We don't need the lighting anymore
  * so disable it.
  */
 glDisable(GL_LIGHTING);
 
 glPushMatrix();
 
 glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
 
 /*
  * create a small red cube where the light
  * is.
  */
 glTranslatef(position[0], position[1], position[2]);
 
 glutSolidCube(0.2f);
 
 glPopMatrix();

 glutSwapBuffers();

}

void reshape(int x, int y){

 glMatrixMode(GL_PROJECTION);

 glViewport(0, 0, x, y);

 glLoadIdentity();

 gluPerspective(60.0, (GLdouble)x / (GLdouble)y, 1.0, 100.0);

 glMatrixMode(GL_MODELVIEW);

}

void initLighting(){
 
 /*
  * Tell OpenGL we want to use
  * the first light, GL_LIGHT0.
  */
 glEnable(GL_LIGHT0);
 
 /*
  * Set the ambient, diffuse and specular
  * colour properties for LIGHT0.
  */
 glLightfv(GL_LIGHT0, GL_AMBIENT, ambient);
 glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
 glLightfv(GL_LIGHT0, GL_SPECULAR, specular);
 
 /*
  * We're going to make GL_LIGHT0 a spotlight.
  * Set the angle of the cone of light and
  * how uniform the dispersion of light is.
  */
 glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, spotArc);
 glLightf(GL_LIGHT0, GL_SPOT_EXPONENT, spotCoefficient);
 
}

int main(int argc, char **argv){

 glutInit(&argc, argv);
 glutInitDisplayMode(GLUT_DOUBLE | GLUT_DEPTH | GLUT_RGBA);
 
 glutInitWindowSize(500, 500);

 glutCreateWindow("Adding Lighting");

 glutDisplayFunc(display);
 glutReshapeFunc(reshape);

 glEnable(GL_DEPTH_TEST);
 
 /*
  * setup the lighting once
  * in our program.
  */
 initLighting();

 glutMainLoop();

 return 0;

}


You may note that using this example to light your own models will not work; this is why I have used GLUT to draw some spheres for us. The reason is that no normals or materials have been set for your objects, whereas the GLUT routines include them. More on normals and materials and their importance in lighting a scene in the next few posts.