glPushMatrix();
glBegin(GL_TRIANGLES);
    /* This is the bottom face */
    glVertex3f(1.0f, -1.0f, -1.0f);
    glVertex3f(1.0f, -1.0f, 1.0f);
    glVertex3f(-1.0f, -1.0f, 1.0f);
    glVertex3f(1.0f, -1.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f, 1.0f);
    glVertex3f(-1.0f, -1.0f, -1.0f);

    //top face
    glVertex3f(1.0f, 1.0f, -1.0f);
    glVertex3f(-1.0f, 1.0f, -1.0f);
    glVertex3f(-1.0f, 1.0f, 1.0f);
    glVertex3f(1.0f, 1.0f, -1.0f);
    glVertex3f(-1.0f, 1.0f, 1.0f);
    glVertex3f(1.0f, 1.0f, 1.0f);

    //right face
    glVertex3f(1.0f, -1.0f, -1.0f);
    glVertex3f(1.0f, 1.0f, -1.0f);
    glVertex3f(1.0f, 1.0f, 1.0f);
    glVertex3f(1.0f, -1.0f, -1.0f);
    glVertex3f(1.0f, 1.0f, 1.0f);
    glVertex3f(1.0f, -1.0f, 1.0f);

    //front face
    glVertex3f(1.0f, -1.0f, 1.0f);
    glVertex3f(1.0f, 1.0f, 1.0f);
    glVertex3f(-1.0f, 1.0f, 1.0f);
    glVertex3f(1.0f, -1.0f, 1.0f);
    glVertex3f(-1.0f, 1.0f, 1.0f);
    glVertex3f(-1.0f, -1.0f, 1.0f);

    //left face
    glVertex3f(-1.0f, -1.0f, 1.0f);
    glVertex3f(-1.0f, 1.0f, 1.0f);
    glVertex3f(-1.0f, 1.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f, 1.0f);
    glVertex3f(-1.0f, 1.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f, -1.0f);

    //back face
    glVertex3f(1.0f, 1.0f, -1.0f);
    glVertex3f(1.0f, -1.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f, -1.0f);
    glVertex3f(1.0f, 1.0f, -1.0f);
    glVertex3f(-1.0f, -1.0f, -1.0f);
    glVertex3f(-1.0f, 1.0f, -1.0f);
glEnd();
glPopMatrix();
The following pictures compare the result of drawing this cube with the result of using the cube provided by GLUT, each modelled using our previous lighting system.
[Images: our cube (left) and the GLUT cube (right)]
Both cubes are drawn in the exact same environment, using the exact same lighting, yet here we see completely different lighting results.
Surface Normals
The GLUT cube is lit correctly whilst ours is not. This is because the GLUT cube defines normals along with its vertices. A normal is a vector that is perpendicular to a plane; a surface normal is a normal that is perpendicular to the plane that a surface lies on. For example, each side of our cube is made up of two triangles, and each of these has its own surface normal. To illustrate this, here is our cube shown in Blender with its surface normals displayed.
The surface normals tell us the direction each surface is facing. OpenGL uses this information to calculate how much light from the light source the surface receives. It does this by taking the direction of the light and the surface normal and calculating the angle between them using the dot product operation. The smaller the angle between the two vectors, the more light the surface receives. Because our cube provides no normal information, OpenGL has nothing to work with and so we get incorrect results.
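To make the dot product idea concrete, here is a rough sketch of the diffuse calculation (illustrative only, not OpenGL's actual internal code). With unit vectors, the dot product is the cosine of the angle between them, so a smaller angle gives a larger value and a brighter surface.

/* Illustrative sketch of the diffuse term - not OpenGL's implementation.
 * Assumes both vectors are unit length. */
typedef struct { float x, y, z; } Vec3;

float diffuseIntensity(Vec3 surfaceNormal, Vec3 directionToLight){
    /* dot product of two unit vectors = cosine of the angle between them */
    float cosAngle = surfaceNormal.x * directionToLight.x
                   + surfaceNormal.y * directionToLight.y
                   + surfaceNormal.z * directionToLight.z;
    /* surfaces facing away from the light receive no diffuse light */
    return cosAngle > 0.0f ? cosAngle : 0.0f;
}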
So how do we go about calculating the surface normals? I covered this in a previous post, but briefly: the vector operation called the cross product is performed on two edge vectors built from the three vertices of each face, which returns a vector perpendicular to the surface. The following code accomplishes this.
Vector3 crossProduct(Vector3 v1, Vector3 v2){
    Vector3 cross = {v1.y * v2.z - v1.z * v2.y,
                     v1.z * v2.x - v1.x * v2.z,
                     v1.x * v2.y - v1.y * v2.x};
    return cross;
}

Vector3 getSurfaceNormal(Vector3 v1, Vector3 v2, Vector3 v3){
    /*
     * obtain vectors between the coordinates of the triangle.
     */
    Vector3 polyVector1 = {v2.x - v1.x, v2.y - v1.y, v2.z - v1.z};
    Vector3 polyVector2 = {v3.x - v1.x, v3.y - v1.y, v3.z - v1.z};

    Vector3 cross = crossProduct(polyVector1, polyVector2);
    normalize(cross);
    return cross;
}
You will notice that after calculating this perpendicular vector, the normalize function is called on it before it is returned, which produces a unit vector. A unit vector is simply a vector whose magnitude is 1. The process of normalising a vector is described here. (If these concepts are confusing, reading up on the basics of vector mathematics is strongly recommended before continuing to learn OpenGL.) When providing normals, we always provide unit vectors, as OpenGL expects this because it simplifies its calculations. Failing to do so will result in strange lighting!
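In case normalize is unfamiliar, here is a minimal sketch of what that function might look like, assuming Vector3 is a simple struct holding x, y and z floats (the exact implementation depends on your own vector code):

#include <math.h>

/* Scales a vector so that its magnitude becomes 1.
 * A sketch of the normalize() call used above - adapt to your own code. */
void normalize(Vector3 &v){
    float magnitude = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    if(magnitude > 0.0f){
        v.x /= magnitude;
        v.y /= magnitude;
        v.z /= magnitude;
    }
}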
Because our cube is very simple, we can actually just mentally calculate the normals and write them directly into the code. The following example now attaches normals to our cube.
void drawCube(){
    glPushMatrix();
    glBegin(GL_TRIANGLES);
        /* This is the bottom face */
        glNormal3f(0.0f, -1.0f, 0.0f);
        glVertex3f(1.0f, -1.0f, -1.0f);
        glVertex3f(1.0f, -1.0f, 1.0f);
        glVertex3f(-1.0f, -1.0f, 1.0f);
        glVertex3f(1.0f, -1.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f, 1.0f);
        glVertex3f(-1.0f, -1.0f, -1.0f);

        //top face
        glNormal3f(0.0f, 1.0f, 0.0f);
        glVertex3f(1.0f, 1.0f, -1.0f);
        glVertex3f(-1.0f, 1.0f, -1.0f);
        glVertex3f(-1.0f, 1.0f, 1.0f);
        glVertex3f(1.0f, 1.0f, -1.0f);
        glVertex3f(-1.0f, 1.0f, 1.0f);
        glVertex3f(1.0f, 1.0f, 1.0f);

        //right face
        glNormal3f(1.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, -1.0f, -1.0f);
        glVertex3f(1.0f, 1.0f, -1.0f);
        glVertex3f(1.0f, 1.0f, 1.0f);
        glVertex3f(1.0f, -1.0f, -1.0f);
        glVertex3f(1.0f, 1.0f, 1.0f);
        glVertex3f(1.0f, -1.0f, 1.0f);

        //front face
        glNormal3f(0.0f, 0.0f, 1.0f);
        glVertex3f(1.0f, -1.0f, 1.0f);
        glVertex3f(1.0f, 1.0f, 1.0f);
        glVertex3f(-1.0f, 1.0f, 1.0f);
        glVertex3f(1.0f, -1.0f, 1.0f);
        glVertex3f(-1.0f, 1.0f, 1.0f);
        glVertex3f(-1.0f, -1.0f, 1.0f);

        //left face
        glNormal3f(-1.0f, 0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 1.0f);
        glVertex3f(-1.0f, 1.0f, 1.0f);
        glVertex3f(-1.0f, 1.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f, 1.0f);
        glVertex3f(-1.0f, 1.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f, -1.0f);

        //back face
        glNormal3f(0.0f, 0.0f, -1.0f);
        glVertex3f(1.0f, 1.0f, -1.0f);
        glVertex3f(1.0f, -1.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f, -1.0f);
        glVertex3f(1.0f, 1.0f, -1.0f);
        glVertex3f(-1.0f, -1.0f, -1.0f);
        glVertex3f(-1.0f, 1.0f, -1.0f);
    glEnd();
    glPopMatrix();
}
Normals are added by making a call to glNormal3f and specifying the normal's components. OpenGL then uses this normal in the lighting calculations for any following vertices declared with glVertex3f, until either a new normal is declared or the drawing ends. Because OpenGL now knows the correct normals, we achieve the correct lighting, exactly the same as the GLUT cube.
Vertex Normals
Vertex normals are similar to surface (face) normals. Whereas a surface normal is a vector perpendicular to the polygon's surface, a vertex normal has no single geometric definition: it is typically an average of the surface normals of the faces that share the vertex, and it is used to produce more realistic, less blocky lighting.
One problem with using surface normals is that the lighting for each surface is calculated from its normal and then applied to every pixel of that surface. Each surface therefore has a uniform colour, which makes the edges between surfaces appear distinct and the model look blocky.
This is due to the way OpenGL calculates lighting: using the Gouraud shading model. The Gouraud method first requires that each vertex has its own normal. A vertex normal is calculated by averaging the surface normals of every surface that the vertex is a member of.
Once it has these normals, OpenGL uses the Blinn-Phong reflection model to calculate how much light is received at each vertex. Gouraud shading then interpolates these values along the edges of the polygon, taking the lighting calculated at the vertices at either end of an edge and varying it according to how far the current point is from each vertex. Once the lighting values have been interpolated along the edges, the process is repeated for the pixels inside the polygon, this time interpolating between the values just calculated at the edges according to how close each pixel is to them. Hopefully the following diagram, showing the interpolation of interior points, makes it a little clearer (I represents the light intensity value at a point).
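In code terms, the interpolation step is essentially a weighted blend (a sketch of the idea, not OpenGL's actual implementation):

/* Linearly interpolates a light intensity along an edge or scanline.
 * t is how far the current point is between the two ends (0.0 to 1.0).
 * Illustrative only - OpenGL performs this per pixel during rasterisation. */
float interpolateIntensity(float intensityA, float intensityB, float t){
    return intensityA + t * (intensityB - intensityA);
}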
This is the cause of the blocky lighting seen above. When we set only the surface normal for a polygon, each vertex normal is just that surface normal. The lighting value at each vertex is therefore the same, and thus the interpolated values across the entire surface are the same, so there are no smooth transitions between polygons (no smooth shading).
One way around this is to define, within our program, a normal for each vertex in the model. To do this we look at the normals of each face that uses the vertex, called its adjacent faces. The normal for the vertex is then calculated as the average of those surface normals, as described above. This averages the lighting between the surfaces so that the edges between them no longer stand out.
[Images: cube shown with face normals (left) and with vertex normals (right)]
Hopefully it is clear from the pictures that at the edges of the cube, the vertex normals point out at a 45 degree angle. This is because each one is a combination of the face normal pointing upwards and the face normal pointing outwards.
In order to create the vertex normals, we run through each face of the model and, for each vertex making up that face, mark that face as an adjacent face of the vertex. After this initial pass, we run through each vertex and work out the average of the surface normals of its marked adjacent faces. Example code is presented below.
void WavefrontOBJModel::calculateNormals(){
    /* First pass: calculate each face's surface normal and record the
     * face as adjacent to each of its vertices. */
    for(int i = 0; i < numberOfFaces; i++){
        WavefrontOBJFace * face = &modelFaces.at(i);

        int vertexIndex1 = face->getVertexIndices().at(0);
        int vertexIndex2 = face->getVertexIndices().at(1);
        int vertexIndex3 = face->getVertexIndices().at(2);

        WavefrontOBJVertex * vertex1 = &modelVertices.at(vertexIndex1);
        WavefrontOBJVertex * vertex2 = &modelVertices.at(vertexIndex2);
        WavefrontOBJVertex * vertex3 = &modelVertices.at(vertexIndex3);

        vertex1->addAdjacentFace(i);
        vertex2->addAdjacentFace(i);
        vertex3->addAdjacentFace(i);

        CVector3 v1 = *vertex1->getCoords();
        CVector3 v2 = *vertex2->getCoords();
        CVector3 v3 = *vertex3->getCoords();

        CVector3 cross1 = v2 - v1;
        CVector3 cross2 = v3 - v1;

        float normalX = cross1.y * cross2.z - cross1.z * cross2.y;
        float normalY = cross1.z * cross2.x - cross1.x * cross2.z;
        float normalZ = cross1.x * cross2.y - cross1.y * cross2.x;

        CVector3 normal (normalX, normalY, normalZ);
        normal.normalize();

        face->setSurfaceNormal(normal);
    }

    /* Second pass: average the surface normals of each vertex's
     * adjacent faces to produce its vertex normal. */
    for(int i = 0; i < numberOfVertices; i++){
        WavefrontOBJVertex * v = &modelVertices.at(i);
        int numAdjacentFaces = v->getAdjacentFaces().size();

        float xNormalTotal = 0.0f;
        float yNormalTotal = 0.0f;
        float zNormalTotal = 0.0f;

        for(int j = 0; j < numAdjacentFaces; j++){
            int faceIndex = v->getAdjacentFaces().at(j);
            CVector3 faceNormal = *modelFaces.at(faceIndex).getSurfaceNormal();

            xNormalTotal += faceNormal.x;
            yNormalTotal += faceNormal.y;
            zNormalTotal += faceNormal.z;
        }

        CVector3 newVertexNormal (xNormalTotal, yNormalTotal, zNormalTotal);
        newVertexNormal.normalize();

        v->setNormal(newVertexNormal);
    }
}
Now that we have a normal for each vertex, we simply make a call to glNormal3f for every glVertex3f call that we make, rather than setting it once for each face.
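Putting that together, the drawing code might look something like the following sketch. getVertexIndices and getCoords appear in the earlier example; getNormal is assumed here as the counterpart to setNormal and may differ in your own model class.

void WavefrontOBJModel::draw(){
    glBegin(GL_TRIANGLES);
    for(int i = 0; i < numberOfFaces; i++){
        WavefrontOBJFace * face = &modelFaces.at(i);
        for(int j = 0; j < 3; j++){
            int vertexIndex = face->getVertexIndices().at(j);
            WavefrontOBJVertex * vertex = &modelVertices.at(vertexIndex);

            CVector3 normal = *vertex->getNormal(); /* assumed accessor */
            CVector3 coords = *vertex->getCoords();

            /* one normal per vertex rather than one per face */
            glNormal3f(normal.x, normal.y, normal.z);
            glVertex3f(coords.x, coords.y, coords.z);
        }
    }
    glEnd();
}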
Strange Lighting: A Caveat
We're nearly at our goal of achieving smooth shading for our models. There is however one problem: if we run our cube example with calculated vertex normals included this time, this is what we get:
What gives!? We went through all the trouble of calculating and defining our vertex normals and it's worse than it was before!
The reason we get this cool but incorrect bevel-type effect at the edges is that Gouraud shading is intended for shading curved surfaces; it does not handle sharp edges well. Consider a light shining directly down on the top of the cube. The vertex normals along the top edges are an average of the surface normal of the top face and the surface normals of the side faces. With face normals alone, the top would be fully lit and the sides completely dark, since the angle between the side normals and the light is 90 degrees. With the averaged vertex normals, we instead get a smooth transition from light to dark across the edge, which is not correct for a sharp edge.
After racking my brains for a while and asking a few questions on the interwebs, the solution is that because Gouraud shading only works well on smooth surfaces, we need to break the connection between the edge vertices - we need to stop them averaging surface normals that are separated by a sharp edge. The easy way to do this is simply to duplicate the vertices at every sharp edge in the model. The faces on either side of a sharp edge then no longer share vertices, so they are not treated as adjacent faces and are not averaged together when calculating vertex normals. I'm sure this can be done programmatically, however it is much easier to ask (read: demand) that your artist provide you with models where vertices are duplicated at sharp edges. I'm sure this is standard practice, but I'm no artist so I can't say for sure! Besides, this can easily be accomplished in Blender using the edge split modifier.
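For what it's worth, if you did want to handle this programmatically, one possible approach (a sketch only, not code from this project) is to compare face normals while averaging and skip adjacent faces that lie on the other side of a sharp edge, based on a chosen crease angle:

/* Sketch: when averaging adjacent face normals for a vertex, only include
 * faces whose normal is within a chosen crease angle of a reference face's
 * normal. cosCreaseAngle is the cosine of that angle, e.g.
 * cosf(30.0f * 3.14159f / 180.0f). Assumes unit-length normals. */
bool facesAreSmooth(const CVector3 &normalA, const CVector3 &normalB,
                    float cosCreaseAngle){
    float dot = normalA.x * normalB.x
              + normalA.y * normalB.y
              + normalA.z * normalB.z;
    /* if the angle between the two normals exceeds the crease angle,
     * the edge between the faces is sharp and they should not be averaged */
    return dot >= cosCreaseAngle;
}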
Improve Lighting: Add Polygons
Another common problem with lighting is that the models being lit simply don't have enough polygons. A cube with only twelve triangles will not look very good when put under the lights. Because there are so few vertices, the lighting values calculated at a particular vertex have to be interpolated across too large a distance, producing poor results. Of course, increasing the number of vertices hurts performance, so balancing the two is a bit of an art. To illustrate the effect of having too few polygons, compare the lighting results of using a spotlight to light our cube with 12 polygons and our cube with 46343 polygons (far too many, but it demonstrates the effect). Thankfully, Blender makes it very easy to subdivide our object.
[Images: cube with 12 polygons (left) and with 46343 polygons (right)]
As we can see, the low polygon cube is not lit at all. This is because the spotlight's cone doesn't reach even one of its 24 vertices!
Next time we'll improve the appearance of our cube by applying materials and textures. This also means that we'll be able to bring in the specular lighting that I so conveniently skipped over ;).
*Gouraud shading description and images inspired by the book 'Computer Graphics' by Hearn & Baker, Third international edition*