Graphics Engine
What Is a Graphics Engine?
Definition
A graphics engine is any software that provides the capability to draw objects to the screen. Objects can be anything from triangles to complex meshes (which are generally made of triangles anyways...).
What's The Point?
Why would we want a graphics engine rather than writing the code to draw everything from scratch? Well, it provides a number of advantages.
However, in our case the purpose of the engine is to learn about graphics rather than to implement them well. The following points explain the advantages of a properly implemented graphics engine, but not what we will be doing:
- Performance
- the engine is very good at what it does because it's been optimised to do it
- Efficiency
- if you don't constantly have to worry about rendering/drawing objects correctly it frees up time for development
- Correctness
- you can (or rather, hope you can) trust that the implementation of the graphics engine is more robust than something you can write in a timely manner (fewer bugs, fewer edge cases)
So What Are We Trying To Achieve?
To learn about graphics, and the underlying maths. To be absolutely specific we will learn about how objects are rendered, and the underlying computation, but not the graphics pipeline itself.
Our engine will run all the code on the CPU, using only a single thread. As such the performance will be abysmal since we are not taking advantage of the graphics components in our computer (the GPU). If you would like to learn about the graphics pipeline instead, this is a good start and explains how the graphics pipeline is optimized and leveraged in order to squeeze out every ounce of performance.
Omitted Features
There are many features that we will not include as they either won't teach us much or require a disproportionate amount of work:
- Colour
- Rasterization
- Interpolation
- Specular Lighting
(Rasterization, interpolation, and specular lighting all require us to write our own algorithm to render triangles, and that will definitely not be able to run in real time on the CPU. I will opt to use SFML in order to render triangles instead).
What Do I Need To Know?
It's hard to go over what the pre-requisites are while dodging all the needed terminology. If you see anything that you don't know the meaning of (yet):
- It will be explained soon
- It probably means what it sounds like anyways
Vectors
Vectors are an important concept because they can simultaneously represent direction and distance, which are definitely two things we need to know if we are to figure out where to draw things.
Point Vectors
Point vectors represent a position relative to some assumed position (normally the origin, though they can represent the displacement between any two objects). These are vital in describing where objects are located. For example if a sphere is in front of the camera and above it, then its coordinate in camera space might be something like $(0, 1, 1)$ (more on this later).
Rays
Rays are like point vectors but they don't represent the relative position of two things, they simply represent a direction. For example if a ray of light is travelling upwards and to the right then the vector representing it might be $(1, 1)$ (though they are often normalized).
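To make the two uses concrete, here is a minimal sketch of a 3D vector type that can serve both as a point vector and as a ray (the name `Vec3` and its exact interface are my own choices, not fixed by anything above):

```cpp
#include <cmath>

// A minimal 3D vector, usable both as a point (position) and as a ray (direction).
struct Vec3 {
    float x = 0, y = 0, z = 0;

    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float k) const { return {x * k, y * k, z * k}; }

    // The distance a point vector covers (or the magnitude of a ray).
    float length() const { return std::sqrt(x * x + y * y + z * z); }

    // Same direction, length 1: a pure direction.
    Vec3 normalized() const { return *this * (1.0f / length()); }
};
```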
Matrices
Matrices are vectors of vectors (drawn as a square of numbers) that can represent transformations in space. For example if we rotate an object, that rotation can be represented by a rotation matrix. They are used to convert the position of vertices from world space to camera space. They can also be used to manipulate objects in general.
Their most useful property is composition: if we have two matrices $A$ and $B$ that represent two separate transformations, their product $AB$ represents applying the transformation $B$, and then applying the transformation $A$ (read right to left).
4-dimensional ($4 \times 4$) matrices are used in order to represent translations.
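Since composition is the property we lean on most, here is a sketch of a $4 \times 4$ matrix type with multiplication (again, `Mat4` and its row-based layout are my own assumptions, building on the `Vec3` above):

```cpp
#include <array>

// A 4x4 matrix stored as 4 rows of 4 floats.
struct Mat4 {
    std::array<std::array<float, 4>, 4> m{};

    static Mat4 identity() {
        Mat4 r;
        for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
        return r;
    }

    // Composition: (a * b) applies b first, then a (read right to left).
    Mat4 operator*(const Mat4& b) const {
        Mat4 r;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r.m[i][j] += m[i][k] * b.m[k][j];
        return r;
    }
};
```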
Vectorspaces
Vectorspaces are a way to think about objects and transformations. When teaching the maths behind graphics it is generally conceptualized as transformations on the vectorspace rather than the objects being transformed. For example if we scale an object by a factor of $2$ (make it twice as big) that is equivalent to doubling the distance of every point to the origin (multiplying all vectors in the space by $2$).
Transformations
You should know about a few transformations when implementing a graphics engine:
- Translation
- Rotation
- Scaling
Background Terminology
There is some background terminology that is important to cover so the next few sections are as clear as possible.
Primitive
A simple object (shape) that can be directly rendered (such as a line, or a point). We will be solely using triangles as our primitive. This is due to a few useful properties that they have, which are not guaranteed by other shapes.
Vertex
Exactly what it means in English: it's simply a boundary point on an object.
Fragment
This term is almost interchangeable with pixel. We won't deal with fragments directly since we are only implementing rendering at the vertex level, but a fragment is the smallest unit within a primitive that is distinct (has its own lighting).
Mesh
A collection of faces, normally primitives (quadrilaterals or triangles). Meshes can have more data than this, for example including the normal data (normal map), texture mappings (UV map) etc.
We will only focus on the vertices.
Space
A vectorspace centered at a given position (world space has an arbitrary centre, camera space is centered on the camera, screen space is centered in the top left of the screen).
Matrix Operations
Matrix operations are composed in order to place objects at their desired position on the screen.
Matrix Structure
Matrices are assumed to be constructed with row vectors for our purposes (though it would work just as well with column vectors). We will be using affine matrices, which are a type of matrix that can represent and compose translation as well as the other operations.
The structure of the affine matrix will be explained separately in due time. For now we can simply ignore the $4$th row and $4$th column.
Inverse Transformations
Each matrix implies a transformation, but there also exists an inverse matrix that undoes this transformation. Inverting matrices is not a trivial problem, so for speed and clarity we will construct each inverse individually.
Translation
Translation (also sometimes called displacement by physics majors) is a transformation that takes an object and moves it in space. It preserves everything about the object except for its position. All angles, rotations, etc. are conserved.
Translation Matrix
Instead of talking about the full $4 \times 4$ matrix for translation we will talk about a column vector (a $4 \times 1$ matrix). This simply reduces the amount of clutter and simplifies the notation.
Our translation matrix for a given translation $(x, y, z)$ will encode how many units in each cardinal direction we are to move the object:

$$t = \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$$

The $1$ at the end is simply so that when it comes to the composition of the matrices we can combine it into a $4 \times 4$ matrix. It is called the perspective scale.
Inverse Translation Matrix
The inverse of the translation matrix is easy to construct, you simply negate all the components (but not the perspective scale):

$$t^{-1} = \begin{pmatrix} -x \\ -y \\ -z \\ 1 \end{pmatrix}$$
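As a sketch of how this looks in code, using the `Mat4` from earlier (placing the offsets in the last column of the full affine matrix is my assumed layout):

```cpp
// Affine translation by (x, y, z): the offsets live in the last column.
Mat4 translation(float x, float y, float z) {
    Mat4 t = Mat4::identity();
    t.m[0][3] = x;
    t.m[1][3] = y;
    t.m[2][3] = z;
    return t;
}

// The inverse simply negates the offsets (the perspective scale stays 1).
Mat4 inverseTranslation(float x, float y, float z) {
    return translation(-x, -y, -z);
}
```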
Rotation
The rotation matrix that represents the desired rotation can be constructed by combining separate rotations, one on each axis.
Euler Angles
Euler angles are the name for the angles of rotation about each axis. They will be written as $(\theta_x, \theta_y, \theta_z)$. These angles encode every possible 3D rotation, but each rotation does not correspond to a unique set of Euler angles, and so they lead to a problem called gimbal lock where one degree of freedom is lost in rotation.
Gimbal Lock
When the axes align in a particular arrangement it can give rise to a problem called gimbal lock where rotation cannot be applied in an arbitrary direction any longer. This happens when two directions ('gimbals') are parallel and therefore moving either of them would result in the same axis of rotation. For example consider the case that a camera is pointing straight up:
- If it rotates on the $x$ axis it will shift the view downwards
- If it rotates on the $y$ axis it will turn the view in place
- If it rotates on the $z$ axis it will turn the view in place
- The $y$ and $z$ axis rotations now result in the same outcome, leaving only 2 possible axes of rotation
Quaternions
Quaternions are four dimensional numbers with three imaginary components and one real component. They are a more robust (and efficient) way to represent rotations in 3D space. However they can be tedious to implement and test, and this project has neither animations, nor a huge need to solve gimbal lock.
Since these are the two main advantages of quaternions I will not be implementing them. If you wish to learn more about what they are, I suggest this video by 3Blue1Brown.
Rotation Matrix
The rotation matrix $R$ corresponds to the rotation of the axes. It can be represented as the composition of the rotation of each axis separately (where $R_a$ is the rotation about the axis $a$), like so:

$$R = R_z R_y R_x$$
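In code this could be sketched as below, using the standard axis rotation matrices (the composition order matches the formula above; any fixed, consistent order works):

```cpp
#include <cmath>

// Rotation about the x axis by angle a (radians).
Mat4 rotationX(float a) {
    Mat4 r = Mat4::identity();
    r.m[1][1] = std::cos(a); r.m[1][2] = -std::sin(a);
    r.m[2][1] = std::sin(a); r.m[2][2] =  std::cos(a);
    return r;
}

// Rotation about the y axis.
Mat4 rotationY(float a) {
    Mat4 r = Mat4::identity();
    r.m[0][0] =  std::cos(a); r.m[0][2] = std::sin(a);
    r.m[2][0] = -std::sin(a); r.m[2][2] = std::cos(a);
    return r;
}

// Rotation about the z axis.
Mat4 rotationZ(float a) {
    Mat4 r = Mat4::identity();
    r.m[0][0] = std::cos(a); r.m[0][1] = -std::sin(a);
    r.m[1][0] = std::sin(a); r.m[1][1] =  std::cos(a);
    return r;
}

// The full rotation from Euler angles, composed right to left.
Mat4 rotation(float ax, float ay, float az) {
    return rotationZ(az) * rotationY(ay) * rotationX(ax);
}
```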
Inverse Rotation Matrix
Since rotation matrices are orthogonal, the inverse of a rotation matrix is simply its transpose: $R^{-1} = R^T$.
Scaling
Scaling is the only transformation that isn't required in order to convert from camera space to world space, though it is certainly a useful transformation and will therefore be covered.
Scaling Matrix
To scale by a factor $k$ where $I_3$ is the 3D identity matrix:

$$S = \begin{pmatrix} kI_3 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} k & 0 & 0 & 0 \\ 0 & k & 0 & 0 \\ 0 & 0 & k & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Inverse Scaling Matrix
To undo a scaling by $k$, simply scale by the reciprocal: $S_k^{-1} = S_{1/k}$.
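Both directions are one-liners on top of the `Mat4` sketched earlier (same assumed layout as before):

```cpp
// Uniform scaling by a factor k (the perspective scale stays 1).
Mat4 scaling(float k) {
    Mat4 s = Mat4::identity();
    s.m[0][0] = s.m[1][1] = s.m[2][2] = k;
    return s;
}

// The inverse scales by the reciprocal.
Mat4 inverseScaling(float k) { return scaling(1.0f / k); }
```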
Composition
To rotate an object about a point $p$ (can be the object's centre point to rotate in place) by a vector of Euler angles $\theta$ one must apply the inverse translation first:

$$T_p R_\theta T_p^{-1}$$

Scaling must be done with the object at the origin, so with the centre of the object at $c$ and the scaling factor $k$:

$$T_c S_k T_c^{-1}$$

Translation can be done in isolation at the end by some displacement $d$.

Combining all transformations:

$$T_d \, (T_p R_\theta T_p^{-1}) \, (T_c S_k T_c^{-1})$$

Let's represent this composite transformation matrix by $M$.
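Taking $p = c$ as the object's centre, a sketch of $M$ built from the helper functions above (`modelMatrix` is a hypothetical name, not a fixed API):

```cpp
// Compose the full model matrix, read right to left: move the centre to the
// origin, scale, rotate, move back, then apply the final displacement.
Mat4 modelMatrix(Vec3 centre, float k, Vec3 euler, Vec3 displacement) {
    return translation(displacement.x, displacement.y, displacement.z)
         * translation(centre.x, centre.y, centre.z)
         * rotation(euler.x, euler.y, euler.z)
         * scaling(k)
         * translation(-centre.x, -centre.y, -centre.z);
}
```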
Camera
View Frustum
The viewing frustum is the region the camera can see. It is defined by a width ($w$), a height ($h$), a focal length ($f$, which defines the near clipping plane), and a field of view ($\theta$) that defines the angle of vision.
All objects closer than the clipping plane are invisible, so that extremely small but close objects do not obstruct larger objects in the scene.
Perspective Projection
The perspective projection is a type of projection that tries to adhere to how optics works in reality. It is not an accurate model of the objects it presents but aims to emulate real life vision.
Its construction is sketched below.
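A minimal pinhole-style projection matrix built from the focal length $f$ could look like the following (this particular form is an assumption on my part, chosen so that the perspective divide described later gives $x' = fx/z$ and $y' = fy/z$):

$$P = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$

Multiplying a point $(x, y, z, 1)$ by $P$ gives $(fx, fy, z, z)$, and dividing through by the final ($w$) component makes distant points converge towards the centre of the view.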
Orthographic Projection
Orthographic projection does not aim for realism but instead accuracy. It more accurately represents distances and axes and is often used in engineering or 3D modelling. There are definitely use cases for orthographic projection in graphics (isometric games for example) but I will not be covering them.
World Space
World space means the space that contains the 'absolute' position of all the objects. The motion of any object doesn't change the world coordinates of any other object. This is the default representation of all coordinates before they are transformed into other spaces.
Camera Space
Camera space means the space that contains all positions relative to the camera. The camera is the origin in camera space, while all objects have coordinates based on where they are relative to the camera.
Screen Space
Screen space represents the coordinates on the screen, where $(0, 0)$ is the top left and $(1, 1)$ is the bottom right. It provides an easy way to draw 2D primitives on the screen in order to represent 3D shapes.
Transformation of The Camera
To transform the camera, maintain a translation vector and the Euler angles. When translating, simply add to the components of the translation vector by projecting the forward vector of the camera onto the movement planes; when rotating, simply increment the Euler angle being rotated on.
Non-axis-aligned rotations may look awkward, but solving that properly requires quaternions.
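As a sketch of the translation half (deriving the horizontal forward direction from the yaw angle alone is an assumption here, matching the axis-aligned behaviour described above):

```cpp
#include <cmath>

// Move the camera forwards by amount d: project the forward vector onto the
// horizontal plane using the yaw angle, then add to the camera's position.
void moveForward(Vec3& position, float yaw, float d) {
    position.x += std::sin(yaw) * d;
    position.z += std::cos(yaw) * d;
}
```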
Converting to Screen Space
Compute the transformation matrix $M$ and the perspective projection matrix $P$. Take some vector $v$ that you would like to convert into screen space, and extend it to a $4$ component vector (where the last component is $1$).
The screen space coordinates of it are given by the $x$ and $y$ components of the following vector:

$$v' = \frac{PMv}{(PMv)_w}$$
To convert the positions from proportions to pixels simply multiply by the width and height of the screen as needed.
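Putting the whole conversion together as a sketch (assuming the projected coordinates already land in the $[0, 1]$ proportion range described under Screen Space; `toScreen` is a hypothetical helper, returning SFML's vector type):

```cpp
#include <array>
#include <SFML/System/Vector2.hpp>

// Multiply a 4-component vector by a Mat4.
std::array<float, 4> mul(const Mat4& m, const std::array<float, 4>& v) {
    std::array<float, 4> r{};
    for (int i = 0; i < 4; ++i)
        for (int k = 0; k < 4; ++k)
            r[i] += m.m[i][k] * v[k];
    return r;
}

// World space -> pixel coordinates, following the steps above.
sf::Vector2f toScreen(const Mat4& P, const Mat4& M, Vec3 v,
                      float screenW, float screenH) {
    auto projected = mul(P, mul(M, {v.x, v.y, v.z, 1.0f}));
    float x = projected[0] / projected[3]; // the perspective divide
    float y = projected[1] / projected[3];
    return {x * screenW, y * screenH};     // proportions -> pixels
}
```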
Directional Lighting
Directional lights are a type of light source where every ray is parallel (going in the same direction). They are the simplest light source to implement as they only have a direction and intensity (no position or angle!).
A directional light is simply assumed to be infinitely far away and never attenuate (get dimmer). This is analogous to extremely bright, extremely far away objects (such as the sun or the moon).
Direction
The direction of the light is encoded in a vector, and can be found by normalising the vector. For example if the vector $(0, 1, 0)$ represents the directional light then we know that it goes in the direction $(0, 1, 0)$ (since it's already normalized).
Intensity
Likewise the intensity of the light is encoded as the magnitude of the vector. For example if the vector $(3, 4)$ represents the directional light then the intensity of the light is $5$ (using the Pythagorean theorem). This extends to $3$ (or really any number of) dimensions.
Lighting Calculations
The lighting calculation relies on two angles and an intensity: the angle from the camera to the object, and the angle from the light to the object.
Given a directional light vector $v$ we can compute the direction $\hat{d}$ and the intensity $I$:

$$\hat{d} = \frac{v}{\|v\|}, \quad I = \|v\|$$
When talking about angles from the object this really means the angle of the surface normal (which is the direction perpendicular to the surface where the light reflects).
The angle $\theta$ between two unit vectors $\hat{a}$ and $\hat{b}$ can be computed using the dot product (a type of vector product). The formula for it is:

$$\cos\theta = \hat{a} \cdot \hat{b}$$
A useful property of triangles in graphics is that they are coplanar, meaning that the entire shape only lies on a single plane and therefore has a single surface normal across the entire shape. We can find the surface normal ($\hat{n}$) in the direction of the light with a cross product of two of the triangle's edges; for a triangle with vertices $a$, $b$, and $c$:

$$\hat{n} = \frac{(b - a) \times (c - a)}{\|(b - a) \times (c - a)\|}$$
With our new-found surface normal we can now compute all the values we need. Our final luminosity will be computed as $L$, meaning 'brightness'.
Once $L$ is computed the triangle can be rendered with a brightness of $L$ (an RGB value of $(255L, 255L, 255L)$).
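A sketch of the whole calculation for one triangle, using the `Vec3` from earlier (note this only uses the angle to the light, a simple Lambertian term; folding in the camera angle described above would follow the same pattern):

```cpp
#include <algorithm>
#include <cmath>

// Cross product: a vector perpendicular to both inputs.
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Dot product: the cosine of the angle between two unit vectors.
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Brightness L of a triangle (a, b, c) lit by directional light vector l.
float luminosity(Vec3 a, Vec3 b, Vec3 c, Vec3 l) {
    Vec3 n = cross(b - a, c - a).normalized(); // single surface normal
    float intensity = l.length();
    Vec3 d = l.normalized();
    // Clamp at 0 so faces pointing away from the light are simply dark.
    return intensity * std::max(0.0f, dot(n, d));
}
```

The result can then be fed straight into the fill colour, e.g. `sf::Color(255 * L, 255 * L, 255 * L)`.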
Rendering Meshes
To render a mesh we first need to read all the mesh data, then convert it into screen coordinates, and finally render the individual primitives of the mesh.
File Format
I have decided to work with Wavefront (.obj) files as they are a simple format to parse and view directly. They natively provide the vertex and index buffers to use, and it is a very human-readable/usable format.
Vertex Buffer
The vertex buffer holds all the vertices in a single 'array'. The reasoning for this is that if the vertices were loaded directly in the mesh data there would be a lot of duplication since each vertex can be used many times. We will load each unique vertex into this buffer and replace all references to vertices by their index inside of the index buffer.
Index Buffer
The index buffer holds all the primitives of the mesh in a single 'array'. We will assume for our purposes that the index buffer will only ever hold triangles as that is the only primitive we render. In the index buffer every group of $3$ consecutive indices represents a single triangle's vertices.
As such we can store the entire mesh as a single long index buffer and look up the vertex data as required when rendering.
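A sketch of a loader for the two buffers (this assumes plain `v x y z` and position-only `f v1 v2 v3` lines; real .obj faces can also carry texture and normal indices like `f 1/1/1 ...`, which this deliberately ignores):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct MeshData {
    std::vector<Vec3> vertices; // vertex buffer: each unique vertex once
    std::vector<int>  indices;  // index buffer: groups of 3 form triangles
};

// Minimal Wavefront .obj reader.
MeshData loadObj(const std::string& path) {
    MeshData mesh;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream in(line);
        std::string tag;
        in >> tag;
        if (tag == "v") {
            Vec3 v;
            in >> v.x >> v.y >> v.z;
            mesh.vertices.push_back(v);
        } else if (tag == "f") {
            int i;
            while (in >> i)
                mesh.indices.push_back(i - 1); // .obj indices are 1-based
        }
    }
    return mesh;
}
```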
Triangulation
Triangulation is the process of turning the quads that are present in the mesh data into triangles in order to render them. This is sometimes a trivial process since primitive vertices can be given in the same order for each primitive (clockwise or counter-clockwise).
In that case, to triangulate you take the $4$ vertices of the quad and construct $2$ triangles:
- Triangle $1$ has vertices $1$, $2$, and $3$
- Triangle $2$ has vertices $1$, $3$, and $4$
However in the more general case there might be a need to triangulate them yourself. You can do this in code but please for the love of god load it into Blender and export it already triangulated, saving yourself a lovely afternoon of misery.
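If you do end up doing it in code, the consistent-winding case above is a simple fan around the first vertex (a sketch, assuming a convex face):

```cpp
#include <array>
#include <vector>

// Fan triangulation: split a convex polygon (e.g. a quad) into triangles
// that all share the first vertex.
std::vector<std::array<int, 3>> triangulate(const std::vector<int>& face) {
    std::vector<std::array<int, 3>> tris;
    for (std::size_t i = 1; i + 1 < face.size(); ++i)
        tris.push_back({face[0], face[i], face[i + 1]});
    return tris;
}
```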
Rendering
To finally draw the mesh you must convert all vertices into screen space and, ignoring the $z$ component, draw each triangle of the mesh.
Painter's Algorithm
When drawing the triangles in an arbitrary order you can expect to see a self-intersecting abomination where far objects overlay near ones. The simplest solution by far is to sort them by their $z$ value (depth) before drawing them. This means that far objects are drawn first, and close objects are drawn over them subsequently.
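A sketch of the sort, keyed on the average $z$ of each triangle (this assumes larger $z$ means further from the camera, which depends on your camera space convention):

```cpp
#include <algorithm>
#include <vector>

struct Triangle {
    Vec3 a, b, c;
    float depth() const { return (a.z + b.z + c.z) / 3.0f; } // average z
};

// Painter's algorithm: draw the furthest triangles first so that nearer
// triangles are painted over them afterwards.
void sortForPainting(std::vector<Triangle>& tris) {
    std::sort(tris.begin(), tris.end(),
              [](const Triangle& l, const Triangle& r) {
                  return l.depth() > r.depth();
              });
}
```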
Extensions to The Engine
Blinn-Phong Model
The Blinn-Phong model is a lighting model that provides more depth to the lighting than a single brightness per primitive.
It specifies the ambient, diffuse, and specular components, which can more accurately display objects. For example, due to the specular component spheres have the halo/shiny region that they would have when viewed directly under a light source.
This model of lighting also specifies how to interpolate lighting data correctly between fragments of a given primitive which will eliminate the seams present between neighbouring primitives in our current lighting model.
Different Types of Lighting
There are various types of light sources which all interact with their environment differently and are used for varying purposes/effects. If you are curious then you can read more about them here.
These are the types of lighting you can implement:
- Point Light
- Spot Light
- Volume Light
- Ambient Light
- Area Light
Quaternions
Quaternions are a more robust method of performing rotations and they allow different techniques such as SLERP (spherical linear interpolation) which allows for smooth animations, as well as avoiding gimbal lock altogether.
z-buffering
Z-buffering aims to avoid the downfalls of the painter's algorithm by maintaining a buffer of depth values and only writing to a fragment if the new depth value is less than the current one, meaning the new fragment is 'on top' of the current one.
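The core of the idea fits in a few lines (a sketch; `DepthBuffer` is a hypothetical helper, holding one depth value per pixel):

```cpp
#include <vector>

// One depth value per pixel, initialised to 'infinitely far away'.
struct DepthBuffer {
    int width;
    std::vector<float> depth;

    DepthBuffer(int w, int h)
        : width(w), depth(static_cast<std::size_t>(w) * h, 1e30f) {}

    // Accept the fragment only if it is closer than what is already drawn.
    bool test(int x, int y, float z) {
        float& current = depth[static_cast<std::size_t>(y) * width + x];
        if (z < current) {
            current = z;
            return true;
        }
        return false;
    }
};
```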