Linear Algebra

Linear algebra is an algebraic system that works with vectors and matrices, not just regular numbers. In elementary algebra (the one with regular numbers), we can solve equations like \(2 + x = 5\), which gives us \(x = 3\). In linear algebra, however, we work with higher-order objects, and we can solve entire systems of equations like this:

$$\begin{bmatrix} 1 & 0 & 1 \\ 0 & -1 & 2 \\ 1 & -1 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \\ 7 \end{bmatrix}$$

In game development, we mainly use linear algebra for transformations, and less often to solve systems of equations (although that comes up as well). A mesh passes through several coordinate spaces in 3D before it reaches a pixel on the computer screen. These transformations happen every frame for every vertex, so efficiency matters. It turns out that matrices are a cheap and predictable way to perform these transformations. In fact, our graphics processors are built specifically to handle matrix transformations far more efficiently than the CPU can.

Here is an example of a translation matrix for a 3D object:

$$\begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & -2 \\ 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 1\end{bmatrix}$$

Which numbers affect what? And why is it a \(4 \times 4\) matrix if we are working in 3D? We will explain all of this in the chapter on transformations. Most transformations will already be implemented for us, but understanding how they work lets us customize our graphics further.
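To see this matrix in action before then, here is a minimal sketch in C++ (the Vec4 and Mat4 types and the transform function are our own illustrative stand-ins, not taken from any particular engine or library) that applies the translation matrix above to a point written in homogeneous coordinates:

```cpp
#include <array>
#include <cstdio>

// A bare-bones 4x4 matrix and homogeneous point, purely for illustration.
struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major: m[row][col]

// Multiply the matrix by a column vector: out[r] = sum of m[r][c] * v[c].
Vec4 transform(const Mat4& m, const Vec4& v)
{
    const float in[4] = { v.x, v.y, v.z, v.w };
    float out[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * in[c];
    return { out[0], out[1], out[2], out[3] };
}

int main()
{
    // The translation matrix from the text: move by (+1, -2, +4).
    Mat4 translate = {{
        {{ 1, 0, 0,  1 }},
        {{ 0, 1, 0, -2 }},
        {{ 0, 0, 1,  4 }},
        {{ 0, 0, 0,  1 }},
    }};

    Vec4 point = { 3, 5, 7, 1 };  // w = 1 marks this as a point, not a direction
    Vec4 moved = transform(translate, point);

    std::printf("(%g, %g, %g) -> (%g, %g, %g)\n",
                point.x, point.y, point.z, moved.x, moved.y, moved.z);
    // Prints: (3, 5, 7) -> (4, 3, 11)
}
```

Because the last column of the matrix holds \((1, -2, 4)\) and the point's \(w\) component is 1, each coordinate simply has the corresponding offset added to it.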

Shaders and graphics programming are our window into the rendering pipeline, and we have to create matrices that transform objects from local space to world space, to camera space, to homogeneous clip space, to normalized device space, and finally to viewport space. It's a lot to take in! Don't worry if you don't recognize or remember all these spaces. The key takeaway is that matrices are what move objects between these spaces. A matrix does not represent a space itself; it is only the bridge between spaces.
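As a rough sketch of how that chain can look in code, here is one way the matrices might be built and multiplied together using the GLM math library (the object position, camera placement, and field of view below are made-up values, purely for illustration):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Each matrix bridges two spaces; multiplying them chains the bridges together.
glm::vec4 toClipSpace(const glm::vec3& localPos)
{
    // Local -> world: place the object at (1, -2, 4) in the world.
    glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, -2.0f, 4.0f));

    // World -> camera space: a camera at (0, 0, 10) looking at the origin.
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 10.0f),
                                 glm::vec3(0.0f),
                                 glm::vec3(0.0f, 1.0f, 0.0f));

    // Camera -> homogeneous clip space: a 60-degree perspective projection.
    glm::mat4 projection = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);

    // Read right to left: model first, then view, then projection.
    return projection * view * model * glm::vec4(localPos, 1.0f);
}
```

The perspective divide and the mapping to normalized device and viewport space happen after this point, which is why the function stops at clip space.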

Scalars, Vectors, and Matrices

We will work with different types of mathematical objects in linear algebra. We are most used to working with regular numbers, but there are plenty of other objects we can work with. In this chapter we will build a new algebra based on vectors and matrices. Instead of saying "regular numbers", we use a more suitable name: scalars. Scalars are numeric values like \(1\), \(0\), \(-2\), \( - \frac{1}{12}\), \(\pi\), \(e\), and even \(\sqrt{-1}\). The objects of linear algebra can be combined with one another through various operations, and they can represent many different things:

Object | Notation | Example Values
Scalar | \(s = 3.45\) | Speed, temperature, length, volume, account balance
Vector | \(\overrightarrow{v} = [3, 4]\) | Velocity, surface normal, light ray, direction
Matrix | \(M = \begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}\) | Transformation matrix, system of linear equations, perspective matrix
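In code, the three kinds of objects could be represented roughly like this (the Vec2 and Mat2 types below are tiny illustrative sketches, not a real math library):

```cpp
#include <cstdio>

// Illustrative only: how scalars, vectors, and matrices might appear in game code.
struct Vec2 { float x, y; };       // vector: an ordered list of scalars
struct Mat2 { float m[2][2]; };    // matrix: a rectangular block of scalars

int main()
{
    float speed = 3.45f;                        // scalar: a single numeric value
    Vec2 velocity = { 3.0f, 4.0f };
    Mat2 identity = { { { 1.0f, 0.0f },
                        { 0.0f, 1.0f } } };

    std::printf("speed = %g, velocity = (%g, %g), identity[0][0] = %g\n",
                speed, velocity.x, velocity.y, identity.m[0][0]);
}
```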

Mathematical Operations

This section is a review of some properties of operations on scalars, also covered in Numbers & Algebra. You don't have to be completely comfortable with them yet, because we will quickly discard many of these properties in linear algebra. We outline them here so we can compare them with their equivalents in linear algebra.

From working with scalars, we know some laws that apply:

Law | Description
\(a + 0 = a\) | Any scalar plus 0 is itself.
\(a \cdot 0 = 0\) | Any scalar multiplied by 0 is 0.
\(a \cdot 1 = a\) | Any scalar multiplied by 1 is itself.
\(\frac{a}{a} = 1\) | Any nonzero scalar divided by itself is 1.
\(a + b = b + a\) | You can change the order of addition.
\(a - b \ne b - a\) | You cannot change the order of subtraction.
\(a \cdot b = b \cdot a\) | You can change the order of multiplication.
\(\frac{a}{b} \ne \frac{b}{a}\) | You cannot change the order of division.
\((a + b) + c = a + (b + c)\) | You can group numbers in addition any way you'd like.
\((a \cdot b) \cdot c = a \cdot (b \cdot c)\) | You can group numbers in multiplication any way you'd like.
\(a (b + c) = ab + ac\) | You can distribute the outside factor to each of the inside terms.

Most people are so used to these laws that they don't pay them any attention. Many of them are so intuitive that there is really no reason to even think about them. In linear algebra, however, we have to be aware of how these laws behave, and here's one reason why: \(a \cdot b \ne b \cdot a\) in general when \(a\) and \(b\) are matrices. This contradicts the \(7^{th}\) rule above, but remember that this list only applies to scalars. With operations in linear algebra, we cannot assume that any of these laws hold by default.
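To see this for yourself, here is a short sketch in plain C++ (the two \(2 \times 2\) matrices and the multiply helper are made up purely for demonstration) that multiplies the same pair of matrices in both orders and gets two different answers:

```cpp
#include <cstdio>

// Multiply two 2x2 matrices stored as plain arrays: out = a * b.
void multiply(const float a[2][2], const float b[2][2], float out[2][2])
{
    for (int r = 0; r < 2; ++r)
        for (int c = 0; c < 2; ++c)
            out[r][c] = a[r][0] * b[0][c] + a[r][1] * b[1][c];
}

int main()
{
    const float A[2][2] = { { 1, 2 },
                            { 3, 4 } };
    const float B[2][2] = { { 0, 1 },
                            { 1, 0 } };   // swaps the columns of A when applied on the right

    float AB[2][2], BA[2][2];
    multiply(A, B, AB);   // AB = [2 1; 4 3]
    multiply(B, A, BA);   // BA = [3 4; 1 2]

    std::printf("AB = [%g %g; %g %g]\n", AB[0][0], AB[0][1], AB[1][0], AB[1][1]);
    std::printf("BA = [%g %g; %g %g]\n", BA[0][0], BA[0][1], BA[1][0], BA[1][1]);
    // The two results differ, so the order of matrix multiplication matters.
}
```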

We give names to some of these laws:

Properties of Operations

Commutativity | Description | Example
Additive | Change the order of two or more terms. | \(2 + 3 = 3 + 2\)
Multiplicative | Change the order of two or more factors. | \(2 \cdot 3 = 3 \cdot 2\)

Associativity | Description | Example
Additive | Group terms in any way you'd like. | \((2 + 3) + 4 = 2 + (3 + 4)\)
Multiplicative | Group factors in any way you'd like. | \((2 \cdot 3) \cdot 4 = 2 \cdot (3 \cdot 4)\)

Distributivity | Description | Example
Multiplicative over additive | Distribute the outside factor to each of the inside terms. | \(2 (x + 1) = 2x + 2\)

You don't have to memorize the laws and tables above, but try to keep them in the back of your mind as we build up this exciting new algebraic system. We'll start with vectors, then move on to matrices, and finally on to quaternions. There is plenty of theory, along with interactive simulations and problems for you to work through, as well as code snippets for implementation. Let's start!

Chapter Outline

Topic | Description
Vectors | Directions
Coordinate Systems | Spaces
Matrices | Blocks of information
Transformations | Operations on points and vectors
Quaternions | Advanced rotations
