
Learn Computer Graphics

Read the notes, then try the practice. It adapts as you go. Start when you're ready.

Session Length

~17 min

Adaptive Checks

15 questions

Transfer Probes

8

Lesson Notes

Computer graphics is the field of study and practice concerned with generating, manipulating, and representing visual content using computers. It encompasses everything from the mathematical foundations of rendering three-dimensional scenes to the design of user interfaces and the creation of digital art. At its core, computer graphics bridges mathematics, physics, and computer science to solve the fundamental problem of converting abstract data into images that humans can perceive and interpret.

The discipline traces its origins to Ivan Sutherland's Sketchpad system in 1963, which demonstrated that computers could be used for interactive visual creation. Since then, the field has evolved rapidly through landmark developments including Phong shading, texture mapping, ray tracing, and the graphics processing unit (GPU). These advances have transformed industries ranging from film and video games to scientific visualization, medical imaging, architecture, and virtual reality.

Modern computer graphics is divided into several major subfields: real-time rendering (used in games and simulations), offline rendering (used in film production), computational geometry, image processing, and visualization. The rise of programmable GPU pipelines, physically based rendering, and neural rendering techniques continues to push the boundaries of visual realism and computational efficiency, making computer graphics one of the most dynamic and impactful areas of computer science.

You'll be able to:

  • Identify the mathematical foundations of computer graphics including transformations, projections, and color models
  • Apply rasterization and ray tracing algorithms to render three-dimensional scenes with realistic lighting effects
  • Analyze shading models, texture mapping, and global illumination techniques for photorealistic image synthesis
  • Design real-time rendering pipelines that balance visual quality with computational performance for interactive applications

One step at a time.

Key Concepts

Rasterization

The process of converting geometric primitives (such as triangles defined by vertices) into discrete pixels on a screen. It is the dominant rendering technique in real-time graphics because of its speed and efficiency on modern GPU hardware.

Example: When a 3D video game draws a character on screen, the GPU rasterizes thousands of triangles that compose the character's mesh, determining which pixels each triangle covers and what color each pixel should be.
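The coverage test described above can be sketched with the classic edge-function approach. This is a minimal, unoptimized illustration (the function names and the tiny 8x8 "screen" are ours, not from any particular GPU or API): a pixel center is inside a counter-clockwise triangle when it lies on the positive side of all three edges.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area test: positive when P lies to the left of edge A->B
    # (counter-clockwise winding, y-up coordinates).
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    # Return the pixel coordinates whose centers the triangle covers.
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.append((x, y))
    return covered

# A right triangle filling the lower-left half of an 8x8 pixel grid.
pixels = rasterize_triangle((0, 0), (7, 0), (0, 7), 8, 8)
```

A real GPU rasterizer adds tie-breaking rules for shared edges and processes pixels in parallel, but the inside test is the same idea.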

Ray Tracing

A rendering technique that simulates the physical behavior of light by casting rays from the camera through each pixel and tracing their interactions with scene geometry. It naturally produces accurate reflections, refractions, and shadows.

Example: In a photorealistic architectural visualization, ray tracing calculates how light enters through a glass window, refracts, bounces off a polished floor, and illuminates surrounding walls, producing an image nearly indistinguishable from a photograph.
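The core operation of a ray tracer, casting a ray and testing it against geometry, reduces to simple algebra for a sphere. The sketch below (our own minimal version, not production code) solves the quadratic that results from substituting the ray equation into the sphere equation:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the
    # nearest positive t; return None when the ray misses.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# Camera at the origin looking down -z toward a unit sphere at z = -5:
# the ray hits the near surface at t = 4 (distance 5 minus radius 1).
t = ray_sphere_intersect((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
```

A full ray tracer repeats this test for every pixel's ray against every object, then recursively spawns reflection, refraction, and shadow rays from each hit point.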

Shading Models

Mathematical models that determine how the color and brightness of a surface point are computed based on material properties, light sources, and viewing direction. Common models include Phong, Blinn-Phong, and physically based rendering (PBR) models.

Example: A plastic sphere rendered with Phong shading displays a smooth gradient from dark on the side facing away from the light to bright on the lit side, with a sharp specular highlight near the reflection of the light source.
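The diffuse-plus-specular split described above can be written in a few lines. This is a simplified Blinn-Phong sketch (the coefficients and function names are illustrative, not from a specific engine): the diffuse term follows Lambert's cosine law, and the specular term peaks when the half-vector between light and view directions aligns with the surface normal.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(normal, light_dir, view_dir, kd=0.7, ks=0.3, shininess=32):
    # Diffuse: falls off with the cosine of the angle to the light.
    # Specular: sharp highlight where the half-vector aligns with the normal.
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    h = normalize(tuple(li + vi for li, vi in zip(l, v)))  # half-vector
    diffuse = kd * max(dot(n, l), 0.0)
    specular = ks * max(dot(n, h), 0.0) ** shininess
    return diffuse + specular

# Light and viewer directly above the surface: maximum brightness.
peak = blinn_phong((0, 0, 1), (0, 0, 1), (0, 0, 1))
```

Evaluating this per pixel across the sphere produces exactly the gradient-plus-highlight appearance the example describes; raising `shininess` tightens the highlight.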

Texture Mapping

The technique of applying a 2D image (texture) onto the surface of a 3D model to add visual detail without increasing geometric complexity. UV coordinates define how the 2D texture wraps onto the 3D surface.

Example: A flat rectangular 3D plane can look like a detailed brick wall simply by mapping a photograph of bricks onto it using texture coordinates, giving it the appearance of thousands of individual bricks without modeling each one.
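At its simplest, a texture lookup just maps UV coordinates in [0, 1] to a texel index. The sketch below uses nearest-neighbor sampling on a toy 2x2 "texture" (real systems use bilinear or trilinear filtering and mipmaps, which this deliberately omits):

```python
def sample_texture(texture, u, v):
    # Nearest-neighbor lookup: map UV in [0, 1] to integer texel indices,
    # clamping u = 1.0 / v = 1.0 to the last row/column.
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A tiny checkerboard standing in for a brick photograph:
# "B" = brick texel, "M" = mortar texel.
texture = [["B", "M"],
           ["M", "B"]]

print(sample_texture(texture, 0.1, 0.1))  # -> B (top-left texel)
```

During rasterization, each fragment interpolates UV coordinates from the triangle's vertices and performs this lookup, which is how a flat plane picks up per-pixel brick detail.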

Transformation Matrices

4x4 matrices used to represent and combine geometric operations such as translation, rotation, and scaling in 3D space. They form the mathematical backbone of the graphics pipeline, converting objects from model space through world space and camera space to screen space.

Example: To position a teapot on a virtual table, the system multiplies the teapot's vertex coordinates by a series of transformation matrices: one to scale it, one to rotate it to the correct orientation, and one to translate it to its position on the table.
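The teapot example can be made concrete with plain 4x4 matrices in homogeneous coordinates. This is a dependency-free sketch (the helper names are ours); the key point is that matrices compose right-to-left, so the rightmost transform applies to the vertex first:

```python
def mat_mul(a, b):
    # 4x4 matrix product: row i of a dotted with column j of b.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, p):
    # Apply a 4x4 matrix to a 3D point in homogeneous coordinates (w = 1).
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

# Scale the teapot by 2, THEN lift it onto the table at height 1.
# Right-to-left composition: scale applies first, translation second.
model = mat_mul(translate(0, 1, 0), scale(2))
print(transform(model, (1, 0, 0)))  # -> (2.0, 1.0, 0.0)
```

Because the composition collapses into a single matrix, the GPU can transform millions of vertices per frame with one matrix multiply each.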

The Graphics Pipeline

The sequence of stages that transform 3D scene data into a 2D rendered image. The major stages include vertex processing, primitive assembly, rasterization, fragment processing, and framebuffer operations. Modern GPUs implement this pipeline in hardware.

Example: When rendering a frame in a game engine, geometry data flows through the vertex shader (transforming positions), the rasterizer (converting triangles to fragments), and the fragment shader (computing colors), before the final image appears on the display.
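The tail end of that flow, from clip space to a pixel coordinate, can be sketched in two small steps. This is a simplified illustration of the fixed-function math (conventions for the y-flip and depth range vary between APIs; we assume y grows downward on screen):

```python
def perspective_divide(clip):
    # Clip space -> normalized device coordinates (NDC) in [-1, 1]:
    # divide x, y, z by the homogeneous w produced by the projection.
    x, y, z, w = clip
    return (x / w, y / w, z / w)

def viewport(ndc, width, height):
    # NDC -> pixel coordinates; y is flipped because screen y grows down.
    x, y, _ = ndc
    px = int((x * 0.5 + 0.5) * width)
    py = int((1.0 - (y * 0.5 + 0.5)) * height)
    return px, py

# A clip-space vertex at the exact center of the view lands mid-screen.
ndc = perspective_divide((0.0, 0.0, 0.5, 1.0))
print(viewport(ndc, 800, 600))  # -> (400, 300)
```

Everything before this point (vertex shading, primitive assembly) happens in clip space; everything after (rasterization, fragment shading) happens in these screen coordinates.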

Global Illumination

Rendering algorithms that simulate indirect lighting, where light bounces between surfaces before reaching the camera. Techniques include path tracing, photon mapping, and radiosity, all producing more realistic images than direct-lighting-only approaches.

Example: In a room with red walls and a white ceiling, global illumination causes subtle red color to bleed onto the ceiling near the walls, reproducing a phenomenon observed in real life called color bleeding.
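The color-bleeding effect can be quantified with a single-bounce estimate in the spirit of radiosity. This is a deliberately tiny sketch (the albedo values and the 0.2 form factor are made-up illustration numbers): the light a red wall reflects onto the ceiling is the wall's albedo times the incoming light, scaled by how much of the wall the ceiling "sees".

```python
def indirect_light(surface_albedo, incoming_radiance, form_factor):
    # One diffuse bounce: light reflected from a sender surface toward a
    # receiver, scaled per RGB channel by the geometric form factor.
    return tuple(a * r * form_factor
                 for a, r in zip(surface_albedo, incoming_radiance))

# A red wall under white light; assume the ceiling patch sees 20% of it.
red_wall_albedo = (0.8, 0.1, 0.1)
white_light = (1.0, 1.0, 1.0)
bounce = indirect_light(red_wall_albedo, white_light, 0.2)
# The red channel dominates: this is the color bleeding on the ceiling.
```

Path tracing reaches the same answer stochastically, by averaging many random bounce paths instead of computing form factors explicitly.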

Anti-Aliasing

Techniques that reduce visual artifacts (jagged edges or 'jaggies') caused by the discrete sampling of continuous geometry onto a pixel grid. Common methods include MSAA (multisample anti-aliasing), FXAA, and TAA (temporal anti-aliasing).

Example: Without anti-aliasing, a diagonal line on screen appears as a staircase of pixels. With 4x MSAA, the GPU samples each pixel at four sub-pixel locations and blends the results, producing a much smoother-looking line.
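The 4x sampling idea can be demonstrated directly. This sketch (a simplification; real MSAA uses rotated sample patterns and only multisamples coverage, not shading) averages four sub-pixel tests of a 45-degree edge:

```python
def coverage_4x(x, y, inside):
    # Sample the pixel at four sub-pixel offsets and average the results,
    # approximating how much of the pixel the shape actually covers.
    offsets = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
    hits = sum(inside(x + dx, y + dy) for dx, dy in offsets)
    return hits / 4.0

# The region on or below the diagonal line y = x (a 45-degree edge).
below_diagonal = lambda px, py: py <= px

print(coverage_4x(3, 3, below_diagonal))  # -> 0.75
```

A pixel the edge cuts through gets a fractional value like 0.75 instead of a hard 0-or-1 decision, and blending colors by that fraction is what turns the staircase into a smooth ramp.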

More terms are available in the glossary.

Explore your way

Choose a different way to engage with this topic: no grading, just richer thinking.

Explore your way (choose one):

Explore with AI →

Concept Map

See how the key ideas connect. Nodes color in as you practice.

Worked Example

Walk through a solved problem step-by-step. Try predicting each step before revealing it.

Adaptive Practice

This is guided practice, not just a quiz. Hints and pacing adjust in real time.

Small steps add up.

What you get while practicing:

  • Math Lens cues for what to look for and what to ignore.
  • Progressive hints (direction, rule, then apply).
  • Targeted feedback when a common misconception appears.

Teach It Back

The best way to know if you understand something: explain it in your own words.

Keep Practicing

More ways to strengthen what you just learned.

Computer Graphics Adaptive Course - Learn with AI Support | PiqCue