Recursive Ray Tracer (C++)

Project: High-Performance Renderer Architecture and Core Ray Tracing Loop

Tags: C++ · Graphics · Ray Tracing · Scene Graph · Architecture

This project involved creating the foundational architecture for a recursive ray tracing renderer in C++. The core accomplishment was a decoupled rendering pipeline, built from an abstract renderer base class and a custom Scene Graph. This allowed the system to switch seamlessly between a fast OpenGL preview and the custom ray tracing kernel.

The technical focus was on calculating the correct camera rays, managing polygon transformations from local to world space, and integrating with a specialized intersection engine to find the nearest hit point, forming the basis for lighting, shadows, and reflections.

Architecture: Decoupling Renderer and Scene

I utilized a polymorphic renderer interface (the abstract `CGrRenderer` base class, in the spirit of the **Strategy Pattern**) and the **Composite Pattern** (through the `CGrObject` Scene Graph) to define all geometric elements and their materials independently of the rendering method.
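The two roles can be sketched as follows. This is a minimal illustration of the idea, not the real framework: only the names `CGrRenderer` and `CGrObject` come from the project, and the callback signatures and the `CGrComposite`/`CGrTriangle` helpers are assumptions made for the example.

```cpp
#include <memory>
#include <vector>

class CGrRenderer;  // forward declaration

// Composite Pattern: every scene element derives from CGrObject and
// draws itself by calling back into whichever renderer is active.
class CGrObject {
public:
    virtual ~CGrObject() = default;
    virtual void Render(CGrRenderer& renderer) = 0;
};

// Abstract renderer interface: concrete renderers (the OpenGL preview,
// the ray tracer) implement the same polygon-submission callbacks.
class CGrRenderer {
public:
    virtual ~CGrRenderer() = default;
    virtual void RendererBeginPolygon() = 0;
    virtual void RendererVertex(double x, double y, double z) = 0;
    virtual void RendererEndPolygon() = 0;
};

// Composite node: a group whose children are rendered in order.
class CGrComposite : public CGrObject {
public:
    void Add(std::shared_ptr<CGrObject> child) { m_children.push_back(std::move(child)); }
    void Render(CGrRenderer& renderer) override {
        for (auto& child : m_children) child->Render(renderer);
    }
private:
    std::vector<std::shared_ptr<CGrObject>> m_children;
};

// Leaf node: a single triangle submitted vertex by vertex.
class CGrTriangle : public CGrObject {
public:
    void Render(CGrRenderer& renderer) override {
        renderer.RendererBeginPolygon();
        renderer.RendererVertex(0, 0, 0);
        renderer.RendererVertex(1, 0, 0);
        renderer.RendererVertex(0, 1, 0);
        renderer.RendererEndPolygon();
    }
};
```

Because the geometry only ever talks to the `CGrRenderer` interface, swapping the OpenGL preview for the ray tracer is a one-line change at the call site.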

The Scene Graph handled hierarchical transformations (translations, rotations), ensuring that every polygon's vertices and normals were correctly transformed into **World Space** before being submitted to the ray tracing kernel. This separation was crucial for debugging and future feature expansion.

Technical Insight: The Transformation Stack

The `CMyRaytraceRenderer` maintained a transformation matrix stack, mirroring OpenGL's functionality. This stack was used within the `RendererEndPolygon()` method to apply the full composite transform to every vertex and normal, guaranteeing accurate intersection testing.
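The stack discipline mirrors `glPushMatrix`/`glPopMatrix`: entering a scene-graph node pushes a copy of the current composite transform, local transforms multiply onto the top, and leaving the node pops. The sketch below illustrates this mechanism with a hypothetical row-major `Mat4` type; the project's actual matrix classes differ.

```cpp
#include <array>
#include <vector>

// Minimal row-major 4x4 matrix, just enough for the stack example.
struct Mat4 {
    std::array<double, 16> m{};  // zero-initialized
    static Mat4 Identity() {
        Mat4 r;
        for (int i = 0; i < 4; ++i) r.m[i * 4 + i] = 1.0;
        return r;
    }
    static Mat4 Translate(double tx, double ty, double tz) {
        Mat4 r = Identity();
        r.m[3] = tx; r.m[7] = ty; r.m[11] = tz;
        return r;
    }
    Mat4 operator*(const Mat4& o) const {
        Mat4 r;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r.m[i * 4 + j] += m[i * 4 + k] * o.m[k * 4 + j];
        return r;
    }
    // Transform a point (implicit w = 1).
    std::array<double, 3> TransformPoint(double x, double y, double z) const {
        return { m[0] * x + m[1] * y + m[2]  * z + m[3],
                 m[4] * x + m[5] * y + m[6]  * z + m[7],
                 m[8] * x + m[9] * y + m[10] * z + m[11] };
    }
};

// Stack mirroring glPushMatrix/glPopMatrix: the top is always the full
// composite local-to-world transform for the node being traversed.
class TransformStack {
public:
    TransformStack() { m_stack.push_back(Mat4::Identity()); }
    void Push() { m_stack.push_back(m_stack.back()); }
    void Pop()  { m_stack.pop_back(); }
    void Multiply(const Mat4& t) { m_stack.back() = m_stack.back() * t; }
    const Mat4& Top() const { return m_stack.back(); }
private:
    std::vector<Mat4> m_stack;
};
```

At `RendererEndPolygon()` time, every vertex of the pending polygon is passed through `Top()` before being handed to the intersection engine.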

Core Ray Tracing Loop & Intersection

The rendering process involved iterating over every pixel in the output image buffer. For each pixel, a ray was constructed originating at the camera's eye position and passing through the center of the corresponding pixel on the projection plane.

Mathematical Challenge: Ray Calculation

The camera geometry, defined by the Field of View (FOV) and aspect ratio, was used to calculate the world-space coordinates (`x`, `y`) for the ray's direction vector. This vector was then normalized and used to query the external `CRayIntersection` system.

  1. **Pixel Mapping:** Convert screen coordinates (c, r) to normalized projection plane coordinates (x, y).
  2. **Ray Direction:** Calculate the direction vector from the eye (0, 0, 0) to (x, y, -1).
  3. **Intersection Test:** Call `m_intersection.Intersect(ray, ...)` to find the closest hit point (`t`) and the object (`nearest`).
  4. **Color Assignment:** Assign the pixel color based on the material of the nearest object hit (or background if no intersection is found).
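Steps 1 and 2 can be sketched as below. This is a standard pinhole-camera mapping consistent with the eye-at-origin, plane-at-`z = -1` convention described above; the function name and parameters (`fov` as the vertical field of view in radians) are illustrative, not taken from the project, and the `CRayIntersection` call of step 3 is left out since its interface belongs to the external engine.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 Normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Map pixel (c, r) to a normalized world-space ray direction through the
// pixel's center, for a camera at the origin looking down -Z with the
// projection plane at z = -1.
Vec3 PixelRayDirection(int c, int r, int width, int height, double fov) {
    double aspect = double(width) / double(height);
    double halfH  = std::tan(fov / 2.0);  // half-height of the plane at z = -1
    double x = (2.0 * (c + 0.5) / width - 1.0) * halfH * aspect;
    double y = (1.0 - 2.0 * (r + 0.5) / height) * halfH;
    return Normalize({ x, y, -1.0 });
}
```

The `+ 0.5` terms aim the ray through the pixel's center rather than its corner, and the flipped sign on `y` accounts for image rows growing downward while world `y` grows upward.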

This core loop successfully rendered geometry using only the ambient component of the assigned material properties.

Result and Impact

The successful implementation of the Scene Graph and the abstract rendering architecture laid a robust, scalable foundation for future graphics projects. The clean separation of concerns means that lighting, shadow calculation, and reflection recursion can be added incrementally to the `CMyRaytraceRenderer` without altering the fundamental scene structure. This early-stage success confirmed a strong understanding of fundamental computer graphics geometry, linear algebra (matrix stack), and C++ architectural patterns.