This project generates visualizations of the Newton Fractal for the equation $z^n - 1 = 0$.
This project is configured to run instantly in a pre-built environment using a Dev Container (e.g., this repository's Codespace). The simplest way to start is to launch the Codespace, as all dependencies are already configured.
Note: The initial Codespace setup may take a few moments while the container is built and dependencies are installed.
If you choose to run the project locally outside of a Dev Container, you must manually install the following dependencies:
- Make: The standard build automation tool (the project uses a provided custom `Makefile` to coordinate C++/ISPC compilation).
- ISPC Compiler: Required to generate optimized SIMD (Single Instruction, Multiple Data) code from the parallel kernel source.
- ImageMagick: Required to convert the raw output files (`.ppm`) into a standard image format (`.png`).
Follow these steps to install the required tools and the ISPC compiler:
1. Install Standard Dependencies:

   ```bash
   # Install the core build tools, imagemagick, and file utilities
   sudo apt-get update
   sudo apt-get install -y make g++ imagemagick wget tar
   ```

2. Install ISPC:

   The ISPC compiler must be downloaded and manually placed into the system PATH.

   ```bash
   # 1. Download the ISPC binary release (v1.21.0)
   wget https://github.com/ispc/ispc/releases/download/v1.21.0/ispc-v1.21.0-linux.tar.gz
   # 2. Extract the archive
   tar -xzf ispc-v1.21.0-linux.tar.gz
   # 3. Move the executable into the system PATH
   sudo mv ispc-v1.21.0-linux/bin/ispc /usr/local/bin/
   # 4. Clean up the downloaded files
   rm -rf ispc-v1.21.0-linux*
   ```

3. Final Verification Check:

   After installation, use the following commands to ensure all prerequisites are available and correctly linked in your environment:

   ```bash
   # Check that Make, g++, and ImageMagick are installed
   make --version | head -n 1
   g++ --version | head -n 1
   convert --version | head -n 1
   # Check that ISPC is installed and accessible
   ispc --version
   ```
Note on Editor Support:
If you plan to contribute to or modify the ISPC kernel code, consider installing the official Intel ISPC extension for your editor (e.g., VS Code). It provides syntax highlighting for `.ispc` code and makes debugging easier thanks to real-time file validation and error reporting.
Mono Runtime: If you are using Linux or macOS, the advanced features of the editor extension may require the Mono Runtime to be installed[1]. This is separate from running the core ispc compiler, which should work immediately after Step 2.
➡️ Dev Container Users: The ISPC extension and Mono Runtime are preinstalled in the Codespace environment.
The project uses a provided custom Makefile to handle the two-stage compilation: ISPC kernel first, then the C++ host, followed by linking.
1. Build the executable:

   ```bash
   make
   ```

2. Run the executable:

   The resulting executable (`newton_fractal`) accepts the degree of the polynomial ($n$) as a command-line argument.

   ```bash
   # Usage: ./newton_fractal <degree_n> <optional_width> <optional_height>
   # Example 1: Generate a fractal for z^5 - 1 = 0 (5 roots)
   ./newton_fractal 5
   # Example 2: Generate a fractal for z^8 - 1 = 0 with custom resolution
   ./newton_fractal 8 1024 768
   ```

   The resulting fractal image (`.ppm`) is saved in the `out/` folder.

3. Convert the image:

   To convert the raw `.ppm` file to the more common `.png` format, run:

   ```bash
   make png
   ```

   This uses `imagemagick` to convert the image and deletes the large original `.ppm` file.
In addition to the make commands above, the provided Makefile includes other useful convenience targets:
| Command | Purpose |
|---|---|
| `make clean` | Removes intermediate build files. Keeps the main executable. |
| `make fclean` | Deep cleaning. Removes all built artifacts, including object files, the final executable, and the output files (`.ppm` or `.png`). Resets the repository to a clean state. |
| `make re` | Full rebuild. Ensures the project is completely cleaned and then rebuilt from scratch. |
| `make seq` | Rebuilds the project using a sequential (non-ISPC) C++ implementation for comparison. |
| `make debug` | Rebuilds the executable with a flag that enables verbose runtime logging. Redirect to a logfile via shell redirection: `./newton_fractal 5 2> log.txt`. |
| `make debug_seq` | Rebuilds the program in sequential mode and with the debug flag. Since debug prints are emitted during fractal generation in this mode, you can follow the convergence of individual pixels. |
The `define.hpp` header file is the project's main control panel for the visual output. By changing the constants in this file, you can easily configure and tweak the final fractal image.
Key parameters you can adjust:
- Viewport (camera lens): Defines the area of the complex plane to render ($\text{MinRe}$, $\text{MaxIm}$, etc.). This is the primary way to zoom in and out or pan across the fractal.
- Gamma (filter): Adjusts the brightness and contrast of the final image, allowing you to fine-tune the look and feel.
- Maximum Iterations (detail): Controls the calculation depth (and your level of patience). Higher values reveal more intricate patterns, especially at the edges, but take longer to compute.
- Convergence Tolerance (accuracy): Sets the threshold for how "close" a point must get to a root to be considered converged. Usually close to zero, but higher values generate funky pictures!
Example of a high convergence tolerance (0.5) for $n = 12$. The centers of the circles are the roots. This is not a fractal but a picture of the algorithm taking a massive shortcut, creating an abstract image that completely hides the infinitely complex fractal pattern.
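To make the parameter list above concrete, here is a minimal sketch of what such a control-panel header might look like. The constant names and values are illustrative assumptions, not the actual identifiers in this project's `define.hpp`:

```cpp
// Hypothetical sketch of a define.hpp-style control panel.
// Names and values are illustrative; the project's actual header may differ.

namespace cfg {
    // Viewport: region of the complex plane to render (zoom/pan).
    constexpr double MinRe = -2.0, MaxRe = 2.0;
    constexpr double MinIm = -2.0, MaxIm = 2.0;

    // Gamma: brightness/contrast correction applied to the final image.
    constexpr double Gamma = 0.8;

    // Maximum iterations: calculation depth; higher = more edge detail.
    constexpr int MaxIterations = 100;

    // Convergence tolerance: how close to a root counts as "converged".
    // Values near 0 give sharp fractals; large values (e.g. 0.5) give
    // the abstract "shortcut" pictures described above.
    constexpr double Tolerance = 1e-6;
}
```

Tweaking only the viewport constants while leaving the rest untouched is usually enough to explore different regions of the fractal.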
For a general introduction to Newton Fractals and Newton's Method, check out this great video by 3Blue1Brown[2].
The Newton Fractal is a type of fractal derived from Newton's Method for finding the roots (solutions) of a complex equation, here $z^n - 1 = 0$.

The fractal is generated by running Newton's method from every starting point $z_0$ in the rendered region of the complex plane and coloring the corresponding pixel, where:

- Hue (color): Determined by which of the $n$ roots the iteration converges to.
- Brightness/Saturation: Determined by the number of iterations it takes to converge.
The equation $z^n = 1$ has exactly $n$ solutions, the $n$-th roots of unity:

$$z_k = e^{2\pi i k / n} = \cos\!\left(\frac{2\pi k}{n}\right) + i\,\sin\!\left(\frac{2\pi k}{n}\right), \qquad k = 0, 1, \dots, n-1$$

This follows from Euler's Formula

$$e^{i\theta} = \cos\theta + i\sin\theta$$

and the periodicity of sine and cosine, where

$$e^{2\pi i k} = 1 \quad \text{for any integer } k$$

By writing $z^n = e^{2\pi i k}$, setting the exponents equal, and solving for $z$, we obtain the roots $z_k$ above.
The iterative procedure of Newton's method is defined as:

$$z_{m+1} = z_m - \frac{f(z_m)}{f'(z_m)}$$

The program runs this loop for a given starting complex number $z_0$ (one per pixel) until the iterate is close enough to a root or the iteration limit is reached.

For $f(z) = z^n - 1$, the derivative is $f'(z) = n z^{n-1}$.

Substituting these into the Newton formula gives the specific iteration used in the implementation (and later parallelized in ISPC):

$$z_{m+1} = z_m - \frac{z_m^n - 1}{n\, z_m^{n-1}}$$

The index ($k$) of the root that the iteration converges to determines the pixel's hue, as described above.
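The per-pixel iteration above can be sketched in plain C++ with `std::complex`. This is a minimal scalar sketch mirroring the math, not the project's actual code; the function name, parameters, and defaults are illustrative:

```cpp
#include <complex>
#include <cmath>
#include <utility>

// Scalar sketch of the per-pixel Newton iteration for f(z) = z^n - 1.
// Returns {root index k, iterations used}; k == -1 means no convergence.
// Names and defaults are illustrative, not the project's actual API.
std::pair<int, int> newtonPixel(std::complex<double> z, int n,
                                int maxIter = 100, double tol = 1e-6) {
    const double pi = 3.14159265358979323846;
    for (int it = 0; it < maxIter; ++it) {
        std::complex<double> zn1 = std::pow(z, n - 1);  // z^(n-1)
        std::complex<double> f = zn1 * z - 1.0;         // f(z) = z^n - 1
        if (std::abs(f) < tol) {
            // Converged: identify which root e^{2*pi*i*k/n} we reached.
            double angle = std::arg(z);                 // in (-pi, pi]
            if (angle < 0) angle += 2.0 * pi;
            int k = static_cast<int>(std::lround(angle * n / (2.0 * pi))) % n;
            // Hue would be derived from k; brightness from `it`.
            return {k, it};
        }
        // z_{m+1} = z_m - (z_m^n - 1) / (n * z_m^{n-1})
        z -= f / (static_cast<double>(n) * zn1);
    }
    return {-1, maxIter};
}
```

For example, starting at $z_0 = i$ with $n = 4$, the point is already a root ($k = 1$), so the function returns immediately.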
This project achieves its massive speedup by mapping the pixel-by-pixel computation onto the Single Instruction, Multiple Data (SIMD) model. SIMD enables a single instruction to operate on multiple data elements (e.g., several pixels) simultaneously, leveraging the inherent data parallelism in fractal rendering. A concise introduction to SIMD operations can be found here[4].
This highlights the key distinction between the sequential (scalar) and parallel (vectorized) approaches:
- Sequential Operation: The CPU performs one operation on one data element (e.g., `A[0] + B[0]` → `42 + 69`).
- Vector Operation (SIMD): The CPU performs one operation on an entire vector of elements within the same clock cycle (e.g., `A[0..3] + B[0..3]` → `[42, -9, 77, 0] + [69, 3, -53, -82]`)[5].
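To make the contrast concrete, here is the four-element addition from the example written as a plain scalar loop in C++. An auto-vectorizing compiler (or ISPC) turns exactly this pattern into a single SIMD instruction; the function name is an illustrative choice:

```cpp
#include <array>

// Scalar form of the four-element addition from the example above.
// A SIMD unit would perform all four lane additions in one vector
// instruction instead of four separate loop trips.
std::array<int, 4> addLanes(const std::array<int, 4>& a,
                            const std::array<int, 4>& b) {
    std::array<int, 4> c{};
    for (int i = 0; i < 4; ++i)  // four trips here; one vector op in SIMD
        c[i] = a[i] + b[i];
    return c;
}
```

Calling `addLanes({42, -9, 77, 0}, {69, 3, -53, -82})` yields `{111, -6, 24, -82}`: the four sums a SIMD unit would produce at once.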
Before ISPC, the host C++ application (`generateSeq()` in `src/Fractal.cpp`) would run a slow, explicit loop over each pixel, calling the complex-number iteration function sequentially for all width × height pixels.
Intel's ISPC (Intel Implicit SPMD Program Compiler) simplifies the process of writing this SIMD-optimized code[6]. ISPC uses the Single Program, Multiple Data (SPMD) programming model:
- The programmer writes a single kernel function (in this case, the iterative Newton's method).
- ISPC automatically compiles this kernel to generate low-level SIMD instructions (like AVX or SSE) specifically for the target CPU architecture[7].
- At runtime, ISPC executes multiple program instances in parallel. The group of instances running together in lockstep is called a gang, and each instance maps onto one element (lane) of the SIMD vector. For more details, see the ISPC User Guide.
This programming model is particularly effective for the Newton Fractal, since each pixel’s computation is independent — making it a perfect example of data parallelism ideal for SPMD/SIMD execution.
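As a mental model only (plain C++, not real ISPC code), a gang can be pictured as an inner loop over lanes whose iterations all happen inside one vector instruction; the function name, gang size, and stand-in kernel below are illustrative assumptions:

```cpp
#include <vector>

// Conceptual C++ model of ISPC's SPMD execution (not real ISPC code).
// A gang of `gangSize` program instances runs the same program in
// lockstep; each instance owns one SIMD lane. In real ISPC the inner
// "lane" loop does not exist: all lanes advance in one vector op.
std::vector<int> runKernel(int numPixels, int gangSize) {
    std::vector<int> result(numPixels);
    for (int base = 0; base < numPixels; base += gangSize) {
        // One gang: lanes 0..gangSize-1 execute together.
        for (int lane = 0; lane < gangSize && base + lane < numPixels; ++lane) {
            int pixel = base + lane;        // each lane handles one pixel
            result[pixel] = pixel * pixel;  // stand-in for the Newton kernel
        }
    }
    return result;
}
```

Because every pixel's value depends only on its own index, no lane ever needs data from another lane, which is exactly the independence property that makes the fractal a good SPMD fit.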
A primary goal of this project is to demonstrate the significant speedup gained by using ISPC for parallel computation. Performance is benchmarked using the shell's built-in time command to measure the "real" (wall clock) execution time.
First, we compile a purely sequential version (`make seq`) to establish our performance baseline. This version runs on a single CPU core and does not use ISPC's SIMD optimizations. We measure the time to compute a fractal of order $n = 42$:
```bash
~$ time ./newton_fractal 42
[...]

real    0m28.632s
user    0m28.492s
sys     0m0.025s
```

The real time (wall clock) of ~28.6 seconds closely matches the user time (total CPU time spent). This confirms the computation was executed on a single CPU core.
Next, we recompile the program with ISPC enabled (make re). This build leverages SIMD instructions, allowing the CPU to perform calculations on multiple data points (e.g., 4, 8 or 16 pixels) simultaneously within a single core.
We run the exact same computation:
```bash
~$ time ./newton_fractal 42
[...]

real    0m1.973s
user    0m1.954s
sys     0m0.009s
```

The execution time drops dramatically, from ~28.6 seconds to ~1.97 seconds: a ~14.5x speedup.
It is important to note how this ~14.5x speedup is achieved. ISPC provides parallelism in two ways:
- SIMD Vectorization: Performing multiple operations at once within a single core.
- Multi-threading: Distributing the work across multiple CPU cores (via its tasking system).
In this specific test (run in a GitHub Codespace with 2 CPU cores), notice that the real time (1.973s) and user time (1.954s) for the ISPC version are almost identical.
- If ISPC were using multiple cores (e.g., 2 cores fully), we would expect the `user` time (total CPU work) to be roughly double the `real` time (wall clock).
- Since the `real` and `user` times are nearly the same, the program ran on only one core.
Conclusion: The massive speedup is achieved almost entirely through SIMD vectorization on a single core.
The ISPC compiler vectorized the fractal logic so effectively that it could process ~14-15 data points (pixels) in the same amount of time the sequential C++ code processed one.
Furthermore, this speedup factor becomes even more pronounced for higher values of $n$.
On a local machine with 4, 8 or 16 cores, ISPC would likely leverage both SIMD and multi-threading, resulting in an even greater speedup. You can find more details on this concept in the official ISPC Performance Guide.
[1] Mono Project; 2025. Install Mono
[2] Sanderson, G. (3Blue1Brown); YouTube (Oct 12, 2021). Newton’s fractal (which Newton knew nothing about)
[3] Rubine, D.; Quora (2017). How do I prove Euler Identity for any integer k?
[4] Sony Computer Entertainment Inc. (2008). Documents of PS3 Linux Distributor's Starter Kit
[5] InfluxData Inc. (2025). What is vector processing?
[6] Intel Corporation (2025). Intel Implicit SPMD Program Compiler
[7] Wikipedia (Oct 6, 2025). Streaming SIMD Extensions


