An academic-focused, from-scratch multilayer perceptron (MLP) project in modern C++ with both:
- a CLI application for experiments, and
- an installable CMake library package for reuse in other C++ projects.
Current scope:
- C++17
- CPU execution, optional OpenMP, optional CUDA
- CLI experiments and an installable CMake library package
Requirements:
- CMake >= 3.16
- C++17 compiler (`g++` or `clang++`)
Build and run:

```bash
cmake -S . -B build
cmake --build build
./build/mlp
```

The CLI trains on a train split and reports loss and binary metrics on train, validation, and test.
Common CLI options:

```bash
./build/mlp --optimizer sgd|momentum|nag|adam|adamw|nadam|rmsprop|adagrad|adadelta|lion
./build/mlp --hidden 16,16,8
./build/mlp --epochs 3000 --lr 0.01
./build/mlp --samples 1000 --seed 42
./build/mlp --train-ratio 0.7 --val-ratio 0.15 --threshold 0.5
```

OpenMP (CPU parallelism):
```bash
cmake -S . -B build-omp -DMLP_ENABLE_OPENMP=ON
cmake --build build-omp
./build-omp/mlp
```

CUDA (dense ops):
```bash
cmake -S . -B build-cuda -DMLP_ENABLE_CUDA=ON
cmake --build build-cuda
./build-cuda/mlp
```

CUDA notes:
- CUDA support is optional and currently accelerates dense-layer matrix operations.
- The current CUDA path is intended for correctness and experimentation, not peak throughput.
- If CUDA is not detected, configure CMake with `-DCUDAToolkit_ROOT=/path/to/cuda`.
- If `nvcc` is unavailable, use the default CPU build or the OpenMP build.
Public API headers (stable surface):
- `include/mlp/types.hpp`
- `include/mlp/metrics.hpp`
- `include/mlp/library.hpp`
- `include/mlp/io.hpp`
- `include/mlp/version.hpp`
Main API entry points:
- `mlp::run_xor_experiment(...)`
- `mlp::save_sequential(...)`
- `mlp::load_sequential(...)`
CMake targets:
- `mlp::mlp_core`
- `mlp::mlp_optim`
- `mlp::mlp_train`
- `mlp::mlp_io`
- `mlp::mlp_lib` (compatibility aggregate target)
Example targets included in this repo:
```bash
cmake --build build --target mlp_library_example
./build/mlp_library_example
cmake --build build --target mlp_io_example
./build/mlp_io_example
```

Install locally:
```bash
cmake -S . -B build
cmake --build build
cmake --install build --prefix /tmp/mlp-install
```

Consume from another CMake project:
```cmake
find_package(mlp REQUIRED)
target_link_libraries(your_app PRIVATE mlp::mlp_lib)
```

Or link only components:
```cmake
find_package(mlp REQUIRED)
target_link_libraries(your_app PRIVATE mlp::mlp_train mlp::mlp_io)
```

If using a custom install prefix:
```bash
cmake -S . -B build -DCMAKE_PREFIX_PATH=/tmp/mlp-install
```

Documentation:
- `ROADMAP.md` — phased feature roadmap linked to GitHub issues
- `docs/TUTORIAL.md`
- `docs/EXPERIMENTS.md`
- `docs/API_POLICY.md`
| Name | CLI string | Notes |
|---|---|---|
| SGD | `sgd` | Vanilla stochastic gradient descent |
| Momentum | `momentum` | SGD with exponential moving-average velocity |
| NAG | `nag` | Nesterov Accelerated Gradient |
| Adam | `adam` | Adaptive moment estimation |
| AdamW | `adamw` | Adam + decoupled weight decay |
| Nadam | `nadam` | Adam with Nesterov momentum correction |
| RMSProp | `rmsprop` | Root mean square propagation |
| AdaGrad | `adagrad` | Adaptive per-parameter learning rates (accumulative) |
| AdaDelta | `adadelta` | AdaGrad variant with running averages, no fixed lr |
| Lion | `lion` | Evolved Sign Momentum — sign-based, memory-efficient |
| LambdaOptimizer | — | Custom extension hook via user-supplied lambdas |
Run all tests locally:

```bash
cmake -S . -B build
cmake --build build
ctest --test-dir build --output-on-failure
```

Pre-push local check:
```bash
./scripts/pre_push_check.sh
```

Test suite includes:
- training/evaluation integration test
- save/load roundtrip test
- installed package consumer test (`find_package(mlp)`)
CI (`.github/workflows/ci.yml`) runs:
- OpenMP matrix (`MLP_ENABLE_OPENMP=OFF`/`ON`)
- optional CUDA configure/build smoke check when `nvcc` is available
CI verification:
- use the CI badge at the top of this README
- run `./scripts/pre_push_check.sh` before pushing changes
- Language: C++17
- Primary use case: academic experiments and library-oriented reuse
- Supported acceleration paths: CPU, OpenMP, and optional CUDA
Released under the MIT License. The project is provided as-is, without warranty. GitHub issues are welcome.