tiagofga/mlp

Modular MLP in C++

Badges: CI | Release | License | C++17 | OpenMP (optional) | CUDA (optional) | Contributions welcome | Issues welcome

An academically focused, from-scratch multilayer perceptron (MLP) project in modern C++ that provides both:

  • a CLI application for experiments, and
  • an installable CMake library package for reuse in other C++ projects.

Current scope:

  • C++17
  • CPU execution, optional OpenMP, optional CUDA
  • CLI experiments and installable CMake library package

Quick Start (CLI)

Requirements:

  • CMake >= 3.16
  • C++17 compiler (g++ or clang++)

Build and run:

cmake -S . -B build
cmake --build build
./build/mlp

The CLI trains on the training split and reports loss and binary classification metrics on the train, validation, and test splits.
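As an illustrative sketch only (the names `BinaryMetrics` and `evaluate` are hypothetical, not the library's API), this is how binary metrics such as accuracy, precision, and recall are typically derived from model outputs and a decision threshold like the one the CLI reports against:

```cpp
// Illustrative sketch: threshold-based binary classification metrics.
// Not the project's internal code; the helper names are assumptions.
#include <cassert>
#include <cstddef>
#include <vector>

struct BinaryMetrics {
    double accuracy = 0.0;
    double precision = 0.0;
    double recall = 0.0;
};

// `probs` holds model outputs in [0, 1]; `labels` holds ground truth 0/1.
BinaryMetrics evaluate(const std::vector<double>& probs,
                       const std::vector<int>& labels,
                       double threshold = 0.5) {
    std::size_t tp = 0, tn = 0, fp = 0, fn = 0;
    for (std::size_t i = 0; i < probs.size(); ++i) {
        const int pred = probs[i] >= threshold ? 1 : 0;
        if (pred == 1 && labels[i] == 1) ++tp;
        else if (pred == 0 && labels[i] == 0) ++tn;
        else if (pred == 1 && labels[i] == 0) ++fp;
        else ++fn;
    }
    BinaryMetrics m;
    const std::size_t n = probs.size();
    m.accuracy  = n ? static_cast<double>(tp + tn) / n : 0.0;
    m.precision = (tp + fp) ? static_cast<double>(tp) / (tp + fp) : 0.0;
    m.recall    = (tp + fn) ? static_cast<double>(tp) / (tp + fn) : 0.0;
    return m;
}
```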

Main Options

Common CLI options:

./build/mlp --optimizer sgd|momentum|nag|adam|adamw|nadam|rmsprop|adagrad|adadelta|lion
./build/mlp --hidden 16,16,8
./build/mlp --epochs 3000 --lr 0.01
./build/mlp --samples 1000 --seed 42
./build/mlp --train-ratio 0.7 --val-ratio 0.15 --threshold 0.5
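To make the split options concrete, here is an assumed-behavior sketch (the function `split_indices` is hypothetical, not the CLI's actual code) of a seeded shuffle followed by a ratio-based train/validation/test split, mirroring --train-ratio, --val-ratio, and --seed:

```cpp
// Illustrative sketch: deterministic ratio-based dataset split.
// A fixed seed makes the shuffle, and hence the split, reproducible.
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

struct Split {
    std::vector<std::size_t> train, val, test;
};

Split split_indices(std::size_t n, double train_ratio, double val_ratio,
                    unsigned seed) {
    std::vector<std::size_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::mt19937 rng(seed);                  // same seed => same permutation
    std::shuffle(idx.begin(), idx.end(), rng);

    // Round instead of truncating so e.g. 1000 * 0.7 yields exactly 700.
    const auto n_train = static_cast<std::size_t>(std::lround(n * train_ratio));
    const auto n_val   = static_cast<std::size_t>(std::lround(n * val_ratio));

    Split s;
    s.train.assign(idx.begin(), idx.begin() + n_train);
    s.val.assign(idx.begin() + n_train, idx.begin() + n_train + n_val);
    s.test.assign(idx.begin() + n_train + n_val, idx.end());
    return s;
}
```

With the defaults shown above (--samples 1000 --train-ratio 0.7 --val-ratio 0.15), the remaining 15% becomes the test split.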

Backend Options

OpenMP (CPU parallelism):

cmake -S . -B build-omp -DMLP_ENABLE_OPENMP=ON
cmake --build build-omp
./build-omp/mlp
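The pattern behind an optional OpenMP build can be sketched as follows (illustrative only, not this project's internal code): the pragma parallelizes the loop when the translation unit is compiled with OpenMP enabled and is ignored otherwise, so the same source serves both builds.

```cpp
// Illustrative sketch: an OpenMP-optional reduction loop.
// Compiled with OpenMP (e.g. -fopenmp), iterations run in parallel and
// `sum` is combined via the reduction clause; without OpenMP the pragma
// is ignored and the loop runs serially with identical results.
#include <cassert>
#include <cstddef>
#include <vector>

double dot(const std::vector<double>& a, const std::vector<double>& b) {
    // Assumes a.size() == b.size().
    double sum = 0.0;
#pragma omp parallel for reduction(+ : sum)
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(a.size()); ++i) {
        sum += a[i] * b[i];
    }
    return sum;
}
```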

CUDA (dense ops):

cmake -S . -B build-cuda -DMLP_ENABLE_CUDA=ON
cmake --build build-cuda
./build-cuda/mlp

CUDA notes:

  • CUDA support is optional and currently accelerates dense-layer matrix operations.
  • The current CUDA path is intended for correctness and experimentation, not peak throughput.
  • If CUDA is not detected, configure CMake with -DCUDAToolkit_ROOT=/path/to/cuda.
  • If nvcc is unavailable, use the default CPU build or the OpenMP build.

Library Usage

Public API headers (stable surface):

  • include/mlp/types.hpp
  • include/mlp/metrics.hpp
  • include/mlp/library.hpp
  • include/mlp/io.hpp
  • include/mlp/version.hpp

Main API entry points:

  • mlp::run_xor_experiment(...)
  • mlp::save_sequential(...)
  • mlp::load_sequential(...)

CMake targets:

  • mlp::mlp_core
  • mlp::mlp_optim
  • mlp::mlp_train
  • mlp::mlp_io
  • mlp::mlp_lib (compatibility aggregate target)

Example targets included in this repo:

cmake --build build --target mlp_library_example
./build/mlp_library_example

cmake --build build --target mlp_io_example
./build/mlp_io_example

Install and find_package

Install locally:

cmake -S . -B build
cmake --build build
cmake --install build --prefix /tmp/mlp-install

Consume from another CMake project:

find_package(mlp REQUIRED)
target_link_libraries(your_app PRIVATE mlp::mlp_lib)

Or link only components:

find_package(mlp REQUIRED)
target_link_libraries(your_app PRIVATE mlp::mlp_train mlp::mlp_io)

If the package was installed to a custom prefix, point the consuming project at it when configuring:

cmake -S . -B build -DCMAKE_PREFIX_PATH=/tmp/mlp-install

Documentation

  • ROADMAP.md — phased feature roadmap linked to GitHub issues
  • docs/TUTORIAL.md
  • docs/EXPERIMENTS.md
  • docs/API_POLICY.md

Optimizers Included

  • SGD (sgd): vanilla stochastic gradient descent
  • Momentum (momentum): SGD with exponential moving-average velocity
  • NAG (nag): Nesterov Accelerated Gradient
  • Adam (adam): adaptive moment estimation
  • AdamW (adamw): Adam with decoupled weight decay
  • Nadam (nadam): Adam with Nesterov momentum correction
  • RMSProp (rmsprop): root mean square propagation
  • AdaGrad (adagrad): adaptive per-parameter learning rates (accumulative)
  • AdaDelta (adadelta): AdaGrad variant with running averages; no fixed learning rate
  • Lion (lion): Evolved Sign Momentum; sign-based and memory-efficient
  • LambdaOptimizer: custom extension hook via user-supplied lambdas
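To make "adaptive moment estimation" concrete, here is a textbook single-parameter Adam update (an illustrative sketch, not the project's optimizer code; `AdamState` and `adam_step` are hypothetical names), using the conventional defaults beta1 = 0.9, beta2 = 0.999, eps = 1e-8:

```cpp
// Illustrative sketch: one Adam update step for a single parameter.
#include <cassert>
#include <cmath>

struct AdamState {
    double m = 0.0;  // first-moment (mean) estimate
    double v = 0.0;  // second-moment (uncentered variance) estimate
    long t = 0;      // step counter, used for bias correction
};

double adam_step(double param, double grad, AdamState& s, double lr,
                 double beta1 = 0.9, double beta2 = 0.999, double eps = 1e-8) {
    ++s.t;
    s.m = beta1 * s.m + (1.0 - beta1) * grad;
    s.v = beta2 * s.v + (1.0 - beta2) * grad * grad;
    // Bias-corrected estimates compensate for the zero initialization.
    const double m_hat = s.m / (1.0 - std::pow(beta1, s.t));
    const double v_hat = s.v / (1.0 - std::pow(beta2, s.t));
    return param - lr * m_hat / (std::sqrt(v_hat) + eps);
}
```

A useful property visible here: on the first step the update magnitude is roughly lr regardless of the gradient's scale, since m_hat and sqrt(v_hat) both reduce to the gradient's magnitude.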

Testing and CI

Run all tests locally:

cmake -S . -B build
cmake --build build
ctest --test-dir build --output-on-failure

Pre-push local check:

./scripts/pre_push_check.sh

Test suite includes:

  • training/evaluation integration test
  • save/load roundtrip test
  • installed package consumer test (find_package(mlp))
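The idea behind the save/load roundtrip test can be sketched generically (the repo's actual test uses mlp::save_sequential / mlp::load_sequential, whose signatures are not shown in this README; the helpers below are hypothetical): parameters written to a stream and read back must compare equal.

```cpp
// Illustrative sketch: a serialization roundtrip for a weight vector.
#include <cassert>
#include <iomanip>
#include <sstream>
#include <vector>

void save_weights(std::ostream& out, const std::vector<double>& w) {
    // High precision so doubles survive the text roundtrip.
    out << std::setprecision(17) << w.size() << '\n';
    for (double x : w) out << x << ' ';
}

std::vector<double> load_weights(std::istream& in) {
    std::size_t n = 0;
    in >> n;
    std::vector<double> w(n);
    for (auto& x : w) in >> x;
    return w;
}
```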

CI (.github/workflows/ci.yml) runs:

  • OpenMP matrix (MLP_ENABLE_OPENMP=OFF/ON)
  • optional CUDA configure/build smoke check when nvcc is available

CI verification:

  • use the CI badge at the top of this README
  • use ./scripts/pre_push_check.sh before pushing changes

Scope

  • Language: C++17
  • Primary use case: academic experiments and library-oriented reuse
  • Supported acceleration paths: CPU, OpenMP, and optional CUDA

License

Released under the MIT License. The project is provided as-is, without warranty. GitHub issues are welcome.
