Python bindings for FLUPS (Fourier-based Library of Unbounded Poisson Solvers).
FLUPS is a high-performance library for solving Poisson equations using FFT-based methods. PyFLUPS provides a Python interface to this library, enabling easy integration with Python scientific computing workflows.
- Object-oriented Python API for FLUPS functionality
- Full MPI support via mpi4py
- NumPy integration for array operations
- Support for various boundary conditions: EVEN, ODD, PERiodic, UNBounded
- Multiple Green's function types: LGF, HEJ, CHAT, MEHR
- Both standard (STD) and rotational (ROT) Poisson solvers
- HDF5 export capabilities with XDMF support
First, ensure you have the FLUPS C++ library compiled:
```bash
cd /path/to/flups
ARCH_FILE=/your/arch/file make python
```

Required dependencies:
- Python >= 3.7
- numpy >= 1.18.0
- mpi4py >= 3.0.0
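The requirements above can be verified from Python before installing. A minimal sketch (the `check_deps` helper is hypothetical, not part of PyFLUPS, and only checks importability, not minimum versions):

```python
import sys

def check_deps():
    """Report whether the PyFLUPS prerequisites are importable."""
    results = {"python>=3.7": sys.version_info >= (3, 7)}
    for mod in ("numpy", "mpi4py"):
        try:
            __import__(mod)
            results[mod] = True
        except ImportError:
            results[mod] = False
    return results

print(check_deps())
```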
```bash
# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Upgrade pip
pip install --upgrade pip

# Navigate to the python_api directory
cd python_api

# Install dependencies
pip install numpy mpi4py

# Install PyFLUPS in development mode (recommended for testing)
pip install -e .

# Or install normally
pip install .
```

Tell PyFLUPS where to find the compiled library. Add to your `.bashrc` or `.zshrc`:

```bash
export FLUPS_LIB_PATH=/path/to/flups/lib
```

Or set it temporarily in your current shell:

```bash
export FLUPS_LIB_PATH=/path/to/flups/lib
```

Create a file `test_import.py`:
```python
import pyflups as pf

print("PyFLUPS version:", pf.__version__)
print("Available classes:", dir(pf))
```

Run it:

```bash
python test_import.py
```

Then try a first example with MPI:

```bash
cd examples
mpirun -np 4 python simple_poisson.py
```

Error: Could not find FLUPS library
Solution:
- Verify that the library exists: `ls $FLUPS_LIB_PATH`
- Check for `libflups.so` (Linux) or `libflups.dylib` (macOS)
- Ensure FLUPS is compiled, e.g. `make ARCH_FILE=make_arch/make.osx_gcc` (for macOS)
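The checks above can also be scripted. A sketch (the `find_flups_lib` helper is hypothetical, not part of PyFLUPS):

```python
import os

def find_flups_lib(env=os.environ):
    """Look for the FLUPS shared library in $FLUPS_LIB_PATH
    (the file name differs between Linux and macOS)."""
    path = env.get("FLUPS_LIB_PATH")
    if not path:
        return None  # environment variable not set
    for name in ("libflups.so", "libflups.dylib"):
        candidate = os.path.join(path, name)
        if os.path.isfile(candidate):
            return candidate
    return None  # directory exists but contains no FLUPS library

print(find_flups_lib() or "FLUPS library not found - check FLUPS_LIB_PATH")
```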
For MPI-related errors: make sure mpi4py is installed and built against the same MPI implementation you run with.
Run all examples:

```bash
cd examples

# Simple Poisson
mpirun -np 4 python simple_poisson.py

# Vector field
mpirun -np 4 python vector_field.py

# Advanced usage
mpirun -np 4 python advanced_usage.py
```

Expected output: Each script should complete without errors and display results.
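These invocations can also be driven from Python. A sketch of a convenience wrapper (the `example_cmd`/`run_all` helpers are hypothetical; it assumes `mpirun` is on PATH and the scripts live in the current directory):

```python
import subprocess

EXAMPLES = ("simple_poisson.py", "vector_field.py", "advanced_usage.py")

def example_cmd(script, nproc=4):
    """Build the mpirun command line for one example script."""
    return ["mpirun", "-np", str(nproc), "python", script]

def run_all(nproc=4):
    """Run every example, stopping at the first failure."""
    for script in EXAMPLES:
        subprocess.run(example_cmd(script, nproc), check=True)

print(example_cmd("simple_poisson.py"))
```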
```python
import numpy as np
from mpi4py import MPI
import pyflups as pf

# MPI setup
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Domain parameters
nglob = [64, 64, 64]                     # Global grid size
L = [1.0, 1.0, 1.0]                      # Domain size
h = [L[i] / nglob[i] for i in range(3)]  # Grid spacing

# Process decomposition
nproc = [2, 2, 1]  # Number of processes per dimension

# Boundary conditions (one [left, right] pair per dimension)
bc = [
    [pf.BoundaryType.PER, pf.BoundaryType.PER],    # x: periodic
    [pf.BoundaryType.PER, pf.BoundaryType.PER],    # y: periodic
    [pf.BoundaryType.EVEN, pf.BoundaryType.EVEN],  # z: zero flux
]

# Create topology
topo = pf.Topology(
    axis=0,
    lda=1,  # Scalar field
    nglob=nglob,
    nproc=nproc,
    is_complex=False,
    comm=comm,
)

# Create solver
solver = pf.Solver(
    topology=topo,
    boundary_conditions=bc,
    h=h,
    L=L,
    order_diff=pf.DiffType.NOD,
)

# Set Green's function type
solver.set_green_type(pf.GreenType.LGF_2)

# Setup solver
solver.setup()

# Prepare arrays - use nmem (padded memory size) for allocation
nloc = [topo.get_nloc(i) for i in range(3)]
nmem = [topo.get_nmem(i) for i in range(3)]
size = nmem[0] * nmem[1] * nmem[2]
rhs = np.zeros(size, dtype=np.float64)
field = np.zeros(size, dtype=np.float64)

# Initialize RHS (example: point source at the center of rank 0's block)
center_idx = pf.locID(0, nloc[0] // 2, nloc[1] // 2, nloc[2] // 2, 0,
                      topo.axis, nmem, 1)
if rank == 0:
    rhs[center_idx] = 1.0

# Solve
solver.solve(field, rhs, pf.SolverType.STD)

# Export results
if rank == 0:
    print("Solution computed successfully")
```

Topology: Manages domain decomposition and memory layout for parallel FFT operations.
Constructor:

```python
Topology(axis, lda, nglob, nproc, is_complex=False,
         axproc=None, alignment=16, comm=None)
```

Solver: Main class for solving Poisson equations.
Constructor:

```python
Solver(topology, boundary_conditions, h, L,
       order_diff=DiffType.NOD, center_type=None, profiler=None)
```

Key Methods:
- `set_green_type(green_type)`: Set the Green's function type
- `set_alpha(alpha)`: Set alpha for the Hejlesen kernels
- `setup(change_comm=False)`: Set up the solver and allocate memory
- `solve(field, rhs, solver_type=STD)`: Solve the equation
Profiler: Timing measurements for performance analysis.
Constructor:

```python
Profiler(name="default")
```

Enumerations:
- `BoundaryType`: EVEN, ODD, PER, UNB, NONE
- `GreenType`: CHAT_2, LGF_2/4/6/8, HEJ_0/2/4/6/8/10, MEHR_4L/6L/4F/6F
- `SolverType`: STD (standard), ROT (rotational/Biot-Savart)
- `DiffType`: NOD, SPE, FD2, FD4, FD6
- `CenterType`: NODE_CENTER, CELL_CENTER

Utility functions:
- `hdf5_dump(topology, filename, data)`: Export data to HDF5 (with XDMF support)
- `print_data(topology, data)`: Print data to the console
- `locID(axsrc, i0, i1, i2, lia, axtrg, size, nf)`: Compute a memory index
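The index arithmetic behind `locID` can be hard to picture. The sketch below is a simplified mental model only, not the FLUPS implementation: it assumes `(i0, i1, i2)` are given in the frame of axis `axsrc`, memory is laid out with the target axis `axtrg` fastest, `nf` scalars per grid point innermost, and the field component `lia` outermost; the real layout (including alignment padding) may differ.

```python
def loc_id_sketch(axsrc, i0, i1, i2, lia, axtrg, size, nf):
    """Simplified, assumed locID-style index computation."""
    i = (i0, i1, i2)
    # j[d] is the index along the d-th axis of the target frame; the d-th
    # target axis is physical axis (axtrg + d) % 3, i.e. source direction
    # (axtrg - axsrc + d) % 3
    j = [i[(axtrg - axsrc + d) % 3] for d in range(3)]
    return nf * (j[0] + size[0] * (j[1] + size[1] * (j[2] + size[2] * lia)))

# With axsrc == axtrg this is plain row-major indexing, i0 fastest:
print(loc_id_sketch(0, 1, 2, 3, 0, 0, (4, 5, 6), 1))  # -> 69
```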
Access internal FLUPS data:

```python
# Get FLUPS internal buffer
buffer = solver.get_inner_buffer()

# Get physical topology
topo_phys = solver.get_inner_topo_physical()
```

Profile solver performance:

```python
# Create profiler
prof = pf.Profiler("my_solver")

# Pass it to the solver
solver = pf.Solver(..., profiler=prof)

# Display results
prof.display()
```

Low-level spectral operations:

```python
# Get spectral information
kfact, koffset, symstart = solver.get_spectral_info()

# Manual FFT operations
solver.do_copy(topology, data, pf.FLUPS_FORWARD)
solver.do_fft(data, pf.FLUPS_FORWARD)
solver.do_mult(data, pf.SolverType.STD)
solver.do_fft(data, pf.FLUPS_BACKWARD)
solver.do_copy(topology, data, pf.FLUPS_BACKWARD)
```

Complete examples are provided in the `examples/` directory:
- `simple_poisson.py`: Basic Poisson solver
- `vector_field.py`: Solving for vector fields
- `advanced_usage.py`: Using advanced features
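The forward/multiply/backward pipeline of the manual FFT operations above can be mimicked in pure NumPy for the fully periodic case. The sketch below solves the 1D problem -u'' = f spectrally; the wavenumber and normalization conventions here are illustrative, not FLUPS's:

```python
import numpy as np

n, L = 64, 2 * np.pi
x = np.arange(n) * (L / n)
f = np.sin(3 * x)  # rhs chosen so the exact solution is u = sin(3x) / 9

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers

f_hat = np.fft.fft(f)               # forward transform ("do_fft FORWARD")
u_hat = np.zeros_like(f_hat)
nz = k != 0
u_hat[nz] = f_hat[nz] / k[nz] ** 2  # spectral Green's function ("do_mult")
u = np.fft.ifft(u_hat).real         # backward transform ("do_fft BACKWARD")

print(np.max(np.abs(u - np.sin(3 * x) / 9)))  # error near machine precision
```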
Run examples with MPI:

```bash
mpirun -np 4 python examples/simple_poisson.py
```

Copyright © UCLouvain 2020
Licensed under the Apache License, Version 2.0. See LICENSE file for details.
If you use FLUPS in your research, please cite:
```bibtex
@article{gillis2019,
  title={A parallel multigrid-based preconditioner for the 3D heterogeneous Helmholtz equation},
  author={Gillis, Thomas and others},
  journal={Journal of Computational Physics},
  year={2019}
}
```
- Issues: https://github.com/vortexlab-uclouvain/flups/issues
- Documentation: see the `doc/` directory in the main FLUPS repository
- Pierre Balty
- Thomas Gillis
- Denis-Gabriel Caprace
- Contributors from UCLouvain Vortex Lab