Merged

29 commits
- 26fd4af: ilmd + sqlitereader (RasmusOrsoe, Nov 4, 2025)
- cf39cc0: automatically infer memmap size (RasmusOrsoe, Nov 4, 2025)
- fbdd12b: add method for identifying serialization method in used lmdbwriter (RasmusOrsoe, Nov 4, 2025)
- b3fd015: fix serialization of lambda func (RasmusOrsoe, Nov 4, 2025)
- 3c2e3c1: add query_database (RasmusOrsoe, Nov 4, 2025)
- c397d2f: add `get_all_indices` utility method (RasmusOrsoe, Nov 4, 2025)
- bd04276: allow lists of data representations to lmdbwriter (RasmusOrsoe, Nov 4, 2025)
- 4ffa2c6: add lmdb dataset (RasmusOrsoe, Nov 6, 2025)
- 6255999: adjust `GraphNeTDataModule` to accept lmdb (RasmusOrsoe, Nov 6, 2025)
- 70c452b: Error handling in lmdbwriter (RasmusOrsoe, Nov 11, 2025)
- 8698fdc: add missing truth variables to data representation in lmdb (RasmusOrsoe, Nov 12, 2025)
- 948e4d3: add meta_data to lmdb (RasmusOrsoe, Nov 17, 2025)
- 4fc40f5: add `SQLiteToLMDBConverter` (RasmusOrsoe, Dec 9, 2025)
- fe19562: add unit tests for lmdb (RasmusOrsoe, Dec 9, 2025)
- 9212fdb: update docs and add lmdb to setup.py (RasmusOrsoe, Dec 9, 2025)
- 97807dc: update conversion example with lmdb backend (RasmusOrsoe, Dec 9, 2025)
- 54d4be0: Merge branch 'main' into lmdb_pr (RasmusOrsoe, Dec 9, 2025)
- c12049b: # noqa: C901 (RasmusOrsoe, Dec 9, 2025)
- 602c3cf: Merge branch 'lmdb_pr' of https://github.com/RasmusOrsoe/graphnet int… (RasmusOrsoe, Dec 9, 2025)
- 28ba1cf: mypy update (RasmusOrsoe, Dec 9, 2025)
- 01af352: mypy (RasmusOrsoe, Dec 9, 2025)
- 3e4a88d: update missing column logic (RasmusOrsoe, Dec 9, 2025)
- b055391: remove stray function call in `test_dataconverters_and_datasets.py` (RasmusOrsoe, Dec 9, 2025)
- ea6ee69: expand unit tests (RasmusOrsoe, Dec 9, 2025)
- 72d248a: Update deprecated converters in unit test (RasmusOrsoe, Dec 9, 2025)
- 206f7a4: fix unit test (RasmusOrsoe, Dec 9, 2025)
- e6bc3c3: Fix error message. (RasmusOrsoe, Mar 7, 2026)
- 36672d0: Add precomputed representation for conversion example. (RasmusOrsoe, Mar 7, 2026)
- 41d48c7: Merge pull request #35 from RasmusOrsoe/main (RasmusOrsoe, Mar 7, 2026)
4 changes: 2 additions & 2 deletions docs/source/data_conversion/data_conversion.rst
@@ -265,8 +265,8 @@ In this example, the writer will save the entire set of extractor outputs - a di



Two writers are implemented in GraphNeT; the :code:`SQLiteWriter` and :code:`ParquetWriter`, each of which output files that are directly used for
training by :code:`ParquetDataset` and :code:`SQLiteDataset`.
Three writers are implemented in GraphNeT: the :code:`SQLiteWriter`, :code:`ParquetWriter`, and :code:`LMDBWriter`, each of which outputs files that are directly used for
training by :code:`SQLiteDataset`, :code:`ParquetDataset`, and :code:`LMDBDataset`, respectively.



54 changes: 45 additions & 9 deletions docs/source/datasets/datasets.rst
@@ -155,18 +155,19 @@ It looks like so:
</details>


:code:`SQLiteDataset` & :code:`ParquetDataset`
----------------------------------------------
:code:`SQLiteDataset`, :code:`ParquetDataset` & :code:`LMDBDataset`
--------------------------------------------------------------------

The two specific implementations of :code:`Dataset` exists :
The three specific implementations of :code:`Dataset` exist:

- `ParquetDataset <https://graphnet-team.github.io/graphnet/api/graphnet.data.parquet.parquet_dataset.html>`_ : Constructs :code:`Dataset` from files created by :code:`ParquetWriter`.
- `SQLiteDataset <https://graphnet-team.github.io/graphnet/api/graphnet.data.sqlite.sqlite_dataset.html>`_ : Constructs :code:`Dataset` from files created by :code:`SQLiteWriter`.
- `LMDBDataset <https://graphnet-team.github.io/graphnet/api/graphnet.data.dataset.lmdb.lmdb_dataset.html>`_ : Constructs :code:`Dataset` from files created by :code:`LMDBWriter`.


To instantiate a :code:`Dataset` from your files, you must specify at least the following:

- :code:`pulsemaps`: These are named fields in your Parquet files, or tables in your SQLite databases, which store one or more pulse series from which you would like to create a dataset. A pulse series represents the detector response, in the form of a series of PMT hits or pulses, in some time window, usually triggered by a single neutrino or atmospheric muon interaction. This is the data that will be served as input to the `Model`.
- :code:`pulsemaps`: These are named fields in your Parquet files, or tables in your SQLite or LMDB databases, which store one or more pulse series from which you would like to create a dataset. A pulse series represents the detector response, in the form of a series of PMT hits or pulses, in some time window, usually triggered by a single neutrino or atmospheric muon interaction. This is the data that will be served as input to the `Model`.
- :code:`truth_table`: The name of a table/array that contains the truth-level information associated with the pulse series, and should contain the truth labels that you would like to reconstruct or classify. Often this table will contain the true physical attributes of the primary particle — such as its true direction, energy, PID, etc. — and is therefore graph- or event-level (as opposed to the pulse series tables, which are node- or hit-level) truth information.
- :code:`features`: The names of the columns in your pulse series table(s) that you would like to include for training; they typically constitute the per-node/-hit features such as xyz-position of sensors, charge, and photon arrival times.
- :code:`truth`: The columns in your truth table/array that you would like to include in the dataset.
@@ -225,6 +226,32 @@ Or similarly for Parquet files:

graph = dataset[0] # torch_geometric.data.Data

Or similarly for LMDB files:

.. code-block:: python

    from graphnet.data.dataset.lmdb.lmdb_dataset import LMDBDataset
    from graphnet.models.detector.prometheus import Prometheus
    from graphnet.models.graphs import KNNGraph
    from graphnet.models.graphs.nodes import NodesAsPulses

    graph_definition = KNNGraph(
        detector=Prometheus(),
        node_definition=NodesAsPulses(),
        nb_nearest_neighbours=8,
    )

    dataset = LMDBDataset(
        path="data/examples/lmdb/prometheus/prometheus-events.lmdb",
        pulsemaps="total",
        truth_table="mc_truth",
        features=["sensor_pos_x", "sensor_pos_y", "sensor_pos_z", "t", ...],
        truth=["injection_energy", "injection_zenith", ...],
        graph_definition=graph_definition,
    )

    graph = dataset[0]  # torch_geometric.data.Data

It's then straightforward to create a :code:`DataLoader` for training, which will take care of batching, shuffling, and such:

.. code-block:: python
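    # A minimal sketch, assuming graphnet's DataLoader wrapper
    # (graphnet.data.dataloader.DataLoader), which collates
    # torch_geometric Data objects into batches; adjust batch_size
    # and num_workers to your hardware.
    from graphnet.data.dataloader import DataLoader

    dataloader = DataLoader(
        dataset,
        batch_size=128,
        shuffle=True,
        num_workers=4,
    )

    for batch in dataloader:
        ...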
@@ -250,10 +277,10 @@
By default, the following fields will be available in a graph built by :code:`Dataset`:
- :code:`graph[truth_label] for truth_label in truth`: For each truth label in the :code:`truth` argument, the corresponding data is stored as a :code:`[num_rows, 1]` dimensional tensor. E.g., :code:`graph["energy"] = torch.tensor(26, dtype=torch.float)`
- :code:`graph[feature] for feature in features`: For each feature given in the :code:`features` argument, the corresponding data is stored as a :code:`[num_rows, 1]` dimensional tensor. E.g., :code:`graph["sensor_x"] = torch.tensor([100, -200, -300, 200], dtype=torch.float)`

:code:`SQLiteDataset` vs. :code:`ParquetDataset`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:code:`SQLiteDataset` vs. :code:`ParquetDataset` vs. :code:`LMDBDataset`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Besides working on different file formats, :code:`SQLiteDataset` and :code:`ParquetDataset` have significant differences,
Besides working on different file formats, :code:`SQLiteDataset`, :code:`ParquetDataset`, and :code:`LMDBDataset` have significant differences,
which may lead you to choose one over the others, depending on the problem at hand.

:SQLiteDataset: SQLite provides fast random access to all events inside it. This makes plotting and subsampling your dataset particularly easy,
@@ -265,13 +292,20 @@ which may lead you to choose one over the other, depending on the problem at hand.
This means that the subsampling of your dataset needs to happen prior to the conversion to :code:`parquet`, unlike :code:`SQLiteDataset`, which allows for subsampling after conversion due to its fast random access.
Conversion of files to :code:`parquet` is significantly faster than its :code:`SQLite` counterpart.

:LMDBDataset: LMDB databases produced by :code:`LMDBWriter` store events as key-value pairs with configurable serialization methods (pickle, json, msgpack, dill).
:code:`LMDBDataset` supports two modes: reading raw tables and computing data representations in real time (similar to :code:`SQLiteDataset`), or reading pre-computed data representations directly from the database for faster access.
Like SQLite, LMDB provides fast random access, and it can additionally store pre-computed graph representations efficiently, making it a good fit when representations are computed once and reused many times.
LMDB databases also take up roughly half the space of their SQLite counterparts, making LMDB a good compromise between SQLite and Parquet.
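
For illustration, the stored key-value pairs can be inspected with the :code:`lmdb` package directly. This is a minimal sketch only; the exact key scheme and serialization method depend on how the :code:`LMDBWriter` was configured (pickle is assumed here), and :code:`LMDBDataset` handles all of this internally:

.. code-block:: python

    import pickle

    import lmdb

    # Open the database read-only; lock=False permits concurrent readers.
    # If your .lmdb database is a single file rather than a directory,
    # pass subdir=False as well.
    env = lmdb.open("prometheus-events.lmdb", readonly=True, lock=False)
    with env.begin() as txn:
        with txn.cursor() as cursor:
            for key, value in cursor:
                event = pickle.loads(value)  # assuming pickle serialization
                print(key, type(event))
                break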


.. note::

:code:`ParquetDataset` is scalable to ultra large datasets, but is more difficult to work with and has a higher memory consumption.

:code:`SQLiteDataset` does not scale to very large datasets, but is easy to work with and has minimal memory consumption.

    :code:`LMDBDataset` strikes a balance between the two: it offers fast random access and supports pre-computed data representations, at roughly half the disk footprint of SQLite.


Choosing a subset of events using `selection`
----------------------------------------------
@@ -297,7 +331,7 @@
would produce a :code:`Dataset` with only those five events.
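
For concreteness, a minimal sketch of such a selection (hypothetical event numbers; other arguments as in the earlier examples):

.. code-block:: python

    dataset = SQLiteDataset(
        path="data/examples/sqlite/prometheus/prometheus-events.db",
        pulsemaps="total",
        truth_table="mc_truth",
        features=features,
        truth=truth,
        graph_definition=graph_definition,
        selection=[1, 2, 3, 4, 5],  # serve only these five events
    )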

.. note::

For :code:`SQLiteDatase`, the :code:`selection` argument specifies individual events chosen for the dataset,
For :code:`SQLiteDataset` and :code:`LMDBDataset`, the :code:`selection` argument specifies individual events chosen for the dataset,
whereas for :code:`ParquetDataset`, the :code:`selection` argument specifies which batches are used in the dataset.


@@ -347,12 +381,14 @@ You can combine multiple instances of :code:`Dataset` from GraphNeT into a singl
    from graphnet.data import EnsembleDataset
    from graphnet.data.parquet import ParquetDataset
    from graphnet.data.sqlite import SQLiteDataset
    from graphnet.data.dataset.lmdb.lmdb_dataset import LMDBDataset

    dataset_1 = SQLiteDataset(...)
    dataset_2 = SQLiteDataset(...)
    dataset_3 = ParquetDataset(...)
    dataset_4 = LMDBDataset(...)

    ensemble_dataset = EnsembleDataset([dataset_1, dataset_2, dataset_3])
    ensemble_dataset = EnsembleDataset([dataset_1, dataset_2, dataset_3, dataset_4])

You can find a detailed example `here <https://github.com/graphnet-team/graphnet/blob/main/examples/02_data/04_ensemble_dataset.py>`_.

171 changes: 136 additions & 35 deletions examples/01_icetray/01_convert_i3_files.py
@@ -1,8 +1,15 @@
"""Example of converting I3-files to SQLite and Parquet."""
"""Example of converting I3-files to SQLite, Parquet, and LMDB.

from glob import glob
When using the LMDB backend, the ``--precompute-representation`` flag can be
used to pre-compute a DataRepresentation and store it alongside the raw
data. Pre-computed representations can later be loaded directly,
avoiding the cost of real-time DataRepresentation construction during training.
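
Example invocation (assuming IceTray is available; both positional
arguments and flags are defined by the argument parser below)::

    python 01_convert_i3_files.py lmdb icecube-86 --precompute-representation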
"""

from glob import glob
from typing import Any, Dict
from graphnet.constants import EXAMPLE_OUTPUT_DIR, TEST_DATA_DIR
from graphnet.data.constants import FEATURES, TRUTH
from graphnet.data.extractors.icecube import (
I3FeatureExtractorIceCubeUpgrade,
I3FeatureExtractorIceCube86,
@@ -12,6 +19,9 @@
from graphnet.data.dataconverter import DataConverter
from graphnet.data.parquet import ParquetDataConverter
from graphnet.data.sqlite import SQLiteDataConverter
from graphnet.data.pre_configured.dataconverters import I3ToLMDBConverter
from graphnet.models.detector.icecube import IceCube86, IceCubeUpgrade
from graphnet.models.graphs import KNNGraph
from graphnet.utilities.argparse import ArgumentParser
from graphnet.utilities.imports import has_icecube_package
from graphnet.utilities.logging import Logger
@@ -29,14 +39,18 @@
)

CONVERTER_CLASS = {
"lmdb": I3ToLMDBConverter,
"sqlite": SQLiteDataConverter,
"parquet": ParquetDataConverter,
}


def main_icecube86(backend: str) -> None:
def main_icecube86(
backend: str,
precompute_representation: bool = False,
num_workers: int = 1,
) -> None:
"""Convert IceCube-86 I3 files to intermediate `backend` format."""
# Check(s)
assert backend in CONVERTER_CLASS

inputs = [f"{TEST_DATA_DIR}/i3/oscNext_genie_level7_v02"]
@@ -45,45 +59,99 @@ def main_icecube86(backend: str) -> None:
f"{TEST_DATA_DIR}/i3/oscNext_genie_level7_v02/*GeoCalib*"
)[0]

converter = CONVERTER_CLASS[backend](
extractors=[
I3FeatureExtractorIceCube86("SRTInIcePulses"),
I3TruthExtractor(),
],
outdir=outdir,
gcd_rescue=gcd_rescue,
workers=1,
)
extractors = [
I3FeatureExtractorIceCube86("SRTInIcePulses"),
I3TruthExtractor(),
]

if backend == "lmdb":
lmdb_kwargs: Dict[str, Any] = {}
if precompute_representation:
# Could be any DataRepresentation, not just KNNGraph
data_representation = KNNGraph(
detector=IceCube86(),
nb_nearest_neighbours=8,
input_feature_names=FEATURES.ICECUBE86,
)
lmdb_kwargs.update(
data_representation=data_representation,
pulsemap_extractor_name="SRTInIcePulses",
truth_extractor_name="truth",
truth_label_names=TRUTH.ICECUBE86,
)
converter: DataConverter = I3ToLMDBConverter(
extractors=extractors,
outdir=outdir,
gcd_rescue=gcd_rescue,
num_workers=num_workers,
**lmdb_kwargs,
)
else:
converter = CONVERTER_CLASS[backend](
extractors=extractors,
outdir=outdir,
gcd_rescue=gcd_rescue,
workers=num_workers,
)

converter(inputs)
if backend == "sqlite":
if backend in ["sqlite", "lmdb"]:
converter.merge_files()


def main_icecube_upgrade(backend: str) -> None:
def main_icecube_upgrade(
backend: str,
precompute_representation: bool = False,
num_workers: int = 1,
) -> None:
"""Convert IceCube-Upgrade I3 files to intermediate `backend` format."""
# Check(s)
assert backend in CONVERTER_CLASS

inputs = [f"{TEST_DATA_DIR}/i3/upgrade_genie_step4_140028_000998"]
outdir = f"{EXAMPLE_OUTPUT_DIR}/convert_i3_files/upgrade"
gcd_rescue = glob(
"{TEST_DATA_DIR}/i3/upgrade_genie_step4_140028_000998/*GeoCalib*"
)[0]
workers = 1

converter: DataConverter = CONVERTER_CLASS[backend](
extractors=[
I3TruthExtractor(),
I3RetroExtractor(),
I3FeatureExtractorIceCubeUpgrade("I3RecoPulseSeriesMap_mDOM"),
I3FeatureExtractorIceCubeUpgrade("I3RecoPulseSeriesMap_DEgg"),
],
outdir=outdir,
workers=workers,
gcd_rescue=gcd_rescue,
)

pulsemap = "I3RecoPulseSeriesMap_mDOM"
extractors = [
I3TruthExtractor(),
I3RetroExtractor(),
I3FeatureExtractorIceCubeUpgrade(pulsemap),
I3FeatureExtractorIceCubeUpgrade("I3RecoPulseSeriesMap_DEgg"),
]

if backend == "lmdb":
lmdb_kwargs: Dict[str, Any] = {}
if precompute_representation:
data_representation = KNNGraph(
detector=IceCubeUpgrade(),
nb_nearest_neighbours=8,
input_feature_names=FEATURES.UPGRADE,
)
lmdb_kwargs.update(
data_representation=data_representation,
pulsemap_extractor_name=pulsemap,
truth_extractor_name="truth",
truth_label_names=TRUTH.UPGRADE,
)
converter: DataConverter = I3ToLMDBConverter(
extractors=extractors,
outdir=outdir,
gcd_rescue=gcd_rescue,
num_workers=num_workers,
**lmdb_kwargs,
)
else:
converter = CONVERTER_CLASS[backend](
extractors=extractors,
outdir=outdir,
gcd_rescue=gcd_rescue,
workers=num_workers,
)

converter(inputs)
if backend == "sqlite":
if backend in ["sqlite", "lmdb"]:
converter.merge_files()


@@ -92,22 +160,55 @@ def main_icecube_upgrade(backend: str) -> None:
if not has_icecube_package():
Logger(log_folder=None).error(ERROR_MESSAGE_MISSING_ICETRAY)
else:
# Parse command-line arguments
parser = ArgumentParser(
description="""
Convert I3 files to an intermediate format.
"""
)

parser.add_argument("backend", choices=["sqlite", "parquet"])
parser.add_argument(
"backend",
nargs="?",
choices=["lmdb", "sqlite", "parquet"],
default="lmdb",
help="Backend format to convert to (default: %(default)s)",
)
parser.add_argument(
"detector", choices=["icecube-86", "icecube-upgrade"]
)
parser.add_argument(
"--precompute-representation",
action="store_true",
default=False,
help="Pre-compute a KNN graph representation and store it in "
"the LMDB database. Only supported with the lmdb backend.",
)
parser.add_argument(
"--workers",
type=int,
default=1,
help="Number of worker processes for parallel conversion "
"(default: %(default)s).",
)

args, unknown = parser.parse_known_args()

# Run example script
if args.precompute_representation and args.backend != "lmdb":
Logger(log_folder=None).warning(
"--precompute-representation is only supported with the lmdb "
"backend. Ignoring."
)
args.precompute_representation = False

if args.detector == "icecube-86":
main_icecube86(args.backend)
main_icecube86(
args.backend,
args.precompute_representation,
args.workers,
)
else:
main_icecube_upgrade(args.backend)
main_icecube_upgrade(
args.backend,
args.precompute_representation,
args.workers,
)
1 change: 1 addition & 0 deletions setup.py
@@ -27,6 +27,7 @@
"polars >=0.19",
"torchscale==0.2.0",
"h5py>= 3.7.0",
"lmdb>=1.4.1",
]

EXTRAS_REQUIRE = {
6 changes: 6 additions & 0 deletions src/graphnet/data/dataclasses.py
@@ -10,6 +10,12 @@ class I3FileSet: # noqa: D101
gcd_file: str


@dataclass
class SQLiteFileSet: # noqa: D101
db_path: str
event_nos: List[int]


@dataclass
class Settings:
"""Dataclass for workers in I3Deployer."""