This repository contains a collection of machine learning models developed in collaboration between University College Dublin (UCD) and Chocolate Cloud (CC).
These models are deployed within the SkyFlok Gateway component (hosted in London). They perform Latency Prediction to estimate the time required to retrieve a file from specific cloud storage backends.
By predicting download times based on temporal patterns and file size, these models enable the Gateway to intelligently route download requests to the fastest available storage region. This minimizes retrieval latency and optimizes network efficiency for the end user.
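In practice, the routing decision described above reduces to evaluating each backend's model and picking the minimum predicted latency. A minimal sketch with per-backend predictions stubbed out (the `predict_latency_ms` function and its values are illustrative, not real model output):

```python
def predict_latency_ms(backend_id: int, size_bytes: float) -> float:
    """Stub for per-backend model inference (illustrative values only)."""
    fake_predictions = {4: 182.0, 20: 95.0, 39: 140.0}
    return fake_predictions[backend_id]

def fastest_backend(backend_ids, size_bytes):
    # Route the download to the backend with the lowest predicted latency
    return min(backend_ids, key=lambda b: predict_latency_ms(b, size_bytes))

print(fastest_backend([4, 20, 39], 100000.0))  # 20 (lowest stub latency)
```

In the real Gateway, the stub would be replaced by a call into the corresponding ONNX session for each candidate backend.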
The models are exported in ONNX format (Opset 18) to ensure high-performance, low-latency inference within the real-time routing logic.
These models perform Regression to predict a continuous value:
- Input: File size and temporal features (time of day, hour, minute, second, day of week).
- Output: Estimated Transfer Time (Latency) in milliseconds.
Since latency characteristics vary between cloud providers, a separate model is trained for each storage backend. Use the table below to identify which model corresponds to which cloud provider/region.
| Backend ID | Model Filename | Cloud Provider | Region | Location |
|---|---|---|---|---|
| 4 | `model_backend_id_4.onnx` | Google Cloud | `europe-west1` | St. Ghislain, Belgium 🇧🇪 |
| 20 | `model_backend_id_20.onnx` | AWS | `eu-west-1` | Dublin, Ireland 🇮🇪 |
| 39 | `model_backend_id_39.onnx` | Microsoft Azure | `West Europe` | Amsterdam, Netherlands 🇳🇱 |
| 79 | `model_backend_id_79.onnx` | OVH Cloud | `GRA` | Gravelines, France 🇫🇷 |
| 137 | `model_backend_id_137.onnx` | Exoscale | `Geneva` | Geneva, Switzerland 🇨🇭 |
| 144 | `model_backend_id_144.onnx` | Scaleway | `Warsaw` | Warsaw, Poland 🇵🇱 |
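Since the filenames follow a single naming convention, resolving a Backend ID to its model file is mechanical. A small helper (the `models/` directory location matches the repository layout below; `MODEL_DIR` and `BACKEND_IDS` are names introduced here for illustration):

```python
MODEL_DIR = "models"

# Backend IDs from the table above
BACKEND_IDS = [4, 20, 39, 79, 137, 144]

def model_path(backend_id: int) -> str:
    """Build the ONNX model filename for a given backend."""
    return f"{MODEL_DIR}/model_backend_id_{backend_id}.onnx"

print(model_path(4))  # models/model_backend_id_4.onnx
```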
```
/models
├── model_backend_id_4.onnx     # Model for Backend ID 4
├── model_backend_id_20.onnx    # Model for Backend ID 20
├── model_backend_id_39.onnx    # ...
├── model_backend_id_79.onnx
├── model_backend_id_137.onnx
├── model_backend_id_144.onnx
└── model_config.json           # Hyperparameters & metadata
```
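The exact schema of `model_config.json` is not documented here; purely as an illustration (the keys below are hypothetical, though the values echo the hyperparameters stated in this README), it can be read with the standard library:

```python
import json

# Hypothetical example of what model_config.json might contain;
# the actual keys are defined by the repository, not by this snippet
example = '{"n_estimators": 2000, "max_depth": 12, "opset": 18}'
config = json.loads(example)
print(config["max_depth"])  # 12
```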
The models were trained on historical transfer logs collected from the SkyFlok platform.
The dataset captures real-world network performance metrics across different times of day and days of the week.
- Workload: File size (bytes).
- Temporal: Time of day (categorical: morning, afternoon, etc.), Hour, Minute, Second, Day of Week.
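All of the temporal features can be derived from a single log timestamp. A sketch of that derivation (the time-of-day cutoffs mirror the helper in the Quick Start demo further down; the function name is introduced here for illustration):

```python
from datetime import datetime

def temporal_features(t: datetime) -> dict:
    """Derive the model's temporal inputs from one timestamp."""
    if 5 <= t.hour < 12:
        tod = "morning"
    elif 12 <= t.hour < 17:
        tod = "afternoon"
    elif 17 <= t.hour < 21:
        tod = "evening"
    else:
        tod = "night"
    return {
        "time_of_day": tod,
        "hour": t.hour,
        "minute": t.minute,
        "second": t.second,
        "day_of_week": t.strftime("%A"),  # full day name, e.g. "Monday"
    }

print(temporal_features(datetime(2024, 3, 4, 14, 30, 45)))
```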
These models utilize a Scikit-Learn Pipeline architecture, fully converted to ONNX:
1. Preprocessing: A ColumnTransformer that handles mixed data types (One-Hot Encoding for strings, pass-through for numbers).
2. Regressor: Gradient Boosting Regressor (2000 trees, Max Depth 12).
This architecture allows the model to capture complex, non-linear relationships between network congestion (time of day) and transfer speeds.
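For orientation, here is a sketch of the training-time architecture described above, fitted on tiny synthetic data. The feature columns, sample values, and reduced tree count are illustrative only; the production models use 2000 trees at depth 12 and are trained on real SkyFlok transfer logs:

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Tiny synthetic sample: columns = [time_of_day, day_of_week, hour, size]
X = np.array([
    ["morning",   "Monday", 9,  100000.0],
    ["afternoon", "Monday", 14, 250000.0],
    ["evening",   "Friday", 19, 500000.0],
    ["night",     "Sunday", 2,  750000.0],
], dtype=object)
y = np.array([120.0, 180.0, 260.0, 90.0])  # made-up latencies in ms

pipeline = Pipeline([
    # One-hot encode the string columns (0, 1); pass numeric columns through
    ("pre", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), [0, 1])],
        remainder="passthrough",
        sparse_threshold=0,  # force dense output for the regressor
    )),
    # Production models use n_estimators=2000, max_depth=12; reduced here
    ("gbr", GradientBoostingRegressor(n_estimators=50, max_depth=3)),
])
pipeline.fit(X, y)
print(pipeline.predict(X).shape)  # one prediction per row
```

A pipeline of this shape is what tools like skl2onnx can convert into a single ONNX graph covering both preprocessing and regression.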
All models in this collection share the same input/output interface specifications.
The models accept a dictionary mapping each input name to a NumPy array of shape `[batch, 1]` (ONNX Runtime does not accept plain Python lists).
Note: The extra dimension (e.g. shape `(1, 1)`) is required to represent a batch size of 1.
| Input Name | Type | Shape | Description | Example |
|---|---|---|---|---|
| `time_of_day` | String | `[batch, 1]` | Categorical time block | `"morning"`, `"night"` |
| `hour` | Int64 | `[batch, 1]` | Hour of the day (0-23) | `14` |
| `minute` | Int64 | `[batch, 1]` | Minute of the hour (0-59) | `30` |
| `second` | Int64 | `[batch, 1]` | Second of the minute (0-59) | `45` |
| `day_of_week` | String | `[batch, 1]` | Full day name | `"Monday"`, `"Friday"` |
| `size` | Float32 | `[batch, 1]` | File size in bytes | `100000.0` |
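For batch sizes larger than 1, stack one row per request so that every input keeps shape `[batch, 1]`. A sketch building a batch of two (array construction only, no model run; string tensors are passed as `object`-dtype arrays):

```python
import numpy as np

# Two requests in one batch: every input keeps shape (batch, 1)
batch_inputs = {
    "time_of_day": np.array([["morning"], ["night"]], dtype=object),
    "hour":        np.array([[9], [23]], dtype=np.int64),
    "minute":      np.array([[30], [5]], dtype=np.int64),
    "second":      np.array([[45], [10]], dtype=np.int64),
    "day_of_week": np.array([["Monday"], ["Friday"]], dtype=object),
    "size":        np.array([[100000.0], [500000.0]], dtype=np.float32),
}
print(batch_inputs["size"].shape)  # (2, 1)
```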
- Name: `predicted_latency`
- Data Type: `float32`
- Shape: `[batch_size, 1]`
- Unit: Milliseconds (ms)
- Backend Specificity: `model_backend_id_4.onnx` is trained specifically on the performance history of Backend #4. It should not be used to predict latency for other backends.
- Historical Bias: Predictions are based on historical trends; sudden network outages or unprecedented congestion events may result in prediction errors.
To run any model from this collection, follow the steps below.

```bash
python3.13 -m venv venv
source venv/bin/activate
pip install onnxruntime
python inference_demo.py
```

If you prefer to integrate it into your own application, here is the minimal code required:
```python
import numpy as np
import onnxruntime as ort
from datetime import datetime

# --- Configuration ---
backend_id = 4
file_size = 100000.0  # 100 KB

# --- Helper ---
def get_time_of_day(hour):
    if 5 <= hour < 12: return 'morning'
    elif 12 <= hour < 17: return 'afternoon'
    elif 17 <= hour < 21: return 'evening'
    return 'night'

# --- 1. Prepare Input ---
# Note: shape (1, 1) creates the required batch dimension (Batch=1).
# ONNX Runtime expects NumPy arrays; string tensors use dtype=object.
t = datetime.now()
inputs = {
    'time_of_day': np.array([[get_time_of_day(t.hour)]], dtype=object),
    'hour': np.array([[t.hour]], dtype=np.int64),
    'minute': np.array([[t.minute]], dtype=np.int64),
    'second': np.array([[t.second]], dtype=np.int64),
    'day_of_week': np.array([[t.strftime('%A')]], dtype=object),
    'size': np.array([[file_size]], dtype=np.float32)
}

# --- 2. Run Inference ---
session = ort.InferenceSession(f"models/model_backend_id_{backend_id}.onnx")
result = session.run(None, inputs)
print(f"Predicted Latency: {result[0][0][0]:.2f} ms")
```

If you wish to cite this specific model collection, please use the citation generated by Zenodo (located in the right sidebar of this record).
This work is part of the MLSysOps project, funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No. 101092912.
More information about the project is available at https://mlsysops.eu/