Changes from all commits
69 commits
9a9d42e
initial sebs cloudflare infra, functions, config, triggers. readme in…
Oct 31, 2025
aa24a07
systems.json cloudflare config
MisterMM23 Nov 2, 2025
4cc0476
highly incomplete work on benchmark wrappers, using R2 and KV.
Nov 2, 2025
cd24fcf
wrappers - changes to handler and storage - can now run benchmark 110…
Nov 8, 2025
eaa42a1
just some changes. storage still not properly tested...
Nov 9, 2025
57452fa
concept for r2 storage
MisterMM23 Nov 10, 2025
9e47e0f
translated wrapper to js
Nov 10, 2025
822a9d9
used output from workers as analytics measurements in sebs
Nov 10, 2025
f7bb950
last changes necessary for sebs to run cloudflare. now just the stora…
Nov 10, 2025
1f0a979
javascript wrapper with polyfills reading from r2. r2 implementation,…
Nov 11, 2025
b117e75
adapted handler to measure invocation time
Nov 11, 2025
d42b157
fixed the fs polyfill to also support write operations to r2 storage…
Nov 12, 2025
ffd3f78
added compatibility for benchmarks 100 in nodejs. translated all 100 …
Nov 12, 2025
272a372
current situation where asyncio cannot run the async function
Nov 13, 2025
556d799
dynamically add async to benchmark function *shrug*
Nov 16, 2025
93c8a73
nosql updates
Nov 16, 2025
e17982f
idea for circumvention of asyncio
MisterMM23 Nov 17, 2025
214c947
wrappers - run_sync for storage.py
Nov 17, 2025
b8f7c5c
nosql wrapper uses run_sync
Nov 19, 2025
dba2992
cloudflare nodejs wrapper without r2 as fs polyfill, just node_compat…
Nov 19, 2025
5390021
cleanup nodejs deployment cloudflare, no uploading files necessary an…
Nov 19, 2025
24497a2
add folder structure to python code package
Nov 23, 2025
8812708
nosql wrapper - durable object - may work
Nov 28, 2025
5b3d784
fix python. 110 runs for me.
Nov 28, 2025
cd183b8
make it read the requirements.txt when it has a number
Nov 28, 2025
9379f39
durable objects compatibility for nodejs
Nov 30, 2025
5f9ad9c
asyncified the function calls...
Nov 30, 2025
92db5ae
fix python vendored modules
Dec 1, 2025
51892b0
added request polyfill for benchmark 120
Nov 30, 2025
3235d3f
fixed r2 usage for 120, 311
Dec 4, 2025
416b67b
support for cloudflare containers (python and nodejs), container work…
Dec 7, 2025
5284880
bigger container for python containers
Dec 8, 2025
b6de39b
sleep delay longer
Dec 8, 2025
812f592
request_id has to be string
Dec 8, 2025
9229f9f
update container fixed
Dec 8, 2025
5899d87
fixed benchmark wrapper request ids for experiment results as well as…
Dec 13, 2025
6e0cd2b
extract memory correctly
Dec 13, 2025
3cd741f
pyodide does not support resource module for memory measurement
Dec 14, 2025
2615a36
timing fix for cloudflare handler
Dec 15, 2025
e69243a
fixed python timing issue
Dec 15, 2025
f39aad0
removed faulty nodejs implementations of 000 bmks
Jan 5, 2026
e76f846
removed unnecessary logging
Jan 5, 2026
dc2f6ed
removed experiments.json and package*.json
Jan 6, 2026
437cc97
updated cloudflare readme to reflect final changes
Jan 6, 2026
1eb375c
has platform check according to convention, durable object items remo…
Jan 6, 2026
6c0768e
updated readme to document the correct return object structure by the…
Jan 6, 2026
0dfcfa8
documented cold start tracking limitation
Jan 6, 2026
b2465f9
removed unreachable return statement in cloudflare.py
Jan 6, 2026
0eb4d0b
small fix to use public property
Jan 6, 2026
db84f2d
small fix for public field in durable objects
Jan 6, 2026
7e2d8ac
converted nosql client calls to async and removed the corresponding p…
Jan 7, 2026
35a556d
Fix instance variable naming in nosql_do class
ldzgch Jan 7, 2026
92c5dea
Rename class instance reference from nosql to nosql_kv
ldzgch Jan 7, 2026
bcd5ecb
Apply suggestions from code review - storage.py
ldzgch Jan 7, 2026
96ac2c1
Apply suggestions from code review
ldzgch Jan 7, 2026
b028151
config placeholder for api tokens, r2 etc
Jan 18, 2026
03e274e
variable base image in docker file... have to replace the right image…
Jan 18, 2026
a11236a
copy and execute init.sh file from benchmark directory, and execute i…
Jan 18, 2026
35755d6
add to existing markdown for cloudflare specific documentation instea…
Jan 18, 2026
b427c5b
more detail about download_metrics()
Jan 18, 2026
734eadf
Docker build container for build orchestration locally
Jan 18, 2026
5ffcb06
using docker client to build local image
Jan 18, 2026
4da0c31
do not create library trigger
Jan 18, 2026
4874794
removed some deprecated logging, throw exception if cold start is used
Jan 18, 2026
6b8e695
split the cloudflare.py into containers.py + workers.py... each handl…
Jan 18, 2026
865ca06
use toml writer, cleaner approach to write wrangler.toml
Jan 18, 2026
20eb8db
accidental capitalization of the table name resulting in errors in 130
Jan 18, 2026
ebe0794
warning that locationHint is not supported
Jan 18, 2026
9dd0a6e
s3 client for r2 storage code duplication removed. and also removed l…
Jan 18, 2026
@@ -1,6 +1,6 @@
 {
   "timeout": 120,
   "memory": 128,
-  "languages": ["python", "nodejs"],
+  "languages": ["python"],
   "modules": []
 }
56 changes: 56 additions & 0 deletions benchmarks/100.webapps/120.uploader/python/function_cloudflare.py
@@ -0,0 +1,56 @@

import datetime
import os

from pyodide.ffi import run_sync
from pyodide.http import pyfetch

from . import storage

client = storage.storage.get_instance()

SEBS_USER_AGENT = "SeBS/1.2 (https://github.com/spcl/serverless-benchmarks) SeBS Benchmark Suite/1.2"

async def do_request(url, download_path):
    headers = {'User-Agent': SEBS_USER_AGENT}

    res = await pyfetch(url, headers=headers)
    bs = await res.bytes()

    with open(download_path, 'wb') as f:
        f.write(bs)
Comment on lines +16 to +20
⚠️ Potential issue | 🟠 Major

Validate HTTP download success before writing and uploading.

Right now any HTTP response body (including 404/500 pages) is written and then uploaded as if it were valid content. Add an explicit status check and fail fast on non-success responses.

🔧 Proposed fix
 async def do_request(url, download_path):
     headers = {'User-Agent': SEBS_USER_AGENT}

     res = await pyfetch(url, headers=headers)
+    if not res.ok:
+        raise RuntimeError(f"Download failed with status {res.status} for URL: {url}")
     bs = await res.bytes()

     with open(download_path, 'wb') as f:
         f.write(bs)
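The reviewer's fail-fast rule is easy to exercise outside Pyodide as well. A minimal stdlib sketch of the same check — `ensure_ok` is a hypothetical helper name, not part of the benchmark code:

```python
def ensure_ok(status, url):
    """Raise if an HTTP status is not a 2xx success (hypothetical helper
    mirroring the reviewer's suggested `if not res.ok` check)."""
    if not (200 <= status < 300):
        raise RuntimeError(f"Download failed with status {status} for URL: {url}")
    return status

# a 2xx status passes through; anything else fails fast before any bytes
# are written to disk or uploaded to storage
ensure_ok(200, "https://example.com/a.txt")
try:
    ensure_ok(404, "https://example.com/missing.txt")
except RuntimeError as e:
    print(e)  # → Download failed with status 404 for URL: https://example.com/missing.txt
```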


def handler(event):

    bucket = event.get('bucket').get('bucket')
    output_prefix = event.get('bucket').get('output')
    url = event.get('object').get('url')
    name = os.path.basename(url)
    download_path = '/tmp/{}'.format(name)
⚠️ Potential issue | 🟠 Major

Use a secure temporary file path instead of a predictable /tmp/<name>.

The current path is predictable and can collide across concurrent invocations. Use tempfile (mkstemp/NamedTemporaryFile) to avoid race/collision issues.

🔒 Proposed fix
 import datetime
 import os
+import tempfile
@@
-    name = os.path.basename(url)
-    download_path = '/tmp/{}'.format(name)
+    name = os.path.basename(url)
+    fd, download_path = tempfile.mkstemp(prefix="sebs-", suffix=f"-{name}", dir="/tmp")
+    os.close(fd)
🧰 Tools
🪛 Ruff (0.15.6)

[error] 28-28: Probable insecure usage of temporary file or directory: "/tmp/{}"

(S108)

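The mkstemp approach suggested above can be tried on its own. A small sketch with illustrative prefix/suffix values (the real code would derive the suffix from the URL's basename):

```python
import os
import tempfile

name = "payload.bin"  # in the benchmark this would be os.path.basename(url)

# mkstemp opens the file with O_EXCL plus a random component in the name,
# so two concurrent invocations can never collide on a predictable /tmp/<name>
fd, download_path = tempfile.mkstemp(prefix="sebs-", suffix=f"-{name}")
os.close(fd)

# the downloaded body is then written to the unique path as before
with open(download_path, "wb") as f:
    f.write(b"downloaded bytes")

assert os.path.getsize(download_path) == len(b"downloaded bytes")
os.remove(download_path)
```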


    process_begin = datetime.datetime.now()

    run_sync(do_request(url, download_path))

    size = os.path.getsize(download_path)
    process_end = datetime.datetime.now()

    upload_begin = datetime.datetime.now()
    key_name = client.upload(bucket, os.path.join(output_prefix, name), download_path)
    upload_end = datetime.datetime.now()

    process_time = (process_end - process_begin) / datetime.timedelta(microseconds=1)
    upload_time = (upload_end - upload_begin) / datetime.timedelta(microseconds=1)
    return {
        'result': {
            'bucket': bucket,
            'url': url,
            'key': key_name
        },
        'measurement': {
            'download_time': 0,
            'download_size': 0,
            'upload_time': upload_time,
            'upload_size': size,
            'compute_time': process_time
        }
    }
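The handler reports intervals via `(end - begin) / datetime.timedelta(microseconds=1)`; a quick sketch (with made-up timestamps) confirming this division yields elapsed time as a float number of microseconds:

```python
import datetime

# illustrative timestamps; the handler uses datetime.datetime.now() instead
begin = datetime.datetime(2026, 1, 6, 12, 0, 0)
end = begin + datetime.timedelta(seconds=1, microseconds=500)

# dividing a timedelta by a 1-microsecond timedelta gives the elapsed
# time as a float count of microseconds, the unit SeBS measurements use
elapsed_us = (end - begin) / datetime.timedelta(microseconds=1)
print(elapsed_us)  # → 1000500.0
```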
78 changes: 78 additions & 0 deletions benchmarks/100.webapps/130.crud-api/nodejs/function.js
@@ -0,0 +1,78 @@
const nosql = require('./nosql');

const nosqlClient = nosql.nosql.get_instance();
const nosqlTableName = "shopping_cart";

async function addProduct(cartId, productId, productName, price, quantity) {
    await nosqlClient.insert(
        nosqlTableName,
        ["cart_id", cartId],
        ["product_id", productId],
        { price: price, quantity: quantity, name: productName }
    );
}

async function getProducts(cartId, productId) {
    return await nosqlClient.get(
        nosqlTableName,
        ["cart_id", cartId],
        ["product_id", productId]
    );
}

async function queryProducts(cartId) {
    const res = await nosqlClient.query(
        nosqlTableName,
        ["cart_id", cartId],
        "product_id"
    );

    const products = [];
    let priceSum = 0;
    let quantitySum = 0;

    for (const product of res) {
        products.push(product.name);
        priceSum += product.price;
        quantitySum += product.quantity;
    }

    const avgPrice = quantitySum > 0 ? priceSum / quantitySum : 0.0;

    return {
        products: products,
        total_cost: priceSum,
        avg_price: avgPrice
    };
}

exports.handler = async function(event) {
    const results = [];

    for (const request of event.requests) {
        const route = request.route;
        const body = request.body;
        let res;

        if (route === "PUT /cart") {
            await addProduct(
                body.cart,
                body.product_id,
                body.name,
                body.price,
                body.quantity
            );
            res = {};
        } else if (route === "GET /cart/{id}") {
            res = await getProducts(body.cart, request.path.id);
        } else if (route === "GET /cart") {
            res = await queryProducts(body.cart);
        } else {
            throw new Error(`Unknown request route: ${route}`);
        }

        results.push(res);
    }

    return { result: results };
};
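For the `GET /cart` route, `queryProducts` reports `avg_price` as total price divided by total quantity — a per-unit price rather than a mean of product prices. A Python sketch of the same aggregation, with made-up rows (`aggregate_cart` is an illustrative name):

```python
def aggregate_cart(rows):
    """Mirror of the queryProducts() aggregation: sum prices and quantities,
    then report total cost and price per unit (0.0 for an empty cart)."""
    products, price_sum, quantity_sum = [], 0, 0
    for row in rows:
        products.append(row["name"])
        price_sum += row["price"]
        quantity_sum += row["quantity"]
    avg = price_sum / quantity_sum if quantity_sum > 0 else 0.0
    return {"products": products, "total_cost": price_sum, "avg_price": avg}

# illustrative rows, not real benchmark data
cart = [
    {"name": "pen", "price": 4, "quantity": 2},
    {"name": "pad", "price": 6, "quantity": 3},
]
print(aggregate_cart(cart))
# → {'products': ['pen', 'pad'], 'total_cost': 10, 'avg_price': 2.0}
```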
9 changes: 9 additions & 0 deletions benchmarks/100.webapps/130.crud-api/nodejs/package.json
@@ -0,0 +1,9 @@
{
  "name": "crud-api",
  "version": "1.0.0",
  "description": "CRUD API benchmark",
  "author": "",
  "license": "",
  "dependencies": {
  }
}
147 changes: 147 additions & 0 deletions benchmarks/300.utilities/311.compression/nodejs/function.js
@@ -0,0 +1,147 @@
const fs = require('fs');
const path = require('path');
const zlib = require('zlib');
const { v4: uuidv4 } = require('uuid');
const storage = require('./storage');

let storage_handler = new storage.storage();

/**
 * Calculate total size of a directory recursively
 * @param {string} directory - Path to directory
 * @returns {number} Total size in bytes
 */
function parseDirectory(directory) {
    let size = 0;

    function walkDir(dir) {
        const files = fs.readdirSync(dir);
        for (const file of files) {
            const filepath = path.join(dir, file);
            const stat = fs.statSync(filepath);
            if (stat.isDirectory()) {
                walkDir(filepath);
            } else {
                size += stat.size;
            }
        }
    }

    walkDir(directory);
    return size;
}

/**
 * Create a simple tar.gz-style archive from a directory using native zlib
 * This creates a gzip-compressed tar-like archive without external dependencies
 * @param {string} sourceDir - Directory to compress
 * @param {string} outputPath - Path for the output archive file
 * @returns {Promise<void>}
 */
async function createTarGzArchive(sourceDir, outputPath) {
    // Create a simple tar-like format (concatenated files with headers)
    const files = [];

    function collectFiles(dir, baseDir = '') {
        const entries = fs.readdirSync(dir);
        for (const entry of entries) {
            const fullPath = path.join(dir, entry);
            const relativePath = path.join(baseDir, entry);
            const stat = fs.statSync(fullPath);

            if (stat.isDirectory()) {
                collectFiles(fullPath, relativePath);
            } else {
                files.push({
                    path: relativePath,
                    fullPath: fullPath,
                    size: stat.size
                });
            }
        }
    }

    collectFiles(sourceDir);

    // Create a concatenated buffer of all files with simple headers
    const chunks = [];
    for (const file of files) {
        const content = fs.readFileSync(file.fullPath);
        // Simple header: filename length (4 bytes) + filename + content length (4 bytes) + content
        const pathBuffer = Buffer.from(file.path);
        const pathLengthBuffer = Buffer.allocUnsafe(4);
        pathLengthBuffer.writeUInt32BE(pathBuffer.length, 0);
        const contentLengthBuffer = Buffer.allocUnsafe(4);
        contentLengthBuffer.writeUInt32BE(content.length, 0);

        chunks.push(pathLengthBuffer);
        chunks.push(pathBuffer);
        chunks.push(contentLengthBuffer);
        chunks.push(content);
    }

    const combined = Buffer.concat(chunks);

    // Compress using gzip
    const compressed = zlib.gzipSync(combined, { level: 9 });
    fs.writeFileSync(outputPath, compressed);
}

exports.handler = async function(event) {
    const bucket = event.bucket.bucket;
    const input_prefix = event.bucket.input;
    const output_prefix = event.bucket.output;
    const key = event.object.key;

    // Create unique download path
    const download_path = path.join('/tmp', `${key}-${uuidv4()}`);
    fs.mkdirSync(download_path, { recursive: true });

    // Download directory from storage
    const s3_download_begin = Date.now();
    await storage_handler.download_directory(bucket, path.join(input_prefix, key), download_path);
    const s3_download_stop = Date.now();

    // Calculate size of downloaded files
    const size = parseDirectory(download_path);

    // Compress directory
    const compress_begin = Date.now();
    const archive_name = `${key}.tar.gz`;
    const archive_path = path.join(download_path, archive_name);
    await createTarGzArchive(download_path, archive_path);
    const compress_end = Date.now();

    // Get archive size
    const archive_size = fs.statSync(archive_path).size;

    // Upload compressed archive
    const s3_upload_begin = Date.now();
    const [key_name, uploadPromise] = storage_handler.upload(
        bucket,
        path.join(output_prefix, archive_name),
        archive_path
    );
    await uploadPromise;
    const s3_upload_stop = Date.now();

    // Calculate times in microseconds
    const download_time = (s3_download_stop - s3_download_begin) * 1000;
    const upload_time = (s3_upload_stop - s3_upload_begin) * 1000;
    const process_time = (compress_end - compress_begin) * 1000;

    return {
        result: {
            bucket: bucket,
            key: key_name
        },
        measurement: {
            download_time: download_time,
            download_size: size,
            upload_time: upload_time,
            upload_size: archive_size,
            compute_time: process_time
        }
    };
};
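Despite the `.tar.gz` name, the archive is gzip over a simple length-prefixed record format — 4-byte big-endian path length, path bytes, 4-byte big-endian content length, content bytes — not a real tar stream. A sketch showing the format round-trips; `pack_entries`/`unpack_entries` are illustrative names, not part of the benchmark:

```python
import gzip
import struct

def pack_entries(entries):
    # mirror of the JS packing: u32-BE path length, path bytes,
    # u32-BE content length, content bytes, all concatenated then gzipped
    chunks = []
    for path, content in entries:
        p = path.encode()
        chunks.append(struct.pack(">I", len(p)) + p)
        chunks.append(struct.pack(">I", len(content)) + content)
    return gzip.compress(b"".join(chunks), compresslevel=9)

def unpack_entries(blob):
    # inverse: decompress, then walk the length-prefixed records
    data = gzip.decompress(blob)
    out, i = [], 0
    while i < len(data):
        (plen,) = struct.unpack_from(">I", data, i); i += 4
        path = data[i:i + plen].decode(); i += plen
        (clen,) = struct.unpack_from(">I", data, i); i += 4
        out.append((path, data[i:i + clen])); i += clen
    return out

archive = pack_entries([("a.txt", b"hello"), ("dir/b.bin", b"\x00\x01")])
print(unpack_entries(archive))
# → [('a.txt', b'hello'), ('dir/b.bin', b'\x00\x01')]
```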

9 changes: 9 additions & 0 deletions benchmarks/300.utilities/311.compression/nodejs/package.json
@@ -0,0 +1,9 @@
{
  "name": "compression-benchmark",
  "version": "1.0.0",
  "description": "Compression benchmark for serverless platforms",
  "main": "function.js",
  "dependencies": {
    "uuid": "^10.0.0"
  }
}