Commit 85108dc

committed
wip: feat: volume management
1 parent a44e659 commit 85108dc

23 files changed

Lines changed: 2106 additions & 13 deletions

create-a-container/README.md

Lines changed: 98 additions & 0 deletions
@@ -7,13 +7,17 @@ A web application for managing LXC container creation, configuration, and lifecy
 ```mermaid
 erDiagram
     Node ||--o{ Container : "hosts"
+    Node ||--o{ Volume : "stores"
     Container ||--o{ Service : "exposes"
+    Container ||--o{ ContainerVolume : "mounts"
+    ContainerVolume }o--|| Volume : "references"
 
     Node {
         int id PK
         string name UK "Proxmox node name"
         string apiUrl "Proxmox API URL"
         boolean tlsVerify "Verify TLS certificates"
+        int placeholderCtId "VMID for volume storage"
         datetime createdAt
         datetime updatedAt
     }
@@ -34,6 +38,27 @@ erDiagram
         datetime updatedAt
     }
 
+    Volume {
+        int id PK
+        string name "User-friendly name"
+        string username "Owner username"
+        string proxmoxVolume "Proxmox reference"
+        int sizeGb "Size in GB"
+        int siteId FK "References Site"
+        int nodeId FK "References Node"
+        datetime createdAt
+        datetime updatedAt
+    }
+
+    ContainerVolume {
+        int id PK
+        int containerId FK "References Container"
+        int volumeId FK "References Volume"
+        string mountPath "Mount point path"
+        datetime createdAt
+        datetime updatedAt
+    }
+
     Service {
         int id PK
         int containerId FK "References Container"
@@ -51,6 +76,9 @@ erDiagram
 - `(Node.name)` - Unique
 - `(Container.hostname)` - Unique
 - `(Container.nodeId, Container.containerId)` - Unique (same VMID can exist on different nodes)
+- `(Volume.username, Volume.name, Volume.siteId)` - Unique (volume names unique per user per site)
+- `(ContainerVolume.containerId, ContainerVolume.volumeId)` - Unique (one attachment per volume per container)
+- `(ContainerVolume.containerId, ContainerVolume.mountPath)` - Unique (mount paths unique per container)
 - `(Service.externalHostname)` - Unique when type='http'
 - `(Service.type, Service.externalPort)` - Unique when type='tcp' or type='udp'
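The two `ContainerVolume` uniqueness rules can be checked in a few lines; a minimal plain-JS sketch, assuming the app enforces them as database unique indexes in practice (the function name and sample data here are illustrative):

```javascript
// Sketch: reject an attachment that would violate either ContainerVolume
// uniqueness rule: one attachment per (containerId, volumeId) pair, and
// one attachment per (containerId, mountPath) pair.
function canAttach(existing, next) {
  return !existing.some(cv =>
    cv.containerId === next.containerId &&
    (cv.volumeId === next.volumeId || cv.mountPath === next.mountPath)
  );
}

const attached = [{ containerId: 1, volumeId: 7, mountPath: '/data' }];
console.log(canAttach(attached, { containerId: 1, volumeId: 7, mountPath: '/backup' })); // false: volume already attached
console.log(canAttach(attached, { containerId: 1, volumeId: 9, mountPath: '/data' }));   // false: mount path taken
console.log(canAttach(attached, { containerId: 2, volumeId: 7, mountPath: '/data' }));   // true: different container
```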

@@ -59,6 +87,7 @@ erDiagram
 - **User Authentication** - Proxmox VE authentication integration
 - **Container Management** - Create, list, and track LXC containers
 - **Docker/OCI Support** - Pull and deploy containers from Docker Hub, GHCR, or any OCI registry
+- **Persistent Volumes** - Named volumes that survive container deletion for data persistence
 - **Service Registry** - Track HTTP/TCP/UDP services running on containers
 - **Dynamic Nginx Config** - Generate nginx reverse proxy configurations on-demand
 - **Real-time Progress** - SSE (Server-Sent Events) for container creation progress
@@ -407,6 +436,75 @@ SELECT id, status FROM Jobs WHERE id = <ID>;
 - Add batching or file-based logs for high-volume output to reduce DB pressure
 - Implement job timeout/deadline and automatic cancellation
 
+### Volume Management Routes
+
+#### `GET /sites/:siteId/volumes` (Auth Required)
+List all volumes owned by the authenticated user in a site
+- **Returns**: HTML page with volume list
+
+#### `GET /sites/:siteId/volumes/new` (Auth Required)
+Display volume creation form
+- **Returns**: HTML page with form
+
+#### `POST /sites/:siteId/volumes` (Auth Required)
+Create a new persistent volume
+- **Body**: `{ name, nodeId }`
+  - `name`: Volume name (alphanumeric, dash, underscore only)
+  - `nodeId`: Node where the volume should be created
+- **Process**:
+  1. Allocates disk on the node's storage
+  2. Attaches to the node's placeholder container
+  3. Creates Volume record in database
+- **Returns**: Redirect to volumes list
+
+#### `DELETE /sites/:siteId/volumes/:id` (Auth Required)
+Delete a volume permanently
+- **Path Parameter**: `id` - Volume database ID
+- **Authorization**: User can only delete their own volumes
+- **Validation**: Volume must not be attached to any container
+- **Process**:
+  1. Detaches from placeholder container
+  2. Deletes disk from Proxmox storage
+  3. Removes Volume record from database
+- **Returns**: `{ success: true, message: "Volume deleted successfully" }`
+- **Errors**:
+  - `400` - Volume is currently attached to a container
+  - `403` - User doesn't own the volume
+  - `404` - Volume not found
+
+### Volume Attachment
+
+Volumes can be attached to containers during container creation:
+
+#### During `POST /sites/:siteId/containers`
+- **Additional Body Fields**:
+  - `volumes`: Array of volume attachments
+  - `volumes[N][volumeId]`: Volume ID to attach
+  - `volumes[N][mountPath]`: Mount point inside container (e.g., `/data`)
+- **Process**:
+  1. Validates all volumes exist and are owned by user
+  2. Validates volumes are on the same node as the target container
+  3. Creates container and ContainerVolume records
+  4. Job runner moves volumes from placeholder to new container
+- **Note**: Cross-node volume attachment requires manual migration
+
+#### During `DELETE /sites/:siteId/containers/:id`
+When a container with attached volumes is deleted:
+1. All attached volumes are transferred to the placeholder container
+2. ContainerVolume records are deleted
+3. Volume records are preserved (data persists)
+4. Volumes can be reattached to new containers
+
+### Placeholder Container
+
+Each Proxmox node has a "placeholder container" for volume storage:
+
+- **Purpose**: Holds volumes not attached to user containers
+- **Auto-creation**: Created when node is registered
+- **Configuration**: Minimal Alpine, 16MB RAM, no network, protection enabled
+- **VMID**: Stored in `Node.placeholderCtId`
+- **Never started**: Exists only to own volumes
+
 ### Configuration Routes
 
 #### `GET /nginx.conf`
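For the attachment fields documented under `POST /sites/:siteId/containers`, a client would serialize the indexed `volumes[N][…]` fields as ordinary form data. A minimal sketch (the `hostname` field and all sample values are illustrative; the real request comes from an HTML form):

```javascript
// Sketch: building a container-creation body with two volume
// attachments, using the bracketed field names from the route docs.
const attachments = [
  { volumeId: 12, mountPath: '/data' },
  { volumeId: 15, mountPath: '/var/lib/postgresql' },
];

const body = new URLSearchParams({ hostname: 'db1', nodeId: '1' });
attachments.forEach((v, n) => {
  body.set(`volumes[${n}][volumeId]`, String(v.volumeId));
  body.set(`volumes[${n}][mountPath]`, v.mountPath);
});

console.log(body.toString());
```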

create-a-container/bin/create-container.js

Lines changed: 170 additions & 1 deletion
@@ -29,7 +29,7 @@ const https = require('https');
 
 // Load models from parent directory
 const db = require(path.join(__dirname, '..', 'models'));
-const { Container, Node, Site } = db;
+const { Container, Node, Site, Volume, ContainerVolume } = db;
 
 // Load utilities
 const { parseArgs } = require(path.join(__dirname, '..', 'utils', 'cli'));
@@ -174,6 +174,25 @@ async function main() {
   const containerId = parseInt(args['container-id'], 10);
   console.log(`Starting container creation for container ID: ${containerId}`);
 
+  // Parse new volume arguments
+  const newVolumes = [];
+  for (const key in args) {
+    if (key === 'new-volume') {
+      const values = Array.isArray(args[key]) ? args[key] : [args[key]];
+      for (const val of values) {
+        try {
+          newVolumes.push(JSON.parse(decodeURIComponent(val)));
+        } catch (e) {
+          console.error(`Failed to parse new-volume argument: ${val}`);
+        }
+      }
+    }
+  }
+
+  if (newVolumes.length > 0) {
+    console.log(`Will create ${newVolumes.length} new volume(s)`);
+  }
+
   // Load the container record with its node and site
   const container = await Container.findByPk(containerId, {
     include: [{
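The parsing above implies an encoding contract for callers of the script: each `--new-volume` value is a JSON object, URI-encoded so it survives argv. A minimal round-trip sketch (the field names `name`, `sizeGb`, `mountPath` match the volume-creation code in this commit; the sample values are illustrative):

```javascript
// Caller side: JSON-encode the volume spec, then URI-encode it so the
// string is safe to pass as a single CLI argument.
const spec = { name: 'pgdata', sizeGb: 8, mountPath: '/data' };
const arg = encodeURIComponent(JSON.stringify(spec));

// Script side: reverse the two steps, as the parsing loop does.
const parsed = JSON.parse(decodeURIComponent(arg));
console.log(parsed.name, parsed.sizeGb, parsed.mountPath);
```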
@@ -385,6 +404,156 @@ async function main() {
   await container.update({ containerId: vmid });
   console.log(`Container VMID ${vmid} stored in database`);
 
+  // Create new volumes if any were requested
+  if (newVolumes.length > 0) {
+    console.log(`Creating ${newVolumes.length} new volume(s)...`);
+
+    // Check node has placeholder
+    if (!node.placeholderCtId) {
+      throw new Error(`Node ${node.name} does not have a placeholder container. Create one first.`);
+    }
+
+    // Find storage for volumes
+    const storages = await client.datastores(node.name, 'rootdir', true);
+    if (storages.length === 0) {
+      throw new Error('No storage available for volumes');
+    }
+    const volumeStorage = storages[0].storage;
+
+    for (const volSpec of newVolumes) {
+      console.log(`  Creating volume "${volSpec.name}" (${volSpec.sizeGb}GB)...`);
+
+      // Allocate the disk on the placeholder
+      const volumeId = await client.allocateDisk(
+        node.name,
+        volumeStorage,
+        node.placeholderCtId,
+        volSpec.sizeGb
+      );
+      console.log(`  Allocated: ${volumeId}`);
+
+      // Create volume record in database
+      const volume = await Volume.create({
+        name: volSpec.name,
+        username: container.username,
+        proxmoxVolume: volumeId,
+        sizeGb: volSpec.sizeGb,
+        siteId: site.id,
+        nodeId: node.id
+      });
+      console.log(`  Volume record created: ID ${volume.id}`);
+
+      // Attach to placeholder temporarily
+      const placeholderMp = await client.findNextMountPoint(node.name, node.placeholderCtId);
+      const placeholderMountPath = `/${container.username}/${volSpec.name}`;
+      await client.updateLxcConfig(node.name, node.placeholderCtId, {
+        [placeholderMp]: `${volumeId},mp=${placeholderMountPath}`
+      });
+      console.log(`  Attached to placeholder at ${placeholderMp}`);
+
+      // Create ContainerVolume record for attachment
+      await ContainerVolume.create({
+        containerId: container.id,
+        volumeId: volume.id,
+        mountPath: volSpec.mountPath
+      });
+      console.log(`  Queued for attachment at ${volSpec.mountPath}`);
+    }
+  }
+
+  // Attach volumes if any were requested (including newly created ones)
+  const volumeAttachments = await ContainerVolume.findAll({
+    where: { containerId: container.id },
+    include: [{
+      model: Volume,
+      as: 'volume',
+      include: [{ model: Node, as: 'node' }]
+    }]
+  });
+
+  if (volumeAttachments.length > 0) {
+    console.log(`Attaching ${volumeAttachments.length} volume(s)...`);
+
+    for (const attachment of volumeAttachments) {
+      const volume = attachment.volume;
+      const mountPath = attachment.mountPath;
+
+      console.log(`  Attaching volume "${volume.name}" at ${mountPath}`);
+
+      // Check if volume is on same node
+      if (volume.nodeId !== node.id) {
+        console.log(`  Volume is on different node (${volume.node.name}), migrating to ${node.name}...`);
+
+        // Get the source node
+        const sourceNode = await Node.findByPk(volume.nodeId);
+        if (!sourceNode || !sourceNode.placeholderCtId) {
+          throw new Error(`Source node for volume "${volume.name}" not found or has no placeholder`);
+        }
+
+        // Get API client for source node
+        const sourceClient = await sourceNode.api();
+
+        // Find the volume on source placeholder
+        const sourceMp = await sourceClient.findMountPointForVolume(sourceNode.name, sourceNode.placeholderCtId, volume.proxmoxVolume);
+        if (!sourceMp) {
+          throw new Error(`Volume "${volume.name}" not found on source placeholder container`);
+        }
+
+        // Strategy: Move volume to a temporary minimal container, migrate it, then extract
+        // For now, we'll use a simpler approach: create the volume fresh on target and warn about data loss
+        // TODO: Implement proper storage-level migration when Proxmox supports it better
+
+        // Alternative: Use pct move command with --target-node option (requires shared storage)
+        // For local storage, we need to:
+        // 1. Create a temp container with just this volume on source
+        // 2. Migrate the temp container to target
+        // 3. Move volume from temp to target placeholder
+        // 4. Delete temp container
+
+        // Check if target node has placeholder
+        if (!node.placeholderCtId) {
+          throw new Error(`Target node ${node.name} does not have a placeholder container configured`);
+        }
+
+        // For MVP: Use backup/restore approach through shared storage or error out
+        // This is complex and depends on infrastructure setup
+        throw new Error(
+          `Cross-node volume migration for "${volume.name}" requires manual intervention. ` +
+          `Volume is on node "${sourceNode.name}" but container is being created on "${node.name}". ` +
+          `Please create the container on the same node as the volume, or migrate the volume manually using Proxmox.`
+        );
+      }
+
+      // Find the mount point on the placeholder container
+      const placeholderCtId = node.placeholderCtId;
+      if (!placeholderCtId) {
+        throw new Error(`Node ${node.name} does not have a placeholder container configured`);
+      }
+
+      const sourceMp = await client.findMountPointForVolume(node.name, placeholderCtId, volume.proxmoxVolume);
+      if (!sourceMp) {
+        throw new Error(`Volume "${volume.name}" not found on placeholder container`);
+      }
+
+      // Find next available mount point on target container
+      const targetMp = await client.findNextMountPoint(node.name, vmid);
+
+      // Move volume from placeholder to new container
+      console.log(`  Moving ${sourceMp} from placeholder CT ${placeholderCtId} to ${targetMp} on CT ${vmid}`);
+      const moveUpid = await client.moveVolume(node.name, placeholderCtId, sourceMp, vmid, targetMp);
+      await client.waitForTask(node.name, moveUpid);
+
+      // Update the mount path on the target container
+      await client.updateLxcConfig(node.name, vmid, {
+        [targetMp]: `${volume.proxmoxVolume},mp=${mountPath}`
+      });
+
+      console.log(`  Volume "${volume.name}" attached at ${mountPath}`);
+    }
+
+    console.log('All volumes attached successfully');
+  }
+
   // Start the container
   console.log('Starting container...');
   const startUpid = await client.startLxc(node.name, vmid);
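`findNextMountPoint` and `findMountPointForVolume` are project-internal client helpers. A hypothetical, synchronous sketch of the logic they presumably implement, assuming an LXC config object with `mpN` keys whose values follow Proxmox's `<volid>,mp=<path>` mount-point format (the real helpers are async and query the Proxmox API; the storage and volume names below are illustrative):

```javascript
// Return the first unused mpN slot in an LXC config.
function findNextMountPoint(config) {
  for (let n = 0; ; n++) {
    if (!(`mp${n}` in config)) return `mp${n}`;
  }
}

// Return the mpN key whose value references the given Proxmox volume,
// or null if the volume is not mounted in this config.
function findMountPointForVolume(config, proxmoxVolume) {
  for (const [key, value] of Object.entries(config)) {
    if (/^mp\d+$/.test(key) && value.split(',')[0] === proxmoxVolume) {
      return key;
    }
  }
  return null;
}

const config = {
  mp0: 'local-lvm:vm-105-disk-1,mp=/alice/pgdata',
  mp1: 'local-lvm:vm-105-disk-2,mp=/alice/media',
};
console.log(findNextMountPoint(config));                                 // → "mp2"
console.log(findMountPointForVolume(config, 'local-lvm:vm-105-disk-2')); // → "mp1"
```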
