49 changes: 23 additions & 26 deletions README.md
@@ -82,8 +82,8 @@ branchfs create experiment /mnt/workspace
cd /mnt/workspace
echo "new code" > feature.py

# List branches for this mount
branchfs list /mnt/workspace
# List branches
branchfs list

# Commit changes to base (switches back to main, stays mounted)
branchfs commit /mnt/workspace
@@ -145,43 +145,42 @@ cat /mnt/workspace/@feature-a/@child/file.txt

This is useful for multi-agent workflows where each agent can bind-mount a different `@branch` path to work on isolated branches in parallel within the same mount.
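
For example, a supervisor could hand an agent a plain directory backed by one of these branches with a standard Linux bind mount. A minimal sketch, assuming the `/mnt/workspace` mount from the example above; the scratch directory name is made up and `mount --bind` typically needs root:

```bash
# Scratch directory is illustrative; mount --bind typically needs root
mkdir -p /tmp/agent-workdir
sudo mount --bind /mnt/workspace/@feature-a /tmp/agent-workdir

# The agent now works in an ordinary directory rooted at the feature-a branch
echo "draft" > /tmp/agent-workdir/notes.txt

# Detach when the agent is done
sudo umount /tmp/agent-workdir
```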

### Parallel Speculation (Multiple Mount Points)
### Parallel Speculation (Multiple Agents)

Each mount has its own isolated branch namespace:
With `@branch` virtual paths, multiple agents can work in parallel through a single mount:

```bash
# Mount two isolated workspaces from the same base
branchfs mount --base ~/project /mnt/approach-a
branchfs mount --base ~/project /mnt/approach-b
# Mount once
branchfs mount --base ~/project /mnt/workspace

# Create branches in each (isolated from each other)
branchfs create experiment /mnt/approach-a
branchfs create experiment /mnt/approach-b # same name, different mount = OK
# Create branches for each agent
branchfs create agent-a /mnt/workspace
branchfs create agent-b /mnt/workspace

# Work in parallel...
echo "approach a" > /mnt/approach-a/solution.py
echo "approach b" > /mnt/approach-b/solution.py
# Each agent works via its own @branch path (no switching needed)
echo "approach a" > /mnt/workspace/@agent-a/solution.py
echo "approach b" > /mnt/workspace/@agent-b/solution.py

# Commit one approach
branchfs commit /mnt/approach-a
# Commit one agent's work
echo "commit" > /mnt/workspace/@agent-a/.branchfs_ctl

# approach-b is unaffected (isolated mount)
cat /mnt/approach-b/solution.py # still works
# agent-b is unaffected
cat /mnt/workspace/@agent-b/solution.py # still works
```

## Semantics

### Per-Mount Isolation
### Shared Branch Namespace

Each mount point has its own isolated branch namespace. Branches created in one mount are not visible to other mounts, even if they share the same base directory. This includes `@branch` virtual paths — `/@feature-a` on one mount has no relation to `/@feature-a` on another mount, even if they share the same base. This enables true parallel speculation without interference.
All mounts share a single branch namespace managed by the daemon. Branches created through any mount are visible from all mounts via `@branch` virtual paths. This simplifies multi-agent workflows — each agent accesses its branch via `/@branch-name/` without needing separate mount points.
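
For example (a sketch; the mountpoint names are made up), a branch created through one mount is immediately reachable through another mount of the same base:

```bash
# Two mounts of the same base, served by one daemon
branchfs mount --base ~/project /mnt/ws-1
branchfs mount --base ~/project /mnt/ws-2

# Create a branch through the first mount...
branchfs create feature-x /mnt/ws-1

# ...and it is visible from the second mount too
branchfs list
ls /mnt/ws-2/@feature-x/
```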

### Commit

Committing a branch applies the entire chain of changes to the base filesystem:

1. Changes are collected from the current branch up through its ancestors
2. Deletions are applied first, then file modifications
3. Mount's epoch increments, invalidating all branches in this mount
3. Epoch increments, invalidating all branches across all mounts
4. **Mount automatically switches to main branch** (stays mounted)
5. Memory-mapped regions trigger `SIGBUS` on next access
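
For concreteness, a sketch that follows the quick-start flow above; it assumes, as that example does, that the mount's base is `~/project` and that `branchfs create` switches the mount onto the new branch:

```bash
branchfs create experiment /mnt/workspace    # switches the mount onto 'experiment'
echo "new code" > /mnt/workspace/feature.py

branchfs commit /mnt/workspace               # applies the branch chain to the base

cat ~/project/feature.py                     # the change has landed in the base
branchfs list                                # back on main; 'experiment' was invalidated
```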

@@ -190,15 +189,13 @@ Committing a branch applies the entire chain of changes to the base filesystem:
Aborting discards the entire branch chain without affecting the base:

1. The entire branch chain (current branch up to main) is discarded
2. Sibling branches in the same mount continue operating normally
2. Other branches continue operating normally
3. **Mount automatically switches to main branch** (stays mounted)
4. Memory-mapped regions in aborted branches trigger `SIGBUS`
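
A sketch of the corresponding flow; the `abort` subcommand is assumed here to mirror `commit`, and the branch and file names are made up:

```bash
branchfs create risky-idea /mnt/workspace
echo "dead end" > /mnt/workspace/scratch.py

branchfs abort /mnt/workspace   # assumed subcommand, mirroring 'commit'

ls ~/project/scratch.py         # fails: the base never saw the change
branchfs list                   # mount is back on main
```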

### Unmount

Unmounting removes the mount and cleans up all its branches:
Unmounting removes the FUSE mount:

1. **All branches for this mount are discarded** (full cleanup)
2. Mount-specific storage is deleted
3. The daemon automatically exits when the last mount is removed
4. Other mounts are unaffected (per-mount isolation)
1. The FUSE session is torn down
2. The daemon automatically exits when the last mount is removed
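
A sketch; the `unmount` subcommand is assumed here (it matches the daemon's `Unmount` request), and the mountpoint is the workspace from earlier:

```bash
branchfs unmount /mnt/workspace   # assumed subcommand, matching the daemon's Unmount request

# The FUSE mount is gone; if it was the last one, the daemon exits on its own
mountpoint -q /mnt/workspace || echo "unmounted"
```
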
181 changes: 54 additions & 127 deletions src/daemon.rs
@@ -1,7 +1,5 @@
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::fs;
use std::hash::{Hash, Hasher};
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::path::{Path, PathBuf};
@@ -21,25 +19,11 @@ use crate::fs::BranchFs;
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "cmd", rename_all = "snake_case")]
pub enum Request {
Mount {
branch: String,
mountpoint: String,
},
Unmount {
mountpoint: String,
},
Create {
name: String,
parent: String,
mountpoint: String,
},
NotifySwitch {
mountpoint: String,
branch: String,
},
List {
mountpoint: String,
},
Mount { branch: String, mountpoint: String },
Unmount { mountpoint: String },
Create { name: String, parent: String },
NotifySwitch { mountpoint: String, branch: String },
List,
Shutdown,
}

@@ -78,29 +62,19 @@ impl Response {
}
}

/// Per-mount state including the FUSE session, current branch, and isolated branch manager
/// Per-mount state including the FUSE session and current branch
pub struct MountInfo {
session: BackgroundSession,
current_branch: String,
manager: Arc<BranchManager>,
mount_storage: PathBuf,
}

pub struct Daemon {
base_path: PathBuf,
storage_path: PathBuf,
manager: Arc<BranchManager>,
mounts: Mutex<HashMap<PathBuf, MountInfo>>,
socket_path: PathBuf,
shutdown: AtomicBool,
}

/// Generate a hash-based directory name for a mountpoint
fn mount_hash(mountpoint: &Path) -> String {
let mut hasher = DefaultHasher::new();
mountpoint.hash(&mut hasher);
format!("{:016x}", hasher.finish())
}

impl Daemon {
pub fn new(
base_path: PathBuf,
@@ -109,7 +83,15 @@ impl Daemon {
) -> Result<Self> {
let socket_path = storage_path.join("daemon.sock");

// Clean up orphaned mount directories on startup
// Clean up branches from previous daemon run for fresh state
let branches_dir = storage_path.join("branches");
if branches_dir.exists() {
if let Err(e) = fs::remove_dir_all(&branches_dir) {
log::warn!("Failed to clean up branches directory: {}", e);
}
}

// Also clean up legacy mounts directory if present
let mounts_dir = storage_path.join("mounts");
if mounts_dir.exists() {
if let Err(e) = fs::remove_dir_all(&mounts_dir) {
@@ -122,9 +104,15 @@
fs::create_dir_all(&storage_path)?;
fs::write(&base_file, base_path.to_string_lossy().as_bytes())?;

// Create the single shared BranchManager
let manager = Arc::new(BranchManager::new(
storage_path.clone(),
base_path.clone(),
base_path.clone(),
)?);

Ok(Self {
base_path,
storage_path,
manager,
mounts: Mutex::new(HashMap::new()),
socket_path,
shutdown: AtomicBool::new(false),
@@ -136,45 +124,29 @@
}

pub fn spawn_mount(&self, branch_name: &str, mountpoint: &Path) -> Result<()> {
// Create mount-specific storage directory
let mount_storage = self
.storage_path
.join("mounts")
.join(mount_hash(mountpoint));
fs::create_dir_all(&mount_storage)?;

// Create a new BranchManager for this mount
let manager = Arc::new(BranchManager::new(
mount_storage.clone(),
self.base_path.clone(),
mountpoint.to_path_buf(),
)?);

let fs = BranchFs::new(manager.clone(), branch_name.to_string());
let fs = BranchFs::new(self.manager.clone(), branch_name.to_string());
let options = vec![
MountOption::FSName("branchfs".to_string()),
MountOption::DefaultPermissions,
];

log::info!(
"Spawning mount for branch '{}' at {:?} with storage {:?}",
"Spawning mount for branch '{}' at {:?}",
branch_name,
mountpoint,
mount_storage
);

let session =
fuser::spawn_mount2(fs, mountpoint, &options).map_err(crate::error::BranchError::Io)?;

// Get the notifier for cache invalidation and register it with the manager
let notifier = Arc::new(session.notifier());
manager.register_notifier(branch_name, mountpoint.to_path_buf(), notifier);
self.manager
.register_notifier(branch_name, mountpoint.to_path_buf(), notifier);

let mount_info = MountInfo {
session,
current_branch: branch_name.to_string(),
manager,
mount_storage,
};

self.mounts
@@ -199,22 +171,9 @@
}
};

// Clean up mount storage directory (full cleanup on unmount)
if let Some(info) = mount_info {
info.manager
self.manager
.unregister_notifier(&info.current_branch, mountpoint);
// Delete the entire mount storage directory
if info.mount_storage.exists() {
if let Err(e) = fs::remove_dir_all(&info.mount_storage) {
log::warn!(
"Failed to clean up mount storage {:?}: {}",
info.mount_storage,
e
);
} else {
log::info!("Cleaned up mount storage {:?}", info.mount_storage);
}
}
}

if should_shutdown {
@@ -230,18 +189,9 @@
let mountpoints: Vec<PathBuf> = mounts.keys().cloned().collect();
for mountpoint in &mountpoints {
if let Some(info) = mounts.remove(mountpoint) {
info.manager
self.manager
.unregister_notifier(&info.current_branch, mountpoint);
// BackgroundSession dropped here → FUSE unmount
if info.mount_storage.exists() {
if let Err(e) = fs::remove_dir_all(&info.mount_storage) {
log::warn!(
"Failed to clean up mount storage {:?}: {}",
info.mount_storage,
e
);
}
}
log::info!("Cleaned up mount at {:?}", mountpoint);
}
}
@@ -251,27 +201,16 @@
self.mounts.lock().len()
}

pub fn create_branch(&self, name: &str, parent: &str, mountpoint: &Path) -> Result<()> {
let mounts = self.mounts.lock();
let mount_info = mounts
.get(mountpoint)
.ok_or_else(|| crate::error::BranchError::MountNotFound(format!("{:?}", mountpoint)))?;
mount_info.manager.create_branch(name, parent)
pub fn create_branch(&self, name: &str, parent: &str) -> Result<()> {
self.manager.create_branch(name, parent)
}

pub fn list_branches(&self, mountpoint: &Path) -> Result<Vec<(String, Option<String>)>> {
let mounts = self.mounts.lock();
let mount_info = mounts
.get(mountpoint)
.ok_or_else(|| crate::error::BranchError::MountNotFound(format!("{:?}", mountpoint)))?;
Ok(mount_info.manager.list_branches())
pub fn list_branches(&self) -> Vec<(String, Option<String>)> {
self.manager.list_branches()
}

pub fn get_manager(&self, mountpoint: &Path) -> Option<Arc<BranchManager>> {
self.mounts
.lock()
.get(mountpoint)
.map(|info| info.manager.clone())
pub fn get_manager(&self) -> Arc<BranchManager> {
self.manager.clone()
}

pub fn run(&self) -> Result<()> {
@@ -361,29 +300,22 @@
Err(e) => Response::error(&format!("{}", e)),
}
}
Request::Create {
name,
parent,
mountpoint,
} => {
let path = PathBuf::from(&mountpoint);
match self.create_branch(&name, &parent, &path) {
Ok(()) => Response::success(),
Err(e) => Response::error(&format!("{}", e)),
}
}
Request::Create { name, parent } => match self.create_branch(&name, &parent) {
Ok(()) => Response::success(),
Err(e) => Response::error(&format!("{}", e)),
},
Request::NotifySwitch { mountpoint, branch } => {
let path = PathBuf::from(&mountpoint);
let mut mounts = self.mounts.lock();
if let Some(ref mut info) = mounts.get_mut(&path) {
// Unregister old notifier
info.manager
self.manager
.unregister_notifier(&info.current_branch, &path);
// Update tracked branch
let old_branch = std::mem::replace(&mut info.current_branch, branch.clone());
// Register notifier for new branch
let notifier = Arc::new(info.session.notifier());
info.manager
self.manager
.register_notifier(&branch, path.clone(), notifier);
log::info!(
"Mount {:?} switched from '{}' to '{}'",
@@ -396,23 +328,18 @@
Response::error(&format!("Mount not found: {:?}", path))
}
}
Request::List { mountpoint } => {
let path = PathBuf::from(&mountpoint);
match self.list_branches(&path) {
Ok(branches) => {
let branches: Vec<_> = branches
.into_iter()
.map(|(name, parent)| {
serde_json::json!({
"name": name,
"parent": parent
})
})
.collect();
Response::success_with_data(serde_json::json!(branches))
}
Err(e) => Response::error(&format!("{}", e)),
}
Request::List => {
let branches: Vec<_> = self
.list_branches()
.into_iter()
.map(|(name, parent)| {
serde_json::json!({
"name": name,
"parent": parent
})
})
.collect();
Response::success_with_data(serde_json::json!(branches))
}
Request::Shutdown => {
log::info!("Shutdown requested, cleaning up all mounts");