Merged
5 changes: 3 additions & 2 deletions .gitignore
@@ -47,6 +47,7 @@ coverage.xml
*.py,cover
.hypothesis/
.pytest_cache/
.pytest_artifacts/
cover/

# Translations
@@ -155,5 +156,5 @@ cython_debug/
# UV cache directory (for hardlinking optimization)
.uv_cache/

# Generated demo venvs
comfy_hello_world/node-venvs/
# Generated test venvs
.smoke_venv/
63 changes: 62 additions & 1 deletion README.md
@@ -4,7 +4,7 @@

> 🚨 **Fail Loud Policy**: pyisolate assumes the rest of ComfyUI core is correct. Missing prerequisites or runtime failures immediately raise descriptive exceptions instead of being silently ignored.

pyisolate enables you to run Python extensions with conflicting dependencies in the same application by automatically creating isolated virtual environments for each extension using `uv`. Extensions communicate with the host process through a transparent RPC system, making the isolation invisible to your code while keeping the host environment dependency-free.
pyisolate enables you to run Python extensions with conflicting dependencies in the same application by automatically creating isolated environments for each extension. The default provisioner uses `uv`, and ComfyUI integrations can also provision a conda environment through `pixi` when an extension needs conda-first packages. Extensions communicate with the host process through a transparent RPC system, making the isolation invisible to your code while keeping the host environment dependency-free.

## Requirements

@@ -86,6 +86,7 @@ The script installs `uv`, creates the dev venv, installs pyisolate in editable m

- 🔒 **Dependency Isolation**: Run extensions with incompatible dependencies (e.g., numpy 1.x and 2.x) in the same application
- 🚀 **Zero-Copy PyTorch Tensor Sharing**: Share PyTorch tensors between processes without serialization overhead
- 📦 **Multiple Environment Backends**: Use `uv` by default or a conda/pixi environment when the extension needs conda-native dependencies
- 🔄 **Transparent Communication**: Call async methods across process boundaries as if they were local
- 🎯 **Simple API**: Clean, intuitive interface with minimal boilerplate
- ⚡ **Fast**: Uses `uv` for blazing-fast virtual environment creation
@@ -185,6 +186,66 @@ large_tensor = torch.randn(1000, 1000)
mean = await extension.process_tensor(large_tensor)
```

### Execution Model Axis

ComfyUI integrations now treat environment provisioning and runtime boundary as separate choices:

- `package_manager = "uv"` or `package_manager = "conda"` chooses how the child environment is built
- `execution_model = "host-coupled"` or `execution_model = "sealed_worker"` chooses how much host runtime state the child may inherit

`host-coupled` remains the default for the classic `uv` path. `sealed_worker` is the foreign-interpreter path: no host `sys.path` reconstruction, no host framework runtime imports as a crutch, JSON-RPC tensor transport, and no sandbox in this phase.
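Concretely, the two axes map to two independent keys in an extension's `pyproject.toml`. The sketch below shows the classic default for both axes; the key names match the sealed-worker examples in the following sections:

```toml
[tool.comfy.isolation]
can_isolate = true
# Axis 1: how the child environment is built
package_manager = "uv"            # or "conda"
# Axis 2: how much host runtime state the child inherits
execution_model = "host-coupled"  # or "sealed_worker"
```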

### UV Backend for Sealed Workers

ComfyUI extensions can also request a sealed `uv` worker explicitly:

```toml
[project]
name = "uv-sealed-node"
version = "0.1.0"
dependencies = ["boltons"]

[tool.comfy.isolation]
can_isolate = true
package_manager = "uv"
execution_model = "sealed_worker"
share_torch = false
```

Trade-offs for `package_manager = "uv"` with `execution_model = "sealed_worker"`:

- `share_torch` must be `False`
- tensors cross the boundary through JSON-compatible RPC values instead of shared-memory tensor handles
- host `sys.path` reconstruction is disabled
- host framework runtime imports such as `comfy.isolation.extension_wrapper` must not be required in the child
- `bwrap` sandboxing is intentionally disabled in this phase
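The JSON-value tensor transport can be illustrated with a toy round-trip. The helper names and payload layout here are hypothetical, not pyisolate's actual wire format; the point is only the cost model: every element is copied and serialized rather than shared through memory.

```python
import json

# Hypothetical encoding, for illustration only: the real sealed-worker
# RPC payload layout is internal to pyisolate and may differ.
def encode_tensor(nested_list):
    """Serialize a tensor (represented as nested Python lists) to JSON text."""
    return json.dumps({"dtype": "float32", "data": nested_list})

def decode_tensor(payload):
    """Rebuild the nested-list tensor from the JSON text."""
    return json.loads(payload)["data"]

original = [[1.0, 2.0], [3.0, 4.0]]
wire = encode_tensor(original)      # every element is copied into text
roundtripped = decode_tensor(wire)
assert roundtripped == original
```

This copy-and-serialize cost is why `share_torch` must be `False` on this path; workloads that depend on zero-copy tensor sharing belong on the `host-coupled` path.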
Comment on lines +196 to +221

⚠️ Potential issue | 🟡 Minor

The sealed-worker sandbox docs are out of sync with the code.

These lines say sealed workers have “no sandbox” / bwrap is disabled, but the new Linux tests in this PR assert the opposite for the uv path: sealed workers still go through build_bwrap_command() with --clearenv, env allowlisting, and read-only binds. Please scope that claim to the conda path, or rephrase it so the docs match the implementation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 196 - 221, The README claim that sealed workers
disable `bwrap` sandboxing is out of sync with the implementation/tests; update
the "UV Backend for Sealed Workers" section to either scope the "no sandbox /
bwrap disabled" statement to the conda path or rephrase it to reflect that for
`package_manager = "uv"` with `execution_model = "sealed_worker"` the runtime
still invokes the sandbox construction (see `build_bwrap_command()` which
applies `--clearenv`, env allowlisting and read-only binds) as asserted by the
Linux tests; ensure the doc text references `sealed_worker` and `package_manager
= "uv"` so readers aren’t misled.


### Conda Backend for Sealed Workers

ComfyUI extensions can declare a conda-backed isolated environment in `pyproject.toml`:

```toml
[project]
name = "weather-node"
version = "0.1.0"
dependencies = ["xarray", "cfgrib"]

[tool.comfy.isolation]
can_isolate = true
package_manager = "conda"
share_torch = false
conda_channels = ["conda-forge"]
conda_dependencies = ["eccodes", "cfgrib"]
```

Trade-offs for `package_manager = "conda"`:

- `share_torch` is forced `False`
- `bwrap` sandboxing is skipped
- the child uses its own interpreter instead of the host Python
- the child is treated as a sealed foreign runtime and must not import host framework runtime code through leaked `sys.path`
- tensor transfer crosses the RPC boundary as JSON-compatible values instead of shared-memory tensor handles

### Shared State with Singletons

Share state across all extensions using ProxiedSingleton:
4 changes: 3 additions & 1 deletion pyisolate/__init__.py
@@ -39,18 +39,20 @@
from ._internal.tensor_serializer import flush_tensor_keeper, purge_orphan_sender_shm_files
from .config import ExtensionConfig, ExtensionManagerConfig, SandboxMode
from .host import ExtensionBase, ExtensionManager
from .sealed import SealedNodeExtension

if TYPE_CHECKING:
from .interfaces import IsolationAdapter

__version__ = "0.9.1"
__version__ = "0.10.0"

__all__ = [
"ExtensionBase",
"ExtensionManager",
"ExtensionManagerConfig",
"ExtensionConfig",
"SandboxMode",
"SealedNodeExtension",
"ProxiedSingleton",
"local_execution",
"singleton_scope",
84 changes: 63 additions & 21 deletions pyisolate/_internal/bootstrap.py
@@ -21,24 +21,12 @@
logger = logging.getLogger(__name__)


def _apply_sys_path(snapshot: dict[str, Any]) -> None:
host_paths = snapshot.get("sys_path", [])
extra_paths = snapshot.get("additional_paths", [])
def _should_apply_host_sys_path(snapshot: dict[str, Any]) -> bool:
return bool(snapshot.get("apply_host_sys_path", True))

preferred_root: str | None = snapshot.get("preferred_root")
if not preferred_root:
context_data = snapshot.get("context_data", {})
module_path = context_data.get("module_path") or os.environ.get("PYISOLATE_MODULE_PATH")
if module_path:
preferred_root = str(Path(module_path).parent.parent)

child_paths = build_child_sys_path(host_paths, extra_paths, preferred_root)

if not child_paths:
return

# Rebuild sys.path with child paths first while preserving any existing entries
# that are not already in the computed set.
def _merge_sys_path_front(paths: list[str]) -> None:
"""Prepend paths to sys.path while preserving order and removing duplicates."""
seen = set()
merged: list[str] = []

@@ -49,13 +37,62 @@ def add_path(p: str) -> None:
seen.add(norm)
merged.append(p)

for p in child_paths:
for p in paths:
add_path(p)

for p in sys.path:
add_path(p)

sys.path[:] = merged


def _apply_sealed_opt_in_paths(snapshot: dict[str, Any]) -> None:
raw_paths = snapshot.get("sealed_host_ro_paths", [])
if not isinstance(raw_paths, list):
return

opt_in_paths: list[str] = []
for path in raw_paths:
if not isinstance(path, str) or not path.strip():
continue
if not os.path.isabs(path):
continue
if not os.path.exists(path):
continue
opt_in_paths.append(path)

if not opt_in_paths:
return

_merge_sys_path_front(opt_in_paths)
logger.debug("Applied %d sealed opt-in import paths", len(opt_in_paths))


def _apply_sys_path(snapshot: dict[str, Any]) -> None:
if not _should_apply_host_sys_path(snapshot):
_apply_sealed_opt_in_paths(snapshot)
logger.debug("Skipping host sys.path reconstruction for sealed child")
return

host_paths = snapshot.get("sys_path", [])
extra_paths = snapshot.get("additional_paths", [])

preferred_root: str | None = snapshot.get("preferred_root")
if not preferred_root:
context_data = snapshot.get("context_data", {})
module_path = context_data.get("module_path") or os.environ.get("PYISOLATE_MODULE_PATH")
if module_path:
preferred_root = str(Path(module_path).parent.parent)

filtered_subdirs = snapshot.get("filtered_subdirs")
child_paths = build_child_sys_path(host_paths, extra_paths, preferred_root, filtered_subdirs)

if not child_paths:
return

# Rebuild sys.path with child paths first while preserving any existing entries
# that are not already in the computed set.
_merge_sys_path_front(child_paths)
logger.debug("Applied %d paths from snapshot (preferred_root=%s)", len(child_paths), preferred_root)


@@ -125,20 +162,25 @@ def bootstrap_child() -> IsolationAdapter | None:
_apply_sys_path(snapshot)

adapter: IsolationAdapter | None = None
is_sealed = not _should_apply_host_sys_path(snapshot)

adapter_ref = snapshot.get("adapter_ref")
if adapter_ref:
try:
adapter = _rehydrate_adapter(adapter_ref)
except Exception as exc:
logger.warning("Failed to rehydrate adapter from ref %s: %s", adapter_ref, exc)
logger.warning(
"Failed to rehydrate adapter from ref %s: %s",
adapter_ref,
exc,
)

if not adapter and adapter_ref:
# If we had info but failed to load, that's an error
if not adapter and adapter_ref and not is_sealed:
raise ValueError("Snapshot contained adapter info but adapter could not be loaded")

if adapter:
adapter.setup_child_environment(snapshot)
if not is_sealed:
adapter.setup_child_environment(snapshot)
registry = SerializerRegistry.get_instance()
adapter.register_serializers(registry)

4 changes: 4 additions & 0 deletions pyisolate/_internal/client.py
@@ -107,6 +107,10 @@ async def async_entrypoint(
api_instance = cast(ProxiedSingleton, getattr(api, "instance", api))
_adapter.handle_api_registration(api_instance, rpc)

# Let the adapter wire child-side event hooks (e.g., progress bar)
if _adapter and hasattr(_adapter, "setup_child_event_hooks"):
_adapter.setup_child_event_hooks(extension)

Comment on lines +110 to +113

⚠️ Potential issue | 🟠 Major

Guard adapter hook failures so child startup doesn’t hard-fail.

If setup_child_event_hooks() throws, extension loading aborts even though RPC/module loading may be otherwise valid. Wrap this call and log a warning to keep startup resilient.

Proposed fix

```diff
-        # Let the adapter wire child-side event hooks (e.g., progress bar)
-        if _adapter and hasattr(_adapter, "setup_child_event_hooks"):
-            _adapter.setup_child_event_hooks(extension)
+        # Let the adapter wire child-side event hooks (e.g., progress bar)
+        if _adapter and hasattr(_adapter, "setup_child_event_hooks"):
+            try:
+                _adapter.setup_child_event_hooks(extension)
+            except Exception as exc:  # pragma: no cover - best-effort hook
+                logger.warning("Adapter child event hook setup failed: %s", exc, exc_info=True)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyisolate/_internal/client.py` around lines 110 - 113, Wrap the call to
_adapter.setup_child_event_hooks(extension) in a try/except that catches
Exception so adapter hook failures do not abort child startup; on exception, log
a warning (e.g., using logging.getLogger(__name__).warning(...)) including the
adapter identity and the exception details, and do not re-raise so execution
continues. Target the invocation in pyisolate._internal.client where _adapter
and hasattr(..., "setup_child_event_hooks") are checked.

# Sanitize module name for use as Python identifier.
# Replace '-' and '.' with '_' to prevent import errors when module names contain
# non-identifier characters (e.g., "my-node" → "my_node", "my.node" → "my_node").