
feat: pyisolate 0.10.0 — conda backend, sealed workers, CUDA wheels, sandbox hardening#9

Merged
pollockjj merged 1 commit into Comfy-Org:main from pollockjj:develop
Mar 29, 2026

Conversation

@pollockjj
Collaborator

Summary

  • Conda/pixi backend: environment creation, config, routing, PEP 508 markers
  • CUDA wheel resolver: target_python, multi-index, find_links, dotted torch versions
  • Sealed worker execution model: uv and conda backends, proxy handle round-trip
  • RPC serialization: dynamic serializer resolution, perf tracing
  • Generic event channel: child-to-host dispatch, setup_child_event_hooks
  • Bwrap sandbox hardening: adapter paths, sealed workers, sys.base_prefix
  • Decoupling: remove ComfyUI-specific references from pyisolate core
  • Platform fixes: Windows uv_exe fallback, test expectations, TMPDIR cleanup
  • Version bump: 0.9.2 → 0.10.0 (pyproject.toml + __init__.py)
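To make the new surface area concrete, here is a hypothetical extension config exercising the 0.10.0 fields. Field names follow the summary above (package_manager, execution_model, sealed_host_ro_paths, conda channels); the exact ExtensionConfig schema and value types are assumptions, not the PR's definitive API.

```python
# Hypothetical sketch of an extension config using the new 0.10.0 fields.
# Exact ExtensionConfig keys and types may differ from the real schema.
config = {
    "name": "weather_nodes",
    "package_manager": "conda",          # new: "uv" (default) or "conda"
    "execution_model": "sealed_worker",  # conda backend requires sealed workers
    "share_torch": False,                # sealed workers forbid share_torch=True
    "conda_channels": ["conda-forge"],   # required when package_manager == "conda"
    "dependencies": ["boltons>=25.0.0"],
    "sealed_host_ro_paths": ["/opt/models"],  # opt-in read-only host paths (absolute)
}

assert config["package_manager"] in ("uv", "conda")
```

The constraints mirrored in the comments (sealed workers forbid share_torch, conda requires conda_channels) come from the backend-validation rules described in the walkthrough.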

Test plan

  • All CI checks green on pollockjj/pyisolate PR #4 (Add custom coderabbit instructions); 53 checks pass
  • Upstream CI: all matrix legs pass
  • Build wheels: all platforms build and smoke test pass
  • Lint: ruff check + ruff format --check + mypy all exit 0

…event channel, sandbox hardening

* feat: pyisolate 0.10.0 — conda backend, sealed workers, CUDA wheels, event channel, sandbox hardening

Squash of 93 commits from conda_support since v0.9.2.

Major features:
- Conda/pixi backend: environment creation, config, routing, PEP 508 markers
- CUDA wheel resolver: target_python, multi-index, find_links, dotted torch versions
- Sealed worker execution model: uv and conda backends, proxy handle round-trip
- RPC serialization: dynamic serializer resolution, perf tracing
- Generic event channel: child-to-host dispatch, setup_child_event_hooks
- Bwrap sandbox hardening: adapter paths, sealed workers, sys.base_prefix
- Remove ComfyUI-specific references from pyisolate core
- Platform fixes: Windows uv_exe fallback, test expectations, TMPDIR cleanup
- Formatting fixes: ruff format compliance for CI

* chore: bump version to 0.10.0

* fix: use tomli fallback for Python 3.10 in integration tests

* fix: inline fixture config in integration tests, remove tomllib dependency

* fix: skip network-dependent and bwrap-dependent tests in CI

* fix: skip uv sealed integration test when bwrap sandbox rejects at runtime
Copilot AI review requested due to automatic review settings March 29, 2026 22:26
@coderabbitai

coderabbitai bot commented Mar 29, 2026

📝 Walkthrough


This pull request introduces a multi-backend execution model supporting both uv (default) and conda package managers, adds sealed-worker isolation with hermetic environment enforcement, implements event bridging for child-to-host communication, and extends tensor transport to support JSON serialization alongside shared-memory. Version bumped to 0.10.0.

Changes

Cohort / File(s) Summary
Configuration & Validation
pyproject.toml, pyisolate/config.py, pyisolate/__init__.py, .gitignore, README.md
Updated version to 0.10.0, added SealedNodeExtension to public API, extended CUDAWheelConfig with index_urls support, added optional package_manager, execution_model, sealed_host_ro_paths, and conda-related fields to ExtensionConfig, ignored .pytest_artifacts/ and .smoke_venv/ directories, documented sealed-worker and multi-backend execution model in README.
Backend Configuration Validation
pyisolate/_internal/environment.py
Added validate_backend_config to enforce backend-specific rules: derives/validates execution_model, rejects unknown models, enforces share_cuda_ipc requires share_torch, forbids sealed_worker with share_torch=True, validates sealed_host_ro_paths, and adds conda-specific constraints (requires sealed_worker, share_torch=False, conda_channels, and pixi on PATH).
Conda/Pixi Environment Creation
pyisolate/_internal/environment_conda.py
New module implementing conda/pixi-based environment provisioning; generates pixi.toml manifests from configs, handles conda/pip dependency splitting, resolves pixi Python, installs CUDA wheels into pixi environments, installs local wheel artifacts, and caches fingerprints to skip redundant installs.
Bootstrap & Sealed Mode Control
pyisolate/_internal/bootstrap.py, pyisolate/_internal/sandbox.py, pyisolate/path_helpers.py
Conditional host sys.path application controlled by snapshot["apply_host_sys_path"], added sealed opt-in path handling via sealed_host_ro_paths, skipped adapter setup for sealed workers, enhanced bwrap command building with sealed-worker environment clearance and explicit allowlist, added symlink-aware Python prefix resolution, normalized writable paths and rejected /tmp binds.
Event Bridging Infrastructure
pyisolate/_internal/event_bridge.py, pyisolate/shared.py, pyisolate/interfaces.py, pyisolate/_internal/host.py
New _EventBridge class for child-to-host event dispatch via RPC, added emit_event to ExtensionLocal with JSON serializability validation and asyncio loop fallbacks, added setup_child_event_hooks and setup_web_directory to IsolationAdapter protocol, registered event bridge as RPC callee on host side.
Tensor & RPC Transport Modes
pyisolate/_internal/tensor_serializer.py, pyisolate/_internal/rpc_transports.py, pyisolate/_internal/model_serialization.py, pyisolate/_internal/rpc_serialization.py
Extended tensor serialization to support mode parameter for JSON vs. shared-memory transport, added performance tracing via _record_tensor_trace, refactored JSON transport with NumPy scalar handling and mode-aware deserialization, updated RPC serialization with serializer-type memoization and diagnostic logging, simplified remote-handle passthrough logic.
CUDA Wheel Resolution
pyisolate/_internal/cuda_wheels.py
Added optional target_python parameter to CUDA wheel functions to support multi-interpreter scenarios, introduced _tags_for_python() for CPython tag generation, expanded CUDA-local pattern matching with dotted torch version groups, made index URL configuration accept index_urls (list) alongside index_url (string).
Host Process Management
pyisolate/_internal/host.py, pyisolate/_internal/uds_client.py, pyisolate/host.py
Added backend-selection helpers, conditional tensor transport mode configuration, conditional torch multiprocessing sharing, conda environment creation dispatch, sealed-worker bootstrap payload customization, PYTHONPATH/PYTHONNOUSERSITE environment setup for sealed workers, event handler registration with persistence across restarts, updated child launch with platform-aware environment merging.
Sealed Node Extension
pyisolate/sealed.py
New public class SealedNodeExtension wrapping sealed-worker runtimes; provides async node discovery APIs (list/info/details/input-types), executes nodes with remote-object handle wrapping/resolution, manages per-process node instances and remote-object registry, flushes tensor keeper on state cleanup, sanitizes transportable structures for proxy crossing.
Client Bootstrap & Entrypoint
pyisolate/_internal/uds_client.py, pyisolate/_internal/client.py
Resolved API classes from config (skipped under sealed-worker), passed tensor transport mode through bootstrap, registered adapter child-event-hooks when available, extracted tensor/torch multiprocessing configuration to mode checks, added helper _resolve_api_classes_from_config.
Sandbox Detection & Security
pyisolate/_internal/sandbox_detect.py, pyisolate/_internal/perf_trace.py
Fixed bwrap probe to dynamically detect /lib64 presence, added file-based performance tracing infrastructure with nanosecond timestamps, process IDs, and event enrichment.
Test Fixtures: UV Sealed Worker
tests/fixtures/uv_sealed_worker/*
New fixture implementing ComfyUI-style nodes for sealed-worker testing: runtime probe (sys.path, boltons origin, site-packages absence), slugify, filesystem barrier, tensor/latent echo; configured with uv package manager and sealed_worker execution model.
Test Fixtures: Conda Sealed Node
tests/fixtures/conda_sealed_node/*
New fixture with inspect runtime, echo tensor, and weather dataset nodes; configured with conda package manager and conda dependencies (boltons), validates conda-sealed runtimes.
Integration Tests: Sealed Workers
tests/test_uv_sealed_integration.py, tests/test_conda_integration.py
Process-level tests verifying sealed worker isolation: node discovery, dependency origin validation (no host leakage), filesystem barriers, tensor roundtrips with JSON signaling, bwrap sandbox invocation, pixi environment setup, CUDA synchronization cleanup.
Integration Tests: Bootstrap & Config
tests/test_bootstrap.py, tests/test_exact_proxy_bootstrap.py, tests/test_conda_sealed_worker_contract.py
Unit tests validating sealed-worker bootstrap payload (apply_host_sys_path disabled), adapter handling (skipped or non-fatal), sealed_host_ro_paths opt-in, pixi.toml generation, dependency splitting, execution-model defaults, bootstrap snapshot disables host sys-path and adapter refs when sealed.
Unit Tests: Config Validation
tests/test_config_conda.py, tests/test_config_sealed_worker.py
Tests for validate_backend_config: conda/sealed_worker defaults, share_torch/cuda_ipc constraints, pixi availability checks, sealed_host_ro_paths validation (absolute paths, sealed_worker-only), execution-model matrix validation.
Unit Tests: Environment & Dependencies
tests/test_environment_conda.py, tests/test_environment_sealed_worker.py, tests/test_host_conda_dispatch.py, tests/test_host_sealed_worker_dispatch.py
Tests for pixi.toml generation, conda/pip dependency separation, CUDA wheel installation targeting, uv fallbacks, sealed-worker dependency exclusion (no torch), conda vs uv dispatch, JSON tensor transport for sealed workers, bootstrap payload validation.
Unit Tests: Security & Sandbox
tests/test_security_conda.py, tests/test_security_sealed_worker.py, tests/test_bwrap_command.py, tests/test_sandbox_detect.py
Tests for sealed-worker bwrap invocation with environment clearance and allowlist, credential-like variable filtering, /lib64 existence checks, Python/adapter prefix binding, writable-path normalization, sealed_host_ro_paths binding, /dev/shm omission for sealed workers.
Unit Tests: Event Handling & Serialization
tests/test_event_channel.py, tests/test_cuda_wheels.py, tests/test_model_serialization.py, tests/test_rpc_contract.py, tests/test_rpc_transports.py
Tests for event dispatch registration/handler invocation, JSON serializability validation, async handler awaiting, target_python CUDA wheel selection, opaque RemoteObjectHandle preservation, event-loop resilience and cleanup, transport close-in-finally safety, numpy scalar handling in JSON serialization.
Unit Tests: Sealed Node Extension
tests/test_sealed_proxy_handle.py
Tests validating SealedNodeExtension wraps unregistered objects as RemoteObjectHandle instances, resolves handles to original objects, raises on stale handles, roundtrips numpy arrays via handles.
Miscellaneous Test Updates
tests/conftest.py, tests/harness/host.py, tests/harness/test_package/__init__.py, tests/path_unification/test_path_helpers.py, tests/integration_v2/test_isolation.py, tests/integration_v2/test_tensors.py, tests/test_client_entrypoint_extra.py, tests/test_harness_host_env.py, tests/test_host_internal_ext.py, tests/test_tensor_serializer_signal_cleanup.py, tests/test_shared_additional.py
Removed ComfyUI integration startup from conftest, added TMPDIR preservation/restore in test harness, extended test extension with CUDA synchronization cleanup, updated path helpers tests to use synthetic preferred_root, added host /tmp isolation test, enhanced CUDA test cleanup logic, added sealed-worker API resolution test, improved event-loop preservation, added signal cleanup handler robustness test, improved RPC test double with register_callee.

Sequence Diagram(s)

sequenceDiagram
    participant Host as Host Process
    participant Child as Child Process
    participant Adapter as Adapter (if present)
    
    Note over Host,Child: Sealed Worker Initialization
    Host->>Host: validate_backend_config()
    alt package_manager == "conda"
        Host->>Host: create_conda_env() with pixi
    else package_manager == "uv"
        Host->>Host: create_venv() + install_dependencies()
    end
    
    Host->>Host: build_bwrap_command(execution_model=sealed_worker)
    Host->>Child: spawn subprocess with --clearenv allowlist
    
    Child->>Child: bootstrap_child(apply_host_sys_path=False)
    alt adapter present and host sys.path enabled
        Child->>Adapter: _rehydrate_adapter()
        Adapter-->>Child: adapter ref
    else sealed worker (host sys.path disabled)
        Child->>Child: skip adapter setup
        Child->>Child: apply sealed_host_ro_paths only
    end
    
    Child->>Child: SealedNodeExtension.on_module_loaded()
    Child->>Child: capture NODE_CLASS_MAPPINGS
    
    Note over Host,Child: Node Execution
    Host->>Child: execute_node(node_name, inputs)
    Child->>Child: _resolve_handles(inputs)
    Child->>Child: _get_node_class() + invoke
    alt sync callable
        Child->>Child: executor.run_in_thread()
    else async callable
        Child->>Child: await node function
    end
    Child->>Child: _wrap_for_transport(results)
    Child-->>Host: RemoteObjectHandle[] + JSON tensors
    
    Note over Host,Child: Event Signaling
    Child->>Child: emit_event(name, payload)
    Child->>Host: RPC EventBridge.dispatch()
    Host->>Adapter: setup_child_event_hooks.dispatch()
    Adapter-->>Host: event handled
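The sandbox-hardening step in the diagram (spawn with a --clearenv allowlist, opt-in read-only binds, no writable /tmp) can be sketched as follows. This is a minimal illustration under assumed names, not pyisolate's actual build_bwrap_command implementation; flag ordering and the helper signature are invented for clarity.

```python
import os

def build_sealed_bwrap_args(env_allowlist, sealed_host_ro_paths):
    """Illustrative sketch: clear the child environment, re-export only
    allowlisted variables, bind opt-in paths read-only, and give the
    sandbox a private /tmp instead of sharing the host's."""
    args = ["bwrap", "--clearenv", "--die-with-parent"]
    for name in env_allowlist:
        value = os.environ.get(name)
        if value is not None:
            args += ["--setenv", name, value]
    for path in sealed_host_ro_paths:
        if not os.path.isabs(path):
            # mirrors the validation rule: sealed_host_ro_paths must be absolute
            raise ValueError(f"sealed_host_ro_paths entries must be absolute: {path}")
        args += ["--ro-bind", path, path]
    args += ["--tmpfs", "/tmp"]  # never bind the host /tmp writable
    return args
```

For example, build_sealed_bwrap_args(["PATH", "HOME"], ["/opt/models"]) yields a command line that drops every other host environment variable, matching the "--clearenv allowlist" step above.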
sequenceDiagram
    participant Host as Host Process
    participant Transport as JSON-RPC Transport
    participant Child as Child (Sealed)
    
    Note over Host,Child: Tensor Transport Mode Selection
    Host->>Host: _tensor_transport_mode() → "json"
    Host->>Transport: set_tensor_transport_mode("json")
    Host->>Child: bootstrap with tensor_transport="json"
    
    Note over Host,Child: Sealed Worker Tensor Roundtrip
    Host->>Transport: serialize_tensor(t, mode="json")
    Transport->>Transport: TensorValue{dtype, shape, data}
    Transport-->>Host: JSON-compatible dict
    Host->>Child: send via JSON-RPC
    Child->>Child: _json_object_hook() → deserialize_tensor(..., mode="json")
    Child->>Child: reconstruct torch.Tensor from data
    Child->>Child: execute_node(tensor_input)
    Child->>Child: serialize_tensor(result, mode="json")
    Child-->>Host: TensorValue JSON
    Host->>Transport: deserialize_tensor(data, mode="json")
    Transport-->>Host: torch.Tensor (no SHM handles)
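The JSON tensor roundtrip in the diagram can be illustrated with a self-contained stdlib sketch. The real code reconstructs torch.Tensors; the exact TensorValue payload encoding below (struct-packed data, base64, dtype as a struct format char) is an assumption made so the example runs without torch.

```python
import base64
import json
import struct

def serialize_tensor_json(values, shape, fmt="f"):
    """Pack flat float data, base64-encode it, and carry shape/format so
    the peer can rebuild the tensor without shared-memory handles."""
    raw = struct.pack(f"<{len(values)}{fmt}", *values)
    return {"__type__": "TensorValue", "dtype": fmt, "shape": list(shape),
            "data": base64.b64encode(raw).decode("ascii")}

def deserialize_tensor_json(obj):
    """Inverse of serialize_tensor_json: decode and unpack the flat data."""
    raw = base64.b64decode(obj["data"])
    count = len(raw) // struct.calcsize(obj["dtype"])
    values = list(struct.unpack(f"<{count}{obj['dtype']}", raw))
    return values, tuple(obj["shape"])

# Roundtrip through an actual JSON message, as the RPC transport would.
message = json.dumps(serialize_tensor_json([1.0, 2.0, 3.5, 4.0], (2, 2)))
values, shape = deserialize_tensor_json(json.loads(message))
assert values == [1.0, 2.0, 3.5, 4.0] and shape == (2, 2)
```

The trade-off versus shared-memory transport is the obvious one: every element crosses the wire, but no SHM handles need to survive the sandbox boundary.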

@socket-security

Review the following changes in direct dependencies.

| Diff | Package | Supply Chain Security | Vulnerability | Quality | Maintenance | License |
|---|---|---|---|---|---|---|
| Added | boltons@25.0.0 | 98 | 100 | 100 | 100 | 100 |


@codecov

codecov bot commented Mar 29, 2026

Codecov Report

❌ Patch coverage is 63.27968% with 365 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| pyisolate/sealed.py | 39.83% | 67 Missing and 7 partials ⚠️ |
| pyisolate/_internal/environment_conda.py | 73.72% | 41 Missing and 26 partials ⚠️ |
| pyisolate/_internal/tensor_serializer.py | 41.26% | 33 Missing and 4 partials ⚠️ |
| pyisolate/_internal/sandbox.py | 72.81% | 13 Missing and 15 partials ⚠️ |
| pyisolate/_internal/rpc_serialization.py | 48.00% | 21 Missing and 5 partials ⚠️ |
| pyisolate/_internal/uds_client.py | 7.40% | 24 Missing and 1 partial ⚠️ |
| pyisolate/_internal/rpc_transports.py | 25.80% | 23 Missing ⚠️ |
| pyisolate/_internal/host.py | 79.77% | 11 Missing and 7 partials ⚠️ |
| pyisolate/_internal/perf_trace.py | 50.00% | 15 Missing and 1 partial ⚠️ |
| pyisolate/shared.py | 20.00% | 16 Missing ⚠️ |

... and 6 more
| Flag | Coverage | Δ |
|---|---|---|
| unittests | 63.66% <63.27%> | +3.11% ⬆️ |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage | Δ |
|---|---|---|
| pyisolate/__init__.py | 73.33% <100.00%> | +1.90% ⬆️ |
| pyisolate/_internal/event_bridge.py | 100.00% <100.00%> | ø |
| pyisolate/_internal/sandbox_detect.py | 95.12% <100.00%> | +0.38% ⬆️ |
| pyisolate/config.py | 96.72% <100.00%> | +0.97% ⬆️ |
| pyisolate/interfaces.py | 100.00% <100.00%> | ø |
| pyisolate/path_helpers.py | 100.00% <100.00%> | ø |
| pyisolate/_internal/client.py | 80.95% <0.00%> | -2.66% ⬇️ |
| pyisolate/_internal/model_serialization.py | 67.12% <85.71%> | +11.70% ⬆️ |
| pyisolate/host.py | 84.90% <28.57%> | -8.58% ⬇️ |
| pyisolate/_internal/environment.py | 68.87% <84.61%> | +2.20% ⬆️ |

... and 12 more

... and 1 file with indirect coverage changes


Copilot AI left a comment


Pull request overview

This PR is a broad 0.10.0 feature release for pyisolate, adding a conda/pixi provisioning backend, a sealed-worker execution model (with tightened sandbox behavior), improved CUDA wheel resolution, and a new child→host event channel, alongside extensive new test coverage.

Changes:

  • Add conda/pixi backend support and update config validation/routing for uv vs conda and host-coupled vs sealed-worker execution.
  • Introduce sealed-worker runtime mechanics (proxy handle round-trip, JSON tensor transport, stricter sandbox/env rules) plus a new event channel for child→host dispatch.
  • Expand CUDA wheel resolver (target Python selection, multi-index support, dotted torch versions) and add perf tracing for tensor transport.

Reviewed changes

Copilot reviewed 60 out of 61 changed files in this pull request and generated 5 comments.

File Description
.gitignore Ignore pytest artifacts and adjust ignored venv directories.
README.md Document new backends/execution model and usage guidance.
pyproject.toml Bump version to 0.10.0; add pytest marker config and default exclusion of network tests.
pyisolate/__init__.py Export SealedNodeExtension and bump __version__ to 0.10.0.
pyisolate/config.py Extend typed config for conda backend, execution model, and multi-index CUDA wheel config.
pyisolate/host.py Add host-side wrapper support for registering/re-registering event handlers.
pyisolate/interfaces.py Extend adapter protocol to support web directory setup and child event hooks.
pyisolate/path_helpers.py Generalize sys.path reconstruction filtering via filtered_subdirs.
pyisolate/sealed.py Add sealed-worker child wrapper (SealedNodeExtension) with proxy-handle round-tripping and flush semantics.
pyisolate/shared.py Add emit_event() API on extensions for child→host event dispatch.
pyisolate/_internal/bootstrap.py Adjust sys.path reconstruction for sealed workers; support opt-in RO import paths.
pyisolate/_internal/client.py Let adapters wire child-side event hooks after initialization.
pyisolate/_internal/cuda_wheels.py Add target_python support, multi-index resolution, and dotted torch version matching.
pyisolate/_internal/environment.py Add backend config validation; adjust dependency install behavior for sealed workers; snapshot includes filtered_subdirs.
pyisolate/_internal/environment_conda.py New conda/pixi environment builder with pixi.toml generation, fingerprinting, local wheels, and CUDA wheel installation.
pyisolate/_internal/event_bridge.py New host-side RPC callee to dispatch named events to registered handlers.
pyisolate/_internal/host.py Route env creation to uv/conda; sealed-worker launch rules; register event bridge; set tensor transport mode.
pyisolate/_internal/model_serialization.py Refactor serialization; adjust RemoteObjectHandle behavior for sealed-worker proxy handles.
pyisolate/_internal/perf_trace.py New trace sink for performance events (file-based JSONL).
pyisolate/_internal/rpc_serialization.py Add cached serializer resolution and refactor _prepare_for_rpc; add diagnostics in tensor conversion path.
pyisolate/_internal/rpc_transports.py Add tensor transport mode support; add numpy handling; adjust tensor (de)serialization hooks.
pyisolate/_internal/sandbox.py Harden bubblewrap command generation for sealed workers; bind interpreter prefixes; forbid writable /tmp binds; env allowlisting.
pyisolate/_internal/sandbox_detect.py Avoid binding /lib64 during bwrap probes when absent.
pyisolate/_internal/tensor_serializer.py Add JSON tensor transport mode; add perf tracing; make signal cleanup more robust.
pyisolate/_internal/uds_client.py Skip API class imports for sealed workers; thread tensor transport mode into serializer registration.
tests/conftest.py Remove ComfyUI-specific sys.path injection.
tests/harness/host.py Improve harness cleanup (proxy stop, tmpdir restore) and enable signal cleanup env flag.
tests/harness/test_package/__init__.py Ensure tensor/CUDA cleanup on stop via flush_tensor_keeper() and CUDA collect/sync.
tests/integration_v2/test_isolation.py Add test ensuring host /tmp marker is not visible inside sandboxed child.
tests/integration_v2/test_tensors.py Add explicit GC/sync cleanup after CUDA tests.
tests/path_unification/test_path_helpers.py Update tests to use preferred_root + configurable filtered subdirs (no ComfyUI assumptions).
tests/test_bootstrap.py Add sealed-worker bootstrap/path and adapter rehydration behavior tests.
tests/test_bwrap_command.py Add assertions for hardened bwrap behavior (prefix binds, /tmp tmpfs protection, sealed env allowlist).
tests/test_client_entrypoint_extra.py Add test verifying sealed worker skips importing API classes from config strings.
tests/test_config_conda.py New tests for conda backend config validation requirements.
tests/test_config_sealed_worker.py New tests for execution_model defaults and invalid option combos.
tests/test_conda_integration.py New process-level integration test for sealed conda runtime.
tests/test_conda_sealed_worker_contract.py New contract tests for conda/uv sealed-worker behavior and host sys.path isolation.
tests/test_cuda_wheels.py Expand CUDA wheel resolver test matrix (target_python + multi-index); add network-marked tests.
tests/test_environment_conda.py New unit tests for pixi manifest generation, fingerprinting, uv exe resolution, and post-install wheel behavior.
tests/test_environment_sealed_worker.py Verify sealed-worker uv dependency install behavior (no auto-injected torch).
tests/test_event_channel.py New tests for event bridge dispatch and API surface.
tests/test_exact_proxy_bootstrap.py Validate sealed worker bootstrap payload binds exact proxy API refs and disables host sys.path.
tests/test_harness_host_env.py Verify harness cleanup restores TMPDIR.
tests/test_host_conda_dispatch.py New tests for host dispatch to conda backend and related launch behaviors.
tests/test_host_internal_ext.py Update dummy RPC with register_callee to satisfy updated codepaths.
tests/test_host_sealed_worker_dispatch.py New tests for uv sealed-worker host dispatch, transport mode, and bwrap arguments.
tests/test_model_serialization.py Add tests for opaque handle preservation and flushing remote object registry.
tests/test_rpc_contract.py Stabilize tests around event loop recreation by restoring prior loop state.
tests/test_rpc_transports.py Ensure transports are closed in tests; improve cleanup handling.
tests/test_sandbox_detect.py Add tests verifying /lib64 bind omission when absent.
tests/test_sealed_proxy_handle.py New tests for sealed-worker proxy handle wrapping and resolution.
tests/test_security_conda.py New tests ensuring sealed conda bwrap env allowlist and credential filtering.
tests/test_security_sealed_worker.py New tests ensuring sealed-worker bwrap launch, env allowlist, and host sys.path suppression.
tests/test_shared_additional.py Stabilize tests by restoring prior event loop after loop substitution.
tests/test_tensor_serializer_signal_cleanup.py New test for signal cleanup handler robustness when SIGHUP is missing.
tests/test_uv_sealed_integration.py New process-level integration test for uv sealed worker without host leakage.
tests/fixtures/conda_sealed_node/__init__.py Add conda sealed-node fixture module used by integration tests.
tests/fixtures/conda_sealed_node/pyproject.toml Add fixture pyproject for conda sealed-node integration.
tests/fixtures/uv_sealed_worker/__init__.py Add uv sealed-worker fixture module used by integration tests.
tests/fixtures/uv_sealed_worker/pyproject.toml Add fixture pyproject for uv sealed-worker integration.


Comment on lines +42 to +63
```python
    def emit_event(self, name: str, payload: dict) -> None:
        """Emit a named event to the host process via RPC.

        Payload must be JSON-serializable (dicts, lists, strings, numbers,
        booleans, None). Non-serializable payloads raise immediately.
        """
        json.dumps(payload)  # fail-loud if not JSON-serializable
        from ._internal.event_bridge import _EventBridge

        caller = self._rpc.create_caller(_EventBridge, "_event_bridge")
        import asyncio

        coro = caller.dispatch(name, payload)
        try:
            loop = asyncio.get_running_loop()
        except RuntimeError:
            loop = None
        if loop is not None and loop.is_running():
            future = asyncio.run_coroutine_threadsafe(coro, loop)
            future.result(timeout=10.0)
        else:
            # No running loop in this thread — use the RPC's event loop
```

Copilot AI Mar 29, 2026


emit_event() can deadlock when called from inside an already-running event loop in the same thread: it uses run_coroutine_threadsafe(..., loop) and then blocks on future.result(), preventing the loop from making progress. Consider making this API async (and providing a separate sync helper), or if a running loop is detected schedule with asyncio.create_task/loop.create_task and return without blocking (or otherwise avoid blocking the current loop thread). Also, the type hint/documentation disagree: the docstring allows any JSON-serializable value, but the signature restricts payload to dict.
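One way to structure the fix the comment suggests, sketched with hypothetical names (this is not the PR's API, just the schedule-vs-block pattern):

```python
import asyncio

def dispatch_event(loop, coro):
    """Schedule instead of block when the caller already sits on the target
    loop's thread; block with a timeout only when called from other threads."""
    try:
        running = asyncio.get_running_loop()
    except RuntimeError:
        running = None
    if running is not None and running is loop:
        # Same thread as the target loop: blocking on future.result() would
        # deadlock, so fire-and-forget via a task the caller can await.
        return loop.create_task(coro)
    if loop is not None and loop.is_running():
        # Different thread: safe to block briefly on the cross-thread future.
        return asyncio.run_coroutine_threadsafe(coro, loop).result(timeout=10.0)
    # No usable loop anywhere: run the coroutine to completion here.
    return asyncio.run(coro)
```

Callers on the loop thread get a Task back (non-blocking), while callers on worker threads keep today's synchronous semantics.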

Comment on lines +240 to +247
```python
def _resolve_serializer_for_type(registry: Any, obj_type: type[Any]) -> Any | None:
    cached = _SERIALIZER_BY_TYPE.get(obj_type)
    if obj_type in _SERIALIZER_BY_TYPE:
        return cached

    serializer = registry.get_serializer(obj_type.__name__)
    if serializer is not None:
        _SERIALIZER_BY_TYPE[obj_type] = serializer
```

Copilot AI Mar 29, 2026


The global _SERIALIZER_BY_TYPE cache memoizes serializer lookups by Python type, including caching None when a serializer is not found. Since the SerializerRegistry is mutable (and tests call SerializerRegistry.clear()), this can cause stale misses: a type cached as None will never pick up a serializer registered later, and clears won’t invalidate cached serializers. Consider tying the cache to a registry generation/version, clearing this cache when the registry changes/clears, or avoiding caching negative lookups.
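A generation-tagged cache along the lines the comment suggests could look like this (a sketch under assumed names; the real registry API may differ):

```python
class GenerationCache:
    """Memoize lookups against a mutable registry: entries are tagged with a
    generation counter, so bumping the generation on registry changes
    invalidates everything at once, and misses (None) are never cached."""

    def __init__(self):
        self._generation = 0
        self._entries = {}

    def invalidate(self):
        # Call whenever the underlying registry is mutated or cleared.
        self._generation += 1

    def get(self, key, lookup):
        entry = self._entries.get(key)
        if entry is not None and entry[0] == self._generation:
            return entry[1]
        value = lookup(key)
        if value is not None:  # avoid the stale-miss problem described above
            self._entries[key] = (self._generation, value)
        return value
```

Not caching negative lookups means a type registered late is still found; the generation bump handles SerializerRegistry.clear() without having to track this cache from the tests.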

Comment on lines +351 to +363
```python
                logger.warning(
                    "][ DIAG: __type__=%s -> %s",
                    ref_type,
                    type(result).__name__,
                )
                return result
            else:
                logger.warning(
                    "][ DIAG: __type__=%s no deserializer",
                    ref_type,
                )
        else:
            logger.warning(
```

Copilot AI Mar 29, 2026


The logger.warning diagnostics in _tensor_to_cuda (e.g., "][ DIAG: __type__=... ) look like temporary debug output and will be very noisy in normal operation. Consider removing them or downgrading to debug with a feature flag so production logs aren’t spammed during routine deserialization.

Suggested change
```diff
-                logger.warning(
+                logger.debug(
                     "][ DIAG: __type__=%s -> %s",
                     ref_type,
                     type(result).__name__,
                 )
                 return result
             else:
-                logger.warning(
+                logger.debug(
                     "][ DIAG: __type__=%s no deserializer",
                     ref_type,
                 )
         else:
-            logger.warning(
+            logger.debug(
```
Comment on lines +119 to +127
```python
        object_id = str(uuid.uuid4())
        self.remote_objects[object_id] = data
        logger.info(
            "][ PROXY_HANDLE_WRAP type=%s id=%s remote_objects_count=%d",
            type_name,
            object_id[:8],
            len(self.remote_objects),
        )
        return RemoteObjectHandle(object_id, type_name=type_name)
```

Copilot AI Mar 29, 2026


SealedNodeExtension._wrap_for_transport() logs every handle wrap at INFO (and with a leading "][ prefix). This is likely to spam logs for normal workflows and makes log parsing harder. Consider changing this to debug (or gating behind an env flag) and/or adding a cap/eviction strategy for remote_objects so a long-lived sealed worker doesn’t retain unbounded object references if flush_transport_state() isn’t called frequently.
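A capped registry with oldest-first eviction, as the comment proposes, could be sketched like this (hypothetical; the PR's remote_objects appears to be a plain dict flushed via flush_transport_state()):

```python
from collections import OrderedDict

class BoundedRemoteObjects:
    """Keep at most max_entries live handles; evict the oldest once full so a
    long-lived sealed worker cannot retain unbounded object references."""

    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self._objects = OrderedDict()

    def add(self, object_id, obj):
        self._objects[object_id] = obj
        self._objects.move_to_end(object_id)  # treat re-adds as most recent
        while len(self._objects) > self.max_entries:
            self._objects.popitem(last=False)  # drop the oldest handle

    def resolve(self, object_id):
        try:
            return self._objects[object_id]
        except KeyError:
            # mirrors the "raises on stale handles" behavior in the tests
            raise KeyError(f"stale or evicted handle: {object_id}") from None
```

Eviction changes the failure mode from unbounded memory growth to an explicit stale-handle error, which the existing handle-resolution tests already exercise.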

Comment on lines +119 to +124
```python
    def test_no_pip_deps_omits_pypi_section(self) -> None:
        config = _make_conda_config(dependencies=[])
        toml_str = _generate_pixi_toml(config)
        assert "[pypi-dependencies]" in toml_str
        assert f'pyisolate = {{ path = "{_toml_path_string(_pyisolate_source_path())}" }}' in toml_str
```


Copilot AI Mar 29, 2026


Test name doesn’t match assertions: test_no_pip_deps_omits_pypi_section asserts that [pypi-dependencies] is present (and pyisolate is always added there). Consider renaming the test (or adjusting expectations) so it documents the intended behavior accurately.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 33

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
tests/test_shared_additional.py (2)

55-92: ⚠️ Potential issue | 🟡 Minor

Test setup unguarded by try/finally—cleanup could slip through finally!

Lines 62-82 (setting the event loop and all test setup) are outside the try/finally block. If an exception occurs during setup (e.g., RPCPendingRequest construction), the finally block won't execute, leaving the test loop unclosed and the event-loop policy unrestored.

Compare to test_rpc_contract.py where asyncio.set_event_loop(loop1) is placed inside the try block (line 120).

🔧 Proposed fix to wrap setup in try/finally
 def test_async_rpc_send_thread_sets_exception_on_send_failure():
-    previous_loop = None
-    loop = asyncio.new_event_loop()
     try:
         previous_loop = asyncio.get_event_loop_policy().get_event_loop()
     except RuntimeError:
         previous_loop = None
-    asyncio.set_event_loop(loop)
 
-    class FailingQueue:
-        def put(self, _):
-            raise RuntimeError("boom")
-
-    recv_q: queue.Queue = queue.Queue()
-    rpc = AsyncRPC(recv_queue=recv_q, send_queue=FailingQueue())
-
-    pending = RPCPendingRequest(  # type: ignore[call-arg]
-        kind="call",
-        object_id="obj",
-        parent_call_id=None,
-        calling_loop=loop,
-        future=loop.create_future(),
-        method="ping",
-        args=(),
-        kwargs={},
-    )
-    rpc.outbox.put(pending)
-    rpc.outbox.put(None)
-
+    loop = asyncio.new_event_loop()
     try:
+        asyncio.set_event_loop(loop)
+
+        class FailingQueue:
+            def put(self, _):
+                raise RuntimeError("boom")
+
+        recv_q: queue.Queue = queue.Queue()
+        rpc = AsyncRPC(recv_queue=recv_q, send_queue=FailingQueue())
+
+        pending = RPCPendingRequest(  # type: ignore[call-arg]
+            kind="call",
+            object_id="obj",
+            parent_call_id=None,
+            calling_loop=loop,
+            future=loop.create_future(),
+            method="ping",
+            args=(),
+            kwargs={},
+        )
+        rpc.outbox.put(pending)
+        rpc.outbox.put(None)
+
         rpc._send_thread()
         loop.run_until_complete(asyncio.sleep(0))
         assert pending["future"].done() is True
         with pytest.raises(RuntimeError):
             pending["future"].result()
     finally:
         asyncio.set_event_loop(previous_loop)
         loop.close()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_shared_additional.py` around lines 55 - 92, The test currently
sets up and assigns the event loop (asyncio.set_event_loop(loop)) and creates
RPCPendingRequest/RPC objects outside the try block so if setup fails the
finally cleanup won’t run; move asyncio.set_event_loop(loop) and all test setup
that depends on that loop (creation of FailingQueue, recv_q, AsyncRPC(...),
RPCPendingRequest(...), and placing items on rpc.outbox) inside the try block so
the finally always executes to restore the previous loop and close the new loop,
ensuring cleanup even on setup exceptions.

95-132: ⚠️ Potential issue | 🟡 Minor

Same cleanup gap as the previous test.

Identical issue as the previous test: lines 102-122 are outside the try/finally block, so setup failures bypass cleanup. Apply the same structural fix to ensure consistent behavior with test_rpc_contract.py.

🔧 Proposed fix
 def test_async_rpc_send_thread_callback_failure_sets_exception():
-    previous_loop = None
-    loop = asyncio.new_event_loop()
     try:
         previous_loop = asyncio.get_event_loop_policy().get_event_loop()
     except RuntimeError:
         previous_loop = None
-    asyncio.set_event_loop(loop)
-
-    class FailingQueue:
-        def put(self, _):
-            raise RuntimeError("kaboom")
 
-    recv_q: queue.Queue = queue.Queue()
-    rpc = AsyncRPC(recv_queue=recv_q, send_queue=FailingQueue())
-
-    pending = RPCPendingRequest(  # type: ignore[call-arg]
-        kind="callback",
-        object_id="cb",
-        parent_call_id=None,
-        calling_loop=loop,
-        future=loop.create_future(),
-        method="__call__",
-        args=(),
-        kwargs={},
-    )
-    rpc.outbox.put(pending)
-    rpc.outbox.put(None)
-
+    loop = asyncio.new_event_loop()
     try:
+        asyncio.set_event_loop(loop)
+
+        class FailingQueue:
+            def put(self, _):
+                raise RuntimeError("kaboom")
+
+        recv_q: queue.Queue = queue.Queue()
+        rpc = AsyncRPC(recv_queue=recv_q, send_queue=FailingQueue())
+
+        pending = RPCPendingRequest(  # type: ignore[call-arg]
+            kind="callback",
+            object_id="cb",
+            parent_call_id=None,
+            calling_loop=loop,
+            future=loop.create_future(),
+            method="__call__",
+            args=(),
+            kwargs={},
+        )
+        rpc.outbox.put(pending)
+        rpc.outbox.put(None)
+
         rpc._send_thread()
         loop.run_until_complete(asyncio.sleep(0))
         assert pending["future"].done() is True
         with pytest.raises(RuntimeError):
             pending["future"].result()
     finally:
         asyncio.set_event_loop(previous_loop)
         loop.close()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_shared_additional.py` around lines 95 - 132, The test
test_async_rpc_send_thread_callback_failure_sets_exception has setup/teardown
imbalance because portions of the test (creating recv_q, rpc, pending, putting
into rpc.outbox, calling rpc._send_thread and assertions) are outside the
try/finally that restores the event loop; move the entire setup, invocation of
rpc._send_thread(), loop.run_until_complete(...), and assertions into the try
block and keep the asyncio.set_event_loop(previous_loop) and loop.close() in the
finally block so the event loop is always restored/closed even if setup fails;
refer to AsyncRPC, RPCPendingRequest, FailingQueue, rpc.outbox.put, and
rpc._send_thread when applying the change.
pyisolate/_internal/rpc_serialization.py (1)

227-257: ⚠️ Potential issue | 🔴 Critical

Invalidate _SERIALIZER_BY_TYPE when the registry mutates.

This memoizer remembers a bit too well: a miss or base-class hit recorded before a later register()/clear() will keep winning afterward. That breaks the new dynamic serializer resolution path and can leak stale serializer decisions across tests or restarted extensions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyisolate/_internal/rpc_serialization.py` around lines 227 - 257, The
serializer cache _SERIALIZER_BY_TYPE is global and can become stale when a
registry mutates; fix this by making the cache registry-aware or invalidating it
on registry mutations: either (A) change _resolve_serializer_for_type to key the
cache by (id(registry), obj_type) instead of just obj_type so different
registries don't collide, or (B) add explicit invalidation calls from the
registry mutation methods (e.g., have Registry.register(...) and
Registry.clear(...) call _SERIALIZER_BY_TYPE.clear() or a new helper like
invalidate_serializer_cache()) so any register()/clear() updates purge stale
entries; update references in _resolve_serializer_for_type and registry mutation
methods accordingly.
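Option (B) from the comment above can be sketched as follows. This is a minimal illustration, not the pyisolate implementation: the `Registry` class, its `_handlers` dict, and `resolve_serializer` are simplified stand-ins for the real `_resolve_serializer_for_type` path, showing how clearing the module-level memo on every mutation prevents stale hits.

```python
# Sketch: registry mutations purge the module-level memo cache so a
# miss (or base-class hit) recorded earlier cannot survive a later
# register()/clear(). Names beyond those in the review are illustrative.
_SERIALIZER_BY_TYPE: dict = {}


class Registry:
    def __init__(self):
        self._handlers = {}

    def register(self, obj_type, serializer):
        self._handlers[obj_type] = serializer
        _SERIALIZER_BY_TYPE.clear()  # invalidate memoized lookups

    def clear(self):
        self._handlers.clear()
        _SERIALIZER_BY_TYPE.clear()


def resolve_serializer(registry, obj_type):
    # Memoized resolution that walks the MRO for base-class handlers;
    # misses are cached too, which is why invalidation matters.
    if obj_type in _SERIALIZER_BY_TYPE:
        return _SERIALIZER_BY_TYPE[obj_type]
    found = None
    for base in obj_type.__mro__:
        if base in registry._handlers:
            found = registry._handlers[base]
            break
    _SERIALIZER_BY_TYPE[obj_type] = found
    return found
```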
pyisolate/_internal/host.py (1)

443-455: ⚠️ Potential issue | 🟠 Major

PYISOLATE_IMPORT_TORCH misses the explicit sandbox override path.

Line 451 snapshots self.config.get("env") into build_bwrap_command(), but Lines 481-487 add PYISOLATE_IMPORT_TORCH=0 only afterward. That means the bwrap-specific override path never sees this flag, so sandboxed behavior depends on implicit environment inheritance instead of the explicit config this code is trying to pass.

🛠️ Wire the env once, before building the command
+        env_overrides = dict(self.config.get("env") or {})
+        if not self.config.get("share_torch", False):
+            env["PYISOLATE_IMPORT_TORCH"] = "0"
+            env_overrides["PYISOLATE_IMPORT_TORCH"] = "0"
+        if env_overrides:
+            self.config["env"] = env_overrides
+
                 cmd = build_bwrap_command(
                     python_exe=python_exe,
                     module_path=self.module_path,
                     venv_path=str(self.venv_path),
                     uds_address=ipc_address,
                     sandbox_config=cast(dict[str, Any], sandbox_config),
                     allow_gpu=True,  # Default to allowing GPU for ComfyUI nodes
                     restriction_model=cap.restriction_model,
-                    env_overrides=self.config.get("env"),
+                    env_overrides=env_overrides or None,
                     adapter=adapter,
                     execution_model=self._execution_model(),
                     sealed_host_ro_paths=cast(list[str] | None, self.config.get("sealed_host_ro_paths")),
                 )
-        if not self.config.get("share_torch", False):
-            env["PYISOLATE_IMPORT_TORCH"] = "0"
-            # Also inject into config env for bwrap path which uses env_overrides
-            if "env" not in self.config:
-                self.config["env"] = {}
-            self.config["env"]["PYISOLATE_IMPORT_TORCH"] = "0"
+        if not self.config.get("share_torch", False):
+            env["PYISOLATE_IMPORT_TORCH"] = "0"

Also applies to: 481-487

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyisolate/_internal/host.py` around lines 443 - 455, The
PYISOLATE_IMPORT_TORCH override is being added after build_bwrap_command(...) is
called so the bwrap env_overrides (passed as self.config.get("env") to
build_bwrap_command) never includes it; modify the code so you assemble the
environment dict first (take self.config.get("env"), copy/merge it, set
PYISOLATE_IMPORT_TORCH = "0" when appropriate) and then pass that assembled env
to build_bwrap_command(...) via the env_overrides parameter; locate
build_bwrap_command call and the later block that mutates
os.environ/PYISOLATE_IMPORT_TORCH and move/merge that mutation into the env
construction before calling build_bwrap_command.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pyisolate/_internal/client.py`:
- Around line 110-113: Wrap the call to
_adapter.setup_child_event_hooks(extension) in a try/except that catches
Exception so adapter hook failures do not abort child startup; on exception, log
a warning (e.g., using logging.getLogger(__name__).warning(...)) including the
adapter identity and the exception details, and do not re-raise so execution
continues. Target the invocation in pyisolate._internal.client where _adapter
and hasattr(..., "setup_child_event_hooks") are checked.

In `@pyisolate/_internal/cuda_wheels.py`:
- Around line 285-286: The environment used to evaluate PEP 508 markers is built
from default_environment() (host) before
get_cuda_wheel_runtime(target_python=...), so markers are evaluated against the
host Python; update the environment to reflect target_python first: call
default_environment(), then modify the returned dict (keys like
"python_version", "python_full_version",
"implementation_name"/"implementation_version" as appropriate) to represent the
tuple target_python before any marker evaluation in get_cuda_wheel_runtime() or
subsequent logic that inspects PEP 508 markers; ensure the same adjustment is
applied in the other occurrence referenced (the block around the other
default_environment() call) so marker evaluation uses the target interpreter
values.
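The overlay described above can be sketched with plain dicts. `build_target_environment` is a hypothetical helper; the key names match those returned by `packaging.markers.default_environment()`, but no packaging internals are assumed here.

```python
# Sketch: overlay the host marker environment with values derived from
# the target interpreter tuple, so PEP 508 markers are evaluated against
# the target Python rather than the host.
def build_target_environment(host_env, target_python):
    env = dict(host_env)  # copy; never mutate the host snapshot
    major, minor = target_python[:2]
    micro = target_python[2] if len(target_python) > 2 else 0
    env["python_version"] = f"{major}.{minor}"
    env["python_full_version"] = f"{major}.{minor}.{micro}"
    return env
```

A marker such as `python_version >= "3.10"` would then be evaluated with this dict passed as the environment, instead of the defaults captured from the host.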

In `@pyisolate/_internal/environment_conda.py`:
- Around line 282-291: The fingerprint currently only includes the find_links
paths, so replace that by including the actual local-wheel signature when
building the descriptor used to compute fingerprint: for each entry in the
existing "find_links" list (and any other local-package paths used by
_install_local_wheels), walk the path(s) and compute a deterministic signature
per wheel (e.g., sha256 of file bytes or a tuple of mtime+size) and add that
mapping (filename->hash or filename->mtime-size) into the descriptor before
json.dumps; update every place that builds the descriptor/fingerprint (the
descriptor variable and its later uses that produce fingerprint) so the
fingerprint changes when wheel contents change, ensure you canonicalize paths
and ignore non-wheel files, and keep behavior tolerant to missing entries so
_install_local_wheels still runs when content differs.
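A content-aware fingerprint along those lines could look like this. It is a sketch, not the environment_conda code: `wheel_signature` and `fingerprint` are hypothetical helpers showing the sha256-per-wheel approach with canonicalized paths and tolerance for missing entries.

```python
import hashlib
import json
from pathlib import Path


def wheel_signature(find_links):
    """Map each *.whl filename under the find_links dirs to a sha256 of
    its bytes, so the fingerprint changes when wheel contents change."""
    sig = {}
    for entry in find_links:
        root = Path(entry).resolve()  # canonicalize
        if not root.is_dir():
            continue  # tolerate missing entries
        for whl in sorted(root.rglob("*.whl")):  # ignore non-wheel files
            sig[whl.name] = hashlib.sha256(whl.read_bytes()).hexdigest()
    return sig


def fingerprint(descriptor, find_links):
    """Fold the local-wheel signature into the descriptor before hashing."""
    desc = dict(descriptor)
    desc["local_wheels"] = wheel_signature(find_links)
    return hashlib.sha256(json.dumps(desc, sort_keys=True).encode()).hexdigest()
```

A mtime+size tuple per wheel would be cheaper than hashing full bytes, at the cost of missing same-size in-place rewrites.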
- Around line 21-23: The function _pyisolate_source_path currently returns a
parent folder that points into site-packages for wheel installs; change
_pyisolate_source_path to detect whether the resolved candidate is a real
project checkout by checking for project metadata files (pyproject.toml or
setup.py) and/or whether the path contains “site-packages”; if those checks
fail, return None (or raise a specific sentinel) instead of returning a
site-packages path. Update the caller that injects this path into the conda
manifest to skip injecting a local path when _pyisolate_source_path returns None
and instead rely on the published package version (i.e., do not add a path
dependency for wheel installs). Ensure references to _pyisolate_source_path and
the manifest-injection code handle the None sentinel safely.

In `@pyisolate/_internal/environment.py`:
- Around line 28-33: The derived defaults for package_manager and
execution_model are only assigned to local variables and not written back to the
original config, so callers like install_dependencies and
_resolve_api_classes_from_config still see None; update the code that computes
package_manager and execution_model to also set those keys on the config dict
(e.g., config["package_manager"] = package_manager and config["execution_model"]
= execution_model) after deriving defaults so the resolved values persist for
later functions such as install_dependencies() and
_resolve_api_classes_from_config().
- Around line 381-382: The current logic unconditionally appends
Path(__file__).resolve().parents[2] to safe_deps for sealed_worker, which
incorrectly targets site‑packages in installed wheels; change it to only add
that parent when it looks like a source checkout (e.g., when a repository marker
like pyproject.toml or a top-level package folder exists) and skip it if the
resolved parent is inside the interpreter's site‑packages/purelib location
(detectable via sysconfig.get_paths() or by checking for "site‑packages" in the
path). Update the block guarded by config.get("execution_model") ==
"sealed_worker" so it inspects Path(__file__).resolve().parents[2] and
conditionally appends to safe_deps only for a true project root, leaving
installed wheel environments untouched.

In `@pyisolate/_internal/event_bridge.py`:
- Around line 29-30: Replace the hasattr-based awaitable check with the stdlib
helper: import inspect at top of pyisolate/_internal/event_bridge.py and change
the conditional that checks result (the variable holding the handler return
value) from "hasattr(result, '__await__')" to "inspect.isawaitable(result)";
keep the await result behavior unchanged so any coroutine or awaitable returned
by the handler is awaited using the existing "await result" line.
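The difference can be shown with a small sketch. The `dispatch` helper below is illustrative, not the event_bridge code; it demonstrates `inspect.isawaitable()` handling both sync and async handlers, with `asyncio.run` standing in for whatever loop the real bridge runs on.

```python
import asyncio
import inspect


async def _await(result):
    return await result


def dispatch(handler):
    """Await the handler's return value only when it is actually
    awaitable, instead of duck-typing on __await__."""
    result = handler()
    if inspect.isawaitable(result):
        # Sketch only: the real bridge would await on its own loop.
        return asyncio.run(_await(result))
    return result


async def async_handler():
    return "ok"
```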

In `@pyisolate/_internal/host.py`:
- Around line 252-256: The _event_bridge is created only when the child starts
but register_event_handler() accesses it unconditionally, causing AttributeError
when called earlier; fix by lazily creating and registering the bridge: in
register_event_handler (and the similar spot around the other registration code
using _event_bridge) check if self._event_bridge is missing or None, instantiate
_EventBridge and call self.rpc.register_callee(self._event_bridge,
"_event_bridge") before adding the handler; this ensures handlers can be
registered before the child starts while keeping existing behavior when the
bridge already exists.
- Around line 336-342: The conda-specific override of config["share_cuda_ipc"]
is applied too late in __launch() after validate_backend_config() and
_initialize_process() have already read it; move the normalization so consumers
never see the original value. Detect conda via _package_manager() early and set
self.config["share_cuda_ipc"] = False and self._cuda_ipc_enabled = False before
calling validate_backend_config() and before any call to _initialize_process()
(or perform the same pre-validation normalization in the
constructor/initializer), keeping create_conda_env(self.venv_path, self.config,
self.name) where it belongs.
- Around line 126-137: The branch deciding _host_rpc_services uses
self.config.get("execution_model") but other code uses self._execution_model(),
causing inconsistent behavior; change the condition to use
self._execution_model() instead (i.e., if self._execution_model() ==
"sealed_worker": ...) so that the sealed-worker path (importing AdapterRegistry,
calling AdapterRegistry.get() and adapter.provide_rpc_services()) is chosen
consistently with the rest of the class, and keep the else as
list(self.config["apis"]) and the existing exception handling for
AdapterRegistry.

In `@pyisolate/_internal/rpc_serialization.py`:
- Around line 345-365: The successful deserialization path is logging with
logger.warning; change that to logger.debug so normal rehydration isn't treated
as an alarm. Specifically, inside the block that checks
registry.has_handler(ref_type) and obtains deserializer via
registry.get_deserializer(ref_type), replace the logger.warning call that logs
"][ DIAG: __type__=%s -> %s" (which logs ref_type and type(result).__name__)
with logger.debug, while leaving the existing logger.warning calls for the "no
deserializer" and "no handler" branches unchanged; keep using the same message
and interpolated values but at debug level.

In `@pyisolate/_internal/sandbox.py`:
- Around line 348-374: The sealed-worker branch currently sets --clearenv and a
strict allowlist but later the unconditional application of env_overrides
reintroduces dangerous vars; modify the code around the env_overrides
application so that when is_sealed_worker is True you filter env_overrides to
only a safe allowlist before extending cmd. Concretely, update the place that
iterates over env_overrides (the unconditional loop that appends to cmd) to skip
keys like PYTHONPATH, PYTHONHOME, LD_PRELOAD and any others not in the
sealed-worker allowlist, or only merge env_overrides keys present in the
explicit allowlist used earlier; keep the existing behavior when
is_sealed_worker is False. Ensure you reference and use the same symbols:
is_sealed_worker, env_overrides, and cmd.
- Around line 169-173: Remove the unconditional binding of the host launcher's
sys.base_prefix into system_paths: do not append sys.base_prefix when the worker
is running in sealed mode (respect the sandbox's sealed/closed-worker boolean)
and otherwise only add it if the interpreter tree is not already provided by
python_exe.resolve() (i.e., check system_paths for an existing base prefix
before appending). Update the block that currently mutates system_paths (the
sys.base_prefix append) so it is conditional on "not sealed" and on the base
prefix not already being present, and rely on python_exe.resolve() to supply the
interpreter tree for sealed workers.

In `@pyisolate/_internal/tensor_serializer.py`:
- Around line 424-438: deserialize_tensor currently routes all JSON-mode
payloads to _deserialize_json_tensor based only on mode, causing non-TensorValue
payloads (e.g., TensorRef, TorchReduction from register_tensor_serializer) to
hit _deserialize_json_tensor and raise KeyError; change the dispatch so you
inspect data.get("__type__") first and only call _deserialize_json_tensor when
__type__ == "TensorValue" (otherwise fall back to _deserialize_legacy_tensor or
the appropriate registered handler), ensuring deserialize_tensor and any
analogous branches (also update the similar logic around lines noted in the
comment) route payloads by their __type__ rather than by mode alone.

In `@pyisolate/shared.py`:
- Around line 55-62: When asyncio.get_running_loop() succeeds and
loop.is_running() is true, you are on the loop's thread — calling
asyncio.run_coroutine_threadsafe(coro, loop) and then blocking on
future.result(...) will deadlock/timeout. Replace that branch so it schedules
the coroutine without blocking: use asyncio.create_task(coro) (or
asyncio.ensure_future(coro)) instead of run_coroutine_threadsafe and do not call
future.result; if the surrounding function must wait for completion it should be
refactored to async and await the task. Update the code paths around
get_running_loop(), run_coroutine_threadsafe(), future.result and coro
accordingly.
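The two paths can be sketched like this. `run_on_loop` is a hypothetical helper, not the pyisolate.shared code; it shows why blocking on `run_coroutine_threadsafe` from the loop's own thread deadlocks, and the non-blocking alternative.

```python
import asyncio


def run_on_loop(coro, loop):
    """When called from the loop's own thread, schedule the coroutine as
    a task and return it (never block). Off-thread, it is safe to block
    on the cross-thread future."""
    try:
        running = asyncio.get_running_loop()
    except RuntimeError:
        running = None
    if running is loop:
        # On the loop thread: blocking on future.result() here would
        # deadlock, because the loop can't advance while we wait.
        return asyncio.ensure_future(coro)
    return asyncio.run_coroutine_threadsafe(coro, loop).result(timeout=5)
```

A caller on the loop thread gets a task back and must `await` it, which usually means that caller should itself be async.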

In `@pyproject.toml`:
- Around line 129-137: The CI currently excludes network tests via the pytest
config addopts ("-m 'not network'"), so add a new CI job (e.g., "pytest-network"
or "ci-network-tests") that runs pytest with the marker filter overridden to
include network tests by passing -m network (or removing the -m filter for that
job); ensure the job is wired into the existing workflow matrix, runs on the
same platforms used for other test jobs, and reports failures so
resolver/wheel-index regressions are caught.

In `@README.md`:
- Around line 196-221: The README claim that sealed workers disable `bwrap`
sandboxing is out of sync with the implementation/tests; update the "UV Backend
for Sealed Workers" section to either scope the "no sandbox / bwrap disabled"
statement to the conda path or rephrase it to reflect that for `package_manager
= "uv"` with `execution_model = "sealed_worker"` the runtime still invokes the
sandbox construction (see `build_bwrap_command()` which applies `--clearenv`,
env allowlisting and read-only binds) as asserted by the Linux tests; ensure the
doc text references `sealed_worker` and `package_manager = "uv"` so readers
aren’t misled.

In `@tests/harness/host.py`:
- Around line 193-195: The current use of contextlib.suppress(Exception) around
ext.get_proxy() / await proxy.stop() hides errors during cleanup; change it to
catch Exception as e (or use contextlib.suppress with a try/except inside) and
emit a debug-level log containing the extension identity (e.g., repr(ext) or
ext.name) and the exception details when proxy.stop() fails; update the block
around contextlib.suppress / ext.get_proxy / proxy.stop to log the suppressed
exception (logger.debug or the test harness logger) so teardown failures remain
observable without failing tests.

In `@tests/integration_v2/test_isolation.py`:
- Around line 100-101: The test currently uses pytest.raises(Exception,
match="No such file or directory") when calling
proxy.read_file(str(host_marker)); remove the brittle message match and simply
assert that an exception is raised (e.g., use pytest.raises(Exception) or a more
specific exception type if appropriate) so the test checks failure without
pinning platform-specific error text; update the assertion around
proxy.read_file and keep the host_marker reference unchanged.
- Around line 92-105: The test uses a fixed child_scratch =
"/tmp/child_scratch.txt" which can collide across parallel runs; change the test
to generate a unique child scratch path (e.g., use the pytest tmp_path fixture,
tempfile with uuid, or Path(tmp_path / f"child_scratch_{uuid4()}.txt")) and
replace references to child_scratch in this test (the variable declared and used
with proxy.write_file and proxy.read_file) so each test run uses an isolated
file name.
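One way to build such a path, sketched with a hypothetical `unique_scratch` helper (the real test would likely just use pytest's `tmp_path` fixture):

```python
import tempfile
from pathlib import Path
from uuid import uuid4


def unique_scratch(tmp_dir=None):
    """Collision-free scratch path for parallel test runs: a fresh
    uuid4 suffix under the given dir (or the system temp dir)."""
    base = Path(tmp_dir) if tmp_dir else Path(tempfile.gettempdir())
    return base / f"child_scratch_{uuid4().hex}.txt"
```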

In `@tests/integration_v2/test_tensors.py`:
- Around line 57-60: Wrap the CUDA-related assertions in
test_tensor_roundtrip_cuda inside a try/finally so cleanup always runs: move the
deletion and cleanup steps for the tensors (result and t) into a finally block
and ensure you call del result, del t, gc.collect(), and
torch.cuda.synchronize() there; apply the same try/finally pattern to the other
CUDA cleanup block in the test to guarantee tensors are released even if
assertions fail.

In `@tests/test_bootstrap.py`:
- Around line 109-119: The test name and assertion are misleading because
injecting a failing _rehydrate_adapter still lets the test pass; change the test
so it verifies _rehydrate_adapter is not invoked: replace the raising lambda
with a stub that flips a local flag (e.g., called = {"v": False}; def
fake(name): called["v"]=True; return None) or use monkeypatch.setattr to a no-op
that records invocation, call bootstrap.bootstrap_child(), then assert
called["v"] is False; also rename the test from
test_bootstrap_sealed_worker_skips_adapter_rehydration to something clearer like
test_bootstrap_sealed_worker_does_not_rehydrate_adapter.
- Around line 217-224: Replace the hard-coded "/home/johnj/ComfyUI" in the
test's PYISOLATE_HOST_SNAPSHOT fixture with a hermetic path: use the
test-provided tmp_path (or construct a synthetic absolute path) when building
the JSON for monkeypatch.setenv, updating the sealed_host_ro_paths value so the
test is machine-independent; locate the JSON construction passed to
monkeypatch.setenv of the "PYISOLATE_HOST_SNAPSHOT" environment variable and
substitute the literal path in the sealed_host_ro_paths array with
tmp_path.as_posix() (or an explicit synthetic absolute path) before json.dumps.

In `@tests/test_conda_integration.py`:
- Around line 13-21: The module-level "import torch" causes collection to fail
if torch isn't installed; replace it by calling pytest.importorskip("torch") at
module scope (so the whole module is skipped during collection) or remove the
module import and import torch inside the individual test functions.
Specifically, change the module-level import statement "import torch" to
pytest.importorskip("torch") (or move that import into test bodies) while
keeping the existing pytestmark skip logic for PIXI_AVAILABLE.
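The collection-safe pattern looks like this. The snippet demos `pytest.importorskip` with a stdlib module so it runs anywhere; in the real test file the argument would be `"torch"`, and the call would sit at module scope so the whole module is skipped when torch is absent.

```python
import pytest

# importorskip() imports the named module, or raises pytest's Skipped
# exception so the whole module is skipped at collection time instead
# of erroring. Demoed with "json"; the real file would pass "torch".
mod = pytest.importorskip("json")
print(mod.dumps({"ok": True}))
```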

In `@tests/test_config_sealed_worker.py`:
- Around line 30-32: The test test_uv_defaults_to_host_coupled currently only
calls validate_backend_config(config) but must also assert that the function
populated the default execution_model; after calling
validate_backend_config(_make_config()), add an assertion that the resulting
config's execution_model is "host-coupled" (e.g., assert config.execution_model
== "host-coupled" or assert config["execution_model"] == "host-coupled"
depending on config's type). Apply the same assertion pattern to the other test
referenced (around the 116-122 block) so both tests verify the defaulting
behavior, not just successful validation.

In `@tests/test_environment_conda.py`:
- Around line 425-428: The monkeypatch targets are patching the wrong namespace
so calls to resolve_cuda_wheel_requirements from _install_cuda_wheels_into_pixi
are not intercepted; update the tests to monkeypatch
"pyisolate._internal.environment_conda.resolve_cuda_wheel_requirements" (and the
same for the other occurrences at the mentioned blocks) so the alias imported
into environment_conda is replaced, ensuring _install_cuda_wheels_into_pixi uses
the mocked resolve_cuda_wheel_requirements during the test.
- Around line 248-252: The test is stubbing Path.exists() too broadly; instead
patch the specific seam _resolve_pixi_python() used by create_conda_env().
Replace the patch.object(Path, "exists", return_value=True) with a patch of the
module-level function _resolve_pixi_python() to return the expected pixi Python
path (e.g. "/usr/bin/pixi"), leaving other patches (shutil.which,
subprocess.check_call) intact so existence checks and the fingerprint cache
behavior remain realistic.

In `@tests/test_environment_sealed_worker.py`:
- Around line 57-72: The test test_host_coupled_uv_still_auto_injects_torch
should hermetically mock torch before calling environment.install_dependencies:
using monkeypatch, inject a minimal stub into sys.modules["torch"] that provides
__version__ and a nested version.cuda attribute (or callable) so
environment.install_dependencies sees a torch module when share_torch=False;
update the test to set this stub with monkeypatch.setitem(sys.modules, "torch",
stub) and remove it after to avoid leaking into other tests.

In `@tests/test_harness_host_env.py`:
- Around line 19-25: The test may skip cleanup if an assertion fails; wrap the
assertions in a try/finally and call await host.cleanup() in the finally block
so ReferenceHost.cleanup() always runs; locate the test using the host =
ReferenceHost() instantiation and move the await host.cleanup() out of the
normal flow into a finally (or use try/except/finally) ensuring the host
variable is available in the finally and that you await ReferenceHost.cleanup()
unconditionally.

In `@tests/test_sealed_proxy_handle.py`:
- Around line 16-18: The fixture clean_registry clears the process-wide
SerializerRegistry and never restores it; change it to a yield fixture that
snapshots the registry state before clearing and restores it after the test: in
the clean_registry fixture (referencing SerializerRegistry.get_instance() and
the fixture name clean_registry) import copy, take a deep copy of the instance
state (e.g. copy.deepcopy(SerializerRegistry.get_instance().__dict__)), clear
the registry for the test, yield, then restore the instance state by assigning
the saved snapshot back to the instance __dict__ so other tests see the original
registry again.
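The snapshot/restore shape can be sketched as below. The `SerializerRegistry` class here is a minimal stand-in for the real singleton, and the fixture is expressed as a context manager so the snippet runs outside pytest; a pytest yield fixture would wrap the same body.

```python
import copy
from contextlib import contextmanager


class SerializerRegistry:
    # Minimal stand-in for the process-wide singleton in the comment.
    _instance = None

    def __init__(self):
        self.handlers = {}

    @classmethod
    def get_instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance


@contextmanager
def clean_registry():
    """Snapshot the singleton's state, clear it for the test, and
    restore the snapshot afterwards, even if the test raises."""
    inst = SerializerRegistry.get_instance()
    snapshot = copy.deepcopy(inst.__dict__)
    inst.handlers.clear()
    try:
        yield inst
    finally:
        inst.__dict__.clear()
        inst.__dict__.update(snapshot)
```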

In `@tests/test_security_conda.py`:
- Around line 95-99: The test is hiding unexpected failures by using
contextlib.suppress(Exception) around the ext._launch_with_uds() call; remove
the broad suppress so any exception from ext._launch_with_uds() will propagate
and fail the test, or if you must allow one specific expected error, replace
contextlib.suppress(Exception) with contextlib.suppress(ExpectedExceptionType)
limited to that specific exception; update the with(...) block in
tests/test_security_conda.py that wraps ext._launch_with_uds() accordingly so
failures after Popen or first mocked send are not swallowed.

In `@tests/test_security_sealed_worker.py`:
- Around line 113-123: The test currently only inspects the bwrap command string
via cmd/_setenv_map; also assert the subprocess.Popen call used a clean env by
checking mock_popen.call_args[1].get("env") (or call_args.kwargs["env"]) is not
None and that this child_env dict contains PATH=="/usr/bin", LANG=="C.UTF-8",
HOME=="/tmp", TMPDIR=="/tmp", PYTHONNOUSERSITE=="1" and that keys like
"PYTHONPATH" and "SECRET_TOKEN" (and other credential-like names covered by the
second test) are absent; add the same child_env assertions in the
credential-like variable test (the later test that currently checks bwrap args)
so secrets cannot be injected via the env kwarg.

---

Outside diff comments:
In `@pyisolate/_internal/host.py`:
- Around line 443-455: The PYISOLATE_IMPORT_TORCH override is being added after
build_bwrap_command(...) is called so the bwrap env_overrides (passed as
self.config.get("env") to build_bwrap_command) never includes it; modify the
code so you assemble the environment dict first (take self.config.get("env"),
copy/merge it, set PYISOLATE_IMPORT_TORCH = "0" when appropriate) and then pass
that assembled env to build_bwrap_command(...) via the env_overrides parameter;
locate build_bwrap_command call and the later block that mutates
os.environ/PYISOLATE_IMPORT_TORCH and move/merge that mutation into the env
construction before calling build_bwrap_command.

In `@pyisolate/_internal/rpc_serialization.py`:
- Around line 227-257: The serializer cache _SERIALIZER_BY_TYPE is global and
can become stale when a registry mutates; fix this by making the cache
registry-aware or invalidating it on registry mutations: either (A) change
_resolve_serializer_for_type to key the cache by (id(registry), obj_type)
instead of just obj_type so different registries don't collide, or (B) add
explicit invalidation calls from the registry mutation methods (e.g., have
Registry.register(...) and Registry.clear(...) call _SERIALIZER_BY_TYPE.clear()
or a new helper like invalidate_serializer_cache()) so any register()/clear()
updates purge stale entries; update references in _resolve_serializer_for_type
and registry mutation methods accordingly.

In `@tests/test_shared_additional.py`:
- Around line 55-92: The test currently sets up and assigns the event loop
(asyncio.set_event_loop(loop)) and creates RPCPendingRequest/RPC objects outside
the try block so if setup fails the finally cleanup won’t run; move
asyncio.set_event_loop(loop) and all test setup that depends on that loop
(creation of FailingQueue, recv_q, AsyncRPC(...), RPCPendingRequest(...), and
placing items on rpc.outbox) inside the try block so the finally always executes
to restore the previous loop and close the new loop, ensuring cleanup even on
setup exceptions.
- Around line 95-132: The test
test_async_rpc_send_thread_callback_failure_sets_exception has setup/teardown
imbalance because portions of the test (creating recv_q, rpc, pending, putting
into rpc.outbox, calling rpc._send_thread and assertions) are outside the
try/finally that restores the event loop; move the entire setup, invocation of
rpc._send_thread(), loop.run_until_complete(...), and assertions into the try
block and keep the asyncio.set_event_loop(previous_loop) and loop.close() in the
finally block so the event loop is always restored/closed even if setup fails;
refer to AsyncRPC, RPCPendingRequest, FailingQueue, rpc.outbox.put, and
rpc._send_thread when applying the change.
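Both findings call for the same shape. A self-contained sketch with the RPC-specific setup elided into comments (the real constructors — `FailingQueue`, `AsyncRPC`, `RPCPendingRequest` — live in the test module); a simulated setup failure shows that the `finally` block still restores and closes the loop.

```python
import asyncio


def run_case_with_failing_setup() -> bool:
    """Loop-dependent setup lives inside try, so cleanup runs even if it raises."""
    previous_loop = asyncio.new_event_loop()  # stand-in for the loop being restored
    loop = asyncio.new_event_loop()
    try:
        asyncio.set_event_loop(loop)
        # All setup that depends on the loop goes here: FailingQueue, recv_q,
        # AsyncRPC(...), RPCPendingRequest(...), rpc.outbox.put(...),
        # rpc._send_thread(), loop.run_until_complete(...), and assertions.
        raise RuntimeError("simulated setup failure")
    except RuntimeError:
        pass  # a real test would let pytest report the failure instead
    finally:
        asyncio.set_event_loop(previous_loop)
        loop.close()
    previous_loop.close()
    asyncio.set_event_loop(None)
    return loop.is_closed()
```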

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 126ce17a-c35d-41b8-a371-996c91d34b84

📥 Commits

Reviewing files that changed from the base of the PR and between c5e6e1a and 2c9878d.

📒 Files selected for processing (61)
  • .gitignore
  • README.md
  • pyisolate/__init__.py
  • pyisolate/_internal/bootstrap.py
  • pyisolate/_internal/client.py
  • pyisolate/_internal/cuda_wheels.py
  • pyisolate/_internal/environment.py
  • pyisolate/_internal/environment_conda.py
  • pyisolate/_internal/event_bridge.py
  • pyisolate/_internal/host.py
  • pyisolate/_internal/model_serialization.py
  • pyisolate/_internal/perf_trace.py
  • pyisolate/_internal/rpc_serialization.py
  • pyisolate/_internal/rpc_transports.py
  • pyisolate/_internal/sandbox.py
  • pyisolate/_internal/sandbox_detect.py
  • pyisolate/_internal/tensor_serializer.py
  • pyisolate/_internal/uds_client.py
  • pyisolate/config.py
  • pyisolate/host.py
  • pyisolate/interfaces.py
  • pyisolate/path_helpers.py
  • pyisolate/sealed.py
  • pyisolate/shared.py
  • pyproject.toml
  • tests/conftest.py
  • tests/fixtures/conda_sealed_node/__init__.py
  • tests/fixtures/conda_sealed_node/pyproject.toml
  • tests/fixtures/uv_sealed_worker/__init__.py
  • tests/fixtures/uv_sealed_worker/pyproject.toml
  • tests/harness/host.py
  • tests/harness/test_package/__init__.py
  • tests/integration_v2/test_isolation.py
  • tests/integration_v2/test_tensors.py
  • tests/path_unification/test_path_helpers.py
  • tests/test_bootstrap.py
  • tests/test_bwrap_command.py
  • tests/test_client_entrypoint_extra.py
  • tests/test_conda_integration.py
  • tests/test_conda_sealed_worker_contract.py
  • tests/test_config_conda.py
  • tests/test_config_sealed_worker.py
  • tests/test_cuda_wheels.py
  • tests/test_environment_conda.py
  • tests/test_environment_sealed_worker.py
  • tests/test_event_channel.py
  • tests/test_exact_proxy_bootstrap.py
  • tests/test_harness_host_env.py
  • tests/test_host_conda_dispatch.py
  • tests/test_host_internal_ext.py
  • tests/test_host_sealed_worker_dispatch.py
  • tests/test_model_serialization.py
  • tests/test_rpc_contract.py
  • tests/test_rpc_transports.py
  • tests/test_sandbox_detect.py
  • tests/test_sealed_proxy_handle.py
  • tests/test_security_conda.py
  • tests/test_security_sealed_worker.py
  • tests/test_shared_additional.py
  • tests/test_tensor_serializer_signal_cleanup.py
  • tests/test_uv_sealed_integration.py
💤 Files with no reviewable changes (1)
  • tests/conftest.py

Comment on lines +110 to +113
# Let the adapter wire child-side event hooks (e.g., progress bar)
if _adapter and hasattr(_adapter, "setup_child_event_hooks"):
    _adapter.setup_child_event_hooks(extension)


⚠️ Potential issue | 🟠 Major

Guard adapter hook failures so child startup doesn’t hard-fail.

If setup_child_event_hooks() throws, extension loading aborts even though RPC/module loading may be otherwise valid. Wrap this call and log a warning to keep startup resilient.

Proposed fix
-        # Let the adapter wire child-side event hooks (e.g., progress bar)
-        if _adapter and hasattr(_adapter, "setup_child_event_hooks"):
-            _adapter.setup_child_event_hooks(extension)
+        # Let the adapter wire child-side event hooks (e.g., progress bar)
+        if _adapter and hasattr(_adapter, "setup_child_event_hooks"):
+            try:
+                _adapter.setup_child_event_hooks(extension)
+            except Exception as exc:  # pragma: no cover - best-effort hook
+                logger.warning("Adapter child event hook setup failed: %s", exc, exc_info=True)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyisolate/_internal/client.py` around lines 110 - 113, Wrap the call to
_adapter.setup_child_event_hooks(extension) in a try/except that catches
Exception so adapter hook failures do not abort child startup; on exception, log
a warning (e.g., using logging.getLogger(__name__).warning(...)) including the
adapter identity and the exception details, and do not re-raise so execution
continues. Target the invocation in pyisolate._internal.client where _adapter
and hasattr(..., "setup_child_event_hooks") are checked.

Comment on lines 285 to +286
environment = cast(dict[str, str], default_environment())
runtime = get_cuda_wheel_runtime()
runtime = get_cuda_wheel_runtime(target_python=target_python)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's see the structure and find the exact lines mentioned
wc -l pyisolate/_internal/cuda_wheels.py

Repository: Comfy-Org/pyisolate

Length of output: 101


🏁 Script executed:

# Read the relevant section around lines 285-286
sed -n '270,300p' pyisolate/_internal/cuda_wheels.py

Repository: Comfy-Org/pyisolate

Length of output: 1172


🏁 Script executed:

# Also check lines 314-316
sed -n '305,325p' pyisolate/_internal/cuda_wheels.py

Repository: Comfy-Org/pyisolate

Length of output: 858


🏁 Script executed:

# Let's see the full function context that contains lines 285-286
sed -n '260,330p' pyisolate/_internal/cuda_wheels.py

Repository: Comfy-Org/pyisolate

Length of output: 2448


🏁 Script executed:

# Search for function definitions to understand which function this code belongs to
ast-grep --pattern 'function_definition' | head -20

Repository: Comfy-Org/pyisolate

Length of output: 45


🏁 Script executed:

# Get better context - look for the function that contains line 285
grep -n "^def " pyisolate/_internal/cuda_wheels.py

Repository: Comfy-Org/pyisolate

Length of output: 818


🏁 Script executed:

# Check imports at the top of the file
head -50 pyisolate/_internal/cuda_wheels.py

Repository: Comfy-Org/pyisolate

Length of output: 1797


🏁 Script executed:

# Search for uses of default_environment and marker.evaluate in the file
grep -n "default_environment\|marker.evaluate" pyisolate/_internal/cuda_wheels.py

Repository: Comfy-Org/pyisolate

Length of output: 265


🏁 Script executed:

# Check if there are other functions that use target_python and might have similar issues
grep -n "target_python" pyisolate/_internal/cuda_wheels.py

Repository: Comfy-Org/pyisolate

Length of output: 948


🏁 Script executed:

# Look at the packaging library to understand how marker evaluation works
python3 << 'EOF'
from packaging.markers import default_environment, Marker

# Test if we can modify the environment dict
env = default_environment()
print("Original environment keys:", list(env.keys()))

# Try modifying it
env["python_version"] = "3.12"
env["python_full_version"] = "3.12.0"

# Test marker evaluation with modified environment
marker = Marker("python_version >= '3.12'")
print("Marker evaluation with modified env:", marker.evaluate(env))

# Verify it works with the actual Python version too
marker2 = Marker(f"python_version >= '{'.'.join(map(str, __import__('sys').version_info[:2]))}'")
print("Marker evaluation with current version:", marker2.evaluate(env))
EOF

Repository: Comfy-Org/pyisolate

Length of output: 407


Update environment to respect target_python before evaluating PEP 508 markers.

default_environment() describes the host interpreter, not the target. With host Python 3.11 and target_python=(3, 12), a dependency guarded by python_version >= '3.12' is incorrectly excluded, even though the pixi environment will be 3.12. This undermines the newly added target_python support.

🐍 Fix
     environment = cast(dict[str, str], default_environment())
+    if target_python is not None:
+        major, minor = target_python
+        environment["python_version"] = f"{major}.{minor}"
+        environment["python_full_version"] = f"{major}.{minor}.0"
     runtime = get_cuda_wheel_runtime(target_python=target_python)

Also applies to: 314–316

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyisolate/_internal/cuda_wheels.py` around lines 285 - 286, The environment
used to evaluate PEP 508 markers is built from default_environment() (host)
before get_cuda_wheel_runtime(target_python=...), so markers are evaluated
against the host Python; update the environment to reflect target_python first:
call default_environment(), then modify the returned dict (keys like
"python_version", "python_full_version",
"implementation_name"/"implementation_version" as appropriate) to represent the
tuple target_python before any marker evaluation in get_cuda_wheel_runtime() or
subsequent logic that inspects PEP 508 markers; ensure the same adjustment is
applied in the other occurrence referenced (the block around the other
default_environment() call) so marker evaluation uses the target interpreter
values.
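The core of the fix is a pure transformation of the marker environment. A stdlib-only sketch (`target_environment` is a hypothetical helper name; the resulting dict would be passed to `packaging.markers.Marker.evaluate()` in place of the raw `default_environment()` result):

```python
from typing import Optional, Tuple


def target_environment(
    base_env: dict, target_python: Optional[Tuple[int, int]]
) -> dict:
    """Overlay the target interpreter's version onto a PEP 508 marker environment."""
    env = dict(base_env)  # never mutate the caller's dict
    if target_python is not None:
        major, minor = target_python
        env["python_version"] = f"{major}.{minor}"
        # The patch level is unknown before env creation; ".0" is the usual stand-in.
        env["python_full_version"] = f"{major}.{minor}.0"
    return env
```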

Comment on lines +21 to +23
def _pyisolate_source_path() -> Path:
    """Return the local pyisolate source tree for sealed-worker child installs."""
    return Path(__file__).resolve().parents[2]

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's understand the file structure and confirm the repo layout
cd /tmp && find . -type f -name "environment_conda.py" 2>/dev/null | head -5

Repository: Comfy-Org/pyisolate

Length of output: 45


🏁 Script executed:

# Check the actual file structure to understand parents[2]
git ls-files | grep -E "(environment_conda|__init__|setup)" | head -20

Repository: Comfy-Org/pyisolate

Length of output: 585


🏁 Script executed:

# Examine the target file and surrounding context
wc -l pyisolate/_internal/environment_conda.py

Repository: Comfy-Org/pyisolate

Length of output: 107


🏁 Script executed:

# Read the function and its usage
sed -n '15,35p' pyisolate/_internal/environment_conda.py

Repository: Comfy-Org/pyisolate

Length of output: 608


🏁 Script executed:

# Check lines 156-157 where the path is injected
sed -n '150,165p' pyisolate/_internal/environment_conda.py

Repository: Comfy-Org/pyisolate

Length of output: 834


🏁 Script executed:

# Look for any tests or documentation about wheel installations
find . -type f -name "*.py" -o -name "*.md" -o -name "*.txt" | xargs grep -l "wheel\|site-packages\|installation" 2>/dev/null | head -10

Repository: Comfy-Org/pyisolate

Length of output: 359


🏁 Script executed:

# Check the directory structure to verify parents[2] behavior
python3 << 'EOF'
from pathlib import Path
import os

# Simulate the file location
file_path = Path("pyisolate/_internal/environment_conda.py").resolve()
print(f"File path: {file_path}")
print(f"parents[0]: {file_path.parents[0]}")
print(f"parents[1]: {file_path.parents[1]}")
print(f"parents[2]: {file_path.parents[2]}")

# In a wheel install, __file__ would point to site-packages location
# Let's check if that makes sense
print("\n--- Expected behavior in wheel install ---")
wheel_file_path = Path("/usr/lib/python3.11/site-packages/pyisolate/_internal/environment_conda.py")
print(f"Wheel file path: {wheel_file_path}")
print(f"parents[2] of wheel path: {wheel_file_path.parents[2]}")
EOF

Repository: Comfy-Org/pyisolate

Length of output: 452


🏁 Script executed:

# Check setup.py for packaging info
cat setup.py

Repository: Comfy-Org/pyisolate

Length of output: 302


🏁 Script executed:

# Look for tests of the environment_conda module
cat tests/test_environment_conda.py | head -100

Repository: Comfy-Org/pyisolate

Length of output: 3414


🏁 Script executed:

# Check if there's any conditional logic for wheel vs source installs
rg -A 5 -B 5 "_pyisolate_source_path\|site-packages" pyisolate/_internal/

Repository: Comfy-Org/pyisolate

Length of output: 45


🏁 Script executed:

# See if there's any documentation about supported installation methods
head -50 README.md

Repository: Comfy-Org/pyisolate

Length of output: 2432


🏁 Script executed:

# Check if the function is used elsewhere or if there's handling logic
rg "_pyisolate_source_path\|Path\(__file__\)" pyisolate/ tests/

Repository: Comfy-Org/pyisolate

Length of output: 45


The pixi path dependency won't resolve correctly in wheel installations—it'll point to site-packages instead of an installable project.

When Path(__file__).resolve().parents[2] runs in a wheel-installed distribution at /usr/lib/python3.11/site-packages/pyisolate/_internal/environment_conda.py, it resolves to /usr/lib/python3.11/site-packages, not a project root. Pixi's path dependencies expect a directory with project metadata (setup.py, pyproject.toml), so the resolved path is unusable as a dependency, and the generated [pypi-dependencies] entry will fail outside source checkouts.

Lines 156–157 inject this path into the manifest, so this breaks conda env creation for any wheel-installed pyisolate. Either detect the installation method and skip the local path injection for wheels, or fall back to a published package version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyisolate/_internal/environment_conda.py` around lines 21 - 23, The function
_pyisolate_source_path currently returns a parent folder that points into
site-packages for wheel installs; change _pyisolate_source_path to detect
whether the resolved candidate is a real project checkout by checking for
project metadata files (pyproject.toml or setup.py) and/or whether the path
contains “site-packages”; if those checks fail, return None (or raise a specific
sentinel) instead of returning a site-packages path. Update the caller that
injects this path into the conda manifest to skip injecting a local path when
_pyisolate_source_path returns None and instead rely on the published package
version (i.e., do not add a path dependency for wheel installs). Ensure
references to _pyisolate_source_path and the manifest-injection code handle the
None sentinel safely.

Comment on lines +282 to +291
descriptor = {
    "conda_dependencies": config.get("conda_dependencies", []),
    "pip_dependencies": config.get("dependencies", []),
    "channels": config.get("conda_channels", []),
    "platforms": config.get("conda_platforms", []),
    "cuda_wheels": config.get("cuda_wheels"),
    "find_links": config.get("find_links", []),
    "pixi_toml": toml_content,
}
fingerprint = hashlib.sha256(json.dumps(descriptor, sort_keys=True).encode()).hexdigest()

⚠️ Potential issue | 🟠 Major

Include local wheel contents in the dependency fingerprint.

The cache key only hashes the find_links paths, not the wheel files inside them. Rebuilding or replacing a wheel in place still hits the cache on the next run, so _install_local_wheels() is skipped and the environment keeps stale binaries.

Also applies to: 294-305, 331-340

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyisolate/_internal/environment_conda.py` around lines 282 - 291, The
fingerprint currently only includes the find_links paths, so replace that by
including the actual local-wheel signature when building the descriptor used to
compute fingerprint: for each entry in the existing "find_links" list (and any
other local-package paths used by _install_local_wheels), walk the path(s) and
compute a deterministic signature per wheel (e.g., sha256 of file bytes or a
tuple of mtime+size) and add that mapping (filename->hash or
filename->mtime-size) into the descriptor before json.dumps; update every place
that builds the descriptor/fingerprint (the descriptor variable and its later
uses that produce fingerprint) so the fingerprint changes when wheel contents
change, ensure you canonicalize paths and ignore non-wheel files, and keep
behavior tolerant to missing entries so _install_local_wheels still runs when
content differs.

Comment on lines +28 to +33
package_manager = config.get("package_manager", "uv")
execution_model = config.get("execution_model")

if execution_model is None:
    execution_model = "sealed_worker" if package_manager == "conda" else "host-coupled"


⚠️ Potential issue | 🟠 Major

Persist the derived defaults; right now they validate, then evaporate.

This only normalizes package_manager / execution_model into local variables. A conda config that omits execution_model passes validation, but later code that reads the original dict — e.g. install_dependencies() here and _resolve_api_classes_from_config() in pyisolate/_internal/uds_client.py — still sees None and takes the wrong branch.

🛠️ Proposed fix
 def validate_backend_config(config: ExtensionConfig) -> None:
     """Validate backend-specific configuration. Fail loud on invalid combos."""
     package_manager = config.get("package_manager", "uv")
     execution_model = config.get("execution_model")
 
     if execution_model is None:
         execution_model = "sealed_worker" if package_manager == "conda" else "host-coupled"
+        config["execution_model"] = execution_model
+
+    if "package_manager" not in config:
+        config["package_manager"] = package_manager
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyisolate/_internal/environment.py` around lines 28 - 33, The derived
defaults for package_manager and execution_model are only assigned to local
variables and not written back to the original config, so callers like
install_dependencies and _resolve_api_classes_from_config still see None; update
the code that computes package_manager and execution_model to also set those
keys on the config dict (e.g., config["package_manager"] = package_manager and
config["execution_model"] = execution_model) after deriving defaults so the
resolved values persist for later functions such as install_dependencies() and
_resolve_api_classes_from_config().

Comment on lines +19 to +25
host = ReferenceHost()

assert os.environ["TMPDIR"] != str(original_tmpdir)

await host.cleanup()

assert os.environ["TMPDIR"] == str(original_tmpdir)

⚠️ Potential issue | 🟡 Minor

Ensure ReferenceHost.cleanup() always runs.

await host.cleanup() is currently after assertions, so a failing assertion can skip cleanup. Move cleanup into a finally block for deterministic teardown.

Proposed fix
-    host = ReferenceHost()
-
-    assert os.environ["TMPDIR"] != str(original_tmpdir)
-
-    await host.cleanup()
-
-    assert os.environ["TMPDIR"] == str(original_tmpdir)
+    host = ReferenceHost()
+    try:
+        assert os.environ["TMPDIR"] != str(original_tmpdir)
+    finally:
+        await host.cleanup()
+
+    assert os.environ["TMPDIR"] == str(original_tmpdir)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_harness_host_env.py` around lines 19 - 25, The test may skip
cleanup if an assertion fails; wrap the assertions in a try/finally and call
await host.cleanup() in the finally block so ReferenceHost.cleanup() always
runs; locate the test using the host = ReferenceHost() instantiation and move
the await host.cleanup() out of the normal flow into a finally (or use
try/except/finally) ensuring the host variable is available in the finally and
that you await ReferenceHost.cleanup() unconditionally.

Comment on lines +96 to +128
@patch("pyisolate._internal.host.validate_backend_config")
@patch("pyisolate._internal.host.create_conda_env")
def test_validate_called_before_conda_launch(
    self,
    mock_create_conda: MagicMock,
    mock_validate: MagicMock,
) -> None:
    """validate_backend_config must be called before env creation."""
    from pyisolate._internal.host import Extension
    from pyisolate.shared import ExtensionBase

    config = {
        "name": "test_ext",
        "module": "test_module",
        "dependencies": [],
        "share_torch": False,
        "share_cuda_ipc": False,
        "package_manager": "conda",
        "conda_channels": ["conda-forge"],
        "conda_dependencies": ["numpy"],
    }

    ext = Extension.__new__(Extension)
    ext.name = "test_ext"
    ext.config = config
    ext.venv_path = Path("/fake/venv")
    ext.module_path = "/fake/module"
    ext.extension_type = ExtensionBase

    with patch.object(ext, "_launch_with_uds", return_value=MagicMock()):
        ext._Extension__launch()

    mock_validate.assert_called_once_with(config)

⚠️ Potential issue | 🟡 Minor

These tests don’t actually verify the “before env creation” contract.

The assertions on Lines 128 and 162 only prove validate_backend_config() was called at some point with the same dict object. They do not check ordering relative to create_conda_env() / create_venv(), and a later in-place mutation of config would still make them pass. Capture call order explicitly so the "before env creation" contract is actually verified.

Also applies to: 130-162
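One standard way to capture ordering is `unittest.mock.attach_mock`: children attached to a shared parent record into a single `mock_calls` list, making relative order assertable. A sketch under the assumption that the code under test calls validate before creating the env (the two direct calls here stand in for `ext._Extension__launch()`):

```python
from unittest.mock import MagicMock, call


def recorded_call_order() -> list:
    """Attach both mocks to one parent so their relative order is observable."""
    manager = MagicMock()
    validate = MagicMock()
    create_env = MagicMock()
    manager.attach_mock(validate, "validate_backend_config")
    manager.attach_mock(create_env, "create_conda_env")

    config = {"package_manager": "conda"}
    # Stand-in for the launch path under test: validate first, then create the env.
    validate(config)
    create_env(config)
    return manager.mock_calls
```

The test then asserts the exact sequence, which fails if the order is ever inverted, unlike two independent `assert_called_once_with` checks.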

Comment on lines +16 to +18
@pytest.fixture(autouse=True)
def clean_registry():
    SerializerRegistry.get_instance().clear()

⚠️ Potential issue | 🟡 Minor

Restore SerializerRegistry after each test.

This fixture clears a process-wide singleton and never puts it back, so any later module that runs after this file inherits the modified registry state. A yield fixture that snapshots/restores the registry will keep the suite deterministic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_sealed_proxy_handle.py` around lines 16 - 18, The fixture
clean_registry clears the process-wide SerializerRegistry and never restores it;
change it to a yield fixture that snapshots the registry state before clearing
and restores it after the test: in the clean_registry fixture (referencing
SerializerRegistry.get_instance() and the fixture name clean_registry) import
copy, take a deep copy of the instance state (e.g.
copy.deepcopy(SerializerRegistry.get_instance().__dict__)), clear the registry
for the test, yield, then restore the instance state by assigning the saved
snapshot back to the instance __dict__ so other tests see the original registry
again.
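A context-manager sketch of the snapshot/restore idea; the real fixture would wrap the same body in `@pytest.fixture(autouse=True)` around the `yield`, and the `SerializerRegistry` here is a minimal stand-in for the real singleton (assuming its state is deep-copyable, as the prompt does).

```python
import copy
from contextlib import contextmanager


class SerializerRegistry:
    """Minimal stand-in for the real process-wide singleton."""

    _instance = None

    def __init__(self):
        self._serializers = {}

    @classmethod
    def get_instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def clear(self):
        self._serializers.clear()


@contextmanager
def clean_registry():
    """Snapshot the singleton, clear it for the test, restore it afterwards."""
    registry = SerializerRegistry.get_instance()
    snapshot = copy.deepcopy(registry.__dict__)
    registry.clear()
    try:
        yield registry
    finally:
        registry.__dict__.clear()
        registry.__dict__.update(snapshot)
```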

Comment on lines +95 to +99
patch("pyisolate._internal.host.build_extension_snapshot", return_value={}),
patch("os.chmod"),
contextlib.suppress(Exception),
):
ext._launch_with_uds()

⚠️ Potential issue | 🟡 Minor

Don't swallow unexpected launch failures in the helper.

contextlib.suppress(Exception) can turn a half-failed _launch_with_uds() into a green test if the exception is raised after Popen or the first mocked send. These checks are security-sensitive, so let unexpected failures fail loudly instead of producing false positives.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_security_conda.py` around lines 95 - 99, The test is hiding
unexpected failures by using contextlib.suppress(Exception) around the
ext._launch_with_uds() call; remove the broad suppress so any exception from
ext._launch_with_uds() will propagate and fail the test, or if you must allow
one specific expected error, replace contextlib.suppress(Exception) with
contextlib.suppress(ExpectedExceptionType) limited to that specific exception;
update the with(...) block in tests/test_security_conda.py that wraps
ext._launch_with_uds() accordingly so failures after Popen or first mocked send
are not swallowed.

Comment on lines +113 to +123
cmd = mock_popen.call_args[0][0]
env_map = _setenv_map(cmd)

assert "--clearenv" in cmd
assert env_map["PATH"] == "/usr/bin"
assert env_map["LANG"] == "C.UTF-8"
assert env_map["HOME"] == "/tmp"
assert env_map["TMPDIR"] == "/tmp"
assert env_map["PYTHONNOUSERSITE"] == "1"
assert "PYTHONPATH" not in env_map
assert "SECRET_TOKEN" not in env_map

⚠️ Potential issue | 🟠 Major

Also assert the Popen(..., env=...) path stays clean.

These checks only parse bwrap --setenv, so a secret could still sneak in through the subprocess env kwarg and the tests would stay green. For a security contract test, that’s a hole worth plugging.

🛡️ Extra assertion to cover the subprocess environment
         cmd = mock_popen.call_args[0][0]
+        child_env = mock_popen.call_args.kwargs.get("env", {})
         env_map = _setenv_map(cmd)

         assert "--clearenv" in cmd
         assert env_map["PATH"] == "/usr/bin"
         assert env_map["LANG"] == "C.UTF-8"
@@
         assert env_map["PYTHONNOUSERSITE"] == "1"
         assert "PYTHONPATH" not in env_map
         assert "SECRET_TOKEN" not in env_map
+        assert "PYTHONPATH" not in child_env
+        assert "SECRET_TOKEN" not in child_env

Apply the same child_env check in the credential-like variable test too.

Also applies to: 142-145

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_security_sealed_worker.py` around lines 113 - 123, The test
currently only inspects the bwrap command string via cmd/_setenv_map; also
assert the subprocess.Popen call used a clean env by checking
mock_popen.call_args[1].get("env") (or call_args.kwargs["env"]) is not None and
that this child_env dict contains PATH=="/usr/bin", LANG=="C.UTF-8",
HOME=="/tmp", TMPDIR=="/tmp", PYTHONNOUSERSITE=="1" and that keys like
"PYTHONPATH" and "SECRET_TOKEN" (and other credential-like names covered by the
second test) are absent; add the same child_env assertions in the
credential-like variable test (the later test that currently checks bwrap args)
so secrets cannot be injected via the env kwarg.

@pollockjj pollockjj merged commit 51bb318 into Comfy-Org:main Mar 29, 2026
49 checks passed

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants