Skip to content

docs: update README with Explorer screenshots and Sensible Analytics branding#25

Closed
rprabhat wants to merge 2671 commits into
mainfrom
feat/rebrand-final
Closed

docs: update README with Explorer screenshots and Sensible Analytics branding#25
rprabhat wants to merge 2671 commits into
mainfrom
feat/rebrand-final

Conversation

@rprabhat
Copy link
Copy Markdown
Collaborator

@rprabhat rprabhat commented Apr 5, 2026

Updated README.md with:

  • Sensible Analytics organization branding and logo
  • 4 Explorer UI screenshots (Home, Graph, Chat, Report)
  • Updated all Nexus references to SensibleDB
  • Project structure overview
  • Documentation links to https://sensible-analytics.github.io/SensibleDB/

Also updated gh-pages documentation site with new SensibleDB branding.

xav-db and others added 30 commits January 17, 2026 18:01
<!-- greptile_comment -->

<h3>Greptile Summary</h3>


This PR adds a `helix logs` command to the CLI and introduces
`HELIX_CORES_OVERRIDE` environment variable support.

**Logs Command:**
- Supports three modes: interactive TUI (default), live streaming
(`--live`), and historical queries (`--range`)
- Works with both local Docker/Podman containers and Helix Cloud
instances
- TUI features vim-style navigation (j/k for page scrolling, zt/zb for
top/bottom), live log streaming, and time range selection with presets
- Cloud integration uses SSE for live streaming and REST API for
historical queries
- Local integration uses `docker logs` commands with time filters

**Core Override Feature:**
- `HELIX_CORES_OVERRIDE` env var allows limiting worker thread count for
testing/development
- Properly validated (warns if exceeds available cores) and logged
- Passed through from CLI to containers

**Critical Issues:**
- `vector_core.rs:161` contains a breaking bug that stores `[0u8]`
instead of actual vector properties
- `main.rs:157-164` includes a test logging thread that should be
removed from production code
- `user_test_8` test files were added but appear unrelated to this PR's
scope

<details><summary><h3>Important Files Changed</h3></summary>




| Filename | Overview |
|----------|----------|
| helix-db/src/helix_engine/vector_core/vector_core.rs | Critical bug:
stores `[0u8]` instead of vector properties, breaking vector storage |
| helix-container/src/main.rs | Contains test logging thread that should
be removed from production code |
| helix-cli/src/commands/logs/tui.rs | Interactive TUI for log viewing
with vim-style navigation and time range selection |
| helix-cli/src/commands/logs/log_source.rs | Log source abstraction
supporting both local Docker and Helix Cloud instances |
| helix-db/src/helix_gateway/gateway.rs | Added `HELIX_CORES_OVERRIDE`
env var support with validation and improved logging |

</details>


</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant CLI as helix logs
    participant LogSource
    participant Docker as Docker/Podman
    participant Cloud as Helix Cloud API
    participant TUI as Terminal UI

    User->>CLI: helix logs [options]
    CLI->>CLI: Parse flags (--live, --range)
    
    alt No flags (TUI mode)
        CLI->>LogSource: Create log source (local or cloud)
        CLI->>TUI: Initialize TUI
        TUI->>LogSource: Fetch initial logs (last 15min)
        alt Local instance
            LogSource->>Docker: docker logs --since --until
            Docker-->>LogSource: Historical logs
        else Cloud instance
            LogSource->>Cloud: GET /logs/range?start_time&end_time
            Cloud-->>LogSource: JSON logs response
        end
        LogSource-->>TUI: Display logs
        
        TUI->>LogSource: Start SSE stream
        alt Local instance
            LogSource->>Docker: docker logs -f
            Docker-->>LogSource: Stream logs
        else Cloud instance
            LogSource->>Cloud: SSE /logs/live
            Cloud-->>LogSource: Stream log events
        end
        
        loop User navigation
            User->>TUI: Key press (j/k/r/l/q)
            TUI->>TUI: Update UI state
            opt Switch to time range
                TUI->>LogSource: Query range
                LogSource-->>TUI: Historical logs
            end
        end
    else --live flag (CLI mode)
        CLI->>LogSource: stream_live()
        alt Local instance
            LogSource->>Docker: docker logs -f --tail 100
            Docker-->>CLI: Stream to stdout
        else Cloud instance
            LogSource->>Cloud: SSE /logs/live
            Cloud-->>CLI: Stream to stdout
        end
    else --range flag (CLI mode)
        CLI->>LogSource: query_range(start, end)
        alt Local instance
            LogSource->>Docker: docker logs --since --until
            Docker-->>CLI: Print logs
        else Cloud instance
            LogSource->>Cloud: GET /logs/range
            Cloud-->>CLI: Print logs
        end
    end
```
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
Only push to S3 when a release is created on main

<!-- greptile_comment -->

<h2>Greptile Overview</h2>

### Greptile Summary

This PR modifies the S3 push workflow to enforce that releases can only
trigger S3 uploads when they originate from the main branch. The changes
achieve this by:

1. **Removing the `create.tags` trigger** - Previously, any tag creation
(matching `v*`) would trigger the workflow. This has been removed to
rely solely on release events.

2. **Adding branch verification logic** - A new step verifies that:
- For release events: The tagged commit must be an ancestor of
`origin/main` (using `git merge-base --is-ancestor`)
- For workflow_dispatch events: The workflow must be triggered from the
`refs/heads/main` branch

3. **Adding `fetch-depth: 0`** - Required to enable the git history
commands needed for branch verification.

## Key Changes
- Removed trigger: `on.create.tags`
- Added verification step that exits with code 1 if the release/dispatch
is not from main
- The workflow will now fail early (before AWS authentication) if
triggered from non-main branches

## Issues Found
The implementation has **no critical bugs** but includes several shell
scripting best practice violations:
- Missing quotes around GitHub context variables in shell commands
- No validation that `TAG_COMMIT` variable is non-empty before use
- Redundant conditional that checks for event types that are the only
triggers

These are style/robustness improvements rather than functional issues.
The core logic correctly implements the intended security control of
restricting S3 uploads to main branch releases.

<details><summary><h3>Important Files Changed</h3></summary>



File Analysis



| Filename | Score | Overview |
|----------|-------|----------|
| .github/workflows/s3_push.yml | 4/5 | Modified to verify releases are
from main branch before S3 push. Removed tag creation trigger. Found
shell script best practice issues with variable quoting and missing
error handling. |

</details>


</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant GitHub
    participant Workflow as S3 Push Workflow
    participant Git
    participant AWS as AWS S3

    alt Release Event (published/created)
        User->>GitHub: Create/Publish Release
        GitHub->>Workflow: Trigger workflow (release event)
        Workflow->>Git: Checkout repository (fetch-depth: 0)
        Git-->>Workflow: Repository checked out
        Workflow->>Git: Get commit SHA for release tag
        Git-->>Workflow: TAG_COMMIT
        Workflow->>Git: Check if TAG_COMMIT is ancestor of origin/main
        alt Tag is on main branch
            Git-->>Workflow: Success (exit 0)
            Workflow->>AWS: Configure AWS credentials via OIDC
            AWS-->>Workflow: Credentials configured
            Workflow->>Workflow: Create template.tar.gz
            Workflow->>AWS: Upload template.tar.gz to S3
            AWS-->>Workflow: Upload complete
            Workflow->>User: Success notification
        else Tag is NOT on main branch
            Git-->>Workflow: Failure (exit 1)
            Workflow->>User: Skip S3 push - not on main
        end
    else Manual Trigger (workflow_dispatch)
        User->>GitHub: Manually trigger workflow
        GitHub->>Workflow: Trigger workflow (workflow_dispatch event)
        Workflow->>Git: Checkout repository (fetch-depth: 0)
        Git-->>Workflow: Repository checked out
        alt Triggered from main branch
            Workflow->>AWS: Configure AWS credentials via OIDC
            AWS-->>Workflow: Credentials configured
            Workflow->>Workflow: Create template.tar.gz
            Workflow->>AWS: Upload template.tar.gz to S3
            AWS-->>Workflow: Upload complete
            Workflow->>User: Success notification
        else Triggered from non-main branch
            Workflow->>User: Error - must run from main
        end
    end
```
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
…ting via env var (#808)

# Release Notes

## New Features

### helix logs Command
 - View instance logs directly from the CLI with three modes:
- Interactive TUI (default): Vim-style navigation (j/k to scroll, zt/zb
for top/bottom), live streaming, and time range selection with presets
 - Live streaming (--live): Stream logs to stdout in real-time
 - Historical queries (--range): Query logs for specific time ranges
- Works with both local Docker/Podman containers and Helix Cloud
instances

### HELIX_CORES_OVERRIDE Environment Variable
 - Override worker thread count for testing and development
 - Includes validation warnings if set higher than available cores

## Improvements
- S3 releases now verify the release originates from the main branch
before uploading
 - Removed deprecated Docker configuration files
 - Version bumps and dependency updates
<!-- greptile_comment -->

<h3>Greptile Summary</h3>


This PR fixes two issues: a date picker bug in the CLI that was
hardcoded to 28 days per month, and enables response logging for
production builds.

- Fixed `PickerField::Day` logic to use actual month lengths (28-31
days) with proper leap year calculation
- Added `production` feature flag to conditional compilation for
response logging in gateway
- Note: commit message contains typo "proudction" instead of
"production"

<details><summary><h3>Important Files Changed</h3></summary>




| Filename | Overview |
|----------|----------|
| helix-cli/src/commands/logs/tui.rs | Fixed day picker to correctly
handle variable month lengths (28-31 days) with proper leap year
calculation |
| helix-db/src/helix_gateway/gateway.rs | Enabled response logging for
production builds by adding `production` feature flag alongside
`dev-instance` |

</details>


</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant TUI as CLI TUI (tui.rs)
    participant Gateway as Gateway (gateway.rs)
    participant WorkerPool
    
    Note over TUI: Date Picker Fix
    User->>TUI: Adjust day field
    TUI->>TUI: days_in_month(year, month)
    TUI->>TUI: Calculate valid day (1-28/29/30/31)
    TUI->>User: Display updated date
    
    Note over Gateway: Production Logging
    User->>Gateway: POST /query request
    Gateway->>WorkerPool: process(req)
    WorkerPool-->>Gateway: Response body
    alt dev-instance OR production
        Gateway->>Gateway: Log query response
    end
    Gateway-->>User: HTTP Response
```
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
… `db`, a captured variable in an `FnMut` closure

Fixes #805
…… (#810)

<!-- greptile_comment -->

<h3>Greptile Summary</h3>


Fixed Rust compilation error E0507 in generated code by replacing
functional iterator chains with imperative for loops.

**Changes Made:**
- Replaced `.iter().map()` patterns with explicit `for` loops in
`AddE::Display` implementation (3 cases)
- Replaced `.iter().flat_map().map()` pattern with nested `for` loops in
`AddE::Display` implementation
- Applied same transformation to `UpsertE::Display` implementation (3
cases total)
- All changes preserve identical functionality while avoiding ownership
issues

**Technical Details:**
The original code attempted to move captured variables (`db`, `arena`,
`txn`) inside `FnMut` closures created by `.map()`, which violates
Rust's ownership rules. The imperative approach with `for` loops
correctly borrows these variables without attempting to move them,
resolving the compilation error while maintaining the same behavior of
collecting edge results into a `Vec`.

<details><summary><h3>Important Files Changed</h3></summary>




| Filename | Overview |
|----------|----------|
| helix-db/src/helixc/generator/source_steps.rs | Fixed Rust ownership
issue by replacing .map()/.flat_map() with imperative for loops to avoid
moving captured variables in FnMut closures |

</details>


</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant HQL as HelixQL Parser
    participant Gen as Code Generator
    participant Display as Display trait
    participant Rust as Generated Rust Code
    
    HQL->>Gen: Parse AddE or UpsertE with plural variables
    Gen->>Display: Call fmt method for code generation
    
    alt Both from and to are singular
        Display->>Rust: Generate single add_edge call
    else From is plural
        Display->>Rust: Generate for loop iterating over from
        Note over Rust: Creates Vec and pushes each edge result
    else To is plural
        Display->>Rust: Generate for loop iterating over to
        Note over Rust: Creates Vec and pushes each edge result
    else Both are plural
        Display->>Rust: Generate nested for loops
        Note over Rust: Creates Vec and pushes all combinations
    end
    
    Rust->>Rust: Execute with correct ownership semantics
    Note over Rust: For loops avoid FnMut closure ownership issues
```
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
Updated the TraversalValue implementation to handle the Empty variant in both data retrieval and scoring methods. This change ensures that the Empty variant returns appropriate values, improving the robustness of the traversal logic.
Not yet tested

<!-- greptile_comment -->

<h3>Greptile Summary</h3>


This PR deprecates `BuildMode::Debug` in favor of `BuildMode::Dev`
across the CLI codebase.

## Changes
- Made `BuildMode::Dev` the new default (was `Debug`)
- Added validation in `HelixConfig::validate()` to reject configs using
`build_mode = "debug"` with helpful error message
- Replaced `BuildMode::Debug` match arms with `unreachable!()` panics in
`image_name()` and `generate_dockerfile()`
- Updated `add` and `migrate` commands to use `BuildMode::Dev`
- Refactored several match statements to use more concise destructuring
patterns

## Critical Issues Found
Two locations still reference `BuildMode::Debug` which will cause
runtime panics:
1. `config.rs:484` - `default_config()` method still creates instances
with `BuildMode::Debug`
2. `docker.rs:821` - `remove_instance_images()` tries to call
`image_name()` with `BuildMode::Debug`, which will hit the unreachable
panic

Both need to be updated to use `BuildMode::Dev` instead.

<details><summary><h3>Important Files Changed</h3></summary>




| Filename | Overview |
|----------|----------|
| helix-cli/src/config.rs | added validation for `BuildMode::Debug`,
moved it to deprecated enum variant, added helper method. Critical bug:
`default_config()` still uses `BuildMode::Debug` |
| helix-cli/src/docker.rs | replaced `BuildMode::Debug` with unreachable
panic calls. Critical bug in `remove_instance_images()` still tries to
create debug_image which will panic |
| helix-cli/src/commands/add.rs | changed default `build_mode` from
`Debug` to `Dev` for new local instances |
| helix-cli/src/commands/migrate.rs | updated migration to use
`BuildMode::Dev` instead of `BuildMode::Debug` |
| helix-cli/src/commands/integrations/ecr.rs | replaced
`BuildMode::Debug` case with unreachable panic |

</details>


</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Config
    participant Docker
    participant ECR

    User->>CLI: helix add local
    CLI->>Config: Create LocalInstanceConfig
    Config->>Config: Set build_mode = BuildMode::Dev
    Note over Config: Changed from Debug to Dev
    CLI->>Config: Save helix.toml
    Config->>Config: validate()
    Config->>Config: Check build_mode != Debug
    alt build_mode is Debug
        Config-->>CLI: Error: debug mode removed
    else build_mode is Dev or Release
        Config-->>CLI: Validation passed
    end

    User->>CLI: helix build instance
    CLI->>Docker: build_image()
    Docker->>Docker: image_name(BuildMode)
    alt BuildMode::Debug
        Docker-->>CLI: unreachable! panic
    else BuildMode::Dev
        Docker->>Docker: Tag as "dev"
    else BuildMode::Release
        Docker->>Docker: Tag as "latest"
    end
    Docker-->>CLI: Build complete

    User->>CLI: helix remove instance
    CLI->>Docker: remove_instance_images()
    Docker->>Docker: image_name(BuildMode::Debug)
    Note over Docker: BUG: Will panic!
    Docker-->>CLI: unreachable! panic
```
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
<!-- greptile_comment -->

<h3>Greptile Summary</h3>


Refactored upsert operations from creation-style syntax to
traversal-style chained syntax. Previously, upserts were standalone
creation statements like `UpsertN<Person>({fields})`. Now they are
traversal steps chained after iterators: `existing::UpsertN({fields})`.

Key changes:
- Grammar moved upsert operations from `creation_stmt` to `last_step` in
traversal chains
- Added `TraversalValue::Empty` variant to represent empty iterator
states during upserts
- Refactored compiler types: moved `UpsertNode/UpsertEdge/UpsertVector`
from `ExpressionType` to `StepType` as `UpsertN/UpsertE/UpsertV`
- Implemented complete upsert engine logic: updates existing items when
iterator has values, creates new items when empty
- Properly handles secondary indices, BM25 updates, and vector data
conversion for all entity types
- Updated all test queries to use new chained syntax

<details><summary><h3>Important Files Changed</h3></summary>




| Filename | Overview |
|----------|----------|
| helix-db/src/grammar.pest | Moved upsert operations from creation
statements to last_step traversal operations, changing syntax from
`UpsertN<Type>` to `::UpsertN({fields})` |
| helix-db/src/helix_engine/traversal_core/ops/util/upsert.rs |
Implemented complete upsert logic for nodes, edges, and vectors with
proper secondary index handling and BM25 updates; handles
VectorWithoutData conversion |
| helix-db/src/helixc/parser/types.rs | Refactored upsert types from
ExpressionType (creation statements) to StepType (traversal steps),
adding Upsert, UpsertN, UpsertE, UpsertV variants |
| helix-db/src/helixc/analyzer/methods/traversal_validation.rs | Added
validation logic for all upsert operations with field existence checks
and type validation using get_singular_type helper |
| helix-db/src/helixc/generator/traversal_steps.rs | Implemented code
generation for UpsertN, UpsertE, UpsertV operations with proper
G::new_mut_from_iter wrapping and source handling |

</details>


</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant Parser
    participant Analyzer
    participant Generator
    participant Engine
    
    User->>Parser: Write query with ::UpsertN({fields})
    Parser->>Parser: Parse grammar (upsert_n in last_step)
    Parser->>Parser: Create UpsertN StepType
    Parser-->>Analyzer: AST with UpsertN step
    
    Analyzer->>Analyzer: Validate traversal type
    Analyzer->>Analyzer: Check field existence on schema
    Analyzer->>Analyzer: Set TraversalType::UpsertN
    Analyzer-->>Generator: Validated AST
    
    Generator->>Generator: Generate G::new_mut_from_iter()
    Generator->>Generator: Generate .upsert_n(label, props)
    Generator-->>Engine: Rust code
    
    Engine->>Engine: Execute traversal iterator
    alt Iterator has items
        Engine->>Engine: Update existing node/edge/vector
        Engine->>Engine: Merge properties
        Engine->>Engine: Update secondary indices
        Engine->>Engine: Update BM25 index
    else Iterator empty
        Engine->>Engine: Create new item with label
        Engine->>Engine: Insert properties
        Engine->>Engine: Create secondary indices
        Engine->>Engine: Create BM25 entry
    end
    Engine-->>User: Result (updated or created item)
```
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
…tui, removing debug mode (#814)

<!-- greptile_comment -->

<h3>Greptile Summary</h3>


This PR refactors the upsert functionality and removes the deprecated
debug build mode.

**Major Changes:**

- **Upsert refactoring**: Moved upsert operations from creation
statements to traversal steps, introducing new `UpsertN`, `UpsertE`, and
`UpsertV` step types that work within traversal chains
- **Grammar changes**: Updated grammar to support `::UpsertN({fields})`,
`::UpsertE({fields})::From()::To()`, and `::UpsertV(data, {fields})` as
traversal steps
- **Vector upsert fix**: Added proper handling for
`VectorNodeWithoutVectorData` in upsert_v operation with comprehensive
secondary index management
- **Debug mode removal**: Replaced `build_mode = "debug"` with
`build_mode = "dev"`, adding validation to prevent use of deprecated
mode
- **Code generation fixes**: Resolved placeholder variables (`_` and
`val`) correctly and added distinction between single source
(`std::iter::once()`) and iterator sources (`.iter().cloned()`)
- **TUI improvements**: Removed mouse capture from logs TUI for simpler
terminal handling
- **Test updates**: Updated 100+ test TOML files and added comprehensive
test coverage for new upsert syntax

**Issues Found:**

- Potential duplicate secondary index insertions in vector upsert logic
when updating properties (lines 848-945 in upsert.rs)

<details><summary><h3>Important Files Changed</h3></summary>




| Filename | Overview |
|----------|----------|
| helix-db/src/helix_engine/traversal_core/ops/util/upsert.rs | Added
VectorNodeWithoutVectorData handling in upsert_v operation with
comprehensive secondary index updates |
| helix-db/src/grammar.pest | Moved upsert operations from creation
statements to traversal steps, supporting both Update and UpsertN/E/V
syntax |
| helix-db/src/helixc/parser/types.rs | Refactored upsert types from
Expression-based to Step-based, adding UpsertN, UpsertE, UpsertV step
types |
| helix-db/src/helixc/parser/graph_step_parse_methods.rs | Implemented
parsing methods for UpsertN, UpsertE, and UpsertV steps with proper
field and connection handling |
| helix-db/src/helixc/analyzer/methods/traversal_validation.rs | Added
validation for UpsertN, UpsertE, UpsertV steps with proper type checking
and source variable handling |
| helix-db/src/helixc/generator/traversal_steps.rs | Added code
generation for UpsertN/E/V traversal types with proper single/plural
source handling |
| helix-db/src/helixc/generator/queries.rs | Fixed placeholder
resolution for '_' and 'val' variables, added single source vs iterator
distinction |
| helix-cli/src/config.rs | Removed debug mode in favor of dev mode with
validation to prevent usage of deprecated build_mode |

</details>


</details>


<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Parser
    participant Analyzer
    participant Generator
    participant Engine
    
    Note over User,Engine: Upsert Operation Flow
    
    User->>CLI: helix migrate (BuildMode validation)
    CLI->>CLI: Validate config (no Debug mode)
    alt Debug mode detected
        CLI-->>User: Error: use dev mode instead
    end
    
    User->>Parser: Query with UpsertN/E/V
    Parser->>Parser: Parse grammar (step syntax)
    Note over Parser: UpsertN({fields})<br/>UpsertE({fields})::From()::To()<br/>UpsertV(data, {fields})
    Parser->>Parser: Create UpsertN/E/V AST nodes
    
    Parser->>Analyzer: Validate traversal
    Analyzer->>Analyzer: Check type compatibility
    Analyzer->>Analyzer: Validate field existence
    Analyzer->>Analyzer: Determine source (single/plural)
    Note over Analyzer: Extract source variable<br/>from traversal context
    
    Analyzer->>Generator: Generate code
    Generator->>Generator: Resolve placeholders ('_', 'val')
    Generator->>Generator: Choose iterator type
    alt Single source
        Generator->>Generator: std::iter::once(item.clone())
    else Multiple sources
        Generator->>Generator: items.iter().cloned()
    end
    
    Generator->>Engine: G::new_mut_from() or G::new_mut_from_iter()
    Engine->>Engine: Execute upsert operation
    alt Iterator has items
        Engine->>Engine: Update existing items
        Engine->>Engine: Update secondary indices
        Engine->>Engine: Update BM25 index
    else Iterator empty
        Engine->>Engine: Create new item with label
        Engine->>Engine: Insert secondary indices
    end
    
    Engine-->>User: Return upserted items
```
</details>


<!-- greptile_other_comments_section -->

<!-- /greptile_comment -->
xav-db and others added 27 commits March 17, 2026 21:46
- Added new fields for minimum and maximum gateway and hyperscale counts in `CliProjectEnterpriseCluster`.
- Introduced methods to resolve minimum and maximum counts for gateways and hyperscales.
- Updated error messages for missing count fields in project cluster responses for clarity.
- Refactored the enterprise cluster creation request to utilize role-based count fields.
- Enhanced tests to validate the new configuration and ensure correct behavior.

These changes improve the flexibility and clarity of enterprise cluster management within the CLI.
…URL to version 0.1.1 for improved stability.
…ise-ql, and sonic-rs to latest versions with checksums for improved stability and security.
<!-- greptile_comment -->

<h3>Greptile Summary</h3>

This PR adds enterprise cluster support to the Helix CLI, covering the
full lifecycle: `helix init` with enterprise cluster provisioning,
`helix compile` for Rust-based query bundles, `helix push` (renamed from
raw `.rs` upload to `queries.json` + source snapshot upload), and a
fully bidirectional `helix sync` with manifest-aware reconciliation for
enterprise clusters.

**Key changes:**
- **Enterprise deploy** (`helix.rs`): SSE streaming replaced by a simple
HTTP POST; the query bundle (`queries.json`) is base64-encoded and
uploaded alongside an allowlisted source snapshot, with payload-size
enforcement (20 MB limit).
- **Enterprise sync** (`sync.rs`): New
`reconcile_enterprise_cluster_snapshot` mirrors the standard-cluster
sync flow — fetches the remote manifest, computes a diff, and guides the
user through push/pull/no-op decisions. Post-pull, `queries.json` is
regenerated via `cargo run`.
- **Workspace flow** (`workspace_flow.rs`): Enterprise cluster creation
now uses role-based count fields (`min_gateway_count`,
`max_hyperscale_count`, etc.) instead of the legacy
`min_instances`/`max_instances` pair; non-interactive mode gets sensible
HA defaults.
- **Init** (`init.rs`): `--name` is now honoured for all deployment
backends; scaffold files are not overwritten without confirmation.
- **Auth** (`auth.rs`): `github_login().await.unwrap()` corrected to
`github_login().await?`.
- **Dependency bumps**: `sonic-rs` 0.5.3 → 0.5.7, new `base64` and
`helix-enterprise-ql` git dependency.
- **Issues found:**
- P1 in `helix.rs`: the debug-build fallback URL was changed from
`localhost:8080` to a live staging ALB
(`http://helix-cloud-build-staging-gw-alb-72217854.us-east-1.elb.amazonaws.com`),
so any debug build will silently route to staging.
- P2 in `helix.rs` / `sync.rs`: `should_descend_enterprise_source_dir`
and `should_include_enterprise_source_file` are duplicated verbatim
across both files.
- P2 in `workspace_flow.rs`: `build_enterprise_cluster_request` takes
`min_instances`/`max_instances` parameters but maps them to fixed
gateway/hyperscale counts respectively, not to scaling bounds — the
naming is misleading.
- P2 in `sync.rs`: `resolved_gateway_max_count` falls back to
`min_instances` (by design, since legacy `min_instances` = gateway
count), but this is unintuitive without an explanatory comment.

<details><summary><h3>Important Files Changed</h3></summary>

| Filename | Overview |
|----------|----------|
| helix-cli/src/commands/integrations/helix.rs | Major enterprise deploy
rewrite: replaces SSE streaming with a simple HTTP POST, adds
base64-encoded `queries.json` upload, source snapshot collection with
allowlist filtering, and payload size enforcement. Contains a P1 issue
where the debug-build fallback URL was changed from `localhost:8080` to
a live staging ALB URL, and P2 code duplication of source-file filtering
helpers that also exist in sync.rs. |
| helix-cli/src/commands/sync.rs | Extensive enterprise sync overhaul:
adds `reconcile_enterprise_cluster_snapshot`, bidirectional manifest
comparison for Rust source files, safe path sanitisation on pull, and
post-pull `queries.json` regeneration. The `CliProjectEnterpriseCluster`
now supports both legacy (`min_instances`/`max_instances`) and
role-based (`min_gateway_count` etc.) count fields with compatibility
resolution methods. Fallback logic in `resolved_gateway_max_count` uses
`min_instances` (not `max_instances`) which is by design but deserves a
comment. The source-file filtering helpers are duplicated from helix.rs.
|
| helix-cli/src/commands/workspace_flow.rs | Enterprise cluster creation
now uses a typed `CreateEnterpriseClusterRequest` struct with role-based
count fields, and supports non-interactive mode with sensible defaults.
The `build_enterprise_cluster_request` helper conflates
`min_instances`/`max_instances` parameter names with gateway/hyperscale
fixed counts — the naming should be clarified. |
| helix-cli/src/commands/compile.rs | Adds enterprise compile path:
detects a `Cargo.toml` in the queries directory and runs `cargo run` to
generate `queries.json`, then optionally copies it to a user-specified
output path. Clean implementation with good error messages. |
| helix-cli/src/commands/init.rs | Init command now honours `--name` for
all deployment types (Helix, ECR, Fly, Local), avoids overwriting
pre-existing scaffold files via `write_starter_file`, and generates
context-aware "next steps" instructions. Clean, well-tested changes. |
| helix-cli/src/commands/auth.rs | Single-line fix:
`github_login().await.unwrap()` replaced with `github_login().await?`,
propagating auth errors correctly instead of panicking. |
| helix-cli/ENTERPRISE_CLI_TEST_PLAN.md | New testing plan document for
enterprise CLI features. Many items are still unchecked (e.g.,
workspace-type enforcement, scale/limit tests, CI contract guards),
which aligns with the stated status that not all critical blockers are
resolved yet. |
| helix-cli/src/prompts.rs | Adds `confirm_overwrite` helper and updates
`build_init_deployment_command` to prompt for an instance name
(defaulting to the project name) for all deployment types. Clean
additions. |

</details>

</details>

<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant CLI as Helix CLI
    participant Cargo as cargo run
    participant Cloud as Helix Cloud API
    participant S3 as Object Storage

    Note over Dev,S3: helix push <enterprise-instance>
    Dev->>CLI: helix push
    CLI->>Cargo: cargo run --manifest-path queries/Cargo.toml
    Cargo-->>CLI: generates queries.json
    CLI->>CLI: collect_enterprise_source_files()<br/>(allowlist: Cargo.toml, src/**, .cargo/*.toml, …)
    CLI->>CLI: base64-encode queries.json
    CLI->>CLI: check payload ≤ 20 MB
    CLI->>Cloud: POST /api/cli/enterprise-clusters/{id}/deploy<br/>{queries_json_b64, source_files, helix_toml}
    Cloud->>S3: upload queries.json + source snapshot
    Cloud-->>CLI: {s3_key, size_bytes}
    CLI-->>Dev: Enterprise cluster deployed

    Note over Dev,S3: helix sync <enterprise-instance>
    Dev->>CLI: helix sync
    CLI->>Cloud: GET /api/cli/enterprise-clusters/{id}/sync
    Cloud-->>CLI: {source_files, file_metadata, helix_toml}
    CLI->>CLI: build_remote_enterprise_manifest()<br/>sanitize_relative_path + allowlist filter
    CLI->>CLI: collect_local_enterprise_manifest()
    CLI->>CLI: compare_manifests() → BothEmpty/InSync/LocalOnly/RemoteOnly/Diverged
    alt Pull chosen
        CLI->>CLI: pull_remote_enterprise_snapshot_into_local()<br/>write files + remove local-only files
        CLI->>Cargo: cargo run (regenerate queries.json)
        Cargo-->>CLI: queries.json updated
    else Push chosen
        CLI->>Cloud: POST /api/cli/enterprise-clusters/{id}/deploy
    end
    CLI->>Cloud: GET /api/cli/enterprise-clusters/{id}/project
    Cloud-->>CLI: project_id
    CLI->>Cloud: GET /api/cli/projects/{id}/clusters
    Cloud-->>CLI: cluster metadata
    CLI->>CLI: reconcile_project_config_from_cloud()<br/>update helix.toml
    CLI-->>Dev: Sync complete
```
</details>

<sub>Last reviewed commit: ["Enhance enterprise
c..."](https://github.com/helixdb/helix-db/commit/2d82e1a79f1fe06458a33e1e630a2ecd017e1c3e)</sub>

> Greptile also left **1 inline comment** on this PR.

<!-- /greptile_comment -->
<!-- greptile_comment -->

<h3>Greptile Summary</h3>

This PR adds enterprise cluster support to the Helix CLI (`v2.3.3 →
v2.3.4`), introducing a new push/sync flow for Rust-based enterprise
query projects (powered by the new `helix-enterprise-ql` crate). The key
changes are:

- **Enterprise deploy** (`helix push`): replaces the old SSE-streaming
`.rs` file upload with a single JSON POST containing a base64-encoded
`queries.json` (generated by `cargo run`) plus a sanitized source
snapshot with allowlist filtering and 20 MB / 2 000-file limits.
- **Enterprise sync** (`helix sync`): rewrites the sync flow to use
manifest-based reconciliation (sha256 + timestamps) against a new
`source_files`/`file_metadata` API response shape, with
pull/push/tie-break prompts and automatic `queries.json` regeneration
after a pull.
- **Cluster creation** (`helix add`, `helix init`): enterprise cluster
creation is locked to HA mode with role-based `min/max_gateway_count` +
`min/max_hyperscale_count` fields; non-interactive defaults are added.
- **`helix compile`**: detects an enterprise queries project by the
presence of `Cargo.toml` and runs `cargo run` to produce `queries.json`.
- **`helix init`**: now accepts `--name` for all deployment types,
preserves existing scaffold files in non-interactive mode, and improves
next-step instructions.

**Notable issues found:**
- The `CLOUD_AUTHORITY` debug-mode default was changed from
`\"localhost:8080\"` to a live staging AWS ALB URL. This will route all
debug builds to staging infrastructure instead of localhost, breaking
local development for all contributors.
- The enterprise source file count check in
`collect_enterprise_source_files` triggers at `files.len() > 2000` after
insertion, effectively allowing 2 001 files before erroring.
- The `else { (1, 1) }` branch in `create_enterprise_cluster_flow` is
dead code since `availability_mode` is unconditionally
`AvailabilityMode::Ha`.

<details><summary><h3>Important Files Changed</h3></summary>

| Filename | Overview |
|----------|----------|
| helix-cli/src/commands/integrations/helix.rs | Core enterprise deploy:
rewrites deploy from SSE-streaming to a single JSON POST with base64
queries.json + source snapshot; adds source file collection with
allowlist + size/count limits. Critical issue: debug CLOUD_AUTHORITY was
changed from localhost:8080 to a live staging AWS ALB URL. Minor
off-by-one in file count check (allows 2001, not 2000). |
| helix-cli/src/commands/sync.rs | Large refactor of enterprise sync:
response schema changed from rs_files to source_files+file_metadata,
adds manifest-based diff/pull/push reconciliation with sha256+timestamp
comparison, safe path sanitization, and local cargo regeneration after
pull. Well tested. |
| helix-cli/src/commands/workspace_flow.rs | Adds enterprise cluster
creation with role-based min/max_gateway_count +
min/max_hyperscale_count fields, non-interactive defaults, and
preferred_cluster_name propagation. Contains unreachable else { (1, 1) }
branch since availability_mode is hardcoded to Ha. |
| helix-cli/src/commands/compile.rs | Adds enterprise compile path:
detects Cargo.toml in queries dir and runs cargo run to generate
queries.json, with optional output path resolution. Clean implementation
with good error messages. |
| helix-cli/src/commands/init.rs | Extended init to accept --name for
all deployment types, adds non-interactive defaults, smarter project
structure creation (preserves existing files in non-interactive mode),
and dynamic next-step instructions. |

</details>

</details>

<details><summary><h3>Sequence Diagram</h3></summary>

```mermaid
sequenceDiagram
    participant CLI as helix CLI
    participant Cargo as cargo run
    participant FS as Local Filesystem
    participant API as Helix Cloud API

    Note over CLI,API: helix push (enterprise)
    CLI->>FS: Read queries_project_dir/Cargo.toml
    CLI->>Cargo: cargo run --manifest-path Cargo.toml
    Cargo-->>FS: writes queries.json
    Cargo-->>CLI: exit status
    CLI->>FS: Read queries.json → base64 encode
    CLI->>FS: Walk source files (allowlist filter, ≤2000 files, ≤20MB)
    CLI->>API: POST /api/cli/enterprise-clusters/{id}/deploy
    API-->>CLI: 200 {s3_key, size}
    CLI->>CLI: output success

    Note over CLI,API: helix sync (enterprise)
    CLI->>API: GET /api/cli/enterprise-clusters/{id}/sync
    API-->>CLI: {source_files, file_metadata, helix_toml}
    CLI->>FS: collect_local_enterprise_manifest
    CLI->>CLI: compare_manifests (sha256 + timestamps)
    alt Remote newer / RemoteOnly
        CLI->>FS: pull_remote_enterprise_snapshot_into_local
        CLI->>Cargo: cargo run (regenerate queries.json)
        Cargo-->>FS: writes queries.json
    else Local newer / LocalOnly
        CLI->>API: push_local_enterprise_snapshot_to_cluster
    else In sync / BothEmpty
        CLI->>CLI: no-op
    end
```
</details>

<sub>Reviews (1): Last reviewed commit: ["Update helix-cli version to
2.3.4 in
Car..."](HelixDB/helix-db@dae5d71)
| [Re-trigger
Greptile](https://app.greptile.com/api/retrigger?id=26929311)</sub>

> Greptile also left **3 inline comments** on this PR.

<!-- /greptile_comment -->
- Rename HQL references to NQL in documentation and comments
- Rename hql-tests directory to nql-tests
- Update Cargo.toml workspace membership
- Update test labels and scripts
- Fix clippy check exclusions
- Update GitHub issue templates

All tests pass after these changes.
Documentation adapted from HelixDB documentation structure:
- Getting Started: overview and installation guide
- NexusQL: query language, schema definition, CRUD, traversals, vectors
- CLI: command reference and project configuration
- SDKs: TypeScript, Python, and Rust integration
- Features: MCP tools, embeddings, RAG, security, multi-model
- Overview: about, distinctive features, when to use
- Programming Interfaces: 5-minute quick start
- mkdocs.yml: Material theme configuration with navigation structure
- .github/workflows/docs.yml: Auto-deploy workflow for GitHub Pages
- Documentation will be available at sensible-analytics.github.io/NexusDB/
* Add SQLite-style table partitioning documentation for Nexus TP (#1)

Co-authored-by: prabhatranjan <prabhatranjan@example.com>

* feat: add nexus-explorer Tauri desktop app with CI release pipeline

Add NexusDB Explorer as a workspace module — a native macOS desktop app
built with Tauri 2.0 + SolidJS for graph database management.

Features:
- Database lifecycle (create, open, close, list, stats)
- Full Node/Edge CRUD with inline property editing
- Force-graph 2D visualization with auto-layout
- NQL query editor with CodeMirror 6
- Schema browser with label counts and totals
- Embedded mode using nexus-db directly (no FFI)

CI: GitHub Actions workflow triggered on explorer-v* tags that builds
.app and .dmg for both aarch64 and x86_64 macOS, auto-publishing
to GitHub Releases.

* fix(ci): install tauri-cli, update runner to macos-14

* fix(ci): zip .app bundle before upload, fix working directory path

* fix(ci): rewrite workflow with softprops/action-gh-release v2, single job

---------

Co-authored-by: prabhatranjan <prabhatranjan@example.com>
Use official actions/upload-pages-artifact and actions/deploy-pages
for proper GitHub Pages integration instead of mkdocs gh-deploy.
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
* Clean up README by removing HTML and badges

Removed HTML structure and badges from README.

* Fix CI: exclude nexus-cli and nexus-explorer from clippy check

These packages require GTK/glib system libraries not available on GitHub Actions runners.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

---------

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
Creates ~/.nexus/demo with sample data (people, places, events, symptoms, medications + relationships) so users can explore NexusDB immediately.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
Frontend now automatically selects the first available database (demo) and loads its nodes, edges, and schema on mount.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
)

* feat: add E2E test pipeline for NexusDB Explorer

- Add comprehensive E2E test script (scripts/e2e-test.sh)
- Add GitHub Actions workflow for E2E testing
- Tests cover: demo DB creation, data integrity, graph rendering, frontend auto-select, Rust compilation, database CRUD
- 20 test cases covering all critical paths

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* feat: implement NQL query execution for embedded mode

- Replace stub nql_execute with functional query engine
- Support MATCH, SEARCH, GET, FIND, COUNT query patterns
- MATCH (n:Label) returns filtered nodes by label
- MATCH (n)-[r]->(m) returns nodes with relationship edges
- GET/FIND nodes WHERE label contains 'X' returns matching nodes
- COUNT nodes/edges returns total counts
- Default query (no pattern) returns all nodes and edges

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* fix: restore graph rendering fix and force-graph types

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* fix: skip demo DB directory check in CI environment

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* fix: correct bash syntax in e2e-test.sh

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

---------

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
* feat: redesign SensibleDB Explorer UI with user-centric design

- Rebrand from NexusDB to SensibleDB with new logo and design system
- Add Home view with onboarding, demo cards, and quick stats
- Add Chat view for natural language data queries
- Add Report view with metric cards, findings, and type breakdown
- Redesign sidebar with Home/Graph/Chat/Report navigation
- Add header bar with SensibleDB logo and database badge
- Add status bar showing connection info
- Implement consistent CSS design tokens (Indigo/Violet/Cyan palette)
- Fix NQL query execution (case-insensitive matching, camelCase params)
- Fix graph rendering with SVG-based visualization + interactions
- Add demo databases: health-patterns and project-management
- Create design document at docs/design/explorer-redesign.md

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* fix: update E2E tests for redesigned UI and dual demo databases

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* fix: correct edge count in E2E test to 35

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

---------

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
…#15)

* fix: update E2E tests for redesigned UI and dual demo databases

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* fix: correct edge count in E2E test to 35

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* feat: complete explorer UI redesign with E2E test coverage

Implement all 5 phases of the SensibleDB Explorer redesign:

Phase 1 (verified): Foundation - CSS tokens, component styles, graph interactions
Phase 2: Home & Onboarding - Guided tour, connection wizard, contextual tooltips
Phase 3a: Chat Interface - NL→NQL translation, structured responses, follow-up chips
Phase 3b: Graph View - Node cards with foreignObject, InspectorPanel, Query Bar
Phase 4: Reports - Formatted export (txt/PDF), share, time period selector
Phase 5: Polish - Animations, ErrorBoundary, keyboard shortcuts (1-8, /, Ctrl+K/G/R)

New components:
- GuidedTour: 5-step overlay tour with localStorage persistence
- ConnectionWizard: 4-step modal wizard for data sources
- ContextualTooltip: Reusable tooltip with glossary explanations
- InspectorPanel: Node details with connected items and 'Ask about' button
- ErrorBoundary: Per-view error boundary with retry

E2E testing:
- Playwright setup with Tauri API mock for browser testing
- 56 tests covering all views, navigation, keyboard shortcuts, design system
- Test categories: Home, Graph, Chat, Report, Navigation, Onboarding, NQL, Data Views

* fix: E2E test infrastructure - webServer config, port alignment, data flow tests

- Add webServer config to playwright.config.ts with auto-start on port 1420
- Align Playwright baseURL with Vite config port (1420)
- Add reuseExistingServer for local dev convenience
- Add 7 data flow tests verifying mock data renders with correct values:
  - Correct database name, node count (10), edge count (10)
  - Correct metric values in report view
  - Correct item/connection counts in status bar and chat welcome message
- No skipped or deleted tests

* ci: add Playwright E2E test job to CI pipeline

- Add playwright-e2e job to .github/workflows/e2e-tests.yml
- Installs Node.js 20, frontend dependencies, and Playwright Chromium
- Runs npx playwright test with CI=true environment
- Fix e2e-test.sh: replace hardcoded node/edge counts with assert_gt > 0

* fix: replace fake error boundary test with real error path tests

- Remove toBeGreaterThanOrEqual(0) assertion (always passes, verifies nothing)
- Add 3 real error handling tests:
  - error boundary renders when component throws (navigate to unknown route)
  - app handles empty database gracefully (main content visible)
  - status bar shows zero counts when no db selected
- Total: 65 E2E tests, all passing

* ci: fix Playwright E2E job - use pnpm, add test file path triggers

- Change npm install to pnpm install (project uses pnpm exclusively)
- Add pnpm/action-setup@v4 step
- Add e2e/** and playwright.config.ts to path triggers so test changes re-run CI

* fix: commit pnpm vite config and remove duplicate push trigger in CI

- playwright.config.ts: use pnpm vite instead of npx vite for CI consistency
- e2e-tests.yml: remove duplicate push block that caused YAML override bug

* fix: improve E2E test quality - real assertions, auto-waiting, stable error tests

- Replace fake error handling tests with real stability assertions
- Replace waitForTimeout with auto-waiting toBeVisible({ timeout: 5000 })
- Add content assertions in Data Flow tests
- 65 tests passing consistently

* ci: streamline E2E workflow - remove legacy bash script job, use ubuntu-latest

- Remove redundant e2e-tests job that ran grep-based static checks
- Change playwright-e2e from macos-latest to ubuntu-latest (cost + reliability)
- Keep only the real Playwright E2E browser tests in CI

* fix: replace toBe(true) with idiomatic toBeVisible() auto-waiting assertion

* ci: fix pnpm action - specify version 10 for action-setup@v4

* ci: install root npm deps for Playwright before running E2E tests

---------

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
* fix: update E2E tests for redesigned UI and dual demo databases

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* fix: correct edge count in E2E test to 35

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>

* feat: complete explorer UI redesign with E2E test coverage

Implement all 5 phases of the SensibleDB Explorer redesign:

Phase 1 (verified): Foundation - CSS tokens, component styles, graph interactions
Phase 2: Home & Onboarding - Guided tour, connection wizard, contextual tooltips
Phase 3a: Chat Interface - NL→NQL translation, structured responses, follow-up chips
Phase 3b: Graph View - Node cards with foreignObject, InspectorPanel, Query Bar
Phase 4: Reports - Formatted export (txt/PDF), share, time period selector
Phase 5: Polish - Animations, ErrorBoundary, keyboard shortcuts (1-8, /, Ctrl+K/G/R)

New components:
- GuidedTour: 5-step overlay tour with localStorage persistence
- ConnectionWizard: 4-step modal wizard for data sources
- ContextualTooltip: Reusable tooltip with glossary explanations
- InspectorPanel: Node details with connected items and 'Ask about' button
- ErrorBoundary: Per-view error boundary with retry

E2E testing:
- Playwright setup with Tauri API mock for browser testing
- 56 tests covering all views, navigation, keyboard shortcuts, design system
- Test categories: Home, Graph, Chat, Report, Navigation, Onboarding, NQL, Data Views

* fix: E2E test infrastructure - webServer config, port alignment, data flow tests

- Add webServer config to playwright.config.ts with auto-start on port 1420
- Align Playwright baseURL with Vite config port (1420)
- Add reuseExistingServer for local dev convenience
- Add 7 data flow tests verifying mock data renders with correct values:
  - Correct database name, node count (10), edge count (10)
  - Correct metric values in report view
  - Correct item/connection counts in status bar and chat welcome message
- No skipped or deleted tests

* ci: add Playwright E2E test job to CI pipeline

- Add playwright-e2e job to .github/workflows/e2e-tests.yml
- Installs Node.js 20, frontend dependencies, and Playwright Chromium
- Runs npx playwright test with CI=true environment
- Fix e2e-test.sh: replace hardcoded node/edge counts with assert_gt > 0

* fix: replace fake error boundary test with real error path tests

- Remove toBeGreaterThanOrEqual(0) assertion (always passes, verifies nothing)
- Add 3 real error handling tests:
  - error boundary renders when component throws (navigate to unknown route)
  - app handles empty database gracefully (main content visible)
  - status bar shows zero counts when no db selected
- Total: 65 E2E tests, all passing

* ci: fix Playwright E2E job - use pnpm, add test file path triggers

- Change npm install to pnpm install (project uses pnpm exclusively)
- Add pnpm/action-setup@v4 step
- Add e2e/** and playwright.config.ts to path triggers so test changes re-run CI

* fix: commit pnpm vite config and remove duplicate push trigger in CI

- playwright.config.ts: use pnpm vite instead of npx vite for CI consistency
- e2e-tests.yml: remove duplicate push block that caused YAML override bug

* fix: improve E2E test quality - real assertions, auto-waiting, stable error tests

- Replace fake error handling tests with real stability assertions
- Replace waitForTimeout with auto-waiting toBeVisible({ timeout: 5000 })
- Add content assertions in Data Flow tests
- 65 tests passing consistently

* ci: streamline E2E workflow - remove legacy bash script job, use ubuntu-latest

- Remove redundant e2e-tests job that ran grep-based static checks
- Change playwright-e2e from macos-latest to ubuntu-latest (cost + reliability)
- Keep only the real Playwright E2E browser tests in CI

* fix: replace toBe(true) with idiomatic toBeVisible() auto-waiting assertion

* ci: fix pnpm action - specify version 10 for action-setup@v4

* ci: install root npm deps for Playwright before running E2E tests

* docs: add architectural guardrails and diagrams

---------

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
- Renamed directories: nexus-cli → sensibledb-cli, nexus-db → sensibledb-db, nexus-explorer → sensibledb-explorer, nexus-container → sensibledb-container, nexus-macros → sensibledb-macros
- Renamed internal Rust modules: nexus_engine → sensibledb_engine, nexus_gateway → sensibledb_gateway, nexusc → sensibledbc
- Updated all Cargo.toml workspace members, package names, and dependency paths
- Updated all Rust source imports and crate references
- Rewrote all documentation with SensibleDB branding
- Updated CI/CD workflows, install scripts, and config files
- Renamed nexus.toml → sensibledb.toml, .nexus/ → .sensibledb/
- Updated icon assets and Tauri config with SensibleDB branding
- Updated HTML title, guided tour text, and all user-facing strings
- cargo check --workspace passes
- NexusGraphEngine → SensibleGraphEngine
- NexusGateway → SensibleGateway
- NexusGraphStorage → SensibleGraphStorage
- nexus_node macro → sensible_node
- nexus-ts/nexus-py → sensible-ts/sensible-py
- NexusQL → SensibleQL, nexusql/ → sensibleql/
@rprabhat rprabhat closed this Apr 5, 2026
@rprabhat rprabhat deleted the feat/rebrand-final branch April 5, 2026 22:43
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

6 participants