
[codex] publish LAI platform milestones#4

Draft
N3uralCreativity wants to merge 16 commits into main from codex/publish-lai-platform

Conversation

@N3uralCreativity
Owner

What changed

This publishes the local LAI platform work accumulated on the protected main branch into a reviewable draft PR.

The branch includes:

  • the orchestration core, routing, providers, persistent jobs, and API/dashboard surfaces
  • worker service, stale-job recovery, execution telemetry, smoke diagnostics, and workstation validation
  • recent fixes for local Transformers execution and provider regression coverage

Why it changed

Direct pushes to main are blocked by repository protection rules, so the safest path was to push the current work to a dedicated branch and open a PR for CI and review.

User and developer impact

  • the full local LAI platform work is now on GitHub under a shareable branch
  • CI can run against the protected-branch workflow
  • follow-up changes should target this branch or future feature branches instead of the local-only main branch

Root cause

The remote repository requires pull requests and the ci status check before updating main.

Validation

  • ruff check src tests
  • pytest (37 passed, 5 skipped)
  • local route explanation and dashboard health checks
  • real local small-model run via the project .venv

@qodo-code-review

CI Feedback 🧐

A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

Action: test

Failed stage: Pytest [❌]

Failed test name: tests/unit/test_routing.py::test_routing_prefers_deep_work_for_complex_prompt

Failure summary:

The GitHub Action failed because the pytest suite had a failing unit test:

  • tests/unit/test_routing.py::test_routing_prefers_deep_work_for_complex_prompt failed at tests/unit/test_routing.py:42.
  • The test expected decision.executor_model_id to be execution-large, but the routing engine returned openai-general (while decision.matched_tier_id was deep-work).
  • This assertion failure caused pytest to exit with code 1, failing the workflow.
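
The failure pattern (tier matched, executor model not) can be reproduced with a minimal sketch. This is an illustration of one plausible cause, not the project's actual routing code: all names below are hypothetical except the identifiers quoted from the failure (deep-work, execution-large, openai-general). The assumption is that tier matching and executor-model selection are separate steps, so the decision can silently fall back to a default model:

```python
from dataclasses import dataclass

# Hypothetical, minimal reconstruction of the observed behaviour.
# Tier matching and executor-model selection are independent steps.


@dataclass
class Decision:
    matched_tier_id: str
    executor_model_id: str


TIER_KEYWORDS = {"deep-work": ("comprehensive", "architecture", "long-running")}
TIER_PREFERRED_MODEL = {"deep-work": "execution-large"}
DEFAULT_MODEL = "openai-general"


def route(prompt: str, registered_models: set[str]) -> Decision:
    text = prompt.lower()
    # Step 1: match a tier by keyword; fall back to a general tier.
    tier = next(
        (t for t, kws in TIER_KEYWORDS.items() if any(k in text for k in kws)),
        "general",
    )
    # Step 2: pick the executor model. If the tier's preferred model is
    # not in the registry, the decision silently degrades to the default,
    # which is exactly the tier/model mismatch the test caught.
    preferred = TIER_PREFERRED_MODEL.get(tier)
    model = preferred if preferred in registered_models else DEFAULT_MODEL
    return Decision(matched_tier_id=tier, executor_model_id=model)
```

Under this sketch, routing the complex prompt against a registry that lacks execution-large yields matched_tier_id == "deep-work" but executor_model_id == "openai-general", matching the CI assertion failure; registering execution-large restores the expected pairing.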

Relevant error logs:
1:  ##[group]Runner Image Provisioner
2:  Hosted Compute Agent
...

292:  collected 42 items
293:  tests/integration/test_orchestration.py .....                            [ 11%]
294:  tests/live/test_local_smoke.py ss                                        [ 16%]
295:  tests/live/test_provider_smoke.py sss                                    [ 23%]
296:  tests/unit/test_api.py ...........                                       [ 50%]
297:  tests/unit/test_config.py .                                              [ 52%]
298:  tests/unit/test_evals.py ..                                              [ 57%]
299:  tests/unit/test_layout.py ..                                             [ 61%]
300:  tests/unit/test_observability.py .                                       [ 64%]
301:  tests/unit/test_providers.py .                                           [ 66%]
302:  tests/unit/test_recovery.py ..                                           [ 71%]
303:  tests/unit/test_routing.py F.                                            [ 76%]
304:  tests/unit/test_smoke.py ....                                            [ 85%]
305:  tests/unit/test_worker_service.py ....                                   [ 95%]
306:  tests/unit/test_workstation.py ..                                        [100%]
307:  =================================== FAILURES ===================================
308:  ______________ test_routing_prefers_deep_work_for_complex_prompt _______________
309:  tmp_path = PosixPath('/tmp/pytest-of-runner/pytest-0/test_routing_prefers_deep_work0')
310:  repo_root = PosixPath('/home/runner/work/LAI/LAI')
311:  def test_routing_prefers_deep_work_for_complex_prompt(tmp_path, repo_root) -> None:
312:      app = _make_application(tmp_path, repo_root)
313:      decision = app.routing_engine.route(
314:          ExecutionRequest(
315:              user_prompt=(
316:                  "Create a comprehensive advanced architecture and complete implementation "
317:                  "strategy for a long-running research platform."
318:              )
319:          )
320:      )
321:      assert decision.matched_tier_id == "deep-work"
322:  >       assert decision.executor_model_id == "execution-large"
323:  E       AssertionError: assert 'openai-general' == 'execution-large'
324:  E         
325:  E         - execution-large
326:  E         + openai-general
327:  tests/unit/test_routing.py:42: AssertionError
328:  =========================== short test summary info ============================
329:  FAILED tests/unit/test_routing.py::test_routing_prefers_deep_work_for_complex_prompt - AssertionError: assert 'openai-general' == 'execution-large'
330:  - execution-large
331:  + openai-general
332:  =================== 1 failed, 36 passed, 5 skipped in 2.64s ====================
333:  ##[error]Process completed with exit code 1.
334:  Post job cleanup.
