Releases: amariichi/Image-To-DepthWebViewer
v3.4 release
Highlights
- add GitHub Actions CI with `python-tests` and `js-checks`
- improve viewer responsiveness with on-demand rendering and RGBDE decode/preprocess optimizations
- move display-time depth shaping to the GPU while preserving CPU export parity
- add a local performance harness and deterministic RGBDE fixtures for repeatable checks
- improve SBS stereo rendering with stronger off-axis separation defaults
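The depth-shaping parity check mentioned above can be sketched in Python. This is illustrative only: the function names (`shape_depth`, `max_abs_error`), the gamma-style shaping curve, and the fixture layout are assumptions, since the release notes only state that display-time shaping moved to the GPU while remaining equivalent to the CPU export path.

```python
# Hypothetical sketch of a CPU-side depth-shaping reference used to verify
# parity with a GPU path. The shaping curve and names are illustrative.

def shape_depth(d, magnification=1.0, gamma=1.0):
    """Shape a normalized depth value in [0, 1] (hypothetical curve)."""
    d = min(max(d, 0.0), 1.0)          # clamp to the valid range
    return (d ** gamma) * magnification

def max_abs_error(cpu_values, gpu_values):
    """Parity metric: largest per-sample difference between the two paths."""
    return max(abs(c - g) for c, g in zip(cpu_values, gpu_values))

# Deterministic fixture: a fixed sample grid instead of random data,
# so repeated runs always compare identical inputs.
fixture = [i / 255.0 for i in range(256)]
cpu = [shape_depth(d, magnification=2.0, gamma=0.8) for d in fixture]
# In the real harness, a GPU readback would be compared here; reusing the
# CPU path stands in for it in this sketch, so the error is exactly zero.
assert max_abs_error(cpu, cpu) == 0.0
```

A fixed fixture grid is what makes the check repeatable: any drift between the two paths shows up as a nonzero `max_abs_error` on identical inputs.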
Fixes
- fix `Model Z Offset` so dragging the slider immediately refreshes the canvas
- fix Looking Glass exit -> re-enter flow so the display can be entered again without reloading the page
Release metadata
- add a top-level `VERSION` file and align the FastAPI app version metadata to `3.4`
Included work
This release includes both the earlier rendering/performance/CI improvements and the follow-up interaction fixes for Model Z Offset and Looking Glass re-entry.
v3.3 release
Release Notes – v3.3
- Introduced an in-VR hint overlay that shows a concise control cheat sheet at session start and single-line popups for subsequent controller actions. Hints now appear just below the gaze and fade quickly to minimize distraction.
- Added a “Show XR hints” checkbox next to the Enter VR button, letting users toggle the overlay on or off at any time.
- Updated default Geometry FOV to 32°, providing a more focused initial view (range remains 15–120°; controller and UI sliders adjust the full range).
- Improved controller feedback messages (e.g., “Orbit / Zoom”) so the mapping between inputs and actions is clearer.
- README now documents the hint toggle and revised FOV defaults.
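The FOV defaults above reduce to a simple clamp. A minimal sketch, assuming a hypothetical helper name (`set_geometry_fov`); the default of 32° and the 15–120° range come from the notes:

```python
# Sketch of the Geometry FOV handling described above: the default is
# 32 degrees, and both controller and UI sliders are clamped to the
# documented 15-120 degree range. The helper name is illustrative.

FOV_MIN, FOV_MAX, FOV_DEFAULT = 15.0, 120.0, 32.0

def set_geometry_fov(requested=FOV_DEFAULT):
    """Clamp a requested FOV (in degrees) to the supported range."""
    return min(max(requested, FOV_MIN), FOV_MAX)

assert set_geometry_fov() == 32.0          # default stays in range
assert set_geometry_fov(5.0) == 15.0       # below minimum -> clamped up
assert set_geometry_fov(200.0) == 120.0    # above maximum -> clamped down
```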
v3.2 release
Release v3.2 Highlights
- Added full VR controller parity with desktop interactions: trigger + move to orbit, grip to pan, trigger + forward / back to zoom, stick left / right to tweak Geometry FOV, stick up/down to adjust Depth Magnification, and X/Y to trim the far clip.
- Implemented smooth trigger-driven zoom and grip-driven Z offset for precise positioning inside WebXR sessions.
- Documented controller mappings and noted limitations when switching between Looking Glass and standard VR (page reload required after exiting Looking Glass).
- Cleaned up debug utilities (window.__xrDebug) to aid troubleshooting without impacting production behavior.
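The controller parity above can be restated as a lookup table. The input identifiers below are illustrative labels, not actual API names; only the input-to-action pairing itself comes from the release notes:

```python
# The v3.2 controller mapping, restated as a lookup table. The key strings
# are illustrative descriptions, not real input identifiers.

CONTROLLER_MAP = {
    "trigger + move": "orbit",
    "grip + move": "pan",
    "trigger + forward/back": "zoom",
    "stick left/right": "adjust Geometry FOV",
    "stick up/down": "adjust Depth Magnification",
    "X/Y buttons": "trim far clip",
}

def action_for(input_name):
    """Return the mapped action, or None for unmapped inputs."""
    return CONTROLLER_MAP.get(input_name)

assert action_for("grip + move") == "pan"
assert action_for("unmapped input") is None
```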
v3.1 release
Add glTF export and update documentation.
v3.0.2 release
Lower Far Crop Distance minimum to 0.2m
v3.0.1 release
Update Readme.md to clarify WebXR limitations (standalone headsets not supported)
v3.0 release
Image-to-Depth Web Viewer v3.0 – WebXR & Looking Glass Support
Highlights
- Added on-canvas controls to launch WebXR sessions in standard VR headsets and Looking Glass holographic displays.
- Introduced a reusable WebXR manager that shares the existing WebGL pipeline with OpenXR runtimes and Looking Glass’ official polyfill.
- Documented headset-agnostic setup steps covering PC-tethered OpenXR devices, standalone headsets (Quest/Pico/Lynx/Magic Leap via Wolvic), and Looking Glass workflows.
- Noted third-party license requirements for Apple Depth Pro and Looking Glass WebXR components, keeping the repo MIT-licensed while clarifying redistribution responsibilities.
Upgrade Notes
- Pull or bootstrap the latest repo as usual (python scripts/bootstrap.py) to refresh Depth Pro dependencies.
- Serve the webapp over HTTPS (or localhost) for WebXR testing; standalone headsets require a secure origin.
- Leave Looking Glass Bridge running when targeting holographic displays so the polyfill auto-detects the hardware.
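The secure-origin rule in the upgrade notes can be sketched as a small check. WebXR requires a secure context, which HTTPS origins satisfy, as does plain HTTP on localhost during development; the helper name here is hypothetical:

```python
# Sketch of the secure-origin rule from the upgrade notes: https origins
# qualify for WebXR, and so does plain http on localhost during local
# development. The helper name is illustrative.
from urllib.parse import urlparse

def is_webxr_capable_origin(url):
    """Return True if the origin satisfies the secure-context rule."""
    parts = urlparse(url)
    if parts.scheme == "https":
        return True
    return parts.scheme == "http" and parts.hostname in ("localhost", "127.0.0.1")

assert is_webxr_capable_origin("https://example.com/viewer")
assert is_webxr_capable_origin("http://localhost:8000")
assert not is_webxr_capable_origin("http://192.168.1.10:8000")
```

This is why a LAN-hosted dev server fails on a standalone headset even though it works in a desktop browser pointed at localhost: the headset sees a plain-HTTP, non-localhost origin.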
Verification Tips
- Load an RGBDE PNG, click Enter VR in Chrome/Edge while an OpenXR runtime is active, and confirm stereoscopic rendering in your headset.
- For Looking Glass displays, press Enter Looking Glass and verify the multi-view quilt output while Bridge is connected.
v2.1 release
Webapp: expand auto far clip margin
v2.0 release
Initial release.