Advanced Geospatial Visualization Platform with AI-Powered Analysis
DeepGIS-XR is a comprehensive geospatial visualization and analysis platform that combines advanced 3D mapping, AI-powered image analysis, and adaptive sampling systems for Earth and lunar exploration.
- Segment Anything Model (SAM): Universal image segmentation with no training required
  - Three model sizes: Base (375MB), Large (1.2GB), Huge (2.4GB)
  - Automatic region detection and boundary identification
  - GeoJSON export with polygon simplification
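For illustration, the export step above can be sketched in a few dozen lines of dependency-free Python: Douglas-Peucker simplification of a pixel-space boundary, followed by GeoJSON Feature assembly. The helper names are hypothetical, not the platform's actual API.

```python
def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to segment a-b (pixel coords).
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def simplify(points, tolerance):
    # Douglas-Peucker: keep the endpoints, recurse on the farthest outlier.
    if len(points) < 3:
        return points
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > tolerance:
        left = simplify(points[: i + 1], tolerance)
        right = simplify(points[i:], tolerance)
        return left[:-1] + right
    return [points[0], points[-1]]

def segment_to_feature(boundary_px, tolerance=2.0, label="segment"):
    # Wrap a simplified boundary as a GeoJSON Feature (pixel coordinates;
    # a real exporter would georeference them first).
    ring = simplify(boundary_px, tolerance)
    if ring[0] != ring[-1]:
        ring = ring + [ring[0]]  # GeoJSON rings must be closed
    return {
        "type": "Feature",
        "properties": {"label": label},
        "geometry": {"type": "Polygon", "coordinates": [ring]},
    }
```

The tolerance parameter trades boundary fidelity for output size, which matters when SAM emits hundreds of segments per viewport.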
- YOLOv8 Detection: Ultra-fast real-time object detection
  - Multiple model sizes: Nano, Small, Medium, Large, XLarge
  - 80 COCO object categories
  - Class filtering support
- Grounding DINO: Open-vocabulary text-based object detection
  - Detect ANY object by describing it in natural language
  - Text prompts like "rock . boulder . crater . debris"
  - Supports remote API deployment for GPU acceleration
  - Ideal for domain-specific detection (geology, archaeology, agriculture)
- Zero-Shot Object Detection: Pre-trained COCO model for 80 object categories
  - Detects: person, car, bicycle, truck, bus, animals, and more
  - Confidence-based filtering
  - Class-labeled visualizations
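The confidence and class filtering described above can be mirrored client-side. A minimal sketch, assuming detection dicts with `class` and `confidence` keys (an assumed shape, not the platform's exact response schema):

```python
def filter_detections(detections, confidence_threshold=0.5, class_filter=None):
    """Keep detections at or above the confidence threshold and,
    when class_filter is given (comma-separated names), only those classes."""
    classes = None
    if class_filter:
        classes = {c.strip().lower() for c in class_filter.split(",")}
    return [
        d for d in detections
        if d["confidence"] >= confidence_threshold
        and (classes is None or d["class"].lower() in classes)
    ]
```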
- Mask2Former: State-of-the-art instance segmentation
  - More accurate than Zero-Shot for complex scenes
  - Pre-trained on COCO dataset
- Intelligent Spatial Sampling: Probabilistic framework for location sampling
- Adaptive Learning: Updates distribution based on feedback and rewards
- Survey Mode: Cycle through sampled points with automatic navigation
- Spatial Queries: Efficient region-based queries and statistics
- Multiple Initialization Strategies: Uniform, Gaussian, Gaussian mixture, custom
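The sampler's core loop (initialize a distribution, draw locations, reweight on feedback) can be illustrated with a toy grid sampler. This is a sketch only; the real implementation lives in `deepgis_xr/apps/web/world_sampler.py`, and the class and method names here are hypothetical.

```python
import random

class GridSampler:
    """Toy probabilistic sampler over a (west, south, east, north) lon/lat box."""

    def __init__(self, bounds, n=20, seed=0):
        self.west, self.south, self.east, self.north = bounds
        self.n = n
        self.weights = [[1.0] * n for _ in range(n)]  # uniform initialization
        self.rng = random.Random(seed)

    def sample(self):
        # Draw a grid cell proportionally to its weight, then jitter inside it.
        flat = [(r, c) for r in range(self.n) for c in range(self.n)]
        w = [self.weights[r][c] for r, c in flat]
        r, c = self.rng.choices(flat, weights=w, k=1)[0]
        lon = self.west + (c + self.rng.random()) * (self.east - self.west) / self.n
        lat = self.south + (r + self.rng.random()) * (self.north - self.south) / self.n
        return lon, lat

    def update(self, lon, lat, reward):
        # Multiplicative feedback: positive rewards sharpen the distribution
        # around productive locations.
        c = min(self.n - 1, int((lon - self.west) / (self.east - self.west) * self.n))
        r = min(self.n - 1, int((lat - self.south) / (self.north - self.south) * self.n))
        self.weights[r][c] *= (1.0 + reward)
```

Swapping the uniform initialization for a Gaussian or mixture prior changes only the `weights` construction, which is why multiple initialization strategies fit the same interface.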
- Lunar Visualization: Full Moon globe with LROC QuickMap imagery
- Apollo Landing Sites: Historical mission locations
- Aviation-Style Navigation: Heading dial, attitude indicator, sun/moon info
- LOLA Terrain: High-resolution lunar elevation data
- Lunar Digital Twin: Navigational decision support system
- NWS Weather Stations: Real-time weather data from National Weather Service API
- Multi-State Support: Quick load stations from California, Arizona, Colorado, and Nevada
- Interactive Display: Temperature labels, weather icons, and detailed popups
- Auto-Update: Automatic refresh every 15 minutes for current conditions
- HUD Integration: Weather stations accessible via bottom toolbar layer button
- 21 Default Stations: Pre-configured stations across four western US states
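The station data comes from the public `api.weather.gov` REST API. A hedged sketch of the fetch-and-summarize step (field names follow the NWS observation schema; the helpers themselves are illustrative, not DeepGIS-XR code, and the real service reports temperatures in Celsius):

```python
def station_url(station_id):
    # Latest-observation endpoint of the public api.weather.gov service.
    return f"https://api.weather.gov/stations/{station_id}/observations/latest"

def summarize_observation(obs):
    """Reduce one decoded NWS observation JSON body to the fields a
    station popup would show (description plus temperature in C and F)."""
    props = obs["properties"]
    temp_c = props["temperature"]["value"]
    return {
        "station": props.get("station", "").rsplit("/", 1)[-1],
        "text": props.get("textDescription", ""),
        "temp_c": temp_c,
        "temp_f": None if temp_c is None else round(temp_c * 9 / 5 + 32, 1),
    }
```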
- 3D Globe Visualization: CesiumJS-powered Earth and Moon globes
- 3D Buildings Layer: OpenStreetMap buildings with worldwide coverage
- Toggle via View panel checkbox or press the B key
- Free and open data source (ODbL license)
- Smart loading: loads once, toggles visibility thereafter
- Multi-Layer Support: Raster and vector layer management
- Tile Server Integration: Custom tile server for large datasets
- 3D Model Support: GLB/GLTF model loading and visualization
- Coordinate Systems: Support for multiple projections and ellipsoids
- Drone Navigation: Fly mode and orbit mode for automated camera movement
- Measurement Tools: Distance, area, and height measurement capabilities
- Shareable URLs: Generate URLs that capture complete camera state
- Camera Parameters: Position (lon, lat, alt), orientation (heading, pitch, roll)
- View Mode Preservation: Remembers 2D, 3D, or Columbus view mode
- Drone State Capture: Includes fly distance, speeds, orbit settings
- Active Mode Restoration: Restores takeoff, landing, fly, and orbit modes
- QR Code Generation: Share experiences via QR codes
- Keyboard Shortcuts: Press S to share the current view
- Docker and Docker Compose
- NVIDIA GPU (optional, for AI features)
- Python 3.9 (for local development; matches the Docker image)
1. Clone the repository:
   ```bash
   git clone https://github.com/Earth-Innovation-Hub/deepgis-xr.git
   cd deepgis-xr
   ```
2. Start with Docker Compose:
   ```bash
   docker-compose up -d
   ```
3. Access the application:
   - Main application: http://localhost:8060
   - Tile server: http://localhost:8091
   - Topology server (optional): http://localhost:8092
4. Sync runtime assets (large MBTiles, models, analysis results):
   ```bash
   bash scripts/sync_assets.sh  # from /mnt/dreamslab-store by default
   ```
   Override the source with `STORE=/path/to/store bash scripts/sync_assets.sh`.
1. Install dependencies:
   ```bash
   pip install -r requirements.txt      # runtime
   pip install -r requirements-dev.txt  # + pytest, ruff, black, pip-tools
   ```
2. Run migrations:
   ```bash
   python manage.py migrate
   ```
3. Start the development server:
   ```bash
   python manage.py runserver
   ```
4. Run tests / lint:
   ```bash
   pytest          # requires requirements-dev.txt
   ruff check .
   black --check .
   ```
Backend:
- Django 3.2 LTS (Python 3.9 web framework); migration to 4.2 LTS tracked on the roadmap
- Django REST Framework 3.12 (API endpoints)
- PostgreSQL (via `psycopg2-binary`) / SQLite (dev)
- Celery 5.6 + Redis (async tasks, world-sampler background jobs)
- Twilio + `django-phonenumber-field` (phone-number authentication)
- Flask 3.x: separate topology-tile server at `services/topology/`
Frontend:
- CesiumJS (3D globe visualization)
- Hand-written ES modules in `staticfiles/web/js/` (no build step)
- Bootstrap (UI components)
- (The Vite build was removed in the Tier-A housekeeping pass: `src/` was empty and the pipeline was never wired into `collectstatic`. The bundle-chunking plan preserved in the refactoring note will be revisited in Tier D.)
AI/ML:
- Segment Anything Model (SAM): Meta AI, local inference
- Grounding DINO: IDEA Research, remote API (port 5000)
- Grounded-SAM-2: remote API (port 5001)
- YOLOv8: Ultralytics, local inference
- Mask R-CNN / Mask2Former: Detectron2, local inference
- PyTorch 2.5.1 + CUDA 12.1 (deep learning framework)
- `kernelcal`: Kernel Dynamics / MaxCal (integration in progress)
Infrastructure:
- Docker & Docker Compose (containerization)
- NVIDIA Container Toolkit (GPU pass-through)
- Nginx (reverse proxy, optional)
- TileServer GL (MBTiles → raster/vector tiles)
```
deepgis-xr/
├── deepgis_xr/                      # Django project
│   ├── apps/
│   │   ├── api/v1/                  # DRF v1 (serializers, urls)
│   │   ├── auth/                    # phone-based auth (Twilio)
│   │   ├── core/                    # core models, admin, image processing
│   │   ├── ml/                      # ML helpers
│   │   └── web/                     # main web app
│   │       ├── views/               # request handlers (Tier B split)
│   │       │   ├── pages.py, missions.py, auth_ajax.py, ai_reports.py,
│   │       │   ├── training_datasets.py, semi_supervised.py, models_3d.py
│   │       │   └── legacy.py        # remaining un-split handlers
│   │       ├── world_sampler.py     # adaptive spatial sampler
│   │       ├── world_sampler_api/   # sampling + AI viewport API (Tier C split)
│   │       │   ├── core.py, http.py # helpers + 9 HTTP endpoints
│   │       │   ├── analyzers/       # 7 analyzers + ANALYZER_REGISTRY
│   │       │   └── legacy.py
│   │       ├── urls.py              # 50+ routes
│   │       ├── admin.py, models.py, middleware/, templates/
│   │       └── management/commands/ # e.g. import_rocks_labels
│   └── settings.py
├── services/
│   └── topology/                    # standalone Flask tile/3D-tiles server
│       ├── server.py                # (was deepgis_topology_server.py)
│       ├── prepare_data.py
│       └── Dockerfile               # own runtime, no CUDA
├── examples/                        # kernelcal demos, vegetation segmentation
│   ├── bf_kernelcal_demo.py
│   └── bf_vegetation_segment.py
├── scripts/                         # utility scripts
│   ├── sync_assets.sh               # pull data/models from lab store
│   ├── optimize_large_glb.py
│   └── grounding_dino_api_client.py
├── staticfiles/web/                 # hand-written JS, CSS, vendor libs
├── static/, media/, stl_models/     # runtime assets (gitignored)
├── data/                            # MBTiles (gitignored)
├── models/                          # ML model weights (gitignored)
├── deepgis_results/                 # AI-analysis outputs (gitignored)
├── GroundingDINO/                   # vendored upstream repo
├── Dockerfile                       # web container (CUDA 12.1, torch 2.5)
├── docker-compose.yml               # web + tileserver (+ topology, optional)
├── requirements.txt                 # pinned runtime deps
├── requirements-dev.txt             # pytest, ruff, black, pip-tools
└── README.md
```
A companion refactoring plan lives in the integration manuscript workspace at
`notes/2026-04-22-deepgis-xr-refactoring.md`. Tiers A-D have all landed
(layer manager, static-tree consolidation, and Cesium FPS work) and Tier E's
forward-compat prep is in; the Tier E version bumps and Tier F are scheduled
on the roadmap below.
- `POST /webclient/sampler/initialize` - Initialize a new sampler
- `POST /webclient/sampler/sample` - Get sample locations
- `POST /webclient/sampler/update` - Update the distribution
- `GET /webclient/sampler/query` - Query a spatial region
- `GET /webclient/sampler/statistics` - Get distribution stats
- `POST /webclient/sampler/reset` - Reset the sampler
- `GET /webclient/sampler/history` - View sample history
- `POST /webclient/sampler/analyze-viewport` - Analyze the viewport with AI
- Parameters:
  - `model_type`: `'sam'`, `'yolov8'`, `'grounding_dino'`, `'zero_shot'`, or `'mask2former'`
  - `image`: Base64-encoded viewport image
  - `location`: Camera position metadata
- SAM options:
  - `sam_model`: `'vit_b'`, `'vit_l'`, or `'vit_h'`
  - `min_area`: Minimum segment area in pixels
- YOLOv8 options:
  - `yolo_model`: `'yolov8n'`, `'yolov8s'`, `'yolov8m'`, `'yolov8l'`, `'yolov8x'`
  - `confidence_threshold`: 0.0-1.0
  - `class_filter`: Comma-separated class names (e.g., `"person,car,truck"`)
- Grounding DINO options:
  - `text_prompt`: Dot-separated object descriptions (e.g., `"rock . boulder . crater"`)
  - `box_threshold`: Detection confidence threshold (default: 0.3)
  - `text_threshold`: Text matching threshold (default: 0.25)
- Zero-Shot/Mask2Former options:
  - `confidence_threshold`: 0.0-1.0
- Returns: GeoJSON with segments/detections, metadata, saved file paths
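Putting the parameters together, a request body for the endpoint can be assembled as below. The field names follow the parameter list above; the helper itself is a hypothetical client-side convenience, not part of the codebase.

```python
import base64
import json

def build_viewport_request(image_bytes, lon, lat, alt,
                           model_type="grounding_dino", **options):
    """Assemble the JSON body for POST /webclient/sampler/analyze-viewport.
    Extra keyword options (text_prompt, box_threshold, confidence_threshold,
    ...) are passed through unchanged."""
    body = {
        "model_type": model_type,
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "location": {"lon": lon, "lat": lat, "alt": alt},
    }
    body.update(options)
    return json.dumps(body)
```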
- `POST /label/semi-supervised/api/generate-labels/` - Generate assisted labels
- `POST /label/semi-supervised/api/save-labels/` - Save labels
- `GET /label/semi-supervised/api/get-images/` - Get label images
The AI Viewport Analysis system supports multiple detection models, including remote API deployment for GPU-intensive models like Grounding DINO.
```
┌─────────────────────────────────────────────────────────────────────┐
│                         DeepGIS-XR Frontend                         │
│  ┌───────────────────────────────────────────────────────────────┐  │
│  │                  AI Viewport Analysis Panel                   │  │
│  │  ┌─────────────────────────────────────────────────────────┐  │  │
│  │  │ Analysis Type: [Grounding DINO (Open Vocab) ▼]          │  │  │
│  │  │ Text Prompt:   [rock . boulder . crater . debris      ] │  │  │
│  │  │ Box Threshold: [──────●───] 0.30                        │  │  │
│  │  │ Text Threshold:[─────●────] 0.25                        │  │  │
│  │  │                 [ 🧠 Analyze Viewport ]                 │  │  │
│  │  └─────────────────────────────────────────────────────────┘  │  │
│  └───────────────────────────────────────────────────────────────┘  │
└──────────────────────────────┬──────────────────────────────────────┘
                               │ POST /webclient/sampler/analyze-viewport
                               │ {image, location, model_type, text_prompt, ...}
                               ▼
┌─────────────────────────────────────────────────────────────────────┐
│                      DeepGIS-XR Django Backend                      │
│              world_sampler_api.py::analyze_viewport()               │
│  ├── model_type == 'sam'            → Local SAM inference           │
│  ├── model_type == 'yolov8'         → Local YOLOv8 inference        │
│  ├── model_type == 'grounding_dino' → Remote API call ─────┐        │
│  ├── model_type == 'zero_shot'      → Local Mask R-CNN     │        │
│  └── model_type == 'mask2former'    → Local Mask2Former    │        │
└──────────────────────────────┬─────────────────────────────┼────────┘
                               │                             │
                               ▼                             ▼
┌──────────────────────────────────────┐   ┌────────────────────────────┐
│       Local GPU/CPU Processing       │   │   Remote Grounding DINO    │
│  ┌────────────────────────────────┐  │   │         API Server         │
│  │ • SAM (vit_b, vit_l, vit_h)    │  │   │  ┌──────────────────────┐  │
│  │ • YOLOv8 (n, s, m, l, x)       │  │   │  │ POST /predict        │  │
│  │ • Mask R-CNN (COCO)            │  │   │  │ POST /predict_batch  │  │
│  │ • Mask2Former (COCO)           │  │   │  │ GET  /health         │  │
│  └────────────────────────────────┘  │   │  └──────────────────────┘  │
└──────────────────────────────────────┘   └────────────────────────────┘
```
GPU-accelerated AI services on a dedicated server provide open-vocabulary detection and segmentation.
Grounding DINO (port 5000): Text-based detection

Grounded-SAM-2 (port 5001): Detection + high-quality segmentation
```bash
# Grounding DINO - Detection only
curl -X POST http://192.168.0.232:5000/api/predict \
  -F "file=@image.jpg" -F "text_prompt=rock . boulder . crater"

# Grounded-SAM-2 - Detection + Segmentation
curl -X POST http://192.168.0.232:5001/detect \
  -F "image=@image.jpg" -F "text_prompt=rock . boulder . crater"

# Python client
./grounding_dino_api_client.py --image viewport.jpg --prompt "rock . boulder"
```

Example Prompts: Geology: "rock . boulder . crater" | Urban: "building . car . tree" | Wildlife: "animal . bird . nest"
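The curl calls above are plain multipart/form-data uploads, so a stdlib-only client needs little more than a multipart encoder and a prompt formatter. A sketch (the repo ships a real client in `scripts/grounding_dino_api_client.py`; these helpers are illustrative, and field names mirror the curl examples):

```python
import uuid

def dot_prompt(labels):
    # Grounding DINO-style prompts: lowercase phrases separated by " . ".
    return " . ".join(label.strip().lower() for label in labels)

def encode_multipart(fields, file_field, filename, file_bytes):
    """Minimal multipart/form-data encoder: text fields plus one file part.
    Returns (body_bytes, content_type) ready for an HTTP POST."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts += [f"--{boundary}",
                  f'Content-Disposition: form-data; name="{name}"',
                  "", str(value)]
    parts += [f"--{boundary}",
              f'Content-Disposition: form-data; name="{file_field}"; '
              f'filename="{filename}"',
              "Content-Type: application/octet-stream", ""]
    body = ("\r\n".join(parts)).encode() + b"\r\n" + file_bytes \
           + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"
```

The resulting `(body, content_type)` pair can be posted with `urllib.request` or `requests` to the `/api/predict` or `/detect` endpoints shown above.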
1. Navigate to DeepGIS Search (`/label/3d/search/`)
2. Open the AI Viewport Analysis panel (brain icon in the HUD)
3. Select an analysis type:
   - SAM: Universal segmentation (all regions)
   - YOLOv8: Fast real-time detection (80 COCO categories)
   - Grounding DINO: Open-vocabulary detection (describe any object)
   - Zero-Shot: Pre-trained COCO detection
   - Mask2Former: High-accuracy instance segmentation
4. Configure parameters:
   - SAM: Model size (Base/Large/Huge), minimum segment area
   - YOLOv8: Model size (Nano to XLarge), confidence, class filter
   - Grounding DINO: Text prompt (e.g., `"rock . crater . boulder"`), thresholds
   - Zero-Shot/Mask2Former: Confidence threshold
5. Click "Analyze Viewport"
6. View results on the map with color-coded polygons and labels
1. Initialize the sampler with the desired strategy
2. Sample locations based on the adaptive distribution
3. Navigate to samples using survey mode
4. Update the distribution based on feedback
5. Query regions for spatial analysis
1. Navigate to the Moon Viewer (`/label/3d/moon/`)
2. Explore the lunar surface with LROC imagery
3. View Apollo landing sites and historical locations
4. Use navigation widgets for precise control
5. Adjust the camera with aviation-style controls
1. Navigate to DeepGIS Search (`/label/3d/search/`)
2. Enable buildings:
   - Click the "View" button in the HUD toolbar
   - Check the "3D Buildings (OSM)" checkbox
   - Or press the B key to toggle instantly
3. Best viewed in 3D mode (press V to switch to 3D)
4. Zoom to urban areas to see detailed building models
5. Coverage: worldwide, based on OpenStreetMap data quality
1. Navigate to DeepGIS Search (`/label/3d/search/`)
2. Click the "Weather" button in the bottom HUD toolbar
3. Toggle "Show Weather Stations" to enable
4. Load stations:
   - Click "All States" to load all 21 stations (CA, AZ, CO, NV)
   - Or click individual state buttons (CA, AZ, CO, NV) for specific regions
5. View weather data: click station markers for detailed information
6. Auto-update: stations refresh every 15 minutes automatically
1. Navigate to any view in DeepGIS Search
2. Configure your experience:
   - Set camera position and orientation
   - Choose a view mode (2D/3D/Columbus); press V to toggle
   - Enable drone modes (fly, orbit, takeoff, landing)
3. Share your view:
   - Press S or click the Share button; the URL is automatically copied to the clipboard
   - Generate a QR code: click the QR button to display a scannable code
The URL includes:

| Parameter | Description |
|---|---|
| `lon`, `lat`, `alt` | Camera position |
| `heading`, `pitch`, `roll` | Camera orientation |
| `viewMode` | 2D, 3D, or Columbus |
| `flyDist`, `hSpeed`, `vSpeed` | Drone fly settings |
| `orbRadius`, `orbPitch`, `orbYaw` | Orbit settings |
| `orbiting`, `flying`, `takeoff`, `landing` | Active mode flags |
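Encoding and decoding that state is ordinary query-string handling. A stdlib sketch using the parameter names from the table (the function names are hypothetical; the actual sharing logic lives in the frontend JS):

```python
from urllib.parse import parse_qs, urlencode, urlsplit

def encode_view(base_url, lon, lat, alt, heading=0, pitch=-90, roll=0,
                viewMode="3D", **drone):
    # Serialize camera state (plus any drone/orbit flags) into the URL query.
    params = {"lon": lon, "lat": lat, "alt": alt, "heading": heading,
              "pitch": pitch, "roll": roll, "viewMode": viewMode, **drone}
    return f"{base_url}?{urlencode(params)}"

def decode_view(url):
    # Recover the shared state; values come back as strings and would be
    # cast to float/bool by the restoring client.
    query = parse_qs(urlsplit(url).query)
    return {k: v[0] for k, v in query.items()}
```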
Press H to view all shortcuts in-app. Key shortcuts include:
| Key | Action |
|---|---|
| B | Toggle 3D Buildings |
| V | Toggle View Mode (2D/3D/Columbus) |
| F | Toggle Full Screen |
| H | Show Keyboard Shortcuts Help |
| S | Share Current View |
| Q | Toggle QR Code |
| T | Hide/Show Toolbars |
| W | Toggle Wireframe |
| D | Drone Fly Forward |
| U | Takeoff (Up) |
| L | Land |
| O | Start Orbit |
| P | Pause/Stop Orbit |
| J | Toggle Virtual Joysticks |
| ↑ ↓ ← → | Camera Perspectives (N/S/W/E) |
| ESC | Stop Orbit / Close Panels |
```bash
DEBUG=True
DJANGO_SETTINGS_MODULE=deepgis_xr.settings
NVIDIA_VISIBLE_DEVICES=all  # For GPU support

# Remote AI Services (optional - defaults in docker-compose.yml)
# GROUNDING_DINO_API_URL=http://192.168.0.232:5000
```

The docker-compose.yml includes:
- Web service: Django application with GPU support
- Tile server: MapTiler TileServer GL for tile serving
- Volume mounts:
  - `dreams_laboratory/scripts` - ML model scripts
  - `deepgis_results` - AI analysis results (shared with host)
To enable GPU for AI features:
1. Install the NVIDIA Docker runtime
2. Uncomment the GPU configuration in `docker-compose.yml`
3. Ensure `NVIDIA_VISIBLE_DEVICES=all` is set
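One way the backend could resolve the optional remote-service setting is environment variable first, compose default second. A hypothetical helper (the default URL is the one shown in the config above; the function itself is not DeepGIS-XR code):

```python
import os

def grounding_dino_url(default="http://192.168.0.232:5000"):
    # Prefer an explicit GROUNDING_DINO_API_URL from the environment,
    # falling back to the docker-compose default otherwise.
    return os.environ.get("GROUNDING_DINO_API_URL", default)
```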
- Tier A (housekeeping, PR #3): fully pinned `requirements.txt`; added `requirements-dev.txt`; relocated root `.py` scripts into `services/topology/`, `examples/`, `scripts/`; `kernelcal` installed as a real dependency; `scripts/sync_assets.sh` syncs `data/`, `models/`, `deepgis_results/` from `/mnt/dreamslab-store`; dead Vite config removed.
- Tier B (views split, PR #4): the 2,633-line `apps/web/views.py` monolith is now a `views/` package (`pages`, `missions`, `auth_ajax`, `ai_reports`, `training_datasets`, `semi_supervised`, `models_3d`, plus a shrinking `legacy.py`), with `__init__.py` preserving every public name `urls.py` routes to.
- Tier C (world-sampler split, PR #5): `world_sampler_api.py` is now a package with `core.py` (helpers), `http.py` (9 endpoints), and an `analyzers/` subpackage (7 analyzers + 3 shared helpers) exposed through an `ANALYZER_REGISTRY`. This unblocks the MaxCal / Model-Kernel Selector work in `kernelcal`.
- Tier D (frontend layer manager, PR #6): the OSM-Buildings double-instantiation bug was fixed by collapsing two parallel `toggleOSMBuildings` paths onto one canonical helper in `cesium-init.js`; a new feature-layer registry in `staticfiles/web/js/core/feature-layers.js` normalises heterogeneous layer toggles (OSM Buildings, World Terrain, ...) behind a single `window.FeatureLayers.set(id, enabled)` / `renderToggles(...)` API; Tier D3 lifted the OSM-Buildings toggle onto `label_topology` via the registry.
- Tier E prep (forward-compat cleanups, PR #7): dropped `USE_L10N = True` (removed in Django 5.0) and the stale `default_app_config` pointer in `apps/auth/__init__.py` (removed in 4.2, already redundant given `INSTALLED_APPS` lists `AuthConfig` directly); removed an unused `shapely.geometry.shape` import in `views/legacy.py`. All three edits are no-ops on the current Django 3.2 / Shapely 1.8 stack and shrink the actual bump commit to a four-line `requirements.txt` change.
- Tier D0 (frontend static-tree consolidation, PR #8): wired `BASE_DIR/staticfiles/` into `STATICFILES_DIRS` so Django's finders actually resolve `{% static 'web/js/main.js' %}` to the tracked tree. Without this, the Tier D frontend work had been sitting in the repo but not being served; the app had been falling back to an older, untracked `deepgis_xr/apps/web/static/` tree. Orphaned assets (`mask2former-corrector.css`/`.js`, `responsive.css`) were parked under `staticfiles/web/legacy/` with a README.
- Tier D0.5 (Cesium FPS tuning, PR #8): four FPS sinks removed in `cesium-init.js`: globe `enableLighting`, uncapped `resolutionScale × devicePixelRatio` on Retina/4K (now capped at 1.5× with a `?hidpi=1` opt-in), `tileLoadProgressEvent → requestRender` spam, and `fxaa` on an already-supersampled scene. 60 FPS restored on iGPU / Retina.
- ✅ 3D Buildings Layer: OpenStreetMap buildings worldwide; toggleable via the UI or the B key; free and open data (ODbL)
- ✅ Experience URL Sharing: Complete camera-state sharing via URL; supports takeoff/landing/fly/orbit modes; QR code generation
- ✅ Grounding DINO: Open-vocabulary detection with text prompts; remote API architecture for GPU servers
- ✅ Weather Stations: NWS integration with 21 stations across CA, AZ, CO, NV; HUD toolbar integration; auto-update every 15 min
- ✅ UI/UX: HUD toolbar with floating panels; aviation-style navigation widgets; drone fly/orbit modes
- ✅ AI/ML: YOLOv8 and Mask2Former integration; SAM optimization; clean viewport capture
- ✅ Performance: Memory optimization; improved error handling; duplicate entity prevention
- ✅ View Mode Switching: 2D/3D/Columbus view toggle with keyboard shortcut (V key); auto-restore from URL
Contributions are welcome! Please follow these guidelines:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
- Follow PEP 8 for Python code
- Use ESLint for JavaScript
- Add docstrings to functions and classes
- Include tests for new features
This project is licensed under the MIT License - see the LICENSE file for details.
DeepGIS-XR builds on concepts and systems originally developed for the Oceanographic Decision Support System (ODSS, MBARI), the Agricultural Decision Support System (AgDSS, University of Pennsylvania), the OpenUAV Project (University of Pennsylvania, Arizona State University), and DeepGIS (Arizona State University). The DeepGIS project acknowledges support from the National Science Foundation, the United States Department of Agriculture, and the National Aeronautics and Space Administration.
- CesiumJS: 3D globe visualization
- Meta AI: Segment Anything Model
- IDEA Research: Grounding DINO open-vocabulary detection
- Ultralytics: YOLOv8 real-time detection
- NASA/GSFC/ASU: LROC QuickMap lunar imagery
- COCO Dataset: Object detection categories
- Repository: Earth-Innovation-Hub/deepgis-xr
- Issues: GitHub Issues
- Zero-Shot Detection integration
- SAM viewport analysis
- YOLOv8 real-time detection
- Grounding DINO open-vocabulary detection
- Remote AI API integration architecture
- World Sampler adaptive sampling
- Moon viewer with navigation widgets
- Weather stations integration
- HUD toolbar and panel system
- Multi-state weather station support
- Experience URL sharing with full state capture
- QR code generation for mobile sharing
- 2D/3D/Columbus view mode switching
- 3D Buildings layer with OpenStreetMap data
Tiers A-C and the Tier-D prep steps (D0, D0.5) have landed. Remaining work
is tracked alongside feature work. See
notes/2026-04-22-deepgis-xr-refactoring.md in the integration workspace
for the full plan.
- [x] Tier A: housekeeping, pinning, file relocations (PR #3)
- [x] Tier B: `apps/web/views.py` → `views/` package of 7 focused modules + shrinking `legacy.py` (PR #4)
- [x] Tier C: `world_sampler_api.py` → package with `core.py`, `http.py` (9 endpoints), `analyzers/` subpackage, and `ANALYZER_REGISTRY` (PR #5; unblocks `kernelcal` Threads 1 + 2)
- [x] Tier D: frontend layer manager (PR #6): OSM-Buildings double-instantiation fixed; feature-layer registry in `core/feature-layers.js` with a `window.FeatureLayers` API; registry-driven toggles on `label_topology`
- [x] Tier D0: `staticfiles/` wired into `STATICFILES_DIRS` so the tracked frontend tree is actually served; orphaned assets parked in `staticfiles/web/legacy/` (PR #8)
- [x] Tier D0.5: Cesium perf pass: four FPS sinks removed in `cesium-init.js`; 60 FPS restored on iGPU/Retina (PR #8)
- [~] Tier E: Django 3.2 → 4.2 LTS; DRF 3.12 → 3.15; Shapely 1.8 → 2.x. Forward-compat prep landed (PR #7); the version bumps are a four-line `requirements.txt` change pending a container regression sweep. Full recon in `TIER_E_MIGRATION_NOTES.md` at the repo root (local-only). Django 5.0/5.1 is tracked separately (needs Python 3.10+).
- [ ] Tier F: `kernelcal` integration: MaxCal World Sampler, Model-Kernel Selector, terrain diagnostics endpoint
- Mars Terrain Viewer: Extend lunar capabilities to Mars with HiRISE/CTX imagery
- Mission Export Formats: MAVLink waypoint export for drone autopilots
- Enhanced Annotation Tools: Polygon editing, snapping, and undo/redo
- Time-Series Layers: Temporal slider for historical imagery comparison
- Geofence Alerts: Real-time boundary violation notifications
- WebXR/VR Support: Immersive 3D globe exploration with VR headsets
- Real-Time Telemetry: Live drone/vehicle position tracking via MAVLink/ROS
- Collaborative Sessions: Multi-user annotation with real-time sync
- Custom Model Training: Upload datasets and train custom detection models
- Advanced Export: Shapefile, KML, GeoPackage, and Cloud Optimized GeoTIFF
- Performance Dashboard: GPU/memory monitoring and optimization hints
- CLIP/VLM Semantic Search: Natural language queries for geospatial features
- Autonomous Survey Planning: AI-optimized flight path generation
- Digital Twin Integration: Real-time sensor fusion and 3D reconstruction
- Model Marketplace: Community-shared detection models and configs
- Edge Deployment: Lightweight inference for embedded/field devices
- AR Field Overlay: Mobile AR for on-site navigation and annotation
- Integration with Google Earth Engine
- Integration with OpenTopography and support for point cloud visualization (LAS/LAZ)
- 3D building/structure modeling from imagery
- Automated change detection between time periods
This software is provided "as is" without warranty. Use at your own risk. Intended for research and educational purposes. AI analysis results should be validated independently for critical applications.
Powered by Earth Innovation Hub (Arizona STEAM non-profit corporation)