
Popular repositories

  1. DGX-SPARK-COMFYUI-DOCKER Public

    Shell · 9 stars · 2 forks

  2. Ollama-Model-Interaction Public

    A simple Gradio-based app for interacting with Ollama models, supporting image analysis, text completion, and model pulling.

    Python · 3 stars · 2 forks

  3. comfyui-windows-installer Public

    Automated setup for ComfyUI on Windows with CUDA, custom plugins, and optimized PyTorch settings. Designed to run as a server with error correction. Easy installation and launch using Miniconda.

    Shell · 2 stars · 1 fork

  4. Art-Style-Fusion-Prompt-Enginner Public

    Art Style Fusion Prompt Engineering is a Gradio app that blends art styles, descriptions, and artist recommendations into prompts for T5 text encoders (Tested Heavily with Flux)

    JavaScript · 1 star

  5. proxmox-ubuntu-lxc-provisioner Public

    Ansible automation for Proxmox VE that downloads Ubuntu LXC templates, provisions hardened LXC containers from YAML maps, and generates a ready-to-copy Ansible deployment scaffold (inventory, keys, playbooks).

    Shell · 1 star

  6. quick-llama.cpp-server Public

    A framework for publishing a more modern CUDA image for llama.cpp, built with CUDA 13 for newer cards and including RPC support. Started as an exercise in learning how to do a custom compile of llama.cpp.

    Shell

Repositories

Showing 9 of 9 repositories
  • Art-Style-Fusion-Prompt-Enginner Public

    Art Style Fusion Prompt Engineering is a Gradio app that blends art styles, descriptions, and artist recommendations into prompts for T5 text encoders (Tested Heavily with Flux)

    JavaScript · 1 star · 0 forks · 0 issues · 1 pull request · MIT License · Updated Mar 3, 2026
  • Ollama-Model-Interaction Public

    A simple Gradio-based app for interacting with Ollama models, supporting image analysis, text completion, and model pulling.

    Python · 3 stars · 2 forks · 0 issues · 1 pull request · MIT License · Updated Mar 3, 2026
  • proxmox-ubuntu-lxc-provisioner Public

    Ansible automation for Proxmox VE that downloads Ubuntu LXC templates, provisions hardened LXC containers from YAML maps, and generates a ready-to-copy Ansible deployment scaffold (inventory, keys, playbooks).

    Shell · 1 star · 0 forks · 1 issue · 0 pull requests · MIT License · Updated Jan 19, 2026
  • DGX-SPARK-COMFYUI-DOCKER Public

    Shell · 9 stars · 2 forks · 1 issue · 0 pull requests · MIT License · Updated Nov 17, 2025
  • DGX-SPARK-ANSIBLE-GOKIT Public

    Building a mobile cluster and preparing Ansible automation for the #dgxspark.

    Shell · 0 stars · 0 forks · 0 issues · 0 pull requests · MIT License · Updated Nov 10, 2025
  • garage-k8s-deployment Public

    Demonstrates best practices for deploying Garage, an S3-compatible object store, on Kubernetes with performance optimizations.

    Shell · 0 stars · 0 forks · 0 issues · 0 pull requests · Updated Sep 30, 2025
  • metadata_scrubbers Public

    Tools to scrub metadata from images and videos; Python was sometimes too slow for the job. Simple work that could become a bigger project one day.

    Java · 0 stars · 0 forks · 0 issues · 0 pull requests · Updated Sep 20, 2025
  • quick-llama.cpp-server Public

    A framework for publishing a more modern CUDA image for llama.cpp, built with CUDA 13 for newer cards and including RPC support. Started as an exercise in learning how to do a custom compile of llama.cpp.

    Shell · 0 stars · 0 forks · 0 issues · 0 pull requests · Updated Sep 20, 2025
  • comfyui-windows-installer Public

    Automated setup for ComfyUI on Windows with CUDA, custom plugins, and optimized PyTorch settings. Designed to run as a server with error correction. Easy installation and launch using Miniconda.

    Shell · 2 stars · 1 fork · 0 issues · 0 pull requests · MIT License · Updated Aug 12, 2025