vind offers powerful advanced features that set it apart from other local Kubernetes solutions. This guide covers sleep/wake, load balancers, external nodes, and more.
One of vind's standout features is the ability to pause and resume clusters, saving resources when not in use.
When you pause a cluster:
- The Docker containers are stopped (not deleted)
- All cluster state is preserved
- Resources are freed on your host
- The cluster can be resumed instantly
```bash
# Pause a cluster
vcluster pause my-cluster

# Resume a cluster
vcluster resume my-cluster
```

- Save Resources: Free up CPU, memory, and disk when clusters aren't in use
- Instant Resume: Clusters resume in seconds, not minutes
- State Preservation: All data, pods, and configurations are maintained
- Perfect for Development: Pause during breaks, resume when needed
```bash
# Morning: Start your development cluster
vcluster create dev-cluster

# ... do your work ...

# Lunch break: Pause to save resources
vcluster pause dev-cluster

# After lunch: Resume instantly
vcluster resume dev-cluster

# Everything is exactly as you left it!
```

vind provides automatic LoadBalancer service support out of the box; it is enabled by default.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: my-app
```

Once created, the service gets an IP address automatically:
```bash
kubectl get svc my-service
# NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
# my-service   LoadBalancer   10.96.0.1    172.17.0.2    80:30000/TCP
```

You can access it directly via the EXTERNAL-IP!
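For the EXTERNAL-IP to actually serve traffic, the Service's `selector` needs matching pods. A minimal backing Deployment might look like the following sketch; the `hashicorp/http-echo` image and all names here are illustrative, chosen because it can listen on the Service's `targetPort` of 8080:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app            # must match the Service selector
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: hashicorp/http-echo   # example image that serves plain text
          args: ["-listen=:8080", "-text=hello from vind"]
          ports:
            - containerPort: 8080      # matches the Service targetPort
```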
- vind automatically configures the Docker network
- LoadBalancer IPs are assigned from the Docker network
- On macOS, ports are forwarded to localhost automatically
- No MetalLB or other load balancer setup required!
- Linux: Works automatically
- macOS (Docker Desktop): Requires privileged port access (may need sudo)
- Windows: Limited support
Speed up image pulls by using your local Docker daemon's containerd image storage.
Registry proxy is enabled by default.
The registry proxy allows the vCluster to pull images directly from the host Docker daemon's containerd image storage. This means:
- Images already pulled to your local Docker are available to the cluster
- No need to re-pull images that are already cached locally
- Can use purely local images without a registry
- Faster image pulls since they come from local storage
- Docker must use containerd image storage
  - Check: `docker info | grep "Storage Driver"`
  - Should show: `containerd` or `overlay2` with containerd
- Enable containerd storage in Docker if it is not already active
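One way to enable it on Linux is via the `containerd-snapshotter` feature flag that Docker documents for its daemon configuration (the file path below applies to a Linux daemon; Docker Desktop exposes an equivalent toggle in its settings UI). A daemon restart is required afterwards:

```json
{
  "features": {
    "containerd-snapshotter": true
  }
}
```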
When enabled, vCluster mounts the containerd socket from the host Docker daemon. When the cluster needs an image:
- It checks the local Docker daemon's containerd storage
- If the image exists locally, it uses it directly
- If not found, it pulls from the registry as normal
Note: This only works if Docker is using containerd as the image storage backend.
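The lookup order above can be sketched as plain shell logic. This is illustrative only: `has_local_image` is a stand-in for vCluster's check against the host containerd store, with one image hard-coded as "cached":

```shell
# Illustrative sketch of the registry-proxy lookup order.
# has_local_image stands in for checking the host containerd image store.
has_local_image() {
  case "$1" in
    nginx:latest) return 0 ;;  # pretend this image is cached locally
    *) return 1 ;;
  esac
}

resolve_image() {
  if has_local_image "$1"; then
    echo "use local: $1"        # found in the host's containerd storage
  else
    echo "pull from registry: $1"  # fall back to a normal registry pull
  fi
}

resolve_image nginx:latest
resolve_image redis:7
```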
Important: Joining external nodes requires Private Nodes mode, not the Docker experimental nodes. Private Nodes is a different feature that allows joining real worker nodes (like EC2 instances) to your vCluster control plane.
- Hybrid Development: Test with real cloud resources
- GPU Access: Use cloud GPUs from your local cluster
- Cost Optimization: Use local control plane with cloud workers
- Testing: Test multi-cloud scenarios
For a complete walkthrough with a GCP external node, see: Replacing KinD with vind - Deep Dive
- Private Nodes enabled in your vCluster
- vCluster Platform running (required for VPN)
- External node with network access
- Join token from the cluster
To enable private nodes with VPN support:
```yaml
privateNodes:
  enabled: true
  vpn:
    enabled: true # Enables node-to-control-plane VPN
    nodeToNode:
      enabled: true # Enables node-to-node VPN (optional)
```

Or set the same options via flags:

```bash
vcluster create my-cluster \
  --set privateNodes.enabled=true \
  --set privateNodes.vpn.enabled=true
```

Get the join token:

```bash
kubectl get secret join-token -n default -o jsonpath='{.data.token}' | base64 -d
```

On your external node (e.g., EC2 instance):
```bash
# Install vCluster binary
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
chmod +x vcluster
sudo mv vcluster /usr/local/bin/

# Join the cluster (requires platform URL)
vcluster join my-cluster \
  --token <join-token> \
  --platform-url https://your-platform-url
```

Note: If the join script does not execute directly via `curl | bash` on your external node, download it first and then run it with `sudo`. See the troubleshooting guide for details.
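The `base64 -d` step in the join-token command simply decodes the secret's payload. For illustration, with a made-up token value (real tokens come from the `join-token` secret in the cluster):

```shell
# Hypothetical token value, for illustration only.
echo 'bXktam9pbi10b2tlbg==' | base64 -d
# prints: my-join-token
```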
```bash
kubectl get nodes -o wide
# You should see your external node with a Tailscale IP (100.64.x.x)!
```

vCluster VPN uses Tailscale technology to create secure connections:
- Node-to-Control-Plane VPN: Connects nodes to the control plane
- Node-to-Node VPN: Connects nodes to each other (optional)
- Works across NAT and complex network setups
- Requires vCluster Platform to be accessible
Note: VPN is only available for Private Nodes mode, not for Docker experimental nodes. Docker experimental nodes are local Docker containers, while Private Nodes are real worker nodes that can be anywhere.
vind natively supports Flannel CNI, but you can install other CNI plugins manually.
- Flannel: Default, simple overlay network (built-in)
- Calico: Advanced networking and policy (manual install)
- Cilium: eBPF-powered networking (manual install)
- Weave: Network encryption (manual install)
- Local Path Provisioner: Default, simple storage
- NFS: Network file system
- Rook: Ceph storage
- Longhorn: Distributed block storage
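Once a storage option is installed, workloads request it through a PersistentVolumeClaim. A minimal sketch, assuming a Longhorn install that registers a `longhorn` StorageClass (use whatever class name your installer actually created):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn  # assumed class name; check with `kubectl get storageclass`
  resources:
    requests:
      storage: 1Gi
```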
vCluster only supports Flannel natively. To use other CNI plugins:
- Disable Flannel:

  ```yaml
  deploy:
    cni:
      flannel:
        enabled: false
  ```

- Create the cluster and connect to it

- Install your preferred CNI (e.g., Calico):

  ```bash
  kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
  kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
  ```

  Or for Cilium:

  ```bash
  helm repo add cilium https://helm.cilium.io/
  helm install cilium cilium/cilium --version 1.14.0
  ```

To use a custom storage class, set it in the cluster config:

```yaml
storage:
  persistence: true
  storageClassName: "nfs-client"
```

Create clusters with multiple worker nodes:
```yaml
experimental:
  docker:
    nodes:
      - name: worker-1
      - name: worker-2
      - name: worker-3
```

```bash
vcluster create my-cluster -f config.yaml
```

Verify:

```bash
kubectl get nodes
# NAME         STATUS   ROLES    AGE   VERSION
# my-cluster   Ready    master   5m    v1.28.0
# worker-1     Ready    <none>   4m    v1.28.0
# worker-2     Ready    <none>   4m    v1.28.0
# worker-3     Ready    <none>   4m    v1.28.0
```

Coming Soon! vind will support saving and restoring cluster snapshots.
This will allow you to:
- Save cluster state at any point
- Restore to a previous state
- Create templates from snapshots
- Share cluster configurations
Stay tuned for updates!
- Pause clusters during breaks
- Resume before starting work
- Avoid pausing clusters that are running active workloads
- Use LoadBalancer services for easy access
- Check EXTERNAL-IP after service creation
- On macOS, ensure privileged port access
- Enable for faster development cycles
- Pre-pull common images to Docker
- Use for offline development
- Use VPN for secure connections
- Test network connectivity first
- Monitor resource usage
```bash
# Check cluster status
vcluster list

# View container status
docker ps -a | grep vcluster

# Check control plane logs
docker exec vcluster.cp.my-cluster journalctl -u vcluster --no-pager
```

- Load balancer is enabled by default
- Check Docker network connectivity
- On macOS, ensure sudo access for port forwarding
- Check service status: `kubectl get svc`
- Verify containerd storage: `docker info`
- Check containerd socket access
- Registry proxy is enabled by default
- Check logs for errors
- Verify VPN is configured
- Check network connectivity
- Verify join token is valid
- Check platform URL is accessible
For more help, see the Troubleshooting Guide.