Troubleshooting Guide

Common issues and solutions when using LayerD.

Installation Issues

Dependency conflicts

ERROR: Cannot install layerd because these package versions have conflicting dependencies.

Solutions:

# Use fresh virtual environment
python -m venv layerd-env
source layerd-env/bin/activate  # Windows: layerd-env\Scripts\activate
pip install git+https://github.com/CyberAgentAILab/LayerD.git

# Or upgrade pip first
pip install --upgrade pip setuptools wheel
pip install git+https://github.com/CyberAgentAILab/LayerD.git

# With conda
conda create -n layerd python=3.12
conda activate layerd
pip install git+https://github.com/CyberAgentAILab/LayerD.git

uv not found

bash: uv: command not found

Solution:

# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh  # macOS/Linux
# or
pip install uv

NumPy version conflicts

Solution: LayerD requires NumPy 2.0 or newer:

pip install --upgrade "numpy>=2.0"

Model Download Issues

Download fails or times out

Solutions:

# Check internet connection, then try manual download
python -c "from huggingface_hub import snapshot_download; snapshot_download('cyberagent/layerd-birefnet')"

# If behind proxy
export HTTP_PROXY=http://proxy.example.com:8080
export HTTPS_PROXY=http://proxy.example.com:8080

# Use HuggingFace CLI for better resume support
pip install "huggingface_hub[cli]"
huggingface-cli download cyberagent/layerd-birefnet

No space left on device

Solution: Free up at least 2 GB of disk space, or point the model caches to another location:

export HF_HOME=/path/to/new/cache
export TORCH_HOME=/path/to/new/cache

CUDA and GPU Issues

CUDA out of memory

Solutions:

# Reduce process size
layerd = LayerD(matting_process_size=(512, 512))

# Or use CPU
layerd = layerd.to("cpu")

# For training: reduce batch size or use mixed precision
uv run python ./tools/train.py ... batch_size=2 mixed_precision=bf16

CUDA not available

Solutions:

# Check PyTorch CUDA installation
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"

# Reinstall PyTorch with CUDA (see https://pytorch.org/get-started/locally/)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121

# Check NVIDIA driver
nvidia-smi

cuDNN errors

Solutions:

  1. Update GPU drivers
  2. Reinstall PyTorch: pip uninstall torch torchvision && pip install torch torchvision
  3. Check hardware: nvidia-smi (a cuDNN visibility check is sketched below)
  4. Try CPU mode to isolate issue: layerd.to("cpu")

Inference Issues

Poor quality around text edges

Solutions:

# Use PNG input (not JPEG) to avoid compression artifacts
image = Image.open("design.png")

# Increase kernel_scale for better edge handling
layerd = LayerD(kernel_scale=0.020)  # Default: 0.015

# Increase matting process size
layerd = LayerD(matting_process_size=(1024, 1024))

Too few or too many layers

Solution: Adjust max_iterations

layers = layerd.decompose(image, max_iterations=5)  # More layers
layers = layerd.decompose(image, max_iterations=2)  # Fewer layers

Inference is very slow

Solutions:

# Use GPU
layerd = layerd.to("cuda")

# Reduce process size
layerd = LayerD(matting_process_size=(512, 512))

# Reduce iterations
layers = layerd.decompose(image, max_iterations=2)

Cannot identify image file

Solutions:

# Verify file is valid
file your_image.png

# Try with OpenCV
python -c "import cv2; img = cv2.imread('image.png'); print(img.shape)"

# Check permissions and re-download if needed

Training Issues

Training loss is NaN

Solutions:

# Reduce learning rate
uv run python ./tools/train.py ... learning_rate=5e-5

# Use mixed precision with bf16
uv run python ./tools/train.py ... mixed_precision=bf16

# Check dataset for corrupted images
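
A minimal sketch of that corrupted-image check, assuming the dataset is a directory tree of PNG files (hypothetical path; adjust the location and glob pattern to your layout):

# scan_dataset.py: report images that PIL cannot parse
from pathlib import Path
from PIL import Image

dataset_dir = Path("./data")  # hypothetical; point at your prepared dataset
for path in sorted(dataset_dir.rglob("*.png")):
    try:
        with Image.open(path) as img:
            img.verify()  # structural check without a full decode
    except Exception as exc:
        print(f"corrupted: {path} ({exc})")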

Training is very slow

Solutions:

# Use multiple GPUs
CUDA_VISIBLE_DEVICES=0,1 uv run torchrun --standalone --nproc_per_node 2 \
  ./tools/train.py ... dist=true

# Use mixed precision
uv run python ./tools/train.py ... mixed_precision=bf16

# Increase data loading workers
uv run python ./tools/train.py ... num_workers=8

Dataset preparation fails

Solutions:

  1. Check internet connection (downloads ~20GB)
  2. Ensure sufficient disk space (~100GB); a quick check is sketched below
  3. Verify HuggingFace access: python -c "from datasets import load_dataset; load_dataset('cyberagent/crello')"
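
A quick way to check the disk-space requirement in item 2 (run it from the directory where the dataset will be prepared; if HF_HOME points elsewhere, check that path too):

# Print free space in GB for the current directory
python -c "import shutil; print(round(shutil.disk_usage('.').free / 1e9, 1), 'GB free')"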

Multi-GPU not using all GPUs

Solutions:

# Verify dist=true is set
uv run torchrun ... dist=true

# Check GPU visibility
echo $CUDA_VISIBLE_DEVICES
CUDA_VISIBLE_DEVICES=0,1,2,3 uv run torchrun --nproc_per_node 4 ...

# Verify nproc_per_node matches GPU count
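python -c "import torch; print(torch.cuda.device_count())"  # count should equal --nproc_per_node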

Evaluation Issues

Mismatched layer counts warning

This is expected. LayersEditDist handles different layer counts automatically.

Evaluation is very slow

Solutions:

# Evaluate subset first
samples = list(pred_dir.iterdir())[:100]

# Use multiprocessing (modify the evaluation script; see the sketch below)
# Use smaller images if pixel-perfect accuracy not needed
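
A minimal sketch of the multiprocessing idea, assuming a per-sample function evaluate_sample (hypothetical name; substitute the per-sample call from the evaluation script you are modifying):

from multiprocessing import Pool
from pathlib import Path

def evaluate_sample(sample_dir: Path) -> float:
    # hypothetical placeholder: run the evaluation for one prediction directory here
    return 0.0

if __name__ == "__main__":
    pred_dir = Path("./predictions")  # hypothetical output location
    samples = sorted(pred_dir.iterdir())[:100]  # evaluate a subset first
    with Pool(processes=8) as pool:
        scores = pool.map(evaluate_sample, samples)
    print(sum(scores) / max(len(scores), 1))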

High edit distance despite good visuals

Solutions:

  1. Check layer ordering (background should be first)
  2. Verify alpha quality: compute_alpha_iou(layer_pred, layer_gt) should be > 0.8 (a sketch of this check follows the list)
  3. Check for extra/missing layers
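
For item 2, the alpha-IoU check can be approximated as follows; this is a sketch, not the repository's own compute_alpha_iou, and it assumes both layers are RGBA PIL images of the same size:

import numpy as np
from PIL import Image

def alpha_iou(layer_pred: Image.Image, layer_gt: Image.Image, thresh: int = 128) -> float:
    # Binarize the alpha channels, then compute intersection over union
    a = np.asarray(layer_pred.convert("RGBA"))[..., 3] >= thresh
    b = np.asarray(layer_gt.convert("RGBA"))[..., 3] >= thresh
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0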

Export Issues

SVG file size too large

Problem: Generated SVG files are very large (100MB+)

Solutions:

# Use external image mode instead of base64 embedding
from layerd import LayerDPipeline

pipeline = LayerDPipeline()
result = pipeline(image)
result.save("output.svg", image_mode="external", image_dir="./images")

Additional tips:

  • Reduce layer count with higher matting threshold
  • Use fewer iterations: pipeline(image, max_iterations=2)
  • Consider PSD format for large designs

PSD not opening in Photoshop

Problem: Adobe Photoshop shows "Not a valid Photoshop document" error

Solutions:

# Try different compression method
result.save("output.psd", compression="rle")  # Default
# or
result.save("output.psd", compression="zip")

# Verify color mode compatibility
result.save("output.psd", color_depth=8)  # Try 8-bit instead of 16/32

Common causes:

  • Corrupted file during write (check disk space)
  • Very large files (>2GB) may have compatibility issues
  • Some Photoshop versions have stricter validation

Export fails with "No module named 'pytoshop'"

Problem: PSD export raises import error

Solution: Install LayerD with PSD support:

pip install "git+https://github.com/CyberAgentAILab/LayerD.git#egg=layerd[psd]"

Missing images in exported SVG

Problem: SVG shows empty boxes or missing images

Solutions:

# For external mode, verify image directory exists
result.save("output.svg", image_mode="external", image_dir="./images")

# Check that images/ directory was created
# Images should be in: ./images/element_0.png, ./images/element_1.png, etc.

# Or use base64 mode for self-contained SVG
result.save("output.svg", image_mode="base64")

Element classification is incorrect

Problem: Text detected as image, or vice versa

Solutions:

# Use custom labeler with adjusted threshold
from layerd import LayerDPipeline, EntropyLabeler

labeler = EntropyLabeler(entropy_threshold=4.0)  # Default: 5.0
pipeline = LayerDPipeline(labeler=labeler)
result = pipeline(image)

# Or use gradient-aware labeler
from layerd import GradientAwareLabeler
pipeline = LayerDPipeline(labeler=GradientAwareLabeler())

See Pipeline Guide - Element Classification for details.

General Issues

Import errors

ModuleNotFoundError: No module named 'layerd'

Solutions:

# Verify installation
pip list | grep layerd

# Reinstall
pip uninstall layerd && pip install git+https://github.com/CyberAgentAILab/LayerD.git

# Check environment
which python

# For development
cd LayerD && uv sync --all-extras

Type checking errors

LayerD requires strict type annotations:

# Bad
def process(image):
    ...

# Good
def process(image: Image.Image) -> list[Image.Image]:
    ...

See development.md for details.

Permission denied

Solutions:

# Check permissions
ls -la /path/to/file

# Make writable
chmod +w /path/to/output

# Don't run as root

Getting Help

If you can't resolve your issue:

  1. Check GitHub issues
  2. Create a new issue including: LayerD version, Python version, OS, full traceback, and a minimal example (the commands below collect most of this)
  3. Read the paper for methodology details
  4. Check related docs: Installation, Inference, Training, Development
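
A quick way to collect most of the details in item 2 before filing an issue (assumes a pip-based install; adapt for uv or conda environments):

# Environment details to paste into the issue
python --version
pip show layerd
python -c "import torch; print(torch.__version__, 'CUDA available:', torch.cuda.is_available())"
nvidia-smi  # GPU and driver info, if applicable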