
I have some remapped issues #3

@GrainSack

Description


Below is the terminal error I get when running:

python precompute_noises_and_conditionings.py \
    --config ./config/parameter_estimation.yaml \
    --inversion_subfolder noise \
    --token_subfolder tokens \
    --triplet_file triplets.csv \
    --data_path ./dataset/data/

Model loaded
/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torchvision/transforms/functional_pil.py:42: DeprecationWarning: FLIP_LEFT_RIGHT is deprecated and will be removed in Pillow 10 (2023-07-01). Use Transpose.FLIP_LEFT_RIGHT instead.
  return img.transpose(Image.FLIP_LEFT_RIGHT)
ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy.
Selected timesteps: tensor([4, 0, 5, 2, 3, 6, 7, 1])
  0%|                                                                                                                                                                                        | 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "estimate_CLIP_features.py", line 65, in <module>
    output = invertor.perform_cond_inversion_individual_timesteps(file_path, None, optimize_tokens=True)
  File "/hdd1/kss/home/DIA/ddim_invertor.py", line 275, in perform_cond_inversion_individual_timesteps
    noise_prediction = self.ddim_sampler.model.apply_model(noisy_samples, steps_in, cond_init.expand(self.config.conditioning_optimization.batch_size, -1 , -1))
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/models/diffusion/ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 732, in forward
    h = module(h, emb, context)
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 85, in forward
    x = layer(x, context)
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/attention.py", line 258, in forward
    x = block(x, context=context)
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/attention.py", line 209, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 127, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/attention.py", line 213, in _forward
    x = self.attn2(self.norm2(x), context=context) + x
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/hdd1/kss/home/DIA/stable-diffusion/ldm/modules/attention.py", line 180, in forward
    sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
  File "/hdd1/kss/home/miniconda3/envs/dia_env/lib/python3.8/site-packages/torch/functional.py", line 330, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [64, 4096, 40]->[64, 4096, 1, 40] [8, 77, 40]->[8, 1, 77, 40]
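If it helps, the failure can be reproduced in isolation with the shapes from the RuntimeError. My guess (not the repo's code) is that the query batch is 8 latents x 8 attention heads = 64, while the conditioning keys were only expanded to 1 x 8 heads = 8, so the batch dimensions cannot broadcast:

import torch
from torch import einsum

# Standalone reproduction using the shapes reported in the log.
# The latents/heads split in the comments is my assumption, not the repo's code.
q = torch.randn(64, 4096, 40)   # 8 latents * 8 heads, 4096 spatial tokens, 40 dims per head
k = torch.randn(8, 77, 40)      # 1 conditioning * 8 heads, 77 text tokens, 40 dims per head
sim = einsum('b i d, b j d -> b i j', q, k)
# -> RuntimeError: einsum(): operands do not broadcast with remapped shapes ...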

And at the end, the token inversion is not found, presumably because the crashed step above never produced it:

Traceback (most recent call last):
  File "estimate_input_noise.py", line 70, in <module>
    outputs = invertor.perform_inversion(file_name, cond = None, init_noise_init = None, loss_weights= {'latents': 1. , 'pixels':1.} )
  File "/hdd1/kss/home/DIA/ddim_invertor.py", line 93, in perform_inversion
    assert cond_out is not None, 'Token inversion was not found...'
AssertionError: Token inversion was not found...
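I assume estimate_input_noise.py looks up the optimized token embedding saved by the first stage, which was never written because of the crash above. A rough sanity check along these lines (the path layout is only my guess from the --data_path and --token_subfolder flags, not necessarily the repo's convention) could confirm that nothing was saved:

import os

# Hypothetical check: does the token subfolder contain any saved token inversions?
token_dir = os.path.join("./dataset/data/", "tokens")
files = os.listdir(token_dir) if os.path.isdir(token_dir) else []
print(f"{len(files)} token inversion file(s) found in {token_dir}")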

My torch version is the same as in the repo (1.11.0), and the CUDA version reported by nvidia-smi is 12.0.
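(As far as I know, nvidia-smi reports the driver's CUDA version rather than the toolkit torch was built against; the torch-side versions can be printed with something like:)

import torch

print(torch.__version__)          # expected: 1.11.0
print(torch.version.cuda)         # CUDA toolkit torch was compiled against
print(torch.cuda.is_available())  # whether the GPU is visible to torch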
