OK, great work: I'm able to create images very fast using the settings in the included workflow. However, after trying many permutations, I'm unable to get most of the sampler inputs to make any difference. To be specific:
The image-load error is below.
I'll try this again on a clean ComfyUI install without other custom nodes, just for grins, but that seems unlikely to change the result. Is anyone else seeing these sorts of issues?
The expanded size of the tensor (13) must match the existing size (16) at non-singleton dimension 0. Target sizes: [13]. Tensor sizes: [16]
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\execution.py", line 154, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\execution.py", line 84, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\execution.py", line 77, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI_StreamDiffusion\nodes.py", line 232, in sample
    output = model.sample(image).permute(0, 2, 3, 1)
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI_StreamDiffusion\streamdiffusion\wrapper.py", line 332, in sample
    image_tensor = self.stream.sample(image)
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI_StreamDiffusion\streamdiffusion\pipeline.py", line 540, in sample
    x_0_pred_out = self.predict_x0_batch(x_t_latent)
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI_StreamDiffusion\streamdiffusion\pipeline.py", line 443, in predict_x0_batch
    x_0_pred_batch, model_pred = self.unet_step(x_t_latent, t_list)
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI_StreamDiffusion\streamdiffusion\pipeline.py", line 356, in unet_step
    model_pred = self.unet(
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\hal\stable-diffusion\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 923, in forward
    timesteps = timesteps.expand(sample.shape[0])
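For what it's worth, the error itself can be reproduced in isolation. This sketch is not the node's actual code; it just shows why `Tensor.expand` fails at that last frame: a non-singleton dimension (here the batched timesteps, size 16) cannot be expanded to a different size (here `sample.shape[0]` = 13), which suggests the latent batch and the prepared timestep batch have gotten out of sync.

```python
import torch

# Hypothetical sizes taken from the error message above:
# 16 batched timesteps vs. a latent batch of 13.
timesteps = torch.zeros(16)   # stand-in for the prepared timestep batch
batch_size = 13               # stand-in for sample.shape[0]

try:
    # expand() only works if the existing size matches or is 1;
    # 16 is neither, so this raises.
    timesteps.expand(batch_size)
except RuntimeError as e:
    print(e)  # same message as reported above
```

So whichever sampler setting changes the effective batch on one side without the other (e.g. the number of denoising steps fed to the pipeline) is a likely place to look.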