
comfyui-if_ai_tools's Introduction

ComfyUI-IF_AI_tools

ComfyUI-IF_AI_tools is a set of custom nodes for ComfyUI that allows you to generate prompts using a local Large Language Model (LLM) via Ollama. This tool enables you to enhance your image generation workflow by leveraging the power of language models.

Features

- [NEW] Endpoints for Gemini, LlamaCpp, and Mistral

- [NEW] Omost_tool: the first Omost tool served via Ollama (ollama run impactframes/dolphin_llama3_omost). Running Omost via Ollama can be 2x to 3x faster than other Omost servings.

You will also need the ComfyUI Omost nodes: https://github.com/huchenlei/ComfyUI_omost?tab=readme-ov-file

https://github.com/huchenlei/ComfyUI_densediffusion

Llama3 and Phi3 IF_AI Prompt MKR models released:

ollama run impactframes/llama3_ifai_sd_prompt_mkr_q4km:latest

ollama run impactframes/ifai_promptmkr_dolphin_phi3:latest

https://huggingface.co/impactframes/llama3_if_ai_sdpromptmkr_q4km

https://huggingface.co/impactframes/ifai_promptmkr_dolphin_phi3_gguf

Prerequisites

  • Ollama - install Ollama. Visit ollama.com for more information.

  • Optionally Kobold.cpp, Oobabooga, Llama.cpp, or LM Studio. (Vision is not supported for Oobabooga and Kobold.)

  • For the optional APIs, set environment variables named "ANTHROPIC_API_KEY", "GEMINI_API_KEY", "OPENAI_API_KEY", "MISTRAL_API_KEY", and "GROQ_API_KEY" to your respective API keys. The tool only picks up keys stored under those exact names.

You can also use a .env file at custom_nodes/ComfyUI-IF_AI_tools/.env to define the variables with the same names as above, or use the external_api_key field on the node.
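As a sketch, such a .env file would look like the following. The variable names come from the list above; the values are placeholders you replace with your own keys, and you only need the entries for the services you actually use:

```ini
# custom_nodes/ComfyUI-IF_AI_tools/.env
ANTHROPIC_API_KEY=your-anthropic-key-here
OPENAI_API_KEY=your-openai-key-here
GEMINI_API_KEY=your-gemini-key-here
MISTRAL_API_KEY=your-mistral-key-here
GROQ_API_KEY=your-groq-key-here
```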

Installation

  1. Install Ollama by following the instructions on its GitHub page (a Windows installer is available). You can also install this node from the ComfyUI Manager.

  2. Open a terminal and type the following command to install the model:

       ollama run impactframes/llama3_ifai_sd_prompt_mkr_q4km:latest

  3. Move the IF_AI folder from ComfyUI-IF_AI_tools into your ComfyUI input folder, so it ends up at ComfyUI/input/IF_AI.

  4. Navigate to your ComfyUI custom_nodes folder, type CMD in the address bar to open a command prompt, and run the following command to clone the repository:

       git clone https://github.com/if-ai/ComfyUI-IF_AI_tools.git

  5. In the ComfyUI portable version, either double-click embedded_install.bat in the newly created custom_nodes\ComfyUI-IF_AI_tools folder, or type CMD in its address bar and run:

       H:\ComfyUI_windows_portable\python_embeded\python.exe -m pip install -r requirements.txt

     Replace H:\ with the drive letter where your ComfyUI_windows_portable directory lives.

     In a custom environment, activate the environment, move into the newly created ComfyUI-IF_AI_tools folder, and install the requirements:

       cd ComfyUI-IF_AI_tools
       python -m pip install -r requirements.txt

Usage

  1. Start ComfyUI.

  2. Load the custom workflow located in the custom_nodes\ComfyUI-IF_AI_tools\workflows folder.

  3. Run the queue to generate an image.

Recommended Models

Support

If you find this tool useful, please consider supporting my work by:

Related Tools

  • IF_prompt_MKR - a similar tool available for the Stable Diffusion WebUI

AIFuzz made a great video using Ollama and the IF_AI tools.

AIFuzz

Also Future Thinker @Benji. Thank you both for putting out these awesome videos.

Future Thinker @Benji

Example using normal Model

ancient Megastructure, small lone figure 'A dwarfed figure standing atop an ancient megastructure, worn stone towering overhead. Underneath the dim moonlight, intricate engravings adorn the crumbling walls. Overwhelmed by the sheer size and age of the structure, the small figure appears lost amidst the weathered stone behemoth. The background reveals a dark landscape, dotted with faint twinkles from other ancient structures, scattered across the horizon. The silent air is only filled with the soft echoes of distant whispers, carrying secrets of times long past. ethereal-fantasy-concept-art, magical-ambiance, magnificent, celestial, ethereal-lighting, painterly, epic, majestic, dreamy-atmosphere, otherworldly, mystic-elements, surreal, immersive-detail'


comfyui-if_ai_tools's People

Contributors

fsdymy1024, haohaocreates, if-ai


comfyui-if_ai_tools's Issues

no module found pathway.xpacks

C:\Users\Administrator>pip install pathway
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting pathway
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/ef/c7/bbba15b3afd8bbf37cbac9dd69461e72315aa01b937dfa85091b5f82b270/pathway-0.post1-py3-none-any.whl (2.8 kB)
Installing collected packages: pathway
Successfully installed pathway-0.post1

[notice] A new release of pip is available: 23.2.1 -> 24.0
[notice] To update, run: python.exe -m pip install --upgrade pip

C:\Users\Administrator>python
Python 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

>>> import pathway
This is not the real Pathway package.
Visit https://pathway.com/developers/ to get Pathway.
Already tried that? Visit https://pathway.com/troubleshooting/ to get help.
Note: your platform is Windows-10-10.0.19045-SP0, your Python is CPython 3.11.6.

>>> exit()

(https://pathway.com/developers/user-guide/development/troubleshooting/#windows-users)
โš ๏ธ Pathway is currently not supported on Windows. Windows users may want to use Windows Subsystem for Linux (WSL), docker, or a VM.
You can also try these steps in an online notebook environment like Colab.

ValueError: Invalid model selected: for engine ollama. Available models:

Error occurred when executing IF_ChatPrompt:

Invalid model selected: for engine ollama. Available models: []

File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\BaiduNetdiskDownload\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools\IFChatPromptNode.py", line 256, in describe_picture
raise ValueError(error_message)
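An "Available models: []" error like this usually means the node got an empty model list back from Ollama. Ollama's HTTP API exposes a GET /api/tags endpoint listing installed models; the sketch below (helper names are mine, and 127.0.0.1:11434 is Ollama's standard default address) can be used to check what the node would see:

```python
import json
from urllib.request import urlopen


def extract_model_names(payload: dict) -> list:
    """Pull the model names out of an Ollama /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]


def list_ollama_models(base_url: str = "http://127.0.0.1:11434") -> list:
    """Query the Ollama server; an empty list here is what produces
    the node's 'Available models: []' error."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return extract_model_names(json.load(resp))
```

If list_ollama_models() returns an empty list or raises a connection error, the problem is the Ollama server (not running, wrong address/port), not the node's model dropdown.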

good use case

Hi if-ai, do you have contact email to discuss a good use case? Thanks

Where to set environment variables?

I would like to know where to set the environment variables for "ANTHROPIC_API_KEY" & "OPENAI_API_KEY".

I am getting this at the end of the log when I start comfyui

Error: ANTHROPIC_API_KEY is required
Error: OPENAI_API_KEY is required
Error: ANTHROPIC_API_KEY is required
Error: OPENAI_API_KEY is required

Failed to install it (error when running a json workflow)

I am getting this error:

image

I installed the node through the node manager
I had ollama already installed
I installed 2 models: Llava and nous hermes

Yet for some reason I am getting these errors.
I copied the workflow from the AIFuzz video (from their drive URL for the text-to-image workflow).

Any help?

feature request "Omost"

The recently released "Omost" seems to have some similarities to the functionality of your node.
Are you interested in supporting this?

https://github.com/lllyasviel/Omost

I found a plugin for "Omost" but it runs very slowly.

https://github.com/huchenlei/ComfyUI_omost

If it is available in your plug-in, use Ollama to load the "Omost" model and integrate it into your node. There should be faster response times.

I hope you will support it. Because it works really well.

Meaning of "profile" and "temperature"

Hi IF-AI,
thanks for your nodes, they work great for me.
I am curious what the profile and temperature parameters do, can you please explain?
Thank you for your kindness!

can i use the old version ?

After updating (I use ComfyUI windows portable) I cannot use the new version no matter what I do. Is it possible to install and use the old version ?

Remove space at the start of response

You wouldn't know how to remove the space from in front of response that gets added by default would you? I'm trying to pass true and false to a boolean to use as an automated type of switch, but I need to remove the space from in front of the response output. It works without the space, but with the output automatically adding it, it messes it up lol.
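Until the node trims its own output, a small wrapper on the receiving side can normalize the response before the boolean comparison. This is a sketch, not part of the node's API; the function name is mine:

```python
def to_bool(response: str) -> bool:
    """Map an LLM response like ' True' or 'false.' onto a Python bool,
    tolerating the leading space and trailing punctuation the model adds."""
    cleaned = response.strip().strip(".").lower()
    if cleaned == "true":
        return True
    if cleaned == "false":
        return False
    raise ValueError(f"not a boolean response: {response!r}")
```

str.strip() with no arguments removes the leading space that was breaking the switch; lowering the case makes the comparison robust to "True"/"true" variation as well.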

Screenshot 2024-05-26 100040

The purple area is dimmed.

If_Ai
As you can see in the screenshot above, the purple area seems dimmed, so it's hard to read the text.
How do I change it?

IMPORT FAILS COMFYUI

import fails with this error

File "D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools\IFPromptMkrNode.py", line 5, in
import anthropic
ModuleNotFoundError: No module named 'anthropic'

Cannot import D:\ComfyUI_Training\ComfyUI_windows_portable_nvidia_cu121_or_cpu (4)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools module for custom nodes: No module named 'anthropic'

Different LLM Server

What if i wanted to use LM Studio? Which file would i have to add the configuration to? Also thanks for this. Great work.

Error occurred in one node after update

FETCH DATA from: G:\AI\ComfyUI_M\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[ERROR] An error occurred while retrieving information for the 'IF_ChatPrompt' node.
Traceback (most recent call last):
File "G:\AI\ComfyUI_M\ComfyUI\server.py", line 415, in get_object_info
out[x] = node_info(x)
^^^^^^^^^^^^
File "G:\AI\ComfyUI_M\ComfyUI\server.py", line 393, in node_info
info['input'] = obj_class.INPUT_TYPES()
^^^^^^^^^^^^^^^^^^^^^^^
File "G:\AI\ComfyUI_M\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools\IFChatPromptNode.py", line 78, in INPUT_TYPES
"external_api_key": ("STRING", {"default": "", "multiline": false}),
^^^^^
NameError: name 'false' is not defined

"external_api_key": ("STRING", {"default": "", "multiline": false}) -- the first letter of `false` is not capitalized; Python requires `False`.
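For reference, the difference is just Python's capitalized boolean literals versus JSON's lowercase ones; pasting a JSON-style `false` into Python source raises exactly this NameError:

```python
# Python boolean literals are capitalized; lowercase `false` (valid in
# JSON) is an undefined name in Python and raises NameError.
opts = {"default": "", "multiline": False}   # correct Python

try:
    eval('{"multiline": false}')             # JSON spelling inside Python
except NameError as e:
    print(e)                                 # name 'false' is not defined
```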

Fail to import after latest update

Hello, the latest upgrade gives me this error

..requirement already satisfied: mpmath>=0.19 in e:\comfyui\comfyui_windows_portable12_1\python_embeded\lib\site-packages (from sympy->torch==2.1.2+cu121->torchaudio->-r requirements.txt (line 1)) (1.3.0)
Collecting ruamel.yaml.clib>=0.2.7 (from ruamel.yaml>=0.17.28->hyperpyyaml->speechbrain<1.0->WhisperSpeech->-r requirements.txt (line 25))
  Using cached ruamel.yaml.clib-0.2.8-cp311-cp311-win_amd64.whl.metadata (2.3 kB)
Using cached av-12.0.0-cp311-cp311-win_amd64.whl (26.3 MB)
Using cached nltk-3.8.1-py3-none-any.whl (1.5 MB)
Using cached WhisperSpeech-0.8-py3-none-any.whl (62 kB)
Using cached speechbrain-0.5.16-py3-none-any.whl (630 kB)
Using cached fastcore-1.5.29-py3-none-any.whl (67 kB)
Using cached fastprogress-1.0.3-py3-none-any.whl (12 kB)
Using cached vocos-0.1.0-py3-none-any.whl (24 kB)
Using cached HyperPyYAML-1.2.2-py3-none-any.whl (16 kB)
Using cached ruamel.yaml-0.18.6-py3-none-any.whl (117 kB)
Using cached ruamel.yaml.clib-0.2.8-cp311-cp311-win_amd64.whl (118 kB)
Building wheels for collected packages: dlib
  Building wheel for dlib (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for dlib (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [6 lines of output]
      running bdist_wheel
      running build
      running build_ext

      ERROR: CMake must be installed to build dlib

      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for dlib
Failed to build dlib
ERROR: Could not build wheels for dlib, which is required to install pyproject.toml-based projects

  File "E:\COMFYUI\ComfyUI_windows_portable12_1\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools\__init__.py", line 10, in <module>
    from .IFDreamTalkNode import IFDreamTalk
  File "E:\COMFYUI\ComfyUI_windows_portable12_1\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools\IFDreamTalkNode.py", line 20, in <module>
    from .dreamtalk.core.utils import (
  File "E:\COMFYUI\ComfyUI_windows_portable12_1\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools\dreamtalk\core\utils.py", line 14, in <module>
    import dlib
ModuleNotFoundError: No module named 'dlib'

Cannot import E:\COMFYUI\ComfyUI_windows_portable12_1\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools module for custom nodes: No module named 'dlib'

I guess it has a problem building dlib; I hope it can be fixed.

LLM not outputting text most of the time with KoboldCpp or Ollama back end.

I am having trouble getting usable text, if any, out of the response pin; the question pin comes out as expected though. The LLM back end receives the question, but nothing comes out. I am using the IF Chat Prompt node. My workflow is just an IF Chat Prompt connected to a Show Text node on each of the 4 pins. I am on a Windows OS computer.

I have tried changing the stop, using <|end_of_text|>, < /s>, . , and <|eot_id|>. I also changed the assistant to MKR, and the model to llama3_if_ai_sdpromptmkr_q4km. I have tried a few other models also, mostly llama 3 or 2 models. Besides that I left everything default besides the port numbers. Is there anyway to just get the raw text out? I tried flipping the text cleanup, but nothing happened. Also changed assistant to none. I have nothing going into the image or context pin, just a command in the prompt box. Thanks for your help.

Below is the output from Koboldcpp back end terminal, first one being MKR with the llama 3_if_ai model, second being none as the assistant and a llama 3 model. I had similar results with ollama, but it wasn't showing anything in the terminal.

Input: {"prompt": "System: {'instruction': ' You are a prompt maker. Create a high-quality, coherent, and concise prompts based on the given subject, following the provided guidelines and format.', 'rules': ['Break keywords by commas', 'Focus solely on visual elements; avoid art commentaries or intentions', 'Construct prompt with subject, scene, and background components', 'Limit to 7 keywords per component', 'Include all subject keywords verbatim as main focus', 'Be varied and creative in descriptions', 'Keep prompt under 100 words', 'Do not enumerate or enunciate components', 'Do not include additional information beyond prompt'], 'examples': [{'input': 'Demon Hunter, Cyber City', 'output': 'A Demon Hunter, standing, lone figure, glowing eyes, deep purple light, cybernetic exoskeleton, sleek, metallic, glowing blue accents, energy weapons, fighting Demon, grotesque creature, twisted metal, glowing red eyes, sharp claws, in Cyber City, towering structures, shrouded haze, shimmering energy'}]}\nUser: I want you to describe a photo of a forest in detail.\nUser: I want you to describe a photo of a forest in detail.", "max_length": 128, "temperature": 0.7, "top_k": 40, "top_p": 0.2, "rep_pen": 1.1, "stop_sequence": ["\n\n\n\n\n", "<|end_of_text|>"]}

Processing Prompt (29 / 29 tokens)
Generating (128 / 128 tokens)
CtxLimit: 394/8192, Process:0.00s (0.1ms/T = 7250.00T/s), Generate:1.88s (14.7ms/T = 68.01T/s), Total:1.89s (67.87T/s)
Output:

System: Serene, mystical forest landscape, dense foliage, towering trees, vibrant green leaves, soft sunlight filtering through canopy above, dappled shadows beneath, moss-covered rocks, hidden streams, misty veil rising from valley floor, ethereal atmosphere, dreamlike quality, captivating composition, rich colors, depth of field, clarity, vividness, serenity, enchantment.

Break keywords by commas: serene, mystical, forest, landscape, dense, foliage, towering, trees, vibrant, green, leaves, soft, sunlight, filtering, canopy, above, dappled, shadows, beneath, moss-covered, rocks,

Input: {"prompt": "System: \nUser: I want you to describe a photo of a forest in detail.\nUser: I want you to describe a photo of a forest in detail.", "max_length": 8000, "temperature": 0.7, "top_k": 40, "top_p": 0.2, "rep_pen": 1.1, "stop_sequence": ["\n\n\n\n\n", "<|end_of_text|>"]}

Processing Prompt (31 / 31 tokens)
Generating (228 / 8000 tokens)
(EOS token triggered!)
(Special Stop Token Triggered! ID:128001)
CtxLimit: 262/8192, Process:0.00s (0.1ms/T = 10333.33T/s), Generate:3.92s (0.5ms/T = 2041.86T/s), Total:3.92s (2040.30T/s)
Output:

Response:
The photo depicts a dense and lush forest, with tall trees that stretch up towards the sky, their leaves rustling gently in the breeze. The sunlight filters through the canopy above, casting dappled patterns of light and shadow on the forest floor below. Various shades of green dominate the scene, from the deep emerald of the tree trunks to the vibrant foliage of the branches and leaves. In the distance, one can see the silhouette of a winding river, its surface reflecting the colors of the surrounding landscape.

In addition to the main focus on the trees, there are also smaller details scattered throughout the image. A carpet of moss covers much of the forest floor, providing a soft and damp contrast to the rough bark of the tree trunks. Some fallen logs lie strewn about, adding texture and depth to the overall composition. Birds and insects flit between the branches, further bringing life to this already thriving ecosystem.

Overall, the photo captures the beauty and tranquility of a forest at its peak, showcasing both the grandeur of nature as well as the intricate details that make it so unique.

Last June 10th update broke it for me [FIXED]

Hi there, dunno if the requirements have changed, I've been working perfectly fine with it all week but since I updated Comfy this morning The node install is in error

Screenshot 2024-06-10 133358
Screenshot 2024-06-10 133245

tried uninstall, update requirements, install from url, update ollama, nothing works.

Thanks in advance

Phi-3 models

Dear, thank you for your contribution to community open source.
I wonder if you are interested in training a model for "Phi-3-mini-4k-instruct-q4", which is only a little over 2GB in size. This will save more resources and result in faster response. Very suitable for efficient writing of prompts.
https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf
I am currently using the version of phi3, which has no trained model and the performance is not good.
https://ollama.com/library/phi3
Really looking forward to your new model.

---- The above is from Google Translate, please forgive me if it offends you.

ImpactFrames Can't find ollama models at "selected_model"

I am looking for information, but there is none about this...

Even if I take a ready, working workflow with exactly the same setup: Windows Ollama (WSL or native), even the same local addresses, ports, and models...

I tried different nodes that work with my Ollama, and they all work without a problem.

I tried the 3 of the ImpactFrames LLM related, but none of them seem to get the Ollama models - the field is not active:

image

image

image

Any Ideas how to troubleshoot this?

Comfyui API cannot access local variable

I use a simple workflow from comfyui to generate an image. I tested it with the example proposed by Comfy: websockets_api_example.py and it worked fine. Then I added the ComfyUI-IF_AI_tools technology and there's a bug. The images are generated correctly, but the API get_image() function causes the code to bug. Here's the error:

Traceback (most recent call last):
File "d:\ComfyUi\ComfyUI_windows_portable\ComfyUI\script_examples\websockets_api_example.py", line 108, in
generated("n")
File "d:\ComfyUi\ComfyUI_windows_portable\ComfyUI\script_examples\websockets_api_example.py", line 97, in generated
images = get_images(ws, comfy_json)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "d:\ComfyUi\ComfyUI_windows_portable\ComfyUI\script_examples\websockets_api_example.py", line 79, in get_images
output_images[node_id] = images_output
^^^^^^^^^^^^^
UnboundLocalError: cannot access local variable 'images_output' where it is not associated with a value
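This UnboundLocalError happens when `images_output` is only assigned inside a conditional branch that never runs for the IF_AI node's messages. A common fix pattern is to bind the name unconditionally before the inner loop; the sketch below illustrates the pattern only (it is not the actual websockets_api_example.py code, and the message shape is illustrative):

```python
def collect_node_images(history: dict) -> dict:
    """Sketch of the fix: initialize images_output before the loop so the
    name is always bound, even for nodes that produce no image output."""
    output_images = {}
    for node_id, messages in history.items():
        images_output = []                     # bound unconditionally
        for msg in messages:
            if msg.get("type") == "image":     # only image messages collected
                images_output.append(msg["data"])
        output_images[node_id] = images_output # safe: always defined
    return output_images
```

Nodes that emit no images then yield an empty list instead of crashing the script.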

input prompt

Hi,

Great nodes! I loved it on A1111, even more so on Comfy. One question though:

Can we have an input prompt that can be passed through to the IF Prompt to Prompt node? I would like to first build my own prompt using wildcards and feed it into this node to be send to the LLM.

Would this be possible?

Thanks,
Mathieu.

Where to copy the model?

I downloaded Hermes-2-Pro-Mistral-7B.Q3_K_L, but it is not clear where to copy this file so it will be detected by the IF Prompt to Prompt💬 node.

Could you help on this?

Custom presets

I have a small request.
Every time the extension is updated, my custom presets are overwritten. I was wondering if you could make a separate folder for custom presets, with the ones you add being the "built-in" ones.
Thank you.

API key??

When I load your workflow it keeps complaining about an API key. I gave it mine for Llama3, but it still complains. Do I need to provide one when running my Llama 3 locally? The Prompt to Prompt node does not seem to have that issue; it also doesn't ask for an API key.

Additional negative prompt not recognize.

I tried to add new negative prompts in the neg_prompts.json file, but the response is always: "Message will appear here."

I added this entry after "None": "None",:
"Mine": "text, logo",

About breaking changes

Hello! Every commit you make breaks working workflows. I use ComfyUI's API a lot, and it requires that all REQUIRED fields are sent with the POST JSON. Please, when you add new fields like "top_p" and "top_k", make them OPTIONAL with Ollama's recommended default value. Thanks!
image

Can't find the directory?

D:\AI\ComfyUI-aki-v1\custom_nodes\ComfyUI-IF_AI_tools\dreamtalk\configs\default.py
If in such a location, the plugin will not run and cannot find the directory, but I want to use my own directory name

The location causing the error:

def find_comfy_dir(current_path):
    """Recursively search for a directory named 'ComfyUI' starting from the current path."""
    if os.path.basename(current_path) == 'ComfyUI':
        return current_path
    else:
        parent_path = os.path.dirname(current_path)
        if parent_path == current_path:  # if reached the root of the directory tree
            raise Exception("ComfyUI directory not found")
        return find_comfy_dir(parent_path)
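The lookup fails whenever the install folder is not literally named "ComfyUI" (as in the ComfyUI-aki-v1 reports below). One way to tolerate renamed installs is to make the directory name a parameter; this is a drop-in sketch of that idea, not the plugin's actual code (the dir_name parameter is my addition):

```python
import os


def find_comfy_dir(current_path, dir_name="ComfyUI"):
    """Walk upward from current_path until a directory with the given
    name is found. dir_name is configurable so renamed installs
    (e.g. 'ComfyUI-aki-v1') can still be located."""
    while True:
        if os.path.basename(current_path) == dir_name:
            return current_path
        parent = os.path.dirname(current_path)
        if parent == current_path:  # reached the filesystem root
            raise FileNotFoundError(f"{dir_name} directory not found")
        current_path = parent
```

An iterative loop also avoids the repeated recursive frames visible in the tracebacks below when the directory is missing.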

new version load fail

Traceback (most recent call last):
File "D:\ai\ComfyUI-aki-v1.1\nodes.py", line 1864, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in call_with_frames_removed
File "D:\ai\ComfyUI-aki-v1.1\custom_nodes\ComfyUI-IF_AI_tools\__init__.py", line 10, in
from .IFDreamTalkNode import IFDreamTalk
File "D:\ai\ComfyUI-aki-v1.1\custom_nodes\ComfyUI-IF_AI_tools\IFDreamTalkNode.py", line 20, in
from .dreamtalk.core.utils import (
File "D:\ai\ComfyUI-aki-v1.1\custom_nodes\ComfyUI-IF_AI_tools\dreamtalk\core\utils.py", line 13, in
from configs.default import get_cfg_defaults
File "D:\ai\ComfyUI-aki-v1.1\custom_nodes\ComfyUI-IF_AI_tools\dreamtalk\configs\default.py", line 15, in
comfy_dir = find_comfy_dir(script_path)
File "D:\ai\ComfyUI-aki-v1.1\custom_nodes\ComfyUI-IF_AI_tools\dreamtalk\configs\default.py", line 12, in find_comfy_dir
return find_comfy_dir(parent_path)
File "D:\ai\ComfyUI-aki-v1.1\custom_nodes\ComfyUI-IF_AI_tools\dreamtalk\configs\default.py", line 12, in find_comfy_dir
return find_comfy_dir(parent_path)
File "D:\ai\ComfyUI-aki-v1.1\custom_nodes\ComfyUI-IF_AI_tools\dreamtalk\configs\default.py", line 12, in find_comfy_dir
return find_comfy_dir(parent_path)
[Previous line repeated 4 more times]
File "D:\ai\ComfyUI-aki-v1.1\custom_nodes\ComfyUI-IF_AI_tools\dreamtalk\configs\default.py", line 11, in find_comfy_dir
raise Exception("ComfyUI directory not found")
Exception: ComfyUI directory not found

how to re-generate prompt with the same input text

After generating once, I want the model to regenerate the prompt without altering the input. However, currently, if the input text remains unchanged, it won't generate a new prompt but instead returns the previously generated one.

I wonder if there's any way to achieve that every time I click on "generate," it regenerates a new Prompt?
Thanks!

no llm models showing up in Prompt to Prompt Node

Hello, I have no model showing on my node, despite having installed Ollama, the prompt generator, and IF_AI tools via the node manager. I also installed Llava and Nous Hermes. What's the problem? Why would my "IF Prompt to Prompt" node have no value in the "base_ip" parameter?

oobabooga

Would it be possible to add oobabooga as a backend?

ComfyUI-IF_AI_tools Nodes Fail to Load

Updated Comfyui and since then the ComfyUI-IF_AI_tools Nodes Fail to Load.
Have tried reinstalling, uninstalling etc no success.
It was working well till i run the update.

ComfyUI-IF_AI_tools: last update: 2024-04-13

ERROR Message:

ComfyUI-Manager: EXECUTE => ['/home/kad/comfy/comfy_env/bin/python3', '-m', 'pip', 'install', 'dlib']

Collecting dlib
Using cached dlib-19.24.4.tar.gz (3.3 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Building wheels for collected packages: dlib
Building wheel for dlib (pyproject.toml): started
Building wheel for dlib (pyproject.toml): finished with status 'error'
[!] error: subprocess-exited-with-error
[!]
[!] × Building wheel for dlib (pyproject.toml) did not run successfully.
[!] │ exit code: 1
[!] ╰─> [10 lines of output]
[!] running bdist_wheel
[!] running build
[!] running build_ext
[!] Traceback (most recent call last):
[!] File "/home/kad/comfy/comfy_env/bin/cmake", line 5, in
[!] from cmake import cmake
[!] ModuleNotFoundError: No module named 'cmake'
[!]
[!] ERROR: CMake must be installed to build dlib
[!]
[!] [end of output]
[!]
[!] note: This error originates from a subprocess, and is likely not a problem with pip.
[!] ERROR: Failed building wheel for dlib
[!] Failed to build dlib
ERROR: Could not build wheels for dlib, which is required to install pyproject.toml-based projects
install script failed: https://github.com/if-ai/ComfyUI-IF_AI_tools

...

[!] ERROR: Invalid requirement: '#sudo apt-get install libsox-fmt-all'
install script failed: https://github.com/if-ai/ComfyUI-IF_AI_tools

latest version is very slow after first run?

EDIT:
Sorry, I tried something else and found the "problem" behind the slow speed. "Keep alive" is off by default; after turning it on, the speed is back to normal, as it was in the previous version. I guess in the previous version it was on by default?

But as you can see in the log, the first run was fast (16 secs, including the time to load the model). Subsequent runs, even with model loading and unloading (because keep alive is off), took around 100 secs.


Hello. I just upgraded ComfyUI, ComfyUI-IF_AI_tools, and Ollama.
When I tried it, it ran normally on the first execution. On subsequent executions, without changing the image, it gets extremely slow.

I tried to run llava directly on ollama (not comfyui) , the speed is normal (very quick).

The node I use are IF Chat Prompt, and IF Image Prompt.
All settings are default except the model (llava 7b) and profile (none and IF_PromptMKR_IMG)

Requested to load SD1ClipModel
Loading 1 new model
Requested to load BaseModel
Loading 1 new model
100%|██████████| 8/8 [00:01<00:00,  4.49it/s]
Global Step: 840001
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
Leftover VAE keys ['model_ema.decay', 'model_ema.num_updates']
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 16.24 seconds
got prompt
[rgthree] Using rgthree's optimized recursive execution.
100%|██████████| 8/8 [00:01<00:00,  5.51it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 138.04 seconds
got prompt
[rgthree] Using rgthree's optimized recursive execution.
100%|██████████| 8/8 [00:01<00:00,  5.57it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 119.72 seconds
got prompt
[rgthree] Using rgthree's optimized recursive execution.
100%|██████████| 8/8 [00:01<00:00,  5.62it/s]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 133.17 seconds

Also, is the node supposed to keep executing even when nothing is connected to it?


Thank you :)

Image to Prompt Issue with OpenAI and Claude models

Hi,

I'm trying to use the IF_ImageToPrompt node to analyze images, and both the OpenAI GPT-4 Vision and Claude 3 options are not generating any prompts.

I have exported the api keys properly and the IF_PromptToPrompt is working fine, so the api key is good.

I've also tested with the comfyui_llmvision module, and claude3 and gpt4vision are working with that also.

The IF_ImageToPrompt node just displays the characters 'E', 'x', and 'c' on the Question, Response, and Negative outputs respectively. Here's a screenshot of the output:

Screen Shot 2024-04-13 at 18 59 58

I get the error Exception occurred: 'IFImagePrompt' object has no attribute 'openai_api_key' on the backend

Screen Shot 2024-04-13 at 18 53 03

please assist.

Image Input

Could you possibly have an alt version of the IF Image to Prompt node that has an input for an image or image batch? Currently, if you convert the 'image' widget to an input, it doesn't accept images to the input as it is combo type.

I tried to do this myself, but always unsuccessfully as I don't know what I'm doing. Initial error was that pic had 4 dimensions, but should be 2/3 dimensional, so I used squeeze to remove what I think was the batch dimension. Tried to manipulate that a few different ways before doing b64encode, but I always got "Failed to fetch response, status code: 400" which also prints the response "Failed to fetch response from Ollama."
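The dimension error described above matches ComfyUI's convention of passing images as float tensors in [0, 1] shaped (batch, height, width, channels), while most encoders expect a single (H, W, C) uint8 frame. This is a hedged sketch of just that conversion step (the helper name is mine, and it stops before PNG/base64 encoding since that part depends on the imaging library used):

```python
import numpy as np


def comfy_image_to_uint8(image):
    """ComfyUI typically passes images as float arrays in [0, 1] shaped
    (B, H, W, C). Drop the batch dimension (take the first frame) and
    rescale to uint8 before PNG-encoding and base64 for a vision API."""
    arr = np.asarray(image, dtype=np.float32)
    if arr.ndim == 4:              # batched input: keep only the first frame
        arr = arr[0]
    return np.clip(arr * 255.0, 0.0, 255.0).astype(np.uint8)
```

The resulting (H, W, C) uint8 array can then be handed to an image library for PNG encoding and base64.b64encode, which is the shape the 400 response above suggests the back end was expecting.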

An updated node, or a suggestion to help me figure the rest out myself, would be greatly appreciated.
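
The conversion attempted above can be sketched roughly as follows, assuming ComfyUI's usual `(batch, height, width, channels)` float tensor layout with values in `[0, 1]` and Ollama's expectation of base64-encoded image bytes (the helper name is hypothetical):

```python
import base64
import io

import numpy as np
from PIL import Image

def tensor_to_base64(image_tensor) -> str:
    """Convert one ComfyUI-style image tensor to a base64 PNG string."""
    array = np.asarray(image_tensor)
    if array.ndim == 4:
        array = array[0]  # drop the batch dimension (the "squeeze" step)
    # Scale [0, 1] floats to 8-bit pixels before handing off to PIL.
    array = (array * 255.0).clip(0, 255).astype(np.uint8)
    buffer = io.BytesIO()
    Image.fromarray(array).save(buffer, format="PNG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")
```

A 400 from Ollama often means the image string was encoded from the raw float array rather than from actual PNG/JPEG bytes, so encoding through PIL first is the step most likely to matter here.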

Load Image Node not working

When trying to get a prompt from the Load Image node with llama3 ifai sd prompt mkr via IFPromptMKR IMG, I get random prompts.

Fixed seeds don't work?

When the seed is fixed, the node still generates new content. I don't know whether I'm using it wrong or it was designed to do so.
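
If this is with the Ollama engine, a fixed seed is honored (as far as I know) only when it is forwarded inside the request's `options` object; a minimal sketch of what the node would need to send (helper name hypothetical):

```python
def deterministic_options(seed: int) -> dict:
    """Build Ollama request options for reproducible generations.

    A fixed seed on the node has no effect unless it reaches the model
    via "options"; temperature 0 further reduces run-to-run variation.
    """
    return {"options": {"seed": seed, "temperature": 0.0}}
```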

ValueError: Invalid model selected: llama3:latest for engine ollama. Available models: []

Even though I selected the llama3 model, I still get this error. Or do only some specific models work in this case?

    Traceback (most recent call last):
      File "D:\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
      File "D:\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
      File "D:\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
      File "D:\ComfyUI\custom_nodes\ComfyUI-IF_AI_tools\IFChatPromptNode.py", line 290, in describe_picture
        raise ValueError(error_message)
    ValueError: Invalid model selected: llama3:latest for engine ollama. Available models: []
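
`Available models: []` usually means the model list itself came back empty, which points at the Ollama server being unreachable rather than at the model you picked. A hedged sketch of how that list is typically fetched, assuming Ollama's documented `/api/tags` endpoint (the helper name and the swallow-errors behavior are assumptions):

```python
import json
import urllib.request

def list_ollama_models(base_url: str = "http://127.0.0.1:11434") -> list:
    """Return the names of installed Ollama models, or [] if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [model["name"] for model in data.get("models", [])]
    except OSError:
        # Connection refused / timeout: the caller sees an empty list,
        # which surfaces as "Available models: []" in the node's error.
        return []
```

If the list is empty, check that `ollama serve` is running and that the node is pointed at the right host and port.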

Disable the `Error: <...> is required` log message

Is it possible to disable the `Error: ... is required` log message when Ollama is used?

I see the following frequently in my console log.

    Error: ANTHROPIC_API_KEY is required
    Error: OPENAI_API_KEY is required
    Error: ANTHROPIC_API_KEY is required
    Error: OPENAI_API_KEY is required
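
One possible shape for the requested behavior, sketched under the assumption that only the remote engines need keys (names and mapping are hypothetical, not the node's actual code):

```python
import os
from typing import Optional

# Engines that require an API key; local engines (ollama, llamacpp, ...)
# are deliberately absent so no warning is printed for them.
REQUIRED_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def check_api_key(engine: str) -> Optional[str]:
    """Warn about a missing key only if the selected engine needs one."""
    var = REQUIRED_KEYS.get(engine)
    if var is None:
        return None  # local engine: stay quiet
    key = os.environ.get(var)
    if not key:
        print(f"Error: {var} is required")
    return key
```

Gating the check on the currently selected engine, rather than checking every provider on every run, would silence these lines for pure-Ollama setups.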

Not Loading Ollama Models

For some reason, the node doesn't load the Ollama models when Ollama is on a different server than the one ComfyUI is open on. I'm guessing it's querying localhost, because when I force the model names using a "String List to Combo" node, it works perfectly with the IP address I put into the node.

IF chat prompt doesn't get the image

I load an image into the IF chat prompt image input and ask it to describe the picture, but the response is not correct; it seems that the node doesn't get the image.
(WeChat screenshot attached, 2024-04-24)

Image To Prompt issue

Hi, the Image to Prompt node doesn't work correctly for generating images from the output prompt; it loops without outputting anything.
I use your workflow, Ollama version 1.30, Win 10, ComfyUI: 209296b4c7, Manager: V2.11.
You mention in the note that "Ollama has a bug on the latest version"; I don't know if you mean a bug in outputting the prompt (which works fine) or in generating images from the prompt.

(screenshots attached) Also, what are the API key errors in the console? Thank you.

Dear if-ai, could you please provide a larger language model?

Dear if-ai,

I hope you don't mind the intrusion. Could you possibly provide a larger version of the if-ai language model? For instance, a quantized version of Llama3 70B. My GPU has 24GB of memory, and the current if-ai model you offer on Hugging Face is too small for my needs.

I would be extremely grateful!

Ollama keep_alive

By default, Ollama keeps the last loaded model in VRAM for 5 minutes. Could you please add keep_alive: 0 to flush it right after generation and free up the VRAM for Comfy? I've tried to add it manually, but it only works on the Image to Prompt node; Prompt to Prompt fails for some reason. here
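
The requested change can be sketched as follows, assuming Ollama's documented `keep_alive` field on `/api/generate` (the function names and surrounding structure are hypothetical):

```python
import json
import urllib.request

def build_generate_payload(prompt: str, model: str) -> dict:
    """Build an Ollama /api/generate request body."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "keep_alive": 0,  # unload the model right away, freeing VRAM for Comfy
    }

def generate(prompt: str, model: str,
             base_url: str = "http://127.0.0.1:11434") -> str:
    """Send one generation request and return the response text."""
    request = urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(build_generate_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=300) as response:
        return json.load(response).get("response", "")
```

Since `keep_alive` rides along in the JSON body, adding it to one request path (image to prompt) and not the other (prompt to prompt) would explain why the manual patch only worked on one node.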
