freddyaboulton / gradio-tools
License: MIT License
I am getting the error "The current space is in the invalid state: RUNTIME_ERROR. Please contact the owner to fix this." when running this code:
from gradio_tools import (StableDiffusionTool, ImageCaptioningTool,
                          StableDiffusionPromptGeneratorTool, TextToVideoTool)
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")
tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
         StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]
agent = initialize_agent(tools, llm, memory=memory,
                         agent="conversational-react-description", verbose=True)
output = agent.run(input=("Please create a photo of a dog riding a skateboard "
                          "but improve my prompt prior to using an image generator. "
                          "Please caption the generated image and create a video for it "
                          "using the improved prompt."))
Could you add documentation on how to configure/use your own Stable Diffusion models? I got timeouts when trying to connect to the models that the Stable Diffusion tool relies on.
Thanks for this project btw! Looks really cool and fits well with something I want to prototype
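Based on the `__init__` signatures visible in the tracebacks in this thread (`GradioTool.__init__(self, name, description, src, hf_token, duplicate)`), a minimal sketch of pointing a tool at your own Space might look like the following. The Space id, the helper name, and the deferred import are my illustrative assumptions, not part of the library's documented API:

```python
def make_custom_sd_tool(space_id, token=None):
    """Build a StableDiffusionTool backed by your own Space.

    space_id: a Space id such as "your-username/stable-diffusion" (hypothetical)
    token:    a Hugging Face token, needed if the Space is private
    """
    # Import deferred so this sketch can be loaded without gradio-tools installed.
    from gradio_tools import StableDiffusionTool

    # duplicate=False uses the given Space directly instead of duplicating it
    # under your account.
    return StableDiffusionTool(src=space_id, hf_token=token, duplicate=False)
```

Constructing the tool opens a `gradio_client.Client` against that Space, so this only works if the Space is running.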
Looks interesting! There will be a gradio demo upon release: https://github.com/deep-floyd/IF
To avoid merge conflicts, etc., if multiple people are contributing tools at the same time.
Some tools are named based on generic data types ("ImageToMusicTool" and "ImageCaptioningTool"), while others include the name of the specific model ("WhisperTool" and "StableDiffusionPromptGeneratorTool").
We should have consistent terminology for naming tools so that contributors also know how to name their tools. My preference would be to include both the model and the specific function that the tool performs, like "StableDiffusionPromptGeneratorTool". If we go with this, I'd rename "WhisperTool" to "WhisperSpeechTranscriptionTool", etc.
It's not clear to me what's happening in the GIF. A lot of it is just printing the status message. Can we reduce that? Also, a caption that provides some context may be helpful.
Thoughts on having a gradio-tools org on Hugging Face to host all of the built-in demos? I think it would be helpful for discoverability, and so that we can easily maintain these Spaces (make sure they're not down, etc.).
I think it would be helpful if we can update this section and put it in a contributing.md to encourage people to add more tools to the library
I ran the code in the gradio-tools repo and it resulted in the error below.
from gradio_tools import (StableDiffusionTool, ImageCaptioningTool,
                          StableDiffusionPromptGeneratorTool, TextToVideoTool)
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from dotenv import load_dotenv

load_dotenv()
llm = OpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")
tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
         StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]
agent = initialize_agent(tools, llm, memory=memory,
                         agent="conversational-react-description", verbose=True)
output = agent.run(input=("Please create a photo of a dog riding a skateboard "
                          "but improve my prompt prior to using an image generator. "
                          "Please caption the generated image and create a video for it "
                          "using the improved prompt."))
Error Traceback
      9 llm = OpenAI(temperature=0)
     10 memory = ConversationBufferMemory(memory_key="chat_history")
---> 11 tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
     12          StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]

File c:\Users\Glodaris\anaconda3\envs\detectron2\lib\site-packages\gradio_tools\tools\image_captioning.py:28, in ImageCaptioningTool.__init__(self, name, description, src, hf_token, duplicate)
     16 def __init__(
     17     self,
     18     name="ImageCaptioner",
    (...)
     26     duplicate=True,
     27 ) -> None:
---> 28     super().__init__(name, description, src, hf_token, duplicate)

File c:\Users\Glodaris\anaconda3\envs\detectron2\lib\site-packages\gradio_tools\tools\gradio_tool.py:36, in GradioTool.__init__(self, name, description, src, hf_token, duplicate)
     34 else:
     35     self.src = src
---> 36 self.client = grc.Client(self.src, hf_token=hf_token)
     37 self.description += (
     38     "If the output is the string QUEUE_FULL you "
     39     "should try this action again and never stop until you "
     40     "don't see this message."
     41 )
     42 self._block = None

File c:\Users\Glodaris\anaconda3\envs\detectron2\lib\site-packages\gradio_client\client.py:118, in Client.__init__(self, src, hf_token, max_workers, serialize, verbose)
    116 self.upload_url = urllib.parse.urljoin(self.src, utils.UPLOAD_URL)
    117 self.reset_url = urllib.parse.urljoin(self.src, utils.RESET_URL)
--> 118 self.config = self._get_config()
    119 self.session_hash = str(uuid.uuid4())
    121 self.endpoints = [
    122     Endpoint(self, fn_index, dependency)
    123     for fn_index, dependency in enumerate(self.config["dependencies"])
    124 ]

File c:\Users\Glodaris\anaconda3\envs\detectron2\lib\site-packages\gradio_client\client.py:575, in Client._get_config(self)
    573     config = json.loads(result.group(1))  # type: ignore
    574 except AttributeError as ae:
--> 575     raise ValueError(
    576         f"Could not get Gradio config from: {self.src}"
    577     ) from ae
    578 if "allow_flagging" in config:
    579     raise ValueError(
    580         "Gradio 2.x is not supported by this client. Please upgrade your Gradio app to Gradio 3.x or higher."
    581     )

ValueError: Could not get Gradio config from: https://gradio-client-demos-blip-2.hf.space/
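This error usually means the backing Space is sleeping, rebuilding, or stuck in a runtime error. When the Space is merely waking up, wrapping tool construction in a small retry loop can help. This is a generic standard-library sketch, not part of gradio-tools:

```python
import time

def retry(fn, attempts=3, delay=2.0, exc=(ValueError,)):
    """Call fn(), retrying on the given exception types with a fixed delay."""
    for attempt in range(attempts):
        try:
            return fn()
        except exc:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(delay)

# Hypothetical usage: retry(lambda: StableDiffusionTool(), attempts=3)
```

If the Space is in a persistent RUNTIME_ERROR state, retrying won't help; only the Space owner can fix it.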
ModuleNotFoundError Traceback (most recent call last)
Cell In[3], line 1
----> 1 from gradio_tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,
      2                           TextToVideoTool)
      4 from langchain.agents import initialize_agent
      5 from langchain.llms import OpenAI

File ~/Documents/LLM4Rec/gradio-tools/gradio_tools/__init__.py:1
----> 1 from gradio_tools.tools import (ClipInterrogatorTool,
      2     DocQueryDocumentAnsweringTool, GradioTool,
      3     ImageCaptioningTool, ImageToMusicTool,
      4     StableDiffusionPromptGeneratorTool,
      5     StableDiffusionTool, TextToVideoTool,
      6     WhisperAudioTranscriptionTool)
      8 __all__ = [
      9     "GradioTool",
     10     "StableDiffusionTool",
    (...)
     17     "DocQueryDocumentAnsweringTool",
     18 ]

File ~/Documents/LLM4Rec/gradio-tools/gradio_tools/tools/__init__.py:1
----> 1 from gradio_tools.tools.clip_interrogator import ClipInterrogatorTool
      2 from gradio_tools.tools.document_qa import DocQueryDocumentAnsweringTool
      3 from gradio_tools.tools.gradio_tool import GradioTool

File ~/Documents/LLM4Rec/gradio-tools/gradio_tools/tools/clip_interrogator.py:3
      1 from typing import TYPE_CHECKING
----> 3 from gradio_client.client import Job
      5 from gradio_tools.tools.gradio_tool import GradioTool
      7 if TYPE_CHECKING:

ModuleNotFoundError: No module named 'gradio_client'
Hi, thanks for sharing this. Great work. It might help to update the links in the README, since some of the old links no longer work: the new Space is updated in the code but not in the README tutorial.
e.g.
StableDiffusionTool - Generate an image from a given prompt using the open-source Stable Diffusion demo hosted on Hugging Face Spaces
I am just trying to pass a token while duplicating the Space:
`tools = [BarkTextToSpeechTool(hf_token="<token_hidden>", duplicate=True).langchain, StableDiffusionTool().langchain, StableDiffusionPromptGeneratorTool().langchain]`
The client is recognizing my Space:
Using your existing Space: https://hf.space/eliehabib/bark 🤗
But I am getting a 401. Is there anything I am missing, or is it not supported?
Traceback (most recent call last):
File "/Users/eliehabib/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status
response.raise_for_status()
File "/Users/eliehabib/miniconda3/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/spaces/eliehabib/bark/hardware
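A 401 from the Spaces API usually means the token is invalid or lacks the permission needed to duplicate a Space and request hardware. One way to rule out a bad token is to check it with `huggingface_hub`'s real `HfApi.whoami`; the helper name here is my own:

```python
def check_token(token):
    """Return account info for `token`, raising if the token is rejected."""
    # Deferred import so this sketch doesn't require huggingface_hub at load time.
    from huggingface_hub import HfApi
    return HfApi().whoami(token=token)  # a 401 here means the token itself is bad
```

If `whoami` succeeds but the 401 persists, the token likely needs write access for the duplicate-and-configure-hardware step.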
Let's use this space https://huggingface.co/spaces/sanchit-gandhi/whisper-jax when gradio-app/gradio#3924 is shipped
Trying to run the example from the documentation using StableDiffusionTool only. On the latest version of LangChain (0.0.161) this results in a series of errors:
- `tools` has been refactored into `langchain.tools`, where `BaseTool` and `Tool` are defined in `base.py`.
- pydantic validation fails, specifically: `ValidationError: 1 validation error for AgentExecutor tools -> 0 cannot pickle '_queue.SimpleQueue' object (type=type_error)`
- The `AgentExecutor` class validates each of the tools passed in, and because of the previous error it gives a cryptic `KeyError` when calling the root validators here. You won't see the underlying error unless you comment out / remove the root validators, since the `tools` field fails per how pydantic root validation works.
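The underlying pydantic complaint is real, not cosmetic: a `_queue.SimpleQueue` genuinely cannot be pickled (or deep-copied, which falls back on the same machinery), and that is what the validation step trips over. A standard-library demonstration:

```python
import pickle
import queue

# queue.SimpleQueue is implemented in C as _queue.SimpleQueue and does not
# support pickling, so any validation/copy step that pickles an object
# holding one will fail with a TypeError.
try:
    pickle.dumps(queue.SimpleQueue())
    message = None
except TypeError as err:
    message = str(err)  # typically "cannot pickle '_queue.SimpleQueue' object"

print(message)
```

So the fix has to avoid copying/pickling the tool object itself, rather than tweaking the validator.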