gradio-tools's People

Contributors

abidlabs, freddyaboulton

gradio-tools's Issues

The current space is in the invalid state: RUNTIME_ERROR. Please contact the owner to fix this.

I am getting the error "The current space is in the invalid state: RUNTIME_ERROR. Please contact the owner to fix this." when running this code:

from gradio_tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,
                          TextToVideoTool)

from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")

tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
         StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]

agent = initialize_agent(tools, llm, memory=memory, agent="conversational-react-description", verbose=True)
output = agent.run(input=("Please create a photo of a dog riding a skateboard "
                          "but improve my prompt prior to using an image generator. "
                          "Please caption the generated image and create a video for it using the improved prompt."))

Couldn't get readme example to work - workspaces wouldn't load

Could you add documentation on how to configure/use your own Stable Diffusion models? I got timeouts when trying to connect to the models that the Stable Diffusion tool relies on.

Thanks for this project, by the way! It looks really cool and fits well with something I want to prototype.
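One way to use your own model, assuming the tool constructors accept a `src` argument pointing at a self-hosted Space (the signatures in the tracebacks below suggest they do; the Space id here is hypothetical):

```python
# Hypothetical Space id -- replace with a Space you host yourself.
MY_SD_SPACE = "your-username/your-stable-diffusion-space"

def make_sd_tool(src, hf_token=None):
    # Lazy import so the sketch can be read without gradio_tools installed.
    from gradio_tools import StableDiffusionTool

    # Pointing the tool at a self-hosted Space avoids timeouts against
    # the default public demo.
    return StableDiffusionTool(src=src, hf_token=hf_token)

if __name__ == "__main__":
    print(make_sd_tool(MY_SD_SPACE).langchain)
```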

Nit: consistent terminology for naming Tools

Some tools are named after generic data types ("ImageToMusicTool" and "ImageCaptioningTool"), while others include the name of the specific model ("WhisperTool" and "StableDiffusionPromptGeneratorTool").

We should use consistent terminology for naming tools so that contributors also know how to name theirs. My preference would be to include both the model and the specific function the tool performs, as in "StableDiffusionPromptGeneratorTool". If we go with this, I'd rename "WhisperTool" to "WhisperSpeechTranscriptionTool", etc.

Suggestion: Improve GIF on README

It's not clear to me what's happening in the GIF. A lot of it is just the status message printing. Can we reduce that? Also, a caption providing some context might be helpful.

ValueError: Could not get Gradio config from: https://gradio-client-demos-blip-2.hf.space

I ran the code in the gradio-tools repo and it resulted in the above error.

from gradio_tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,
                          TextToVideoTool)

from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from dotenv import load_dotenv
load_dotenv()

llm = OpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history")

tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
         StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]


agent = initialize_agent(tools, llm, memory=memory, agent="conversational-react-description", verbose=True)
output = agent.run(input=("Please create a photo of a dog riding a skateboard "
                          "but improve my prompt prior to using an image generator. "
                          "Please caption the generated image and create a video for it using the improved prompt."))

Error Traceback

      9 llm = OpenAI(temperature=0)
     10 memory = ConversationBufferMemory(memory_key="chat_history")
---> 11 tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
     12          StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]

File c:\Users\Glodaris\anaconda3\envs\detectron2\lib\site-packages\gradio_tools\tools\image_captioning.py:28, in ImageCaptioningTool.__init__(self, name, description, src, hf_token, duplicate)
     16 def __init__(
     17     self,
     18     name="ImageCaptioner",
    (...)
     26     duplicate=True,
     27 ) -> None:
---> 28     super().__init__(name, description, src, hf_token, duplicate)

File c:\Users\Glodaris\anaconda3\envs\detectron2\lib\site-packages\gradio_tools\tools\gradio_tool.py:36, in GradioTool.__init__(self, name, description, src, hf_token, duplicate)
     34 else:
     35     self.src = src
---> 36 self.client = grc.Client(self.src, hf_token=hf_token)
     37 self.description += (
     38     "If the output is the string QUEUE_FULL you "
     39     "should try this action again and never stop until you "
     40     "don't see this message."
     41 )
     42 self._block = None

File c:\Users\Glodaris\anaconda3\envs\detectron2\lib\site-packages\gradio_client\client.py:118, in Client.__init__(self, src, hf_token, max_workers, serialize, verbose)
    116 self.upload_url = urllib.parse.urljoin(self.src, utils.UPLOAD_URL)
    117 self.reset_url = urllib.parse.urljoin(self.src, utils.RESET_URL)
--> 118 self.config = self._get_config()
    119 self.session_hash = str(uuid.uuid4())
    121 self.endpoints = [
    122     Endpoint(self, fn_index, dependency)
    123     for fn_index, dependency in enumerate(self.config["dependencies"])
    124 ]

File c:\Users\Glodaris\anaconda3\envs\detectron2\lib\site-packages\gradio_client\client.py:575, in Client._get_config(self)
    573     config = json.loads(result.group(1))  # type: ignore
    574 except AttributeError as ae:
--> 575     raise ValueError(
    576         f"Could not get Gradio config from: {self.src}"
    577     ) from ae
    578 if "allow_flagging" in config:
    579     raise ValueError(
    580         "Gradio 2.x is not supported by this client. Please upgrade your Gradio app to Gradio 3.x or higher."
    581     )

ValueError: Could not get Gradio config from: https://gradio-client-demos-blip-2.hf.space/
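Until the upstream Space is fixed, one possible workaround is to try several candidate sources and keep the first one whose config loads. This is a sketch only; the helper takes the client constructor as a callable so it does not depend on any particular gradio_client version, and the second Space id in the demo is hypothetical.

```python
def first_working_source(make_client, sources):
    """Return (src, client) for the first source whose config loads.

    `make_client` is the client constructor (e.g. gradio_client.Client);
    injecting it keeps this helper independent of the library version.
    """
    last_err = None
    for src in sources:
        try:
            return src, make_client(src)
        except ValueError as err:  # "Could not get Gradio config from: ..."
            last_err = err
    raise last_err

if __name__ == "__main__":
    from gradio_client import Client  # requires network access to Spaces
    src, client = first_working_source(
        Client,
        ["https://gradio-client-demos-blip-2.hf.space",
         "your-username/your-blip2-space"],  # second id is hypothetical
    )
    print("using", src)
```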

gradio_client NOT FOUND


ModuleNotFoundError Traceback (most recent call last)
Cell In[3], line 1
----> 1 from gradio_tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,
      2                           TextToVideoTool)
      4 from langchain.agents import initialize_agent
      5 from langchain.llms import OpenAI

File ~/Documents/LLM4Rec/gradio-tools/gradio_tools/__init__.py:1
----> 1 from gradio_tools.tools import (ClipInterrogatorTool,
      2                                 DocQueryDocumentAnsweringTool, GradioTool,
      3                                 ImageCaptioningTool, ImageToMusicTool,
      4                                 StableDiffusionPromptGeneratorTool,
      5                                 StableDiffusionTool, TextToVideoTool,
      6                                 WhisperAudioTranscriptionTool)
      8 __all__ = [
      9     "GradioTool",
     10     "StableDiffusionTool",
    (...)
     17     "DocQueryDocumentAnsweringTool",
     18 ]

File ~/Documents/LLM4Rec/gradio-tools/gradio_tools/tools/__init__.py:1
----> 1 from gradio_tools.tools.clip_interrogator import ClipInterrogatorTool
      2 from gradio_tools.tools.document_qa import DocQueryDocumentAnsweringTool
      3 from gradio_tools.tools.gradio_tool import GradioTool

File ~/Documents/LLM4Rec/gradio-tools/gradio_tools/tools/clip_interrogator.py:3
      1 from typing import TYPE_CHECKING
----> 3 from gradio_client.client import Job
      5 from gradio_tools.tools.gradio_tool import GradioTool
      7 if TYPE_CHECKING:

ModuleNotFoundError: No module named 'gradio_client'
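This means gradio_client is not installed in the active environment; `pip install gradio_client` (or reinstalling gradio-tools so it pulls its dependencies in) should resolve it. A small stdlib-only check to confirm which modules are actually missing:

```python
import importlib.util

def missing_deps(names):
    """Return the subset of top-level module names that cannot be found."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_deps(["gradio_client", "gradio_tools", "langchain"])
    if missing:
        print("missing modules:", missing)
        print("try: pip install", " ".join(missing))
```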

Update for readme file

Hi, thanks for sharing this. Great work! It might help to update the links in the readme, since some of the old links no longer work: the new Space is referenced in the code but not in the readme tutorial.
e.g.
StableDiffusionTool - Generate an image from a given prompt using the open-source Stable Diffusion demo hosted on Hugging Face Spaces

401 when trying to use token & duplicating the space

I am just trying to pass a token while duplicating the space:

`tools = [BarkTextToSpeechTool(hf_token="<token_hidden>", duplicate=True).langchain, StableDiffusionTool().langchain, StableDiffusionPromptGeneratorTool().langchain]`

The client recognizes my space:
Using your existing Space: https://hf.space/eliehabib/bark 🤗

But I am getting a 401. Is there anything I am missing, or is it not supported?

Traceback (most recent call last):
  File "/Users/eliehabib/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 259, in hf_raise_for_status
    response.raise_for_status()
  File "/Users/eliehabib/miniconda3/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/spaces/eliehabib/bark/hardware
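Since the failing URL is the Space's `/hardware` API, one likely cause (an assumption, not confirmed in this thread) is a read-only or mistyped token: duplicating a Space and querying its hardware requires write access. A quick sanity check on the token format — heuristic only, since it assumes current-style user access tokens beginning with `hf_`; older token formats may differ:

```python
import os

def looks_like_hf_token(token):
    # Heuristic: current Hugging Face user access tokens start with "hf_".
    # Older formats may differ, so treat a False here as a hint only.
    return isinstance(token, str) and token.startswith("hf_") and len(token) > 10

if __name__ == "__main__":
    token = os.environ.get("HF_TOKEN", "")  # assumed env var name
    print("token looks plausible:", looks_like_hf_token(token))
```

Also double-check the token's role on the Hugging Face settings page: it needs write permission for duplication to succeed.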

Unable to run example

Trying to run the example from the documentation using StableDiffusionTool only. On the latest version of LangChain (0.0.161) this results in a series of errors:

  • First, tools has been refactored into langchain.tools, where BaseTool and Tool are defined in base.py.
  • The root problem seems to be caused by pydantic validation, specifically:
ValidationError: 1 validation error for AgentExecutor
tools -> 0
  cannot pickle '_queue.SimpleQueue' object (type=type_error)
  • The AgentExecutor class validates each of the tools passed in, and because of the previous error it produces a cryptic message when running its root validators. You won't see the underlying error unless you comment out or remove the root validators; instead you get a KeyError on tools, per how pydantic root validation works.
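The pickling failure can be reproduced with the standard library alone: pydantic's validation deep-copies the tool objects, and somewhere in the tool/client internals (presumably, given the error text) sits a `_queue.SimpleQueue`, which cannot be pickled or deep-copied:

```python
import copy
import queue

# Deep-copying a SimpleQueue fails the same way pydantic's validator does,
# because deepcopy falls back to the pickle protocol for C types.
try:
    copy.deepcopy(queue.SimpleQueue())
except TypeError as err:
    print(err)  # cannot pickle '_queue.SimpleQueue' object
```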
