
chatblade's Issues

Knowledge cutoff September 2021 for GPT-4

Hey there.

I'm probably just confused, but when I use chatblade with the gpt-4 model, I get a response that the model's knowledge cutoff is still 2021.

When I ask the same question in the official ChatGPT iPhone app, I get the response April 2023.

Is there anything I must do to use the latest model?

Feature: option to rebind submission key in interactive mode

I think that it would be great to be able to change the key binding of the "submit" key (default: <Enter>) when working with chatblade in interactive mode. I would like to be able to use <Shift-Enter> or something comparable.

Implementation question(s):

  • Would you prefer that this be exposed as a command-line argument? Alternatively, if it were to be an option in ~/.config/chatblade, how would we differentiate that from saved prompts?
  • Would we still be able to use the rich Prompt.ask method for querying user input? At first glance I don't see a way to override key bindings (one possible alternative is sketched below).
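
For what it's worth, rich's Prompt.ask is a thin wrapper over input() and doesn't expose key bindings, so this would likely mean switching the input layer. A minimal sketch, assuming a move to the third-party prompt_toolkit library (note that most terminals cannot report Shift-Enter as a distinct key, so Alt-Enter is the usual substitute):

from prompt_toolkit import PromptSession

# With multiline=True, plain Enter inserts a newline and Alt-Enter
# (Escape, Enter) submits -- prompt_toolkit's default multiline bindings.
session = PromptSession(multiline=True)
query = session.prompt("query (type 'quit' to exit): ")
print(query)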

ERROR: tiktoken 0.3.3 has requirement requests>=2.26.0, but you'll have requests 2.22.0 which is incompatible.

I installed it via cloning, and this error pops up: ERROR: tiktoken 0.3.3 has requirement requests>=2.26.0, but you'll have requests 2.22.0 which is incompatible.

I managed to fix it locally with:

pip install --upgrade "requests==2.26.0"

But I think it should work out of the box.


~ pip install 'chatblade @ git+https://github.com/npiv/chatblade'
Collecting chatblade@ git+https://github.com/npiv/chatblade
Cloning https://github.com/npiv/chatblade to /tmp/pip-install-xlf9ougx/chatblade
Running command git clone -q https://github.com/npiv/chatblade /tmp/pip-install-xlf9ougx/chatblade
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting openai>=0.27.2
Downloading openai-0.27.4-py3-none-any.whl (70 kB)
|████████████████████████████████| 70 kB 2.0 MB/s
Collecting rich
Downloading rich-13.3.4-py3-none-any.whl (238 kB)
|████████████████████████████████| 238 kB 4.0 MB/s
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from chatblade@ git+https://github.com/npiv/chatblade) (5.3.1)
Collecting tiktoken
Downloading tiktoken-0.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.7 MB)
|████████████████████████████████| 1.7 MB 12.2 MB/s
Collecting platformdirs
Downloading platformdirs-3.4.0-py3-none-any.whl (15 kB)
Requirement already satisfied: tqdm in ./.local/lib/python3.8/site-packages (from openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (4.64.0)
Requirement already satisfied: aiohttp in ./.local/lib/python3.8/site-packages (from openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (3.8.1)
Requirement already satisfied: requests>=2.20 in /usr/lib/python3/dist-packages (from openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (2.22.0)
Collecting typing-extensions<5.0,>=4.0.0; python_version < "3.9"
Downloading typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Collecting markdown-it-py<3.0.0,>=2.2.0
Downloading markdown_it_py-2.2.0-py3-none-any.whl (84 kB)
|████████████████████████████████| 84 kB 1.3 MB/s
Collecting pygments<3.0.0,>=2.13.0
Downloading Pygments-2.15.1-py3-none-any.whl (1.1 MB)
|████████████████████████████████| 1.1 MB 8.7 MB/s
Requirement already satisfied: regex>=2022.1.18 in ./.local/lib/python3.8/site-packages (from tiktoken->chatblade@ git+https://github.com/npiv/chatblade) (2022.4.24)
Requirement already satisfied: charset-normalizer<3.0,>=2.0 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (2.0.12)
Requirement already satisfied: yarl<2.0,>=1.0 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (1.7.2)
Requirement already satisfied: aiosignal>=1.1.2 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (1.2.0)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (4.0.2)
Requirement already satisfied: attrs>=17.3.0 in /usr/lib/python3/dist-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (19.3.0)
Requirement already satisfied: multidict<7.0,>=4.5 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (6.0.2)
Requirement already satisfied: frozenlist>=1.1.1 in ./.local/lib/python3.8/site-packages (from aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (1.3.0)
Collecting mdurl~=0.1
Downloading mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Requirement already satisfied: idna>=2.0 in /usr/lib/python3/dist-packages (from yarl<2.0,>=1.0->aiohttp->openai>=0.27.2->chatblade@ git+https://github.com/npiv/chatblade) (2.8)
Building wheels for collected packages: chatblade
Building wheel for chatblade (PEP 517) ... done
Created wheel for chatblade: filename=chatblade-0.2.3-py3-none-any.whl size=27555 sha256=f24cf5360089bd6536bbf5c31d35d7eb063e552113175b4f90fa0e7d2517814d
Stored in directory: /tmp/pip-ephem-wheel-cache-7zc882ev/wheels/0f/f6/d1/5ae4fda48f2c65b64809bc54498c6ce0578678f9c52eaa3829
Successfully built chatblade
ERROR: tiktoken 0.3.3 has requirement requests>=2.26.0, but you'll have requests 2.22.0 which is incompatible.
Installing collected packages: openai, typing-extensions, mdurl, markdown-it-py, pygments, rich, tiktoken, platformdirs, chatblade
Attempting uninstall: pygments
Found existing installation: Pygments 2.12.0
Uninstalling Pygments-2.12.0:
Successfully uninstalled Pygments-2.12.0
Successfully installed chatblade-0.2.3 markdown-it-py-2.2.0 mdurl-0.1.2 openai-0.27.4 platformdirs-3.4.0 pygments-2.15.1 rich-13.3.4 tiktoken-0.3.3 typing-extensions-4.5.0

looks like -c 4 is not working.

It does set the model name in debug mode, but the query returns results that don't match.
chatblade --debug -i -c 4  ✔  1m 9s 
{
│ 'cli input': {
│ │ 'query': None,
│ │ 'params': {
│ │ │ 'openai_api_key': 'chipichipichapachapadubidubidabadaba',
│ │ │ 'temperature': 0.0,
│ │ │ 'interactive': True,
│ │ │ 'stream': False,
│ │ │ 'tokens': False,
│ │ │ 'prompt_file': None,
│ │ │ 'extract': False,
│ │ │ 'raw': False,
│ │ │ 'no_format': False,
│ │ │ 'only': False,
│ │ │ 'theme': None,
│ │ │ 'session': None,
│ │ │ 'session_op': None,
│ │ │ 'session_rename': None,
│ │ │ 'debug': True,
│ │ │ 'model': 'gpt-4'
│ │ }
│ }
}
query (type 'quit' to exit): : Are you gpt3 or gpt4 and what is your knowledge cutoff date?
{'counted': {'md_links': 0, 'md_text': 0, 'md_inline_blocks': 0, 'md_blocks': 0}}
{'detected': 'regular'}
───────────────────────────────────────────────────────────────────────────────────────── assistant ──────────────────────────────────────────────────────────────────────────────────────────
I am an AI model developed by OpenAI and I'm based on GPT-3. My training only includes knowledge up to September 2020, and I don't have the ability to access or retrieve personal data unless
it has been shared with me in the course of our conversation. I am designed to respect user privacy and confidentiality.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

GPT-4 should have a knowledge cutoff date in 2023.

Same question in web interface:

Are you gpt3 or gpt4 and what is your knowledge cutoff date?
ChatGPT:
I am based on the GPT-4 architecture, and my knowledge is up to date as of April 2023.

Tried both the Arch package and a pip install from the repo.

Using CTRL-D in interactive mode throws an EOFError

query (type 'quit' to exit): : Foo
╭──────────────────────────────────────────────────────── assistant ────────────────────────────────────────────────────────╮
│ Bar                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
query (type 'quit' to exit): : <CTRL-D>
Traceback (most recent call last):
  File "/usr/bin/chatblade", line 33, in <module>
    sys.exit(load_entry_point('chatblade==0.0.2', 'console_scripts', 'chatblade')())
  File "/usr/lib/python3.10/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/usr/lib/python3.10/site-packages/chatblade/cli.py", line 164, in cli
    handle_input(query, params)
  File "/usr/lib/python3.10/site-packages/chatblade/cli.py", line 144, in handle_input
    start_repl(None, params)
  File "/usr/lib/python3.10/site-packages/chatblade/cli.py", line 116, in start_repl
    query = Prompt.ask("[yellow]query (type 'quit' to exit): [/yellow]")
  File "/usr/lib/python3.10/site-packages/rich/prompt.py", line 141, in ask
    return _prompt(default=default, stream=stream)
  File "/usr/lib/python3.10/site-packages/rich/prompt.py", line 274, in __call__
    value = self.get_input(self.console, prompt, self.password, stream=stream)
  File "/usr/lib/python3.10/site-packages/rich/prompt.py", line 203, in get_input
    return console.input(prompt, password=password, stream=stream)
  File "/usr/lib/python3.10/site-packages/rich/console.py", line 2123, in input
    result = input()
EOFError

CTRL-D is often used to quit interactive prompts, so it shouldn't throw an error.
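
A minimal sketch of how the REPL could treat CTRL-D as a quit request instead of crashing (the names follow the cli.py shown in the traceback, but this is an assumption about the fix, not the project's code):

from rich.prompt import Prompt

def read_query():
    """Return the next query, or None when the user presses CTRL-D (EOF)."""
    try:
        return Prompt.ask("[yellow]query (type 'quit' to exit): [/yellow]")
    except EOFError:
        return None  # caller exits the REPL, same as typing 'quit'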

error with version flag

I'm using the latest version of chatblade from homebrew (although the issue was there with 0.4.0_2 as well). Attempting to check the version gives the following response.

I suspect (no proof though) it's something to do with Python 3.12?

➜ chatblade --version
/opt/homebrew/Cellar/chatblade/0.4.0_3/libexec/lib/python3.12/site-packages/chatblade/session.py:13: SyntaxWarning: invalid escape sequence '\.'
  re.sub("\.yaml\Z", "", os.path.basename(sess_path))
Traceback (most recent call last):
  File "/opt/homebrew/bin/chatblade", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.4.0_3/libexec/lib/python3.12/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/opt/homebrew/Cellar/chatblade/0.4.0_3/libexec/lib/python3.12/site-packages/chatblade/cli.py", line 155, in cli
    version = dist.get_option_dict('metadata')['version'][1]
              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
KeyError: 'version'
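
The failing line parses setup.cfg metadata at runtime, which newer setuptools no longer guarantees to expose. A sketch of the standard-library alternative (an assumption about a fix, not the shipped code):

from importlib.metadata import PackageNotFoundError, version

try:
    print(version("chatblade"))  # reads the installed package's metadata
except PackageNotFoundError:
    print("chatblade (version unknown)")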

yaml not working

When I place a file called expert.yaml (or anything else) in the config folder as instructed, I get an error invoking the command:

chatblade -p expert

no prompt expert found in any of following locations:

  • expert
  • ~/.config/chatblade/expert

Allow setting the default model via an env variable

For daily usage it would be helpful to be able to set the default GPT model via an env variable, similar to OPENAI_API_KEY, e.g. OPENAI_API_MODEL. Most of the time I am using GPT-4 and not 3.5.
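
A sketch of what reading such a variable could look like; OPENAI_API_MODEL is the requester's suggested name, not something chatblade reads today:

import os

# Hypothetical: fall back to gpt-3.5-turbo when the variable is unset.
DEFAULT_MODEL = os.environ.get("OPENAI_API_MODEL", "gpt-3.5-turbo")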

model: gpt-4 does not exist

Commit used: 4ce2873

file.md | chatblade continue this -c 4

Traceback (most recent call last):
  File "/my/home/.local/bin/chatblade", line 8, in <module>
    sys.exit(main())
  File "/my/home/.local/lib/python3.10/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/my/home/.local/lib/python3.10/site-packages/chatblade/cli.py", line 154, in cli
    messages = fetch_and_cache(messages, params)
  File "/my/home/.local/lib/python3.10/site-packages/chatblade/cli.py", line 126, in fetch_and_cache
    response_msg, _ = chat.query_chat_gpt(messages, params)
  File "/my/home/.local/lib/python3.10/site-packages/chatblade/chat.py", line 44, in query_chat_gpt
    result = openai.ChatCompletion.create(messages=dict_messages, **config)
  File "/my/home/.local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/my/home/.local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/my/home/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/my/home/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/my/home/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model: gpt-4 does not exist

Support streaming to pipe

Describe the bug
When using it in a terminal, text streaming works as it should, but when piping the output, it shows up all at once.

To Reproduce

chatblade --stream --raw --no-format --only -- write a story|less

Expected behavior
When I pipe other utilities to less, I see the output appear line by line.
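
The usual cause is that Python block-buffers stdout when it isn't a TTY, so streamed chunks only appear once the buffer fills. A sketch of an explicit per-chunk flush (an assumption about where a fix would go, not chatblade's actual code):

import sys

def emit_chunk(chunk: str) -> None:
    sys.stdout.write(chunk)
    sys.stdout.flush()  # push each streamed chunk through the pipe immediately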

Apostrophes aren't escaped in input

System:
Windows 10
Windows Terminal>Powershell
Steps to reproduce:
Type a prompt with an apostrophe like
"I'm trying to setup a web server, could you give me the first steps?"
The apostrophe in "I'm" isn't escaped and causes PowerShell to wait for input for the closing '.

API key problems (insufficient_quota)

I have the free plan and I wanted to use the app. I have configured the app and am launching it with --openai-api-key.

However no matter what I try to do I get the message

openai error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more 
information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 
'param': None, 'code': 'insufficient_quota'}}

What can be the problem?

Fish shell support

I can't run chatblade on fish shell.

Even after setting the OPENAI_API_KEY environment variable, I get the error

openai error: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.

Not detecting env variable

Steps:

  • Installed via pip install .
  • Added OPENAI_API_KEY="sk-blahblahblahblah" in my .env file
  • Confirmed that echo $OPENAI_API_KEY prints the expected result
  • Ran chatblade is this working and got "expecting openai API Key" back
  • Confirmed that chatblade is this working --openai-api-key $OPENAI_API_KEY does work

My setup is zsh, MacOS, python 3.11.2
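
A likely cause: a variable assigned in a .env file (or set in the shell without export) is visible to echo but not inherited by child processes. Running export OPENAI_API_KEY=... first, or checking from Python, would confirm; a small diagnostic sketch:

import os

# Prints None when the shell set the variable without exporting it.
print(os.environ.get("OPENAI_API_KEY"))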

feature request: support image generation ("Creates an image given a prompt")

It would be nice to support the create-image API endpoint:

curl https://api.openai.com/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "dall-e-3",
    "prompt": "A cute baby sea otter",
    "n": 1,
    "size": "1024x1024"
  }'

and also the endpoint that creates a variation of a given image:

curl https://api.openai.com/v1/images/variations \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F image="@otter.png" \
  -F n=2 \
  -F size="1024x1024"
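
For comparison, the same two calls through the openai Python SDK (a sketch assuming the v1 client; the SDK version chatblade pins may differ):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

generated = client.images.generate(
    model="dall-e-3", prompt="A cute baby sea otter", n=1, size="1024x1024"
)
print(generated.data[0].url)

varied = client.images.create_variation(
    image=open("otter.png", "rb"), n=2, size="1024x1024"
)
print([img.url for img in varied.data])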

Better error handling

When I run this example command:

chatblade "is casio f-91w a good watch?" -c 4

I receive following output:

Traceback (most recent call last):
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/bin/chatblade", line 8, in <module>
    sys.exit(main())
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/chatblade/cli.py", line 164, in cli
    handle_input(query, params)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/chatblade/cli.py", line 154, in handle_input
    messages = fetch_and_cache(messages, params)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/chatblade/cli.py", line 107, in fetch_and_cache
    response_msg, _ = chat.query_chat_gpt(messages, params)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/chatblade/chat.py", line 44, in query_chat_gpt
    result = openai.ChatCompletion.create(messages=dict_messages, **config)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/home/bdmbdsm/.local/share/virtualenvs/chat-X6P-Bc3p/lib/python3.10/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model: `gpt-4` does not exist

Perhaps that's not a high-priority thing, but it would be nice to show that error through rich and without the whole stack trace.
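
A sketch of what catching this at the query boundary could look like, against the pre-1.0 openai SDK shown in the traceback (an assumption about a fix, not the project's code):

import sys
import openai
from rich.console import Console

def safe_query(dict_messages, config):
    """Turn OpenAI API errors into a one-line message instead of a traceback."""
    try:
        return openai.ChatCompletion.create(messages=dict_messages, **config)
    except openai.error.OpenAIError as e:
        Console(stderr=True).print(f"[red]openai error:[/red] {e}")
        sys.exit(1)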

Feature request: Allow retrying

The API is wonky and gets stuck sometimes. It would be nice to automatically time out and retry after a configurable number of seconds, or just upon pressing a button.
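
A sketch of what that could look like with the pre-1.0 openai SDK's request_timeout parameter and simple backoff (the retry count and timeout values are illustrative, not proposed defaults):

import time
import openai

def query_with_retry(dict_messages, config, retries=3, timeout=30):
    for attempt in range(retries):
        try:
            return openai.ChatCompletion.create(
                messages=dict_messages, request_timeout=timeout, **config
            )
        except (openai.error.Timeout, openai.error.APIConnectionError):
            if attempt == retries - 1:
                raise  # out of retries, surface the error
            time.sleep(2 ** attempt)  # back off: 1s, 2s, ...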

error providing prompt file

Trying to use a prompt file with chatblade (tried installation via pip, brew, tea).

chatblade -p testing hello 

$ Prompt ~/.config/chatblade/testing not found in ~/.config/chatblade/testing.yaml

echo $PROMPT > ~/.config/chatblade/testing

chatblade -p testing hello

Traceback (most recent call last):
  File "/opt/homebrew/bin/chatblade", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.0.2/libexec/lib/python3.11/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/opt/homebrew/Cellar/chatblade/0.0.2/libexec/lib/python3.11/site-packages/chatblade/cli.py", line 164, in cli
    handle_input(query, params)
  File "/opt/homebrew/Cellar/chatblade/0.0.2/libexec/lib/python3.11/site-packages/chatblade/cli.py", line 139, in handle_input
    messages = chat.init_conversation(query, prompt_config["system"])
                                             ~~~~~~~~~~~~~^^^^^^^^^^
TypeError: 'NoneType' object is not subscriptable

Unusually high estimate of tokens

$ chatblade what is 1 + 1 --raw -t
what is 1 + 1

      tokens/costs

┏━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━┓
┃ Model   ┃ Tokens ┃ Price     ┃
┡━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━┩
│ gpt-3.5 │ 4103   │ $0.008206 │
│ gpt-4   │ 4103   │ $0.123090 │
└─────────┴────────┴───────────┘

  • estimated costs do not include the tokens that may be returned
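
For reference, counting the prompt directly with tiktoken yields a single-digit number, which suggests the estimate is pulling in something else (cached session history, perhaps; that is a guess):

import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
print(len(enc.encode("what is 1 + 1")))  # on the order of 6 tokens, not 4103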

Azure OpenAI support for preview version fix

The current version of chatblade doesn't work with at least the 2023-09-01-preview version of the API, because the parameters temperature, max_tokens, presence_penalty and top_p are required.

I managed to get it working with quick-and-dirty solution by adding these to chat.py DEFAULT_OPENAI_SETTINGS dict. However, I believe that providing the ability to manually adjust these parameters with CLI arguments or environment variables would be a better solution.
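
A sketch of the quick-and-dirty fix described above; the values are illustrative defaults, not chatblade's actual settings:

# chat.py (sketch): defaults the 2023-09-01-preview Azure API insists on.
DEFAULT_OPENAI_SETTINGS = {
    "model": "gpt-3.5-turbo",
    "temperature": 0.0,
    "max_tokens": 800,
    "presence_penalty": 0.0,
    "top_p": 1.0,
}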

FileNotFoundError on MacOS as it is expecting ~/.cache directory

$ chatblade how can I extract a still frame from a video at 22:01 with ffmpeg
Traceback (most recent call last):
  File "/opt/homebrew/bin/chatblade", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/opt/homebrew/lib/python3.11/site-packages/chatblade/cli.py", line 170, in cli
    messages = fetch_and_cache(messages, params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/chatblade/cli.py", line 135, in fetch_and_cache
    to_cache(messages)
  File "/opt/homebrew/lib/python3.11/site-packages/chatblade/cli.py", line 113, in to_cache
    with open(path, "wb") as f:
         ^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/myusername/.cache/chatblade'

Running mkdir ~/.cache fixed the issue, but I'm filing this so a check can be added in the code for whether the cache directory exists.
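
The check could be as small as creating the directory before the open; a sketch (the real path presumably comes from platformdirs, which chatblade depends on):

import os

cache_dir = os.path.expanduser("~/.cache/chatblade")
os.makedirs(cache_dir, exist_ok=True)  # no-op when the directory already exists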

It doesn't accept any API key

What am I doing wrong?
chatblade --openai-api-key
returns:
no query or option given. nothing to do...
if:
chatblade --openai-api-key key
returns:
openai error: Incorrect API key provided: key. You can find your API key at https://platform.openai.com/account/api-keys.

About setting it up as an environment variable, can someone help, please?
I'm on macOS Ventura, M1.

Thanks in advance!

gpt-4 does not exist

Linux command: chatblade -c 4 "example text"
Return: openai error: The model: gpt-4 does not exist
According to the README, this is wrong behavior, I think.

argument --chat-gpt=4 does not work

I installed through pip about 20 minutes ago, 3.5 (default) works great, but when I try to provide model 4, I get the following error:

➜  chatblade -c 4 write a short poem for me
openai error: The model: `gpt-4` does not exist

Is this something you're familiar with?
🙏

Crash on ctrl+c

It seems like chatblade crashes if it receives ctrl+c. Since this is a CLI app, I would expect it to handle the exit request a little more gracefully, and not with a 20-line traceback into some Python modules.
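
A sketch of a top-level handler in __main__.py (an assumption about the fix, following the entry-point structure visible in the tracebacks elsewhere on this page):

from chatblade import cli

def main():
    try:
        cli.cli()
    except KeyboardInterrupt:
        raise SystemExit(130)  # 128 + SIGINT: exit quietly, no traceback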

Light theme issue

Really loving chatblade, use it all the time.

Small issue though: when a light terminal theme is used, code blocks still render dark, while text is typically dark on light terminals.

[screenshot: a code block rendering with a dark background on a light terminal theme]

The above is an example, should I look to fix this on my end, or is this something chatblade can figure out?

Add build instructions using a PEP517 installer

Up until now I've built this using setup.py, but that will go away at some point (and Python 3.12 is around the corner).

Could you add some instructions on how to build this using a PEP517 installer?

Tag versions

It would be great if you could tag specific commits, so one can provide fixed packages.

cannot install on debian

pip install chatblade
error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.

If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.

If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.

See /usr/share/doc/python3.11/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this,
at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.


sudo apt install python3-chatblade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package python3-chatblade


pip install 'chatblade @ git+https://github.com/npiv/chatblade'
error: externally-managed-environment

(same externally-managed-environment message as above)


UnicodeEncodeError in NVIM when Running ':! chatblade' Command

Error

When in NVIM, running the ":! chatblade " command, it returns the following error:

------------------------------------ user -------------------------------------
tell me a joke
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Scripts\chatblade.exe\__main__.py", line 7, in <module>
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\chatblade\__main__.py", line 5, in main
    cli.cli()
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\chatblade\cli.py", line 152, in cli
    handle_input(query, params)
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\chatblade\cli.py", line 82, in handle_input
    printer.print_messages(messages, params)
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\chatblade\printer.py", line 55, in print_messages
    print_message(message, args)
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\chatblade\printer.py", line 76, in print_message
    console.print(Rule(style=COLORS[message.role]))
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\rich\console.py", line 1673, in print
    with self:
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\rich\console.py", line 865, in __exit__
    self._exit_buffer()
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\rich\console.py", line 823, in _exit_buffer
    self._check_buffer()
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\rich\console.py", line 2027, in _check_buffer
    legacy_windows_render(buffer, LegacyWindowsTerm(self.file))
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\rich\_windows_renderer.py", line 17, in legacy_windows_render
    term.write_styled(text, style)
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\rich\_win32_console.py", line 442, in write_styled
    self.write_text(text)
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\site-packages\rich\_win32_console.py", line 403, in write_text
    self.write(text)
  File "C:\Users\manchurianman\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-78: character maps to <undefined>

shell returned 1

It has to do with the horizontal dividers. When I run the same command but with the -n option, it works just fine.

Environment

Python version: 3.11.4
OS: Windows 10.0.19045 N/A Build 19045
Chatblade version: 0.3.4
NVIM version: v0.9.4
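
A user-side workaround is to set PYTHONIOENCODING=utf-8 before launching nvim. Inside chatblade, the output stream could be reconfigured before rich writes anything (a sketch, not the project's fix):

import sys

# The console nvim spawns reports the legacy cp1252 codec; force UTF-8
# so rich's box-drawing divider characters survive.
if sys.stdout.encoding and sys.stdout.encoding.lower() != "utf-8":
    sys.stdout.reconfigure(encoding="utf-8")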

Feature request: Stream output

It would be better if the output could be streamed, so the user gets an immediate result from their query without waiting until the entire response is ready.

Remove borders or add configuration to optionally remove them

Really handy tool! Thanks for this. Very helpful to be able to pipeline into other tools.

I like the --raw and --extract options to be able to get raw access to responses but have found myself wanting to copy/paste more of the input and output from the terminal history. Even though the borders are nice to organize the content, I find they get in the way when trying to select and copy:

[screen recording: selecting text in the terminal drags in the panel borders]

Results in:

│ given the above list of responses where each line is an individual response, provide up to 5 │
│ categories that summarize all of the responses and then categories each response. Provide    │
│ the categorized responses in json

I'd like to have the side borders go away to be more friendly for copying/pasting.
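
With rich, the side borders could be dropped by switching the panel's box style; a sketch, assuming chatblade renders messages through rich.panel.Panel:

from rich import box
from rich.console import Console
from rich.panel import Panel

# box.SIMPLE keeps horizontal rules but has no side borders, so terminal
# selections copy cleanly without the │ characters.
Console().print(Panel("assistant reply here", title="assistant", box=box.SIMPLE))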

Syntax error

chatblade python <file name | > generates a syntax error.
chatblade "python <file name | >" works fine.
Typical input is without using "".
This problem can easily be handled by the user, but it may be better to handle it in chatblade, to make it easier to use on the command line.

-c 4 indeed does not work

When tested on complex questions, chatblade -c 4 acts the same as plain chatblade, but chatblade -c 4t works.

(base) ➜ ~ chatblade --chat-gpt 3.5 lease estimate roughly how many Fermi questions are being asked everyday
user
lease estimate roughly how many Fermi questions are being asked everyday
assistant
It is difficult to provide an exact estimate as the number of Fermi questions being asked daily can vary greatly depending on various factors such as the context, audience, and platform. However, considering the popularity of Fermi questions as a tool for critical thinking and problem-solving, it is reasonable to assume that a significant number of Fermi questions are being asked every day across different educational, scientific, and creative communities.

(base) ➜ ~ chatblade --chat-gpt 4 what version are you
user
what version are you
assistant
As an artificial intelligence, I don't have a specific version. I'm constantly updated and improved by OpenAI.

(base) ➜ ~ chatblade --chat-gpt 4 lease estimate roughly how many Fermi questions are being asked everyday
user
lease estimate roughly how many Fermi questions are being asked everyday
assistant
As an AI, I don't have real-time data or the ability to monitor all conversations happening globally. However,
considering that Fermi questions are often used in educational settings, science competitions, and casual
discussions, it's safe to say that potentially hundreds or even thousands could be asked daily worldwide. This is
a rough estimate and the actual number could be higher or lower.

(base) ➜ ~ chatblade --chat-gpt 4t lease estimate roughly how many Fermi questions are being asked everyday
user
lease estimate roughly how many Fermi questions are being asked everyday
assistant
A Fermi question is a type of estimation problem that typically requires making educated guesses about quantities
that seem impossible to know offhand. These questions are named after physicist Enrico Fermi, who was known for
his ability to make good approximate calculations with little or no actual data.

Estimating the number of Fermi questions asked every day is itself a Fermi question. To make this estimation, we
would need to consider several factors:

1 Contexts in which Fermi questions are asked: These questions are often used in educational settings, job
interviews (especially for consulting, finance, and tech roles), and casual conversations among people
interested in problem-solving or trivia.
2 Population involved: We would need to estimate the number of people engaged in activities where Fermi questions
might be asked. This includes students, teachers, interviewers, interviewees, and enthusiasts.
3 Frequency of Fermi questions per context: We would need to estimate how often Fermi questions are asked in each
context. For example, a teacher might ask a few Fermi questions in a class, or an interviewer might ask one or
two during an interview.

Given the lack of specific data, we can make a very rough estimate:

• Assume there are about 1 million people worldwide who are in a position to ask Fermi questions daily (teachers,
interviewers, etc.).
• Each of these individuals might ask, on average, one Fermi question per day.

This would lead to an estimate of about 1 million Fermi questions asked per day. However, this number could be
significantly higher or lower depending on the actual number of people asking such questions and the frequency
with which they do so.

It's important to note that this is a very rough estimate and the actual number could be much different. The
purpose of a Fermi estimation is not to arrive at an exact number but to provide a reasonable order-of-magnitude
guess.

All previous answers get printed for new chatblade calls with existing session

I'm not sure if this is a bug or whether I have accidentally enabled this behavior somewhere:
I am using

chatblade \
	--session "$SESSION_NAME" \
	--raw \
	--no-format \
	--only \
	"$query"

to call chatblade every time I have a new query (instead of using interactive mode). What I have noticed is that every time I ask another question, chatblade prints all of the answers to previous questions as well. E.g. if I have asked three questions so far and ask a fourth (all with separate/new invocations of the above command), it will output the answers to the last three questions plus the output for the fourth. This is quite annoying; I would only like to see the answer to my latest question (I do want it to remember the previous questions/answers of the session, hence I specify it manually).
Thanks for this awesome tool!
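
If replaying the session turns out to be the intended behavior of --only, one possible change is to print just the message produced by the current invocation; a sketch over a hypothetical message list (chatblade's internals may differ):

def print_latest_answer(messages):
    """Print only the newest assistant message from the session."""
    for message in reversed(messages):
        if message.role == "assistant":
            print(message.content)
            break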

Certificate verify failed: self signed certificate in certificate chain

I'm getting cert errors running chatblade currently:

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 467, in _make_request
    self._validate_conn(conn)
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1092, in _validate_conn
    conn.connect()
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connection.py", line 642, in connect
    sock_and_verified = _ssl_wrap_socket_and_match_hostname(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connection.py", line 783, in _ssl_wrap_socket_and_match_hostname
    ssl_sock = ssl_wrap_socket(
               ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 469, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 513, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 517, in wrap_socket
    return self.sslsocket_class._create(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1075, in _create
    self.do_handshake()
  File "/opt/homebrew/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1346, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 790, in urlopen
    response = self._make_request(
               ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 491, in _make_request
    raise new_e
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 874, in urlopen
    return self.urlopen(
           ^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 874, in urlopen
    return self.urlopen(
           ^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/connectionpool.py", line 844, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/openai/api_requestor.py", line 596, in request_raw
    result = _thread_context.session.request(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/requests/adapters.py", line 517, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by SSLError(SSLCertVerificationError(1, '[SSL:CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)')))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/homebrew/bin/chatblade", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/chatblade/__main__.py", line 5, in main
    cli.cli()
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/chatblade/cli.py", line 152, in cli
    handle_input(query, params)
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/chatblade/cli.py", line 81, in handle_input
    messages = fetch_and_cache(messages, params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/chatblade/cli.py", line 14, in fetch_and_cache
    result = chat.query_chat_gpt(messages, params)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/chatblade/chat.py", line 121, in query_chat_gpt
    result = openai.ChatCompletion.create(messages=dict_messages, **config)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/openai/api_requestor.py", line 288, in request
    result = self.request_raw(
             ^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/chatblade/0.3.1_1/libexec/lib/python3.11/site-packages/openai/api_requestor.py", line 609, in request_raw
    raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)')))

I've tested other openai API endpoints manually with curl and am not currently getting these errors. I've tried uninstalling chatblade (via brew) and reinstalling to try to upgrade dependencies but that didn't help.

TypeError: __str__ returned non-string (type dict)

Traceback (most recent call last):
  File "D:\OpenAI\chatblade\venv\lib\site-packages\chatblade\chat.py", line 122, in query_chat_gpt
    result = openai.ChatCompletion.create(messages=dict_messages, **config)
  File "D:\OpenAI\chatblade\venv\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "D:\OpenAI\chatblade\venv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "D:\OpenAI\chatblade\venv\lib\site-packages\openai\api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "D:\OpenAI\chatblade\venv\lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "D:\OpenAI\chatblade\venv\lib\site-packages\openai\api_requestor.py", line 765, in _interpret_response_line
    raise self.handle_error_response(
openai.error.AuthenticationError: <exception str() failed>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\a\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\a\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\OpenAI\chatblade\venv\Scripts\chatblade.exe\__main__.py", line 7, in <module>
  File "D:\OpenAI\chatblade\venv\lib\site-packages\chatblade\__main__.py", line 5, in main
    cli.cli()
  File "D:\OpenAI\chatblade\venv\lib\site-packages\chatblade\cli.py", line 152, in cli
    handle_input(query, params)
  File "D:\OpenAI\chatblade\venv\lib\site-packages\chatblade\cli.py", line 81, in handle_input
    messages = fetch_and_cache(messages, params)
  File "D:\OpenAI\chatblade\venv\lib\site-packages\chatblade\cli.py", line 14, in fetch_and_cache
    result = chat.query_chat_gpt(messages, params)
  File "D:\OpenAI\chatblade\venv\lib\site-packages\chatblade\chat.py", line 136, in query_chat_gpt
    raise errors.ChatbladeError(f"openai error: {e}")
TypeError: __str__ returned non-string (type dict)

Indeed, __str__ returns a dict:

{'code': 'account_deactivated', 'message': 'Your OpenAI account has been deactivated, please check your email for more information. If you feel this is an error, contact us through our help center at help.openai.com.', 'param': None, 'type': 'invalid_request_error'}
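
A defensive formatter would sidestep the broken __str__ without assuming anything about the openai exception's attribute layout (a sketch of a possible fix):

def safe_error_text(e: Exception) -> str:
    """Format an exception even when its __str__ returns a non-string."""
    try:
        return str(e)
    except TypeError:
        return repr(e.args)  # fall back to the exception's raw payload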

How can I send a PDF to GPT-4?

cat my.pdf | chatblade 'explain the following pdf'

produces an error in sys.stdin.read(), called from parser.get_piped_input().

The error is UnicodeDecodeError: 'utf8' codec can't decode byte 0x9c

Is there a way of sending the PDF?

Many thanks!
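
PDFs are binary, so piping the raw bytes can't decode as UTF-8; the text has to be extracted first. A possible workaround, assuming the third-party pypdf package:

from pypdf import PdfReader

# Extract plain text, then pipe it: python extract.py | chatblade 'explain the following pdf'
reader = PdfReader("my.pdf")
print("\n".join(page.extract_text() or "" for page in reader.pages))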

FileExistsError on Windows

Thank you for developing this tool, it has been very helpful. However, I have encountered issues with the session feature while using it on Windows. When starting a query, an error message appears as follows:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "E:\python\Scripts\chatblade.exe\__main__.py", line 7, in <module>
  File "E:\python\Lib\site-packages\chatblade\__main__.py", line 5, in main
    cli.cli()
  File "E:\python\Lib\site-packages\chatblade\cli.py", line 152, in cli
    handle_input(query, params)
  File "E:\python\Lib\site-packages\chatblade\cli.py", line 81, in handle_input
    messages = fetch_and_cache(messages, params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\python\Lib\site-packages\chatblade\cli.py", line 26, in fetch_and_cache
    storage.to_cache(messages, params.session or utils.scratch_session)
  File "E:\python\Lib\site-packages\chatblade\storage.py", line 56, in to_cache
    os.rename(file_path_tmp, file_path)
FileExistsError: [WinError 183] 当文件已存在时,无法创建该文件。: 'C:\\Users\\qipao\\AppData\\Local\\chatblade\\chatblade\\Cache\\chatblade\\last.yaml.eDdQy479DK' -> 'C:\\Users\\qipao\\AppData\\Local\\chatblade\\chatblade\\Cache\\chatblade\\last.yaml'

It says that you cannot create a file when it already exists (the Chinese text in the traceback is Windows' localized "Cannot create a file when that file already exists" message). I tried on my Mac and everything works. Can this be fixed?
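
On Windows, os.rename raises when the target already exists, while os.replace overwrites atomically on both platforms. A sketch of the likely one-line fix in storage.py (an inference from the traceback, not the merged change):

import os

def atomic_write(file_path: str, data: bytes) -> None:
    tmp = file_path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
    # Unlike os.rename, os.replace overwrites an existing target on Windows.
    os.replace(tmp, file_path)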
