anthropics / anthropic-sdk-python
License: MIT License
Trying to run the basic code for Anthropic and getting this error: AttributeError: module 'anthropic' has no attribute 'Anthropic'
Using anthropic == 0.3.6
Code in my notebook:
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
anthropic = Anthropic(
    # defaults to os.environ.get("ANTHROPIC_API_KEY")
    api_key='replaced with my actual api key',
)
completion = anthropic.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} how does a court case get to the Supreme Court? {AI_PROMPT}",
)
print(completion.completion)
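A common cause of `AttributeError: module 'anthropic' has no attribute 'Anthropic'` (besides an outdated install) is a local file named `anthropic.py` shadowing the installed package. A quick, hedged way to check which file Python actually resolves for a module name (shown with `"json"` so the snippet runs anywhere; substitute `"anthropic"` in your environment):

```python
# Check which file Python would load for a given module name. If the path
# points into your own project directory rather than site-packages, a local
# script is shadowing the installed package.
import importlib.util

def resolve_module(name):
    """Return the path of the file that `import name` would load, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec and spec.origin else None

print(resolve_module("json"))                # a stdlib path, not your project dir
print(resolve_module("no_such_module_xyz"))  # None: not importable at all
```

If the printed path is your own `anthropic.py`, renaming that file usually resolves the error.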
First things first: I'm quite new to Python.
I'm trying out this package, but I can't find a way to hold a conversation with Claude that lasts longer than one exchange: how can I code an iterative conversation with it? Like "How are you?" "Not so well." "Why not?" "Because I'm tired."
This is my code:
import os
import anthropic

c = anthropic.Client(api_key="XXX")

def ask_Claude(frase):
    resp = c.completion(
        prompt=f"{anthropic.HUMAN_PROMPT} {frase} {anthropic.AI_PROMPT}",
        stop_sequences=[anthropic.HUMAN_PROMPT],
        model="claude-v1.3",
        max_tokens_to_sample=3000,
    )
    print(resp)

ask_Claude("Am I dumb?")
ask_Claude("Did I ask you something before?")
I'm guessing it's just not recommended to initialize the Client on each API call like this, but if you do something like the code below, you get an ever-growing number of open files (as shown by lsof) until you hit an OSError.
from src import anthropic
import os
def send_something():
    prompt = "\n\nHuman: This is a test prompt. Return some random text\n\nAssistant:"
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    resp = client.completions.create(
        prompt=prompt,
        stop_sequences=[anthropic.HUMAN_PROMPT],
        model="claude-v1",
        max_tokens_to_sample=100,
    )
    return resp.completion

if __name__ == "__main__":
    client = None
    for i in range(1000):
        print(send_something())
I can fix this by calling client._client.close(), but it seems like it would be nicer to expose a close() method or enable use of the client as a context manager. Suggestion here: #83
Or, if this is just totally the wrong way to use the client, maybe that could be mentioned in the README.
Thanks!
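The immediate workaround, independent of any SDK change, is to build the client once and reuse it across calls, so only one connection pool ever exists. A sketch of the shape of that fix, using a hypothetical stand-in class rather than the real anthropic.Anthropic (which would need network access):

```python
# Create the client once at module scope and reuse it, instead of constructing
# a new one per request. FakeClient is a stand-in that counts how many client
# objects (i.e. connection pools) get built.
class FakeClient:
    instances = 0
    def __init__(self):
        FakeClient.instances += 1
    def create(self, prompt):
        return f"echo: {prompt}"

client = FakeClient()  # module-level, built exactly once

def send_something(prompt):
    return client.create(prompt)  # reuse the shared client

for i in range(1000):
    send_something("test")
print(FakeClient.instances)  # 1: one connection pool for all 1000 calls
```

The per-call-construction version would print 1000 here, which is exactly the open-file growth lsof shows.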
I want to know how to modify my code so that Claude can recall the previous Q&A.
Thank you very much.
This is my code:
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
anthropic = Anthropic(
    # defaults to os.environ.get("ANTHROPIC_API_KEY")
    api_key="sk-xxxxxxxxxxxxxxxxx",
)
user_prompt = "Please write a poem about cats."
completion = anthropic.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} {user_prompt} {AI_PROMPT}",
)
print("ANS1:", completion.completion)
user_prompt = "Can you retell the topic we just talked about?"
completion = anthropic.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} {user_prompt} {AI_PROMPT}",
)
print("ANS2:", completion.completion)
This is the output:
ANS1: Here is a poem about cats:
Furry Felines
With soft fur and gentle purrs,
Cats slink around on quiet paws.
Watchful eyes take in their world,
While their tails twitch without pause.
Lounging in sunny windowsills,
Napping the afternoon away.
Exploring with curiosity,
Filled with energy at play.
Pouncing and leaping gracefully,
Chasing feathers on a string.
Rubbing against legs affectionately,
The comfort their presence brings.
Mysteries hid behind wise eyes,
Independent spirits roam free.
Enchanting pets beloved by many,
Furry felines are poetry.
ANS2: Unfortunately I don't have a long-term memory to retell past conversations. As an AI assistant created by Anthropic to be helpful, harmless, and honest, I can only respond to our current exchange.
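As ANS2 says, the model has no server-side memory: each completions.create() call sees only the prompt string you send. To let it recall the previous Q&A, keep the earlier turns yourself and prepend them to each new prompt. A sketch of that, with the API call stubbed out (the real code would pass the same prompt string to anthropic.completions.create):

```python
# Keep a list of (question, answer) pairs and rebuild the full transcript
# for every request. ask() takes a fake_answer in place of a real API call.
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"

turns = []  # list of (user_prompt, completion) pairs so far

def build_prompt(user_prompt):
    transcript = "".join(
        f"{HUMAN_PROMPT} {q}{AI_PROMPT} {a}" for q, a in turns
    )
    return f"{transcript}{HUMAN_PROMPT} {user_prompt}{AI_PROMPT}"

def ask(user_prompt, fake_answer):
    prompt = build_prompt(user_prompt)
    # completion = anthropic.completions.create(model="claude-2", prompt=prompt, ...)
    answer = fake_answer  # stand-in for completion.completion
    turns.append((user_prompt, answer))
    return prompt

ask("Please write a poem about cats.", "Here is a poem about cats...")
p = ask("Can you retell the topic we just talked about?", "We talked about cats.")
print("poem about cats" in p)  # True: the second prompt includes the first Q&A
```

With this, the second call would see the cat poem in its prompt and could retell the topic. Note the transcript counts against the context window, so long conversations eventually need truncation or summarization.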
The simplest possible use of count_tokens ends with an error
import anthropic
anthropic.count_tokens('hello')
The error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/mobarski/.local/lib/python3.10/site-packages/anthropic/tokenizer.py", line 34, in count_tokens
tokenizer = get_tokenizer()
File "/home/mobarski/.local/lib/python3.10/site-packages/anthropic/tokenizer.py", line 29, in get_tokenizer
claude_tokenizer = Tokenizer.from_str(tokenizer_data)
Exception: expected value at line 1 column 1
A quick inspection of tokenizer.py shows the URL from which the tokenizer should be downloaded:
CLAUDE_TOKENIZER_REMOTE_FILE = "https://public-json-tokenization-0d8763e8-0d7e-441b-a1e2-1c73b8e79dc3.storage.googleapis.com/claude-v1-tokenization.json"
And trying to access this link in the browser gives this:
<Error>
<Code>UserProjectAccountProblem</Code>
<Message>
The project to be billed is associated with a closed billing account.
</Message>
<Details>
The billing account for the owning project is disabled in state closed
</Details>
</Error>
When I run:
pip install anthropic
Then:
from anthropic import Anthropic
client = Anthropic()
client.count_tokens('Hello world!') # 3
I get the error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".pyenv\pyenv-win\versions\3.11.3\Lib\site-packages\anthropic\_client.py", line 225, in count_tokens
    tokenizer = self.get_tokenizer()
    ^^^^^^^^^^^^^^^^^^^^
  File ".pyenv\pyenv-win\versions\3.11.3\Lib\site-packages\anthropic\_client.py", line 230, in get_tokenizer
    return sync_get_tokenizer()
    ^^^^^^^^^^^^^^^^^^^^
  File ".pyenv\pyenv-win\versions\3.11.3\Lib\site-packages\anthropic\_tokenizers.py", line 33, in sync_get_tokenizer
    text = tokenizer_path.read_text()
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".pyenv\pyenv-win\versions\3.11.3\Lib\pathlib.py", line 1059, in read_text
    return f.read()
    ^^^^^^^^
  File ".pyenv\pyenv-win\versions\3.11.3\Lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1980: character maps to <undefined>
I'm on a windows machine and setting my environment variables -> system settings PYTHONUTF8 to 1 didn't work either.
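The traceback shows the root cause: the cached tokenizer JSON is UTF-8, but `Path.read_text()` with no `encoding` argument falls back to the locale codec, which is cp1252 on this Windows setup, and cp1252 has no mapping for byte 0x81. A minimal reproduction, forcing the failing codec explicitly so it behaves the same on any OS:

```python
# Reproduce the decode failure: write UTF-8 bytes containing 0x81 (here from
# "\u0101"), then read them back with UTF-8 (works) and cp1252 (fails).
import tempfile
from pathlib import Path

with tempfile.NamedTemporaryFile(delete=False, suffix=".json") as f:
    f.write('{"token": "\u0101"}'.encode("utf-8"))  # UTF-8 for U+0101 is C4 81
    path = Path(f.name)

print(path.read_text(encoding="utf-8"))  # explicit UTF-8 decodes fine

try:
    path.read_text(encoding="cp1252")  # the Windows locale default
except UnicodeDecodeError as e:
    print("cp1252 failed:", e.reason)  # 0x81 is undefined in cp1252
```

This suggests the fix belongs in the SDK (passing `encoding="utf-8"` to the `read_text()` call in `_tokenizers.py`) rather than in user environment variables.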
I've been getting a lot of APITimeoutErrors, which don't seem to be documented in https://docs.anthropic.com/claude/reference/errors-and-rate-limits. I only have one request out at a time, so I don't think I should be over the rate limit?
Thanks for the help!
completion = anthropic.completions.create(
File "/Users/cass/Documents/GitHub/core/env/lib/python3.10/site-packages/anthropic/_utils/_utils.py", line 253, in wrapper
return func(*args, **kwargs)
File "/Users/cass/Documents/GitHub/core/env/lib/python3.10/site-packages/anthropic/resources/completions.py", line 303, in create
return self._post(
File "/Users/cass/Documents/GitHub/core/env/lib/python3.10/site-packages/anthropic/_base_client.py", line 925, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/Users/cass/Documents/GitHub/core/env/lib/python3.10/site-packages/anthropic/_base_client.py", line 724, in request
return self._request(
File "/Users/cass/Documents/GitHub/core/env/lib/python3.10/site-packages/anthropic/_base_client.py", line 764, in _request
return self._retry_request(options, cast_to, retries, stream=stream, stream_cls=stream_cls)
File "/Users/cass/Documents/GitHub/core/env/lib/python3.10/site-packages/anthropic/_base_client.py", line 801, in _retry_request
return self._request(
File "/Users/cass/Documents/GitHub/core/env/lib/python3.10/site-packages/anthropic/_base_client.py", line 764, in _request
return self._retry_request(options, cast_to, retries, stream=stream, stream_cls=stream_cls)
File "/Users/cass/Documents/GitHub/core/env/lib/python3.10/site-packages/anthropic/_base_client.py", line 801, in _retry_request
return self._request(
File "/Users/cass/Documents/GitHub/core/env/lib/python3.10/site-packages/anthropic/_base_client.py", line 765, in _request
raise APITimeoutError(request=request) from err
anthropic.APITimeoutError: Request timed out.
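The traceback above already shows the SDK retrying twice before surfacing the error; beyond checking whether your installed version's `Anthropic(...)` constructor accepts `timeout` and `max_retries` parameters (it does in recent releases, but verify against your version), a generic retry-with-exponential-backoff wrapper is a common mitigation for transient timeouts. A sketch, with a stand-in exception class so it runs offline:

```python
# Retry a callable on timeout, with exponential backoff between attempts.
import time

class APITimeoutError(Exception):  # stand-in for anthropic.APITimeoutError
    pass

def with_retries(fn, max_attempts=3, base_delay=0.01):
    """Call fn(), retrying on timeout with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except APITimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller see the error
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise APITimeoutError("Request timed out.")
    return "ok"

print(with_retries(flaky))  # "ok" after two simulated timeouts
```

In real use, `fn` would be a closure around `anthropic.completions.create(...)` and the caught exception would be the real `anthropic.APITimeoutError`.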
I've tried using map_reduce, refine, and other approaches with LangChain, but that results in much worse responses:
https://docs.langchain.com/docs/components/chains/index_related_chains
One idea I had was to break the 100k tokens into chunks, summarize each chunk, and then do a full summary over the chunk summaries.
I have API access and am testing claude-v1-100k. I call the API passing text into the prompt, which is a large doc.
response = client.completion_stream(
    prompt=f"{anthropic.HUMAN_PROMPT} What are the limitations of task-specific fine-tuning? Use this context to answer: %s {anthropic.AI_PROMPT}" % text,
    stop_sequences=[anthropic.HUMAN_PROMPT],
    max_tokens_to_sample=200,
    model="claude-v1-100k",
    stream=True,
)
for data in response:
    print(data)
I see max token count ~10k:
ApiException: Prompt tokens (62386) + max-sampled tokens (200) exceeds max (9216)
Is it possible that some users do not have access to claude-v1-100k? I also tried with a second API key.
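The chunk-then-summarize idea mentioned above can be sketched as a small map-reduce. Everything here is a simplified assumption: `summarize()` is a stub for a real completions call with a "summarize this" prompt, and the 4-characters-per-token ratio is only a rough heuristic for sizing chunks:

```python
# Split a long document into pieces that fit the model's context, summarize
# each piece, then summarize the concatenation of those summaries.
def chunk_text(text, max_tokens=8000, chars_per_token=4):
    size = max_tokens * chars_per_token
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize(text):
    # Stub: in practice, send the text to the API with a summarization prompt.
    return text[:20]

def map_reduce_summary(text):
    partials = [summarize(c) for c in chunk_text(text)]      # map step
    return summarize("\n".join(partials))                     # reduce step

doc = "lorem ipsum " * 20000  # ~240k characters, over one context window
print(len(chunk_text(doc)))   # 8 chunks of at most 32k characters each
print(map_reduce_summary(doc))
```

This also sidesteps the 9216-token cap in the error above, since each call stays well under the limit; the trade-off is that the final summary can lose cross-chunk detail.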
Does Anthropic have any SDKs that can enable uploading attachments and then chatting?
Hey! I'm trying to use the API (streaming & async), but one annoying thing is that responses are streamed cumulatively, and the response always has a space at the beginning of the completion.
I'm trying to get the incremental difference (by doing diffs), but this small spacing change is messing up the response. Just wanted to flag it, thanks!
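One way to handle both problems together: keep the previous cumulative string, emit only the new suffix each time, and strip the single leading space once at the very start. A sketch over a fake event stream:

```python
# Convert cumulative stream events into incremental deltas, dropping the
# leading space of the completion exactly once.
def incremental(events):
    prev = ""
    first = True
    for cum in events:
        delta = cum[len(prev):]  # only the newly added suffix
        if first:
            delta = delta.lstrip(" ")  # the completion starts with a space
            first = False
        prev = cum
        yield delta

events = [" Hello", " Hello,", " Hello, world"]
print(list(incremental(events)))  # ['Hello', ',', ' world']
```

This assumes each event is a strict extension of the previous one, which matches the cumulative behavior described above.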
Hello,
I noticed that the project is currently depending on pydantic version "^1.9.0". Given that pydantic is already at 2.1.1, I was wondering if there are any plans to upgrade your dependencies for this project.
Thanks!
Does anyone know if it is possible to use Claude for document question answering in LangChain?
The Claude LLM is integrated into LangChain. What is missing: LangChain-compatible Claude embeddings.
Does anyone know how I can generate them, or alternative routes I could take to get Claude working with LangChain document question answering?
Hi,
To start off, I'd like to say I'm loving Claude so far, as it's more up to date than OpenAI's models. I'm planning on integrating Claude in my startup's technology for sure.
This probably isn't the most ideal place to put this issue, and you can close it if so, but I was having a hard time figuring out where to report bugs for Claude via the online chat beta. Claude gave me numerous suggestions and links, but none of them were real.
I was using Claude chat online for navigating Azure and a specific phrase/word causes fetch failure. It had made 3 separate suggestions on fixing a bug I'm having. One of them was checking the web.config for the app service. I asked it to expand on all three suggestions in the same message and it kept failing, so I decided to go one-by-one. Using "web.config" in the chat was deemed the culprit.
It's probably a security rule built in or something, and probably a phrase that would appear very rarely, but I wanted to report it just in case.
I noticed a small issue with the URL concatenation logic in anthropic-sdk-python/anthropic/api.py (line 71 in 4187c65): joining http://localhost:5000/anthropic and /v1/complete gives http://localhost:5000/v1/complete instead of the expected http://localhost:5000/anthropic/v1/complete.
abs_url = "%s%s" % (self.api_base, url)
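For context, the two common joining behaviors differ exactly on this case: `urllib.parse.urljoin` resolves an absolute path like `/v1/complete` against the host and discards the base URL's path, while plain string concatenation keeps it. Which one is correct depends on whether `api_base` is allowed to carry a path prefix:

```python
# urljoin drops the base path when the second argument starts with "/";
# string concatenation preserves it.
from urllib.parse import urljoin

api_base = "http://localhost:5000/anthropic"
url = "/v1/complete"

print(urljoin(api_base, url))    # http://localhost:5000/v1/complete
print("%s%s" % (api_base, url))  # http://localhost:5000/anthropic/v1/complete
```

So if the SDK is producing the truncated URL, it is behaving like `urljoin`; the `"%s%s"` form shown above yields the expected prefixed URL.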
Your model is good but, like all models, could use improvement.
Is there a way to provide feedback about the model to help you improve it? I know there's an option in the chat console, but I am using it through the API, so I can't really submit feedback through that. Maybe I can drop you an email.
I'm receiving KeyErrors when trying to run a completion. I've tried 5 different API keys and all the different models with no success.
Using the code from the docs:
import os
import anthropic
client = anthropic.Client(os.environ['API_KEY_HERE'])
response = client.completion(
    prompt=f"{anthropic.HUMAN_PROMPT} How many toes do dogs have?{anthropic.AI_PROMPT}",
    model="claude-1",
    max_tokens_to_sample=100,
)
print(response)
I am using Python 3.9 with anthropic==0.3.1 installed, and when I run:
from anthropic import AsyncAnthropic
I am getting:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "venv/lib/python3.9/site-packages/anthropic/__init__.py", line 3, in <module>
    from . import types
  File "venv/lib/python3.9/site-packages/anthropic/types/__init__.py", line 5, in <module>
    from .completion import Completion as Completion
  File "venv/lib/python3.9/site-packages/anthropic/types/completion.py", line 3, in <module>
    from .._models import BaseModel
  File "venv/lib/python3.9/site-packages/anthropic/_models.py", line 11, in <module>
    from pydantic.fields import ModelField
ImportError: cannot import name 'ModelField' from 'pydantic.fields' (venv/lib/python3.9/site-packages/pydantic/fields.py)
Windows 11 with Python 3.10, using the code below. It turned out to be an "Invalid API Key" error.
But I'm sure the api_key is good, because I could get a good response via an unofficial API call (from another GitHub repository).
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
anthropic = Anthropic(api_key = "sk-ant-XXXXXX")
def getResponse(prompt):
    msg = f"{HUMAN_PROMPT} {prompt} {AI_PROMPT}"
    print(msg)
    completion = anthropic.completions.create(
        model="claude-2",
        max_tokens_to_sample=30000,
        prompt=msg,
    )
    res = completion.completion
    print(res)
    return res

if __name__ == "__main__":
    getResponse("Hello, Claude")
The last 3 lines of the error message:
  File "D:\Python310\anthropic\lib\site-packages\anthropic\_base_client.py", line 761, in _request
    raise self._make_status_error_from_response(request, err.response) from None
anthropic.AuthenticationError: Error code: 401 - {'error': {'type': 'authentication_error', 'message': 'Invalid API Key'}}
Appreciate your help. Thanks.
Hey! We want to release an SDK for C# and would like to know if you have an official Swagger/OpenAPI spec. I see that your SDKs are generated with Stainless from an OpenAPI spec, but I didn't find it publicly available.
I think it would be great if this were available in a separate repository and updated with the latest changes, like OpenAI's: https://github.com/openai/openai-openapi
In the README, where it mentions AsyncAnthropic, the import statement uses Anthropic instead of AsyncAnthropic:
Async Usage
Simply import AsyncAnthropic instead of Anthropic and use await with each API call:
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
anthropic = AsyncAnthropic(
    # defaults to os.environ.get("ANTHROPIC_API_KEY")
    api_key="my api key",
)
#suggestion
Right now it seems like the only way to install anthropic is to clone the project first. Is there any possibility that this SDK could be made available as a pip package so we can install it more easily?
Pydantic 2 has been out for a while, but I'm unable to use it in any project that has an anthropic dependency because of this library's versioning restrictions: pydantic [required: >=1.9.0,<2.0.0, installed: 1.9.0]. Could we upgrade it?
Hey, I'm trying to install the module and am getting the following bug, where the module seems to install as UNKNOWN.
Defaulting to user installation because normal site-packages is not writeable
Processing /home/ubuntu/Projects/anthropic-sdk-python
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: UNKNOWN
Building wheel for UNKNOWN (pyproject.toml) ... done
Created wheel for UNKNOWN: filename=UNKNOWN-0.0.0-py3-none-any.whl size=1772 sha256=104b40aa63bc234ef702dd764ee0f3006a526856e0e72b3b16f8ef40b02ab761
Stored in directory: /home/ubuntu/.cache/pip/wheels/36/a6/f9/24ebd35574b84752dd8eff2d63418a52e41200a3f10b9652df
Successfully built UNKNOWN
Installing collected packages: UNKNOWN
Attempting uninstall: UNKNOWN
Found existing installation: UNKNOWN 0.0.0
Uninstalling UNKNOWN-0.0.0:
Successfully uninstalled UNKNOWN-0.0.0
Successfully installed UNKNOWN-0.0.0
I'm on the following Ubuntu version:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.2 LTS
Release: 22.04
Codename: jammy
Running the following version of Python:
Python 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
And, I'm on this version of setup tools:
setuptools 67.6.1
I upgraded setup tools to make sure it was working and also checked to make sure that the TOML file was a valid TOML file.
We have a server implemented with FastAPI that calls Anthropic through Python, but when we ran the following experiments, memory kept increasing and was not released after we stopped sending requests to the server.
Sample server code logic
chat = Anthropic(api_key=api_key)
completion = chat.completions.create(
    model="claude-2",
    max_tokens_to_sample=20,
    prompt=f"{HUMAN_PROMPT} what is {passed_in_number} {AI_PROMPT}",
)
return completion
When I replaced the Anthropic Python SDK with an httpx request directly, memory stayed low and stable during the same experiment.
Same issue when using 0.3.11 as well.
Would be great if you can help to take a look.
I have tested anthropic 0.3.11 and 0.2.9; both may have a file descriptor leak.
code 1 :
@retry(stop=stop_after_attempt(5), wait=wait_exponential(max=10), retry=retry_if_exception_type((ConnectionResetError, anthropic.InternalServerError)))
def send_something():
    prompt = "\n\nHuman: This is a test prompt. Return some random text\n\nAssistant:"
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    resp = client.completions.create(
        prompt=prompt,
        stop_sequences=[anthropic.HUMAN_PROMPT],
        model="claude-instant-1.2",
        max_tokens_to_sample=100,
    )
    client.close()
    return resp.completion

if __name__ == "__main__":
    client = None
    for i in range(10000):
        print(send_something())
code 2 :
@retry(stop=stop_after_attempt(5), wait=wait_exponential(max=10), retry=retry_if_exception_type((ConnectionResetError, anthropic.InternalServerError)))
def send_something():
    prompt = "\n\nHuman: This is a test prompt. Return some random text\n\nAssistant:"
    resp = client.completions.create(
        prompt=prompt,
        stop_sequences=[anthropic.HUMAN_PROMPT],
        model="claude-instant-1.2",
        max_tokens_to_sample=100,
    )
    return resp.completion

if __name__ == "__main__":
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    for i in range(10000):
        print(send_something())
Both snippets eventually led to a file descriptor leak after 5000-10000 loops. I even tried removing the retry decorator, and the same file descriptor leak happened. When I changed to using requests directly:
headers = {"x-api-key": f'{api_key}', "Content-Type": 'application/json'}
data = {"prompt": f'\n\nHuman: {prompt1}\n\nAssistant: ', "model": 'claude-instant-1.2', "max_tokens_to_sample": 10000, "temperature": 0}
r = requests.post('https://api.anthropic.com/v1/complete', headers=headers, json=data, timeout=500)
everything is fine: no file descriptor leak at all.
I now have some plain text files to process, but I found that if I upload them as attachments on the web page, the response I get is significantly better than the results obtained by using the API. I would like to know if there is any way to make the results of the API match the results of the web page.
The format of the prompt I send in the API is roughly like this:
{Introduction}
Here is the file, in <File></File>XML tags:
<File>
{document content}
</File>
What I send on the web page is just "{Introduction}", with the file uploaded as an attachment.
My understanding is that the web page must use a fixed format to join the file content to my prompt, which gets better results. Could I get this splicing format? If you need the text of the {Introduction} part or the specific text files involved, please contact me, thank you very much!
This problem has bothered me for several days; I am looking forward to your reply :)
I try to log in and get the error: "You do not currently have a valid Anthropic account. You may need to accept your invite: check for an email with the subject 'Your invitation to Claude, from Anthropic' and click the link there."
Can anyone send me an invitation? Thanks.
Race conditions during file modification between multiple calls to get_tokenizer can result in errors such as anthropic.tokenizer.TokenizerException: Failed to load tokenizer: EOF while parsing a value at line 1 column 0.
get_tokenizer is thread-hostile. It should either have a docstring indicating that it's thread-hostile, or a lock should be used to make it thread-safe.
I noticed that Anthropic's APIs use cumulative streaming in the completion endpoint, resulting in repetitive data being sent over the wire. Is there a reason for this design choice? I imagine incremental streaming is typically preferred for efficiency.
I see this is on the endpoint and not a Python or TS issue, so it could impact existing customers.
Hey there, I was wondering if asynchronous calls are or will be supported. Thanks!
Even though claude-2 has a 100k-token context window, I cannot get it to generate more than 4k tokens. It interrupts generation after exactly 4096 tokens, even if I set max_tokens_to_sample to more than 4096. I tried both anthropic-sdk-python and the web interface; both interrupt at the same place.
Code to reproduce:
from anthropic import Anthropic

anthropic = Anthropic(
    api_key="",
)
prompt_4649_tokens = """
Human:
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ultrices vitae auctor eu augue ut lectus arcu. Sed nisi lacus sed viverra tellus in hac. Augue neque gravida in fermentum et sollicitudin. Mollis nunc sed id semper risus in hendrerit. Quam id leo in vitae turpis massa sed elementum tempus. Sed viverra tellus in hac habitasse. Ornare suspendisse sed nisi lacus sed viverra. Nisi porta lorem mollis aliquam. Convallis posuere morbi leo urna molestie at elementum.
Vulputate sapien nec sagittis aliquam malesuada bibendum arcu vitae. Aenean vel elit scelerisque mauris pellentesque. Magna fermentum iaculis eu non diam. Cursus eget nunc scelerisque viverra mauris. Convallis aenean et tortor at risus. Et tortor at risus viverra adipiscing at in tellus integer. Diam phasellus vestibulum lorem sed risus ultricies. Commodo sed egestas egestas fringilla phasellus faucibus scelerisque. Donec ac odio tempor orci dapibus ultrices. Ultrices in iaculis nunc sed augue.
Posuere sollicitudin aliquam ultrices sagittis orci. Aliquet bibendum enim facilisis gravida neque convallis. Lorem dolor sed viverra ipsum nunc aliquet bibendum enim facilisis. Tristique et egestas quis ipsum suspendisse ultrices gravida dictum. Ornare lectus sit amet est placerat. Ac ut consequat semper viverra nam libero. Dignissim cras tincidunt lobortis feugiat vivamus at augue. Enim tortor at auctor urna. Neque vitae tempus quam pellentesque nec nam. Volutpat commodo sed egestas egestas fringilla phasellus faucibus. Eget sit amet tellus cras adipiscing enim eu turpis egestas. Nisl tincidunt eget nullam non nisi est sit. In fermentum et sollicitudin ac orci. Vulputate mi sit amet mauris commodo. Velit euismod in pellentesque massa placerat duis ultricies lacus sed. Donec ac odio tempor orci dapibus ultrices in iaculis nunc. Elit eget gravida cum sociis natoque penatibus et. In massa tempor nec feugiat nisl pretium fusce id.
Urna porttitor rhoncus dolor purus non enim praesent. Semper risus in hendrerit gravida rutrum quisque. Venenatis cras sed felis eget velit aliquet sagittis. Augue neque gravida in fermentum et sollicitudin ac orci. Risus in hendrerit gravida rutrum quisque non tellus orci. Eu ultrices vitae auctor eu augue ut. Metus vulputate eu scelerisque felis. Platea dictumst quisque sagittis purus sit amet volutpat. Imperdiet massa tincidunt nunc pulvinar sapien et ligula ullamcorper malesuada. Tortor posuere ac ut consequat semper. Volutpat odio facilisis mauris sit amet massa vitae. Gravida in fermentum et sollicitudin ac orci phasellus egestas tellus.
Adipiscing elit pellentesque habitant morbi tristique senectus et netus. Volutpat sed cras ornare arcu dui vivamus arcu felis. Aliquam etiam erat velit scelerisque. Mauris nunc congue nisi vitae suscipit tellus mauris a diam. Odio ut sem nulla pharetra diam sit amet nisl suscipit. Volutpat commodo sed egestas egestas fringilla phasellus faucibus scelerisque eleifend. Diam sollicitudin tempor id eu nisl nunc mi. Pellentesque id nibh tortor id aliquet. At risus viverra adipiscing at. Arcu odio ut sem nulla pharetra diam sit amet nisl. Lectus mauris ultrices eros in cursus turpis massa tincidunt dui. Nec sagittis aliquam malesuada bibendum arcu vitae elementum. Porttitor rhoncus dolor purus non. Non curabitur gravida arcu ac tortor dignissim convallis. Augue mauris augue neque gravida in. Purus sit amet volutpat consequat mauris nunc. Nisl vel pretium lectus quam id leo in. Facilisis gravida neque convallis a cras semper.
Velit euismod in pellentesque massa placerat duis. Cras tincidunt lobortis feugiat vivamus at augue eget arcu dictum. Turpis nunc eget lorem dolor sed viverra ipsum nunc. Odio eu feugiat pretium nibh ipsum consequat. Viverra justo nec ultrices dui sapien eget mi proin. Netus et malesuada fames ac. Consequat semper viverra nam libero. Massa placerat duis ultricies lacus sed. A scelerisque purus semper eget duis at tellus at. Sit amet dictum sit amet justo donec enim. Bibendum at varius vel pharetra vel. Congue quisque egestas diam in arcu cursus.
Scelerisque purus semper eget duis at tellus at urna condimentum. Id consectetur purus ut faucibus pulvinar elementum integer enim neque. Id nibh tortor id aliquet. Lectus quam id leo in vitae turpis massa sed elementum. Sit amet mattis vulputate enim nulla aliquet porttitor lacus luctus. Faucibus pulvinar elementum integer enim neque. Elit at imperdiet dui accumsan sit amet nulla facilisi. Tincidunt tortor aliquam nulla facilisi cras. Sollicitudin aliquam ultrices sagittis orci a scelerisque. Ac ut consequat semper viverra nam libero justo.
Risus in hendrerit gravida rutrum quisque non tellus orci. Pharetra sit amet aliquam id diam. Fermentum odio eu feugiat pretium nibh. Sit amet dictum sit amet justo donec enim diam vulputate. Non arcu risus quis varius quam quisque id diam. Nullam non nisi est sit amet facilisis magna. Elit eget gravida cum sociis natoque penatibus et magnis. Integer feugiat scelerisque varius morbi. Nec sagittis aliquam malesuada bibendum arcu vitae. Id consectetur purus ut faucibus pulvinar elementum integer.
Vitae elementum curabitur vitae nunc sed velit dignissim. Eget felis eget nunc lobortis mattis aliquam. Pretium vulputate sapien nec sagittis aliquam malesuada bibendum arcu. Faucibus turpis in eu mi bibendum neque egestas congue. In ornare quam viverra orci sagittis eu volutpat. Nulla facilisi etiam dignissim diam quis enim lobortis scelerisque. Ut pharetra sit amet aliquam id diam maecenas ultricies. Ornare arcu dui vivamus arcu felis bibendum ut. In eu mi bibendum neque egestas congue. Nisl condimentum id venenatis a condimentum vitae sapien pellentesque habitant. Sed viverra ipsum nunc aliquet bibendum enim facilisis. Interdum posuere lorem ipsum dolor sit amet consectetur adipiscing. Velit laoreet id donec ultrices tincidunt arcu non sodales.
Viverra justo nec ultrices dui sapien. Viverra orci sagittis eu volutpat odio facilisis mauris sit amet. Vel pretium lectus quam id leo in vitae. Senectus et netus et malesuada fames ac. Eu tincidunt tortor aliquam nulla facilisi cras. Blandit volutpat maecenas volutpat blandit aliquam etiam erat velit scelerisque. Et egestas quis ipsum suspendisse ultrices gravida dictum fusce ut. In hac habitasse platea dictumst. Posuere ac ut consequat semper. Diam sit amet nisl suscipit adipiscing bibendum est ultricies. Id volutpat lacus laoreet non curabitur. Lectus magna fringilla urna porttitor rhoncus dolor. Massa tincidunt nunc pulvinar sapien et ligula. Nunc id cursus metus aliquam eleifend mi in. Non arcu risus quis varius quam quisque id diam. Duis tristique sollicitudin nibh sit. Erat pellentesque adipiscing commodo elit at imperdiet dui accumsan sit. Praesent elementum facilisis leo vel.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ultrices vitae auctor eu augue ut lectus arcu. Sed nisi lacus sed viverra tellus in hac. Augue neque gravida in fermentum et sollicitudin. Mollis nunc sed id semper risus in hendrerit. Quam id leo in vitae turpis massa sed elementum tempus. Sed viverra tellus in hac habitasse. Ornare suspendisse sed nisi lacus sed viverra. Nisi porta lorem mollis aliquam. Convallis posuere morbi leo urna molestie at elementum.
Vulputate sapien nec sagittis aliquam malesuada bibendum arcu vitae. Aenean vel elit scelerisque mauris pellentesque. Magna fermentum iaculis eu non diam. Cursus eget nunc scelerisque viverra mauris. Convallis aenean et tortor at risus. Et tortor at risus viverra adipiscing at in tellus integer. Diam phasellus vestibulum lorem sed risus ultricies. Commodo sed egestas egestas fringilla phasellus faucibus scelerisque. Donec ac odio tempor orci dapibus ultrices. Ultrices in iaculis nunc sed augue.
Posuere sollicitudin aliquam ultrices sagittis orci. Aliquet bibendum enim facilisis gravida neque convallis. Lorem dolor sed viverra ipsum nunc aliquet bibendum enim facilisis. Tristique et egestas quis ipsum suspendisse ultrices gravida dictum. Ornare lectus sit amet est placerat. Ac ut consequat semper viverra nam libero. Dignissim cras tincidunt lobortis feugiat vivamus at augue. Enim tortor at auctor urna. Neque vitae tempus quam pellentesque nec nam. Volutpat commodo sed egestas egestas fringilla phasellus faucibus. Eget sit amet tellus cras adipiscing enim eu turpis egestas. Nisl tincidunt eget nullam non nisi est sit. In fermentum et sollicitudin ac orci. Vulputate mi sit amet mauris commodo. Velit euismod in pellentesque massa placerat duis ultricies lacus sed. Donec ac odio tempor orci dapibus ultrices in iaculis nunc. Elit eget gravida cum sociis natoque penatibus et. In massa tempor nec feugiat nisl pretium fusce id.
prompt_4649_tokens = """

Human: Urna porttitor rhoncus dolor purus non enim praesent. [several more paragraphs of lorem ipsum filler omitted here; the full prompt is ~4,649 tokens]

You are echo bot, just repeat the text above

Assistant:"""
completion_4096_tokens = anthropic.completions.create(
    model="claude-2.0",
    max_tokens_to_sample=5000,
    prompt=prompt_4649_tokens,
)
print(completion_4096_tokens.completion)
If you run that code, you can see that the completion is cut off mid-sentence after reaching 4096 tokens:
...
Vitae elementum curabitur vitae nunc sed velit dignissim. Eget felis eget n
Those are the last characters of the output.
The developer log on the Anthropic console shows the same thing.
So the 100k token window is only for input, and only 4k tokens are available for output? That is very unexpected (OpenAI models do not behave this way) and undocumented behaviour.
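Until the service supports larger outputs, a common client-side workaround (hedged: a generic technique, not an official SDK feature) is to feed each truncated completion back into the prompt and ask the model to continue, stitching the pieces together. The toy `fake_complete` below is a stand-in for `anthropic.completions.create`:

```python
def complete_long(prompt, complete_fn, chunk_limit):
    """Repeatedly call complete_fn, appending each partial completion to the
    prompt, until a call stops before hitting the per-call output cap."""
    output = ""
    while True:
        piece = complete_fn(prompt + output)
        output += piece
        if len(piece) < chunk_limit:  # stopped naturally, not truncated
            return output

# Toy stand-in for the API: emits the remaining text, at most 5 chars per call.
TARGET = "0123456789ABCDEF"

def fake_complete(prompt_so_far):
    done = len(prompt_so_far) - len("Q: ")  # how much output we already have
    return TARGET[done:done + 5]

assert complete_long("Q: ", fake_complete, 5) == TARGET
```

With the real API you would detect truncation via the stop reason rather than the length heuristic used here.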
When I put a JSON file as an example in the prompt, the API request fails with a KeyError. I guess the prompt treats { } as template placeholders. Can you advise a solution? Below is the JSON file I put into the prompt.
[
  {
    "verified": "yes",
    "relationship": "friends",
    "name": "John"
  },
  {
    "verified": "yes",
    "relationship": "workmates",
    "name": "Richard"
  }
]
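I can't see the calling code, but a KeyError on a prompt containing { } strongly suggests the prompt is being run through str.format() (or a templating layer such as LangChain's PromptTemplate) somewhere, since the Anthropic API itself does not interpret braces. A minimal reproduction, plus the usual escape (doubling the braces):

```python
template = 'Here is an example record: {"verified": "yes"}'

try:
    template.format()  # str.format treats {...} as a replacement field
except KeyError as e:
    print("literal braces broke format():", e)

# Doubled braces render as literal braces after formatting.
escaped = template.replace("{", "{{").replace("}", "}}")
assert escaped.format() == template
```

If a templating library is involved, it usually has its own escaping rules, but the doubled-brace convention above covers plain str.format().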
Where can I find my API key?
Super interested in testing this out 🚀
Hello,
I'm on Windows and installed with pip install git+https://github.com/anthropics/anthropic-sdk-python.git
Seeing this issue when issuing a standard completion prompt.
This is the command:
import anthropic

anthropic_client = anthropic.Client(api_key=...)
prompt = '\n\nHuman: This is a chat between me and you\n\nAssistant:'
resp = anthropic_client.completion(
    prompt=prompt,
    stop_sequences=[anthropic.HUMAN_PROMPT],
    model="claude-instant-v1",
    max_tokens_to_sample=200,
)
Here is the complete error:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
Cell In[35], line 1
----> 1 resp = anthropic_client.completion(
2 prompt=prompt,
3 stop_sequences=[anthropic.HUMAN_PROMPT],
4 model=model,
5 max_tokens_to_sample=max_tokens,
6 )
File ~\.conda\envs\guideml\lib\site-packages\anthropic\api.py:237, in Client.completion(self, **kwargs)
236 def completion(self, **kwargs) -> dict:
--> 237 return self._request_as_json(
238 "post",
239 "/v1/complete",
240 params=kwargs,
241 )
File ~\.conda\envs\guideml\lib\site-packages\anthropic\api.py:196, in Client._request_as_json(self, *args, **kwargs)
195 def _request_as_json(self, *args, **kwargs) -> dict:
--> 196 result = self._request_raw(*args, **kwargs)
197 content = result.content.decode("utf-8")
198 json_body = json.loads(content)
File ~\.conda\envs\guideml\lib\site-packages\anthropic\api.py:117, in Client._request_raw(self, method, path, params, headers, request_timeout)
109 def _request_raw(
110 self,
111 method: str,
(...)
115 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
116 ) -> requests.Response:
--> 117 request = self._request_params(headers, method, params, path, request_timeout)
118 result = self._session.request(
119 request.method,
120 request.url,
(...)
124 timeout=request.timeout,
125 )
127 _process_request_error(
128 method, result.content.decode("utf-8"), result.status_code
129 )
File ~\.conda\envs\guideml\lib\site-packages\anthropic\api.py:85, in Client._request_params(self, headers, method, params, path, request_timeout)
79 del params["disable_checks"]
80 else:
81 # NOTE: disabling_checks can lead to very poor sampling quality from our API.
82 # _Please_ read the docs on "Claude instructions when using the API" before disabling this.
83 # Also note, future versions of the API will enforce these as hard constraints automatically,
84 # so please consider these SDK-side checks as things you'll need to handle regardless.
---> 85 _validate_request(params)
86 data = None
87 if params:
File ~\.conda\envs\guideml\lib\site-packages\anthropic\api.py:271, in _validate_request(params)
269 if prompt.endswith(" "):
270 raise ApiException(f"Prompt must not end with a space character")
--> 271 _validate_prompt_length(params)
File ~\.conda\envs\guideml\lib\site-packages\anthropic\api.py:276, in _validate_prompt_length(params)
274 def _validate_prompt_length(params: dict) -> None:
275 prompt: str = params["prompt"]
--> 276 prompt_tokens = tokenizer.count_tokens(prompt)
277 max_tokens_to_sample: int = params["max_tokens_to_sample"]
278 token_limit = 9 * 1024
File ~\.conda\envs\guideml\lib\site-packages\anthropic\tokenizer.py:34, in count_tokens(text)
33 def count_tokens(text: str) -> int:
---> 34 tokenizer = get_tokenizer()
35 encoded_text = tokenizer.encode(text)
36 return len(encoded_text.ids)
File ~\.conda\envs\guideml\lib\site-packages\anthropic\tokenizer.py:29, in get_tokenizer()
27 if not claude_tokenizer:
28 tokenizer_data = _get_cached_tokenizer_file_as_str()
---> 29 claude_tokenizer = Tokenizer.from_str(tokenizer_data)
31 return claude_tokenizer
Exception: EOF while parsing a value at line 1 column 0
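For what it's worth, "EOF while parsing a value at line 1 column 0" is the error a parser raises when handed an empty string, which suggests the cached tokenizer file on disk is empty or truncated (perhaps from an interrupted download); deleting the cached file so it is re-downloaded on the next call is a plausible fix. The snippet below just demonstrates the failure shape, using the stdlib json parser as a stand-in for Tokenizer.from_str:

```python
import json

# Parsing an empty string fails the same way Tokenizer.from_str("") does when
# the cached tokenizer file is empty: there is no value to read at position 0.
try:
    json.loads("")
except json.JSONDecodeError as e:
    print("empty input:", e)  # "Expecting value: line 1 column 1 (char 0)"
```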
Hi! I'm looking to write a Ruby client and was wondering why the tokenizer isn't included in the repo instead of being downloaded from the web. Is it because of licensing or something similar?
Thanks!
It would be lovely if type hinting were used for the return type of Client.completion
😄
Currently, kwargs are passed down as params, hence headers and request_timeout cannot be set.
Relevant code snippet:
def completion(self, **kwargs) -> dict:
    return self._request_as_json(
        "post",
        "/v1/complete",
        params=kwargs,
    )
...
def _request_raw(
    self,
    method: str,
    path: str,
    params: dict,
    headers: Optional[Dict[str, str]] = None,
    request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
) -> requests.Response:
...
As shown above, if users set headers or request_timeout in the completion method, they are passed down inside params, so headers and request_timeout remain None.
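The pattern is easy to reproduce without the SDK at all; this minimal sketch (hypothetical function names) shows why the keyword arguments never arrive:

```python
# Everything in **kwargs lands in `params`, so `headers`/`request_timeout`
# never reach the keyword parameters of the inner function.
def request_raw(method, path, params, headers=None, request_timeout=None):
    return headers, params

def completion(**kwargs):
    return request_raw("post", "/v1/complete", params=kwargs)

headers, params = completion(prompt="hi", headers={"X-Test": "1"})
assert headers is None        # the header never arrived...
assert "headers" in params    # ...it was folded into params instead
```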
It could instead be handled like this:
def completion(self, **kwargs) -> dict:
    return self._request_as_json(
        "post",
        "/v1/complete",
        **kwargs,
    )

def _request_raw(
    self,
    method: str,
    path: str,
    headers: Optional[Dict[str, str]] = None,
    request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
    **params: dict,
) -> requests.Response:
On https://docs.anthropic.com/claude/reference/selecting-a-model
Claude Instant: low-latency, high throughout
should be throughput.
Relevant location:
https://github.com/anthropics/anthropic-sdk-python/blob/main/examples/basic_sync.py doesn't have one.
In the examples given in the Python SDK, there is a space between the end of the user message and the beginning of AI_PROMPT, whereas the official API docs specifically use no space: https://docs.anthropic.com/claude/reference/getting-started-with-the-api#prompt-formatting. While this seems minor, we know that LLM outputs can vary with small changes like this. What is the recommended way to do it?
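For concreteness, the two variants being compared look like this (HUMAN_PROMPT and AI_PROMPT as exported by the SDK):

```python
HUMAN_PROMPT, AI_PROMPT = "\n\nHuman:", "\n\nAssistant:"

with_space = f"{HUMAN_PROMPT} Hello {AI_PROMPT}"    # SDK example style
without_space = f"{HUMAN_PROMPT} Hello{AI_PROMPT}"  # API docs style

print(repr(with_space))     # '\n\nHuman: Hello \n\nAssistant:'
print(repr(without_space))  # '\n\nHuman: Hello\n\nAssistant:'
```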
Hi,
I was trying to call Claude-v1.3-100k (Clong) with a very long input (119,923 tokens according to OpenAI's tiktoken). I encountered a "timeout" error which, after debugging, traces back to
DEFAULT_TIMEOUT = Timeout(timeout=60.0, connect=5.0)
where the timeout is 60 seconds, but Clong needs more than 60 s to respond (93 s in my case, after I modified the timeout variable).
I suspect this is a general issue when calling Claude with long contexts. I considered opening a pull request extending the timeout to 5 minutes, but letting users pass the timeout as an argument might also be a solution. So I'm raising the issue here; feel free to ask for more clarification or details.
For example, I obtained the following error when attempting to receive a response:
'charmap' codec can't encode character '\u0100' in position 2452: character maps to <undefined>
The attempted solution that works in my case: modify tokenizer.py to add encoding="utf-8" when reading or writing the tokenizer_file:
tokenizer_file = _get_tokenizer_filename()
if not os.path.exists(tokenizer_file):
    response = httpx.get(CLAUDE_TOKENIZER_REMOTE_FILE)
    response.raise_for_status()
    with open(tokenizer_file, 'w', encoding="utf-8") as f:
        f.write(response.text)

with open(tokenizer_file, 'r', encoding="utf-8") as f:
    return f.read()
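The failure is straightforward to reproduce without the SDK: on Windows, open() without an explicit encoding falls back to a locale codec (often cp1252), which cannot represent '\u0100'. A minimal sketch, forcing cp1252 to simulate that default:

```python
import os
import tempfile

text = "token-\u0100"  # a character outside cp1252
path = os.path.join(tempfile.mkdtemp(), "tokenizer.json")

try:
    with open(path, "w", encoding="cp1252") as f:  # simulate the Windows default
        f.write(text)
except UnicodeEncodeError as e:
    print("charmap-style failure:", e)

# The fix from the issue: always pass encoding="utf-8" explicitly.
with open(path, "w", encoding="utf-8") as f:
    f.write(text)
with open(path, "r", encoding="utf-8") as f:
    assert f.read() == text
```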
Problem:
'auth_token' parameter cannot be used even if 'api_key' is not set.
This is because in L87:
api_key = api_key or os.environ.get("ANTHROPIC_API_KEY", "")
api_key is set to "" by default instead of None.
Such that,
@property
def _api_key_header(self) -> dict[str, str]:
    api_key = self.api_key
    if api_key is None:
        return {}
    return {"X-Api-Key": api_key}
will return {"X-Api-Key": ""} instead of {} as expected.
Therefore,
@property
def auth_headers(self) -> dict[str, str]:
    if self._api_key_header:
        return self._api_key_header
    if self._auth_token_bearer:
        return self._auth_token_bearer
    return {}
will return {"X-Api-Key": ""} instead of {"Authorization": f"Bearer {auth_token}"} as expected.
anthropic-sdk-python/src/anthropic/_client.py
Lines 87 to 131 in 2a99183
Solution:
change L87 to
api_key = api_key or os.environ.get("ANTHROPIC_API_KEY", None)
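The root cause is that the empty-string default is falsy for the `or` chain, yet the header dict it produces is truthy. A self-contained sketch of the logic (hypothetical function names mirroring the client properties):

```python
import os

def auth_headers(api_key, auth_token, default=""):
    # Mirrors the client: "" is falsy for `or`, but {"X-Api-Key": ""} is truthy.
    api_key = api_key or os.environ.get("ANTHROPIC_API_KEY", default)
    api_key_header = {} if api_key is None else {"X-Api-Key": api_key}
    if api_key_header:
        return api_key_header
    if auth_token:
        return {"Authorization": f"Bearer {auth_token}"}
    return {}

os.environ.pop("ANTHROPIC_API_KEY", None)  # ensure the env var is unset for the demo
assert auth_headers(None, "tok") == {"X-Api-Key": ""}                              # the bug
assert auth_headers(None, "tok", default=None) == {"Authorization": "Bearer tok"}  # the fix
```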
Hello,
I was wondering if the current SDK supports sending a file (PDF, txt) as part of the prompt. We can certainly read the content of the file and append it to the prompt, but I was wondering if this is something that the SDK would support directly.
Thanks
Every time I give claude-2 explicit prompt instructions to return something in a specific format, it does so reliably, but the output always includes a preamble.
For example:
There are several ways a court case can reach the U.S. Supreme Court:
- Appeal from lower federal courts - Most cases come to the Supreme Court on appeal from lower federal appellate courts, like the circuit courts or the Court of Appeals for the Federal Circuit. A party who loses at the appellate level can request the Supreme Court hear the case.
- Appeal from state supreme courts - If the highest state court rules on an issue of federal law, like the constitutionality of a state law, the losing party can appeal to the U.S. Supreme Court. This accounts for about a third of the caseload.
- Original jurisdiction cases - A limited category of cases has the Supreme Court as the court of first instance. These include cases between two or more U.S. states or cases involving ambassadors and other public ministers.
- Certified questions from appellate courts - Appellate courts can ask the Supreme Court to answer a question of law in a case pending before them. This happens infrequently.
- Writ of certiorari - The most common way cases reach the Supreme Court is through a petition for writ of certiorari. This is a request that the Supreme Court order a lower court to send up the record of a case for review. The Court grants cert in less than 5% of petitions received.
I just want the bulleted list, without the "There are several ways..." preamble.
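One prompt-engineering workaround (hedged: a common trick, not an SDK feature) is to pre-fill the start of the Assistant turn so the model continues directly with the first bullet instead of writing an introduction:

```python
HUMAN_PROMPT, AI_PROMPT = "\n\nHuman:", "\n\nAssistant:"

prompt = (
    f"{HUMAN_PROMPT} How does a court case get to the Supreme Court? "
    f"Answer only with a bulleted list and no introduction."
    f"{AI_PROMPT} -"  # the trailing "-" starts the first bullet for the model
)
assert prompt.endswith("\n\nAssistant: -")
```

Because the completion continues from "-", the returned text begins mid-bullet; prepend "-" back when assembling the final answer.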
Hello.
I get an import error when trying to use Anthropic.
File "/Users/iwtu/repos/expensive-faqbot/claude2.py", line 1, in <module>
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
File "/Users/iwtu/.pyenv/versions/3.11.3/envs/expensive-faqbot/lib/python3.11/site-packages/anthropic/__init__.py", line 3, in <module>
from . import types
File "/Users/iwtu/.pyenv/versions/3.11.3/envs/expensive-faqbot/lib/python3.11/site-packages/anthropic/types/__init__.py", line 5, in <module>
from .completion import Completion as Completion
File "/Users/iwtu/.pyenv/versions/3.11.3/envs/expensive-faqbot/lib/python3.11/site-packages/anthropic/types/completion.py", line 3, in <module>
from .._models import BaseModel
File "/Users/iwtu/.pyenv/versions/3.11.3/envs/expensive-faqbot/lib/python3.11/site-packages/anthropic/_models.py", line 11, in <module>
from pydantic.fields import ModelField
ImportError: cannot import name 'ModelField' from 'pydantic.fields' (/Users/iwtu/.pyenv/versions/3.11.3/envs/expensive-faqbot/lib/python3.11/site-packages/pydantic/fields.py)
pip show anthropic
Name: anthropic
Version: 0.3.1
Summary: Client library for the anthropic API
Home-page: https://github.com/anthropics/anthropic-sdk-python
Author: Anthropic
Author-email: [email protected]
License: MIT
Location: /Users/iwtu/.pyenv/versions/3.11.3/envs/expensive-faqbot/lib/python3.11/site-packages
Requires: anyio, distro, httpx, pydantic, tokenizers, typing-extensions
Required-by:
pip show pydantic
Name: pydantic
Version: 2.0.3
Summary: Data validation using Python type hints
Home-page:
Author:
Author-email: Samuel Colvin <[email protected]>, Eric Jolibois <[email protected]>, Hasan Ramezani <[email protected]>, Adrian Garcia Badaracco <[email protected]>, Terrence Dorsey <[email protected]>, David Montague <[email protected]>
License:
Location: /Users/iwtu/.pyenv/versions/3.11.3/envs/expensive-faqbot/lib/python3.11/site-packages
Requires: annotated-types, pydantic-core, typing-extensions
Required-by: anthropic, fastapi, pydantic-settings
After running
pip install anthropic
and then creating a .py file with the following
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
I get this error below:
Traceback (most recent call last):
File "/home/ubuntu/claude_api/main.py", line 1, in <module>
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
ImportError: cannot import name 'Anthropic' from 'anthropic' (/home/ubuntu/claude_api/venv/lib/python3.11/site-packages/anthropic/__init__.py)