

llm package for emacs

Introduction

This library provides an interface for interacting with Large Language Models (LLMs). It allows elisp code to use LLMs while giving end-users the choice of their preferred LLM. This is particularly useful because many models of varying quality exist: some high-quality models require paid API access, while others are free and run locally but offer more modest quality. Applications using LLMs can utilize this library to ensure compatibility regardless of whether the user has a local LLM or is paying for API access.

LLMs exhibit varying functionalities and APIs. This library aims to abstract functionality to a higher level, as some high-level concepts might be supported by an API while others require more low-level implementations. An example of such a concept is “examples,” where the client offers example interactions to demonstrate a pattern for the LLM. While the GCloud Vertex API has an explicit API for examples, OpenAI’s API requires specifying examples by modifying the system prompt. OpenAI also introduces the concept of a system prompt, which does not exist in the Vertex API. Our library aims to conceal these API variations by providing higher-level concepts in our API.

Certain functionalities might not be available in some LLMs. Any such unsupported functionality will raise a 'not-implemented signal.

Setting up providers

Users of an application that uses this package should not need to install it themselves. The llm package should be installed as a dependency when you install the package that uses it. However, you do need to require the llm module and set up the provider you will be using. Typically, applications will have a variable you can set. For example, let’s say there’s a package called “llm-refactoring”, which has a variable llm-refactoring-provider. You would set it up like so:

(use-package llm-refactoring
  :init
  (require 'llm-openai)
  (setq llm-refactoring-provider (make-llm-openai :key my-openai-key)))

Here my-openai-key would be a variable you set up before with your OpenAI key. Or, just substitute the key itself as a string. It’s important to remember never to check your key into a public repository such as GitHub, because your key must be kept private. Anyone with your key can use the API, and you will be charged.

All of the providers (except for llm-fake) can also take default parameters that will be used if they are not specified in the prompt. These are the same parameters that appear in the prompt, but prefixed with default-chat-. So, for example, if you would like Ollama to be less creative than the default, you can create your provider like:

(make-llm-ollama :embedding-model "mistral:latest" :chat-model "mistral:latest" :default-chat-temperature 0.1)

For embedding users: if you store the embeddings, you must set the embedding model explicitly. Even though there is no way for the llm package to tell whether you are storing them, if the default model changes, you may find yourself storing incompatible embeddings.

Open AI

You can set it up with make-llm-openai, with the following parameters:

  • :key, the Open AI key that you get when you sign up to use Open AI’s APIs. Remember to keep this private. This is required.
  • :chat-model: A model name from the list of Open AI’s model names. Keep in mind some of these are not available to everyone. This is optional, and will default to a reasonable 3.5 model.
  • :embedding-model: A model name from the list of Open AI’s embedding model names. This is optional, and will default to a reasonable model.
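
For example, a provider pinned to specific models might be created like the following sketch (the model names are only illustrative; substitute ones your account has access to):

(require 'llm-openai)
;; `my-openai-key' is assumed to be a variable holding your API key,
;; and the model names below are only examples.
(setq my-provider (make-llm-openai :key my-openai-key
                                   :chat-model "gpt-4o"
                                   :embedding-model "text-embedding-3-small"))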

Open AI Compatible

There are many Open AI compatible APIs and proxies of Open AI. You can set up one with make-llm-openai-compatible, with the following parameter:

  • :url, the URL leading up to the command (“embeddings” or “chat/completions”). For example, “https://api.openai.com/v1/” is the URL for Open AI itself (although if you wanted to do that, you should just use make-llm-openai instead).
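
As a sketch, assuming an OpenAI-compatible server of your own (the URL, key, and model name below are placeholders, not real endpoints):

(require 'llm-openai)
;; All values below are placeholders; point them at your own
;; OpenAI-compatible server and model.
(setq my-provider (make-llm-openai-compatible
                   :key "my-server-key"
                   :url "https://my-proxy.example.com/v1/"
                   :chat-model "my-model"))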

Gemini (not via Google Cloud)

This is Google’s AI model. You can get an API key via their page on Google AI Studio. Set this up with make-llm-gemini, with the following parameters:

  • :key, the Google AI key that you get from Google AI Studio.
  • :chat-model, the model name, from the list of models at https://ai.google.dev/models. This is optional and will default to the text Gemini model.
  • :embedding-model: the model name, currently must be “embedding-001”. This is optional and will default to “embedding-001”.
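
For example, a minimal Gemini provider might look like this sketch (my-gemini-key is assumed to be a variable holding the key from Google AI Studio):

(require 'llm-gemini)
;; `my-gemini-key' is assumed to hold your Google AI Studio key.
(setq my-provider (make-llm-gemini :key my-gemini-key))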

Vertex (Gemini via Google Cloud)

This is mostly for those who want to use Google Cloud specifically; most users should use Gemini instead, which is easier to set up.

You can set it up with make-llm-vertex. In addition to the provider, which you may want multiple of (for example, to charge against different projects), there are customizable variables:

  • llm-vertex-gcloud-binary: The binary to use for generating the API key.
  • llm-vertex-gcloud-region: The gcloud region to use. It’s good to set this to a region near where you are for best latency. Defaults to “us-central1”.

    If you haven’t already, you must run the following command before using this:

    gcloud beta services identity create --service=aiplatform.googleapis.com --project=PROJECT_ID
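
A minimal setup might look like the following sketch. The :project parameter name is an assumption based on the note above about charging against different projects; check the make-llm-vertex docstring for the exact keywords.

(require 'llm-vertex)
;; ASSUMPTION: the :project keyword is a guess; consult the
;; make-llm-vertex docstring for the real parameter names.
(setq llm-vertex-gcloud-region "us-central1")   ; pick a region near you
(setq my-provider (make-llm-vertex :project "my-gcloud-project"))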
        

Claude

Claude is Anthropic’s large language model. It does not support embeddings. It does support function calling, but currently not when streaming. You can set it up with make-llm-claude, with the following parameters:

  • :key: The API key you get from Claude’s settings page. This is required.
  • :chat-model: One of the Claude models. Defaults to “claude-3-opus-20240229”, the most powerful model.
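
For example (my-claude-key is assumed to be a variable holding your Anthropic API key):

(require 'llm-claude)
;; `my-claude-key' is assumed to hold your Anthropic API key.
(setq my-provider (make-llm-claude :key my-claude-key
                                   :chat-model "claude-3-opus-20240229"))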

Ollama

Ollama is a way to run large language models locally. There are many different models you can use with it, and some of them support function calling. You can set it up with make-llm-ollama, with the following parameters:

  • :scheme: The scheme (http/https) for the connection to ollama. This defaults to “http”.
  • :host: The host that ollama is run on. This is optional and will default to localhost.
  • :port: The port that ollama is run on. This is optional and will default to the default ollama port.
  • :chat-model: The model name to use for chat. This is not optional for chat use, since there is no default.
  • :embedding-model: The model name to use for embeddings (only some models, listed at https://ollama.com/search?q=&c=embedding, can be used for embeddings). This is not optional for embedding use, since there is no default.
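
For example, to talk to a local Ollama instance on its default host and port (the model name is illustrative and must already be pulled in Ollama):

(require 'llm-ollama)
;; Assumes a local Ollama server with the "mistral" model already pulled.
(setq my-provider (make-llm-ollama :chat-model "mistral:latest"
                                   :embedding-model "mistral:latest"))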

GPT4All

GPT4All is a way to run large language models locally. To use it with the llm package, you must click “Enable API Server” in the GPT4All settings. It does not offer embeddings or streaming functionality, though, so Ollama might be a better fit for users who are not already set up with local models. You can set it up with the following parameters:

  • :host: The host that GPT4All is run on. This is optional and will default to localhost.
  • :port: The port that GPT4All is run on. This is optional and will default to the default GPT4All port.
  • :chat-model: The model name to use for chat. This is not optional for chat use, since there is no default.
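
A minimal sketch, assuming the provider constructor is make-llm-gpt4all in the llm-gpt4all module (check that module’s docstrings for the exact names; the model name is illustrative):

(require 'llm-gpt4all)
;; ASSUMPTION: constructor name and model name are illustrative; the
;; chat model must match one loaded in your local GPT4All instance.
(setq my-provider (make-llm-gpt4all :chat-model "mistral-7b-openorca.Q4_0.gguf"))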

llama.cpp

llama.cpp is a way to run large language models locally. To use it with the llm package, you need to start the server (with the “--embedding” flag if you plan on using embeddings). The server must be started with a model, so it is not possible to switch models without restarting the server. As such, the model is not a parameter to the provider, since the model choice is already fixed once the server starts.

There is a deprecated llama.cpp provider, but it is no longer needed. llama.cpp is Open AI compatible, so the Open AI Compatible provider should work instead.
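
For example, assuming the llama.cpp server is running locally on its default port 8080 (adjust the URL for your setup):

(require 'llm-openai)
;; Assumes a local llama.cpp server started on the default port 8080.
(setq my-provider (make-llm-openai-compatible
                   :url "http://localhost:8080/v1/"))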

Fake

This is a client that makes no calls; it is just there for testing and debugging. Mostly this is of use to programmatic clients of the llm package, but end users can also use it to understand what will be sent to the LLMs. It has the following parameters:

  • :output-to-buffer: if non-nil, the buffer or buffer name to append the request sent to the LLM to.
  • :chat-action-func: a function that will be called to provide either a string (used as the chat response) or a cons of a symbol and message, which is used to raise an error.
  • :embedding-action-func: a function that will be called to provide either a vector (used as the embedding) or a cons of a symbol and message, which is used to raise an error.
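
For example, a fake provider that logs requests and returns a canned response might look like this sketch (the zero-argument calling convention of the action function is an assumption; check llm-fake’s docstrings for the exact details):

(require 'llm-fake)
;; ASSUMPTION: the action function takes no arguments and returns a
;; string; check llm-fake's docstrings for the exact convention.
(setq my-test-provider
      (make-llm-fake
       :output-to-buffer "*llm test log*"
       :chat-action-func (lambda () "Canned test response.")))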

llm and the use of non-free LLMs

The llm package is part of GNU Emacs by being part of GNU ELPA. Unfortunately, the most popular LLMs in use are non-free, which is not what GNU software should be promoting by inclusion. On the other hand, by using the llm package, the user can make sure that any client that codes against it will work with free models that come along. It’s likely that sophisticated free LLMs will emerge, although it’s unclear right now what free software means with respect to LLMs. Because of this tradeoff, we have decided to warn the user when using non-free LLMs (which is every LLM supported right now except the fake one). You can turn this off the same way you turn off any other warning, by clicking on the left arrow next to the warning when it comes up. Alternatively, you can set llm-warn-on-nonfree to nil. This can be set via customization as well.

To build upon the example from before:

(use-package llm-refactoring
  :init
  (require 'llm-openai)
  (setq llm-refactoring-provider (make-llm-openai :key my-openai-key)
        llm-warn-on-nonfree nil))

Programmatic use

Client applications should require the llm package and code against it. Most functions are generic, and take a struct representing a provider as the first argument. The client code, or the user themselves, can then require the specific module, such as llm-openai, and create a provider with a function such as (make-llm-openai :key user-api-key). The client application will use this provider to call all the generic functions.

For all callbacks, the callback will be executed in the buffer the function was first called from. If the buffer has been killed, it will be executed in a temporary buffer instead.

Main functions

  • llm-chat provider prompt: With the user-chosen provider and an llm-chat-prompt structure (created by llm-make-chat-prompt), send that prompt to the LLM and wait for the string output.
  • llm-chat-async provider prompt response-callback error-callback: Same as llm-chat, but executes in the background. Takes a response-callback which will be called with the text response. The error-callback will be called in case of error, with the error symbol and an error message.
  • llm-chat-streaming provider prompt partial-callback response-callback error-callback: Similar to llm-chat-async, but requests a streaming response. As the response is built up, partial-callback is called with all the text retrieved up to the current point. Finally, response-callback is called with the complete text.
  • llm-embedding provider string: With the user-chosen provider, send a string and get an embedding, which is a large vector of floating point values. The embedding represents the semantic meaning of the string, and the vector can be compared against other vectors, where smaller distances between the vectors represent greater semantic similarity.
  • llm-embedding-async provider string vector-callback error-callback: Same as llm-embedding but this is processed asynchronously. vector-callback is called with the vector embedding, and, in case of error, error-callback is called with the same arguments as in llm-chat-async.
  • llm-count-tokens provider string: Count how many tokens are in string. This may vary by provider, because some providers implement an API for this; for providers without API support, this gives an estimate.
  • llm-cancel-request request: Cancels the given request, if possible. The request object is the return value of async and streaming functions.
  • llm-name provider: Provides a short name of the model or provider, suitable for showing to users.
  • llm-chat-token-limit provider: Gets the token limit for the chat model. This isn’t possible for some backends like llama.cpp, in which the model isn’t selected or known by this library.

    And the following helper functions:

    • llm-make-chat-prompt text &keys context examples functions temperature max-tokens non-standard-params: This is how you make prompts. text can be a string (the user input to the llm chatbot), or a list representing a series of back-and-forth exchanges, with an odd number of elements, the last of which is the user’s latest input. This supports inputting context (also commonly called a system prompt, although it isn’t guaranteed to replace the actual system prompt), examples, and other important elements, all detailed in the docstring for this function. The non-standard-params key lets you specify other options that might vary per provider; their correctness is up to the client.
    • llm-chat-prompt-to-text prompt: From a prompt, return a string representation. This is not usually suitable for passing to LLMs, but for debugging purposes.
    • llm-chat-streaming-to-point provider prompt buffer point finish-callback: Same basic arguments as llm-chat-streaming, but will stream to point in buffer.
    • llm-chat-prompt-append-response prompt response role: Append a new response (from the user, usually) to the prompt. The role is optional, and defaults to 'user.
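
Putting these together, a minimal asynchronous chat call might look like this sketch (my-provider is assumed to be any provider created as in the sections above):

(require 'llm)
;; `my-provider' is assumed to be a provider set up earlier.
(let ((prompt (llm-make-chat-prompt
               "Summarize the difference between let and let* in elisp."
               :temperature 0.2)))
  (llm-chat-async my-provider prompt
                  (lambda (response) (message "LLM: %s" response))
                  (lambda (err msg) (message "LLM error %s: %s" err msg))))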

Logging

Interactions with the llm package can be logged by setting llm-log to a non-nil value. This should be done only when developing. The log can be found in the *llm log* buffer.

How to handle conversations

Conversations can take place by repeatedly calling llm-chat and its variants. The prompt should be constructed with llm-make-chat-prompt. For a conversation, the entire prompt must be kept in a variable, because the llm-chat-prompt-interactions slot will be changed by the chat functions to store the conversation. Some providers store the history directly in llm-chat-prompt-interactions, while other LLMs keep an opaque conversation history. For that reason, the correct way to handle a conversation is to repeatedly call llm-chat or its variants with the same prompt structure, kept in a variable, and each time add the new user text with llm-chat-prompt-append-response. The following is an example:

(defvar-local llm-chat-streaming-prompt nil)
(defun start-or-continue-conversation (text)
  "Called when the user has input TEXT as the next input."
  (if llm-chat-streaming-prompt
      (llm-chat-prompt-append-response llm-chat-streaming-prompt text)
    (setq llm-chat-streaming-prompt (llm-make-chat-prompt text)))
  ;; `provider' is assumed to be the llm provider you have set up.
  (llm-chat-streaming-to-point provider llm-chat-streaming-prompt
                               (current-buffer) (point-max) (lambda ())))

Caution about llm-chat-prompt-interactions

The interactions in a prompt may be modified by conversation or by the conversion of the context and examples to what the LLM understands. Different providers require different things from the interactions. Some can handle system prompts, some cannot. Some require alternating user and assistant chat interactions, others can handle anything. It’s important that clients keep to behaviors that work on all providers. Do not attempt to read or manipulate llm-chat-prompt-interactions after initially setting it up, because you are likely to make changes that only work for some providers. Similarly, don’t directly create a prompt with make-llm-chat-prompt, because it is easy to create something that wouldn’t work for all providers.

Function calling

Note: function calling functionality is currently alpha quality. If you want to use function calling, please watch the llm discussions for any announcements about changes.

Function calling is a way to give the LLM a list of functions it can call, and have it call the functions for you. The standard interaction has the following steps:

  1. The client sends the LLM a prompt with functions it can call.
  2. The LLM may return which functions to execute, and with what arguments, or text as normal.
  3. If the LLM has decided to call one or more functions, those functions should be called, and their results sent back to the LLM.
  4. The LLM will return with a text response based on the initial prompt and the results of the function calling.
  5. The client can now continue the conversation.

This basic structure is useful because it can guarantee a well-structured output (if the LLM does decide to call the function). Not every LLM can handle function calling, and those that do not will ignore the functions entirely. The function llm-capabilities will return a list with function-calls in it if the LLM supports function calls. Right now only Gemini, Vertex, Claude, and Open AI support function calling; Ollama should get function calling soon. However, even among LLMs that handle function calling, there is a fair bit of difference in capabilities. Right now, it is possible to write function calls that succeed in Open AI but cause errors in Gemini, because Gemini does not appear to handle functions that have types that contain other types. So client programs are advised, for now, to keep function arguments to simple types.

The way to call functions is to attach a list of functions to the functions slot in the prompt. This is a list of llm-function-call structs, each of which takes a function, a name, a description, and a list of llm-function-arg structs. The docstrings give an explanation of the format.

The various chat APIs will execute the functions defined in the function call structs with the arguments supplied by the LLM. Instead of returning (or passing to a callback) a string, an alist of function names and return values will be returned.
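
As an illustration, here is a sketch modeled on the structs described above (the “add” function, its name, and its descriptions are hypothetical, and my-provider is assumed to be a function-calling-capable provider):

;; A hypothetical "add" function exposed to the LLM.  If the LLM decides
;; to call it, `llm-chat' returns an alist such as (("add" . 8)) instead
;; of a string.
(let ((prompt (llm-make-chat-prompt
               "What is 3 + 5?"
               :functions
               (list (make-llm-function-call
                      :function (lambda (a b) (+ a b))
                      :name "add"
                      :description "Sums two numbers."
                      :args (list (make-llm-function-arg
                                   :name "a" :description "A number."
                                   :type 'integer :required t)
                                  (make-llm-function-arg
                                   :name "b" :description "A number."
                                   :type 'integer :required t)))))))
  (llm-chat my-provider prompt))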

After sending a function call, the client could use the result, but if you want to proceed with the conversation, or get a textual response that accompanies the function call, you should just send the prompt back with no modifications. This is because the LLM gives the function call to make as a response, and then expects to get back the results of that function call. The functions were already executed at the end of the previous call, which also stored the results of that execution in the prompt. This is why the prompt should be sent back without further modifications.

Be aware that there is no guarantee that the function will be called correctly. While the LLMs mostly get this right, they are trained on Javascript functions, so imitating Javascript names is recommended. So, “write_email” is a better name for a function than “write-email”.

Examples can be found in llm-tester. There is also a utility to generate function calls from existing elisp functions in utilities/elisp-to-function-call.el.

Advanced prompt creation

The llm-prompt module provides helper functions to create prompts that can incorporate data from your application. In particular, this should be very useful for applications that need a lot of context.

A prompt defined with llm-prompt is a template, with placeholders that the module will fill in. Here’s an example of a prompt definition, from the ekg package:

(llm-defprompt ekg-llm-fill-prompt
  "The user has written a note, and would like you to append to it,
to make it more useful.  This is important: only output your
additions, and do not repeat anything in the user's note.  Write
as a third party adding information to a note, so do not use the
first person.

First, I'll give you information about the note, then similar
other notes that user has written, in JSON.  Finally, I'll give
you instructions.  The user's note will be your input, all the
rest, including this, is just context for it.  The notes given
are to be used as background material, which can be referenced in
your answer.

The user's note uses tags: {{tags}}.  The notes with the same
tags, listed here in reverse date order: {{tag-notes:10}}

These are similar notes in general, which may have duplicates
from the ones above: {{similar-notes:1}}

This ends the section on useful notes as a background for the
note in question.

Your instructions on what content to add to the note:

{{instructions}}
")

When this is filled, it is done in the context of a provider, which has a known context size (via llm-chat-token-limit). Care is taken to not overfill the context, which is checked as it is filled via llm-count-tokens. We usually want to not fill the whole context, but instead leave room for the chat and subsequent terms. The variable llm-prompt-default-max-pct controls how much of the context window we want to fill. The way we estimate the number of tokens used is quick but inaccurate, so limiting to less than the maximum context size is useful for guarding against a miscount leading to an error calling the LLM due to too many tokens.

Variables are enclosed in double curly braces, like this: {{instructions}}. They can just be the variable, or they can also denote a number of tickets, like so: {{tag-notes:10}}. Tickets should be thought of like lottery tickets, where the prize is a single round of context filling for the variable. So the variable tag-notes gets 10 tickets in the drawing. Any variable where tickets are unspecified (unless it is just a single variable, which will be explained below) will get a number of tickets equal to the total number of specified tickets. So if you have two variables, one with 1 ticket and one with 10 tickets, one will be filled 10 times more than the other. If you have two variables, one with 1 ticket and one unspecified, the unspecified one will get 1 ticket, so each will have an even chance of being filled. If no variable has tickets specified, each will get an equal chance. If you have one variable, it could have any number of tickets, but the result would be the same, since it would win every round. This algorithm is the contribution of David Petrou.

The above is true of variables that are to be filled with a sequence of possible values. A lot of LLM context filling is like this. In the above example, {{similar-notes}} is a retrieval based on a similarity score. It will continue to fill items from most similar to least similar, which is going to return almost everything the ekg app stores. We want to retrieve only as needed. Because of this, the llm-prompt module takes in generators to supply each variable. However, a plain list is also acceptable, as is a single value. Any single value will not enter into the ticket system, but rather be prefilled before any tickets are used.

So, to illustrate with this example, here’s how the prompt will be filled:

  1. First, {{tags}} and {{instructions}} will be filled. This happens before we check the context size, so the module assumes that these will be small and not blow up the context.
  2. Check the context size we want to use (llm-prompt-default-max-pct multiplied by llm-chat-token-limit) and exit if exceeded.
  3. Run a lottery with all tickets and choose one of the remaining variables to fill.
  4. If the variable won’t make the text too large, fill the variable with one entry retrieved from a supplied generator, otherwise ignore.
  5. Goto 2

The prompt can be filled in two ways: one using a predefined prompt template (llm-defprompt and llm-prompt-fill), the other using a prompt template that is passed in directly (llm-prompt-fill-text).

(llm-defprompt my-prompt "My name is {{name}} and I'm here to say {{messages}}")

(require 'generator)  ; `iter-defun' comes from the built-in generator library

(iter-defun my-message-retriever ()
  "Return the messages I like to say."
  (my-message-reset-messages)
  (while (my-has-next-message)
    (iter-yield (my-get-next-message))))

(llm-prompt-fill 'my-prompt my-llm-provider :name "Pat" :messages #'my-message-retriever)

Alternatively, you can just fill it directly:

(llm-prompt-fill-text "Hi, I'm {{name}} and I'm here to say {{messages}}"
                      :name "John" :messages #'my-message-retriever)

As you can see in the examples, the variable values are passed in with matching keys.

Contributions

If you are interested in creating a provider, please send a pull request, or open a bug. This library is part of GNU ELPA, so any major provider that we include in this module needs to be written by someone with FSF papers. However, you can always write a module and put it on a different package archive, such as MELPA.


llm's Issues

[FR] Support JSON mode

Some models expose a JSON-mode API where they output correct JSON. There are some related APIs such as function calling (tool use). These are useful for programmatic use, as we need in an IDE like emacs.

Here is an example:

import os
import json
import openai
from pydantic import BaseModel, Field

# Create client
client = openai.OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
)

# Define the schema for the output.
class User(BaseModel):
    name: str = Field(description="user name")
    address: str = Field(description="address")

# Call the LLM with the JSON schema
chat_completion = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    response_format={"type": "json_object", "schema": User.model_json_schema()},
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant that answers in JSON.",
        },
        {
            "role": "user",
            "content": "Create a user named Alice, who lives in 42, Wonderland Avenue.",
        },
    ],
)

created_user = json.loads(chat_completion.choices[0].message.content)
print(json.dumps(created_user, indent=2))

"""
{
  "address": "42, Wonderland Avenue",
  "name": "Alice"
}
"""

Context does not get sent if list of interactions ≥2

I've added some debug statements and playing around with the basics I see:

(let ((p (make-llm-chat-prompt
          :context "Context")))
  (llm-chat my-provider p))
raw data sent to https://api.openai.com/v1/chat/completions: {"model":"gpt-4-turbo-preview","messages":[{"role":"system","content":"Context\n"}]}
(let ((p (make-llm-chat-prompt
          :context "Context"
          :interactions (list
                         (make-llm-chat-prompt-interaction :role 'user :content "one")))))
  (llm-chat my-provider p))
raw data sent to https://api.openai.com/v1/chat/completions: {"model":"gpt-4-turbo-preview","messages":[{"role":"system","content":"Context\n"},{"role":"user","content":"one"}]}
(let ((p (make-llm-chat-prompt
          :context "Context"
          :interactions (list
                         (make-llm-chat-prompt-interaction :role 'user :content "one")
                         (make-llm-chat-prompt-interaction :role 'assistant :content "two")
                         (make-llm-chat-prompt-interaction :role 'user :content "three")))))
  (llm-chat my-provider p))
raw data sent to https://api.openai.com/v1/chat/completions: {"model":"gpt-4-turbo-preview","messages":[{"role":"user","content":"one"},{"role":"assistant","content":"two"},{"role":"user","content":"three"}]}

Is this on purpose? It seems rather counter-intuitive but maybe I'm misunderstanding the point of context

Open WebUI compatibility

Hello. Can you please add Open WebUI compatibility? It's an app that I can self-host from docker to provide a nice ChatGPT UI for all of my devices: https://github.com/open-webui/open-webui.

I have done this here for chatgpt-shell: xenodium/chatgpt-shell#201. Basically, it seems mostly OpenAI compatible on http://127.0.0.1:3000/ollama/api/chat but doesn't have the choices part of the JSON responses. Open WebUI is also based on Ollama and has an endpoint on http://127.0.0.1:3000/ollama/ I think but it does require a key for authorization since it supports multiple users.

Error using ollama through a proxy

When this library is used with a proxy (http_proxy and https_proxy env variables defined) it encounters an error dealing with the HTTP header regex. This does not happen when not using the proxy, and plz seems to have no problem accessing it. It looks like an issue with plz-media-type to me.

ELISP> emacs-version
"30.0.50"

ELISP> ellama-provider
#s(llm-ollama :default-chat-temperature nil :default-chat-max-tokens
              nil :default-chat-non-standard-params nil :scheme
              "https" :host "xxxxx" :port 443
              :chat-model "DEFAULT/llama3-8b:latest" :embedding-model
              "DEFAULT/llama3-8b:latest")

ELISP> (plz 'post "https://xxxxx:443/api/chat" :body
"{\"stream\":false,\"model\":\"DEFAULT/llama3-8b:latest\",\"messages\":[{\"role\":\"user\",\"content\":\"Describe elisp in one word?\"}]}" :decode t :headers
'(("Content-Type" . "application/json")))

"{\"model\":\"DEFAULT/llama3-8b:latest\",\"created_at\":\"2024-06-06T14:27:25.339586897Z\",\"message\":{\"role\":\"assistant\",\"content\":\"**Esoteric**\"},\"done_reason\\
":\"stop\",\"done\":true,\"total_duration\":292413786,\"load_duration\":2318982,\"prompt_eval_duration\":72714000,\"eval_count\":5,\"eval_duration\":85079000}"

ELISP> (llm-chat ellama-provider (llm-make-chat-prompt "Describe elisp in one word?"))
*** Eval error ***  Wrong type argument: number-or-marker-p, nil
Debugger entered--Lisp error: (search-failed "\\(?:\\(?:\n\\|\15\n\15\\)\n\\)")
  plz-media-type--parse-headers()
  plz-media-type--parse-response()
  plz-media-type-process-filter(#<process plz-request-curl> ((application/json . #<plz-media-type:application/json plz-media-type:application/json-1033f28d3632>) (application/octet-stream . #<plz-media-type:application/octet-stream plz-media-type:application/octet-stream-1033f28d360b>) (application/xml . #<plz-media-type:application/xml plz-media-type:application/xml-1033f28d3604>) (text/html . #<plz-media-type:text/html plz-media-type:text/html-1033f28d39c5>) (text/xml . #<plz-media-type:text/xml plz-media-type:text/xml-1033f28d39ca>) (t . #<plz-media-type:application/octet-stream plz-media-type:application/octet-stream-1033f28d39db>)) "HTTP/1.1 200 Connection established\15\n\15\n")
  #f(compiled-function (process chunk) #<bytecode 0x2ad5e9ab453d307>)(#<process plz-request-curl> "HTTP/1.1 200 Connection established\15\n\15\n")
  accept-process-output(#<process plz-request-curl>)
  plz(post "https://xxxxxx:443/api/chat" :as buffer :body "{\"stream\":false,\"model\":\"DEFAULT/llama3-8b:latest\",\"messages\":[{\"role\":\"user\",\"content\":\"Describe elisp in one word?\"}]}" :body-type text :connect-timeout 10 :decode t :else #f(compiled-function (error) #<bytecode -0xa1acaa5e53cb633>) :finally #f(compiled-function () #<bytecode 0x9130a714e227dfa>) :headers (("Content-Type" . "application/json")) :noquery nil :filter #f(compiled-function (process chunk) #<bytecode 0x2ad5e9ab453d307>) :timeout nil :then sync)
  plz-media-type-request(post "https://xxxxxx:443/api/chat" :as (media-types ((application/json . #<plz-media-type:application/json plz-media-type:application/json-1033f28d3632>) (application/octet-stream . #<plz-media-type:application/octet-stream plz-media-type:application/octet-stream-1033f28d360b>) (application/xml . #<plz-media-type:application/xml plz-media-type:application/xml-1033f28d3604>) (text/html . #<plz-media-type:text/html plz-media-type:text/html-1033f28d39c5>) (text/xml . #<plz-media-type:text/xml plz-media-type:text/xml-1033f28d39ca>) (t . #<plz-media-type:application/octet-stream plz-media-type:application/octet-stream-1033f28d39db>))) :body "{\"stream\":false,\"model\":\"DEFAULT/llama3-8b:latest\",\"messages\":[{\"role\":\"user\",\"content\":\"Describe elisp in one word?\"}]}" :connect-timeout 10 :headers (("Content-Type" . "application/json")) :timeout nil)
  llm-request-plz-sync-raw-output("https://xxxxxx:443/api/chat" :headers nil :data (("stream" . :json-false) ("model" . "DEFAULT/llama3-8b:latest") ("messages" (("role" . "user") ("content" . "Describe elisp in one word?")))) :timeout nil)
  llm-request-plz-sync("https://xxxxxx:443/api/chat" :headers nil :data (("stream" . :json-false) ("model" . "DEFAULT/llama3-8b:latest") ("messages" (("role" . "user") ("content" . "Describe elisp in one word?")))))
  #f(compiled-function (provider prompt) #<bytecode -0x10962013be9b5add>)(#s(llm-ollama :default-chat-temperature nil :default-chat-max-tokens nil :default-chat-non-standard-params nil :scheme "https" :host "xxxxxx" :port 443 :chat-model "DEFAULT/llama3-8b:latest" :embedding-model "DEFAULT/llama3-8b:latest") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Describe elisp in one word?" :function-call-result nil)) :functions nil :temperature nil :max-tokens nil :non-standard-params nil))
  apply(#f(compiled-function (provider prompt) #<bytecode -0x10962013be9b5add>) (#s(llm-ollama :default-chat-temperature nil :default-chat-max-tokens nil :default-chat-non-standard-params nil :scheme "https" :host "xxxxxx" :port 443 :chat-model "DEFAULT/llama3-8b:latest" :embedding-model "DEFAULT/llama3-8b:latest") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Describe elisp in one word?" :function-call-result nil)) :functions nil :temperature nil :max-tokens nil :non-standard-params nil)))
  #f(compiled-function (&rest args) #<bytecode -0x32dba651725b8fb>)(#s(llm-ollama :default-chat-temperature nil :default-chat-max-tokens nil :default-chat-non-standard-params nil :scheme "https" :host "xxxxxx" :port 443 :chat-model "DEFAULT/llama3-8b:latest" :embedding-model "DEFAULT/llama3-8b:latest") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Describe elisp in one word?" :function-call-result nil)) :functions nil :temperature nil :max-tokens nil :non-standard-params nil))
  apply(#f(compiled-function (&rest args) #<bytecode -0x32dba651725b8fb>) (#s(llm-ollama :default-chat-temperature nil :default-chat-max-tokens nil :default-chat-non-standard-params nil :scheme "https" :host "xxxxxx" :port 443 :chat-model "DEFAULT/llama3-8b:latest" :embedding-model "DEFAULT/llama3-8b:latest") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Describe elisp in one word?" :function-call-result nil)) :functions nil :temperature nil :max-tokens nil :non-standard-params nil)))
  #f(compiled-function (&rest args) #<bytecode -0x15d83c6ffa66ce2f>)()
  #f(compiled-function (cl--cnm provider prompt) #<bytecode 0xf2da543041f40b2>)(#f(compiled-function (&rest args) #<bytecode -0x15d83c6ffa66ce2f>) #s(llm-ollama :default-chat-temperature nil :default-chat-max-tokens nil :default-chat-non-standard-params nil :scheme "https" :host "xxxxxx" :port 443 :chat-model "DEFAULT/llama3-8b:latest" :embedding-model "DEFAULT/llama3-8b:latest") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Describe elisp in one word?" :function-call-result nil)) :functions nil :temperature nil :max-tokens nil :non-standard-params nil))
  apply(#f(compiled-function (cl--cnm provider prompt) #<bytecode 0xf2da543041f40b2>) #f(compiled-function (&rest args) #<bytecode -0x15d83c6ffa66ce2f>) (#s(llm-ollama :default-chat-temperature nil :default-chat-max-tokens nil :default-chat-non-standard-params nil :scheme "https" :host "xxxxxx" :port 443 :chat-model "DEFAULT/llama3-8b:latest" :embedding-model "DEFAULT/llama3-8b:latest") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Describe elisp in one word?" :function-call-result nil)) :functions nil :temperature nil :max-tokens nil :non-standard-params nil)))
  #f(compiled-function (provider prompt) "Log the input to llm-chat." #<bytecode -0x122fca2c8b2b2dd2>)(#s(llm-ollama :default-chat-temperature nil :default-chat-max-tokens nil :default-chat-non-standard-params nil :scheme "https" :host "xxxxxx" :port 443 :chat-model "DEFAULT/llama3-8b:latest" :embedding-model "DEFAULT/llama3-8b:latest") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Describe elisp in one word?" :function-call-result nil)) :functions nil :temperature nil :max-tokens nil :non-standard-params nil))
  apply(#f(compiled-function (provider prompt) "Log the input to llm-chat." #<bytecode -0x122fca2c8b2b2dd2>) #s(llm-ollama :default-chat-temperature nil :default-chat-max-tokens nil :default-chat-non-standard-params nil :scheme "https" :host "xxxxxx" :port 443 :chat-model "DEFAULT/llama3-8b:latest" :embedding-model "DEFAULT/llama3-8b:latest") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Describe elisp in one word?" :function-call-result nil)) :functions nil :temperature nil :max-tokens nil :non-standard-params nil))
  llm-chat(#s(llm-ollama :default-chat-temperature nil :default-chat-max-tokens nil :default-chat-non-standard-params nil :scheme "https" :host "xxxxxx" :port 443 :chat-model "DEFAULT/llama3-8b:latest" :embedding-model "DEFAULT/llama3-8b:latest") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Describe elisp in one word?" :function-call-result nil)) :functions nil :temperature nil :max-tokens nil :non-standard-params nil))
  eval((llm-chat ellama-provider (llm-make-chat-prompt "Describe elisp in one word?")) t)
  ielm-eval-input(#("(llm-chat ellama-provider (llm-make-chat-prompt \"Describe elisp in one word?\"))" 0 48 (fontified t) 48 77 (face font-lock-string-face fontified t) 77 78 (fontified t) 78 79 (rear-nonsticky t fontified t)) nil)
  ielm-send-input(nil)
  ielm-return()
  funcall-interactively(ielm-return)
  command-execute(ielm-return)

Using `make-llm-openai-compatible` with Mistral AI fails parsing the partial responses

I have configured Mistral AI (https://docs.mistral.ai/api/) as openai-compatible provider and was observing empty partial responses in the log. I debugged it until this[1] point and wondered why a regex is used for handling the JSON response. The response from Mistral AI looks like following:

data: {\"id\":\"00000000000000000\",\"object\":\"chat.completion.chunk\",\"created\":1710964073,\"model\":\"open-mixtral-8x7b\",\"choices\":[{\"index\":0,\"delta\":{\"role\":null,\"content\":\"I don\"},\"finish_reason\":null}],\"usage\":null}

I was able to get the response by fiddling with the regex[1] but I was wondering if it would be more robust if something like json-read-from-string would be used.

I am using llm version 0.12.0 with Emacs 29.2.

And thanks for providing this library, so that I don't have to leave Emacs to participate in this LLM craze :)

[1] https://github.com/ahyatt/llm/blob/5fb10b9bdbfc6bb384c14fbfc8caf09fdb4b99e8/llm-openai.el#L271C38-L271C51

Error using open ai

I have just installed EKG and receive the following error when saving a new note:

⛔ Warning (llm): Open AI API is not free software, and your freedom to use it is restricted.
See https://openai.com/policies/terms-of-use for the details on the restrictions on use.

I have a paid subscription to open ai and so I do not know why I am receiving this warning. Please advise.

Add support for https when using Ollama

Currently only the hostname and port are configurable, but as far as I can tell the http protocol is hardcoded. It would be nice to also support https for ollama deployments behind encrypting proxies.

Using make-llm-openai-compatible with Azure OpenAI service fails to connect to endpoint

I configured the Azure OpenAI service (https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#completions) as openai-compatible provider and ran into the following two issues:

  • The endpoint expects an api-version as URL parameter. So I ended up hacking it in like following:
    (llm-openai--url provider "chat/completions?api-version=2023-05-15")
  • The endpoint accepts the key only when provided in an extra HTTP header called api-key. So I ended up amending the list of headers like following: (("Authorization" . ,(format "Bearer %s" (llm-openai-key provider))) ("api-key" . ,(format "%s" (llm-openai-key provider))))

Maybe you find some clean solutions for this. I could also give it a try, if you point me into the preferred direction.

Errors when function calling with Claude

In a scratch buffer, evaluating

(let ((provider (make-llm-claude
                 :key (exec-path-from-shell-getenv "CLAUDE_API_KEY")
                 :chat-model "claude-3-5-sonnet-20240620")))
  (llm-tester-function-calling-conversation-sync provider))

(adapt the :key field suitably) yields a debugger error that begins:

Debugger entered--Lisp error: (error "LLM request failed with code 400: Bad Request (additional information: ((type . error) (error (type . invalid_request_error) (message . messages.1.tool_use_id: Extra inputs are not permitted))))")
error("LLM request failed with code %d: %s (additional information: %s)" 400 "Bad Request" ((type . "error") (error (type . "invalid_request_error") (message . "messages.1.tool_use_id: Extra inputs are not permitted"))))

Evaluating

(let ((provider (make-llm-claude
                 :key (exec-path-from-shell-getenv "CLAUDE_API_KEY")
                 :chat-model "claude-3-5-sonnet-20240620")))
  (llm-tester-function-calling-conversation-async provider))

yields the following in *llm-tester*:

FAILURE: async function calling conversation for llm-claude, error of type error received: Error invalid_request_error: 'messages.1.tool_use_id: Extra inputs are not permitted'

Are these known issues?

I encountered a different error "invalid_request_error: 'messages.1.content: Input should be a valid list'" when trying to return function calls to Claude in my own code, but figured I'd start by trying to understand the built-in tests.

Ollama support

Hello @ahyatt,

Thank you for contributing the code from this project. I'm looking forward to use it as it seems the one with the least amount of OpenAI coupling I've seen.

I'm interested in running this extension with an ollama backend. This tool is pretty neat as it allow clients to run various OSS data models in a straightforward way. The tool runs a local HTTP server that provides an API to query the various models. It also offers a CLI to interact with the various model installations, model tweaks, etc.

I've the impression that implementing a backend for this shouldn't be as challenging as you have lots of foundations going on already. I'm happy to help if there is some work that can be split out.

llama2 embedding vectors from Python and llm.el don't seem to match

I noticed that an embedding vector from llm.el and one from Python, both using llama2 from ollama, don't match. It is possible there is some small setting I have overlooked, and maybe they don't even use the same libraries, but I thought they would. They are at least the same length. I only show the first and last elements of the embedding below for brevity.

elisp:

#+BEGIN_SRC emacs-lisp
(require 'llm)
(require 'llm-ollama)

(let* ((model (make-llm-ollama :embedding-model "llama2"))
       (v (llm-embedding model "test")))
  (list (length v) (elt v 0) (elt v 4095)))
#+END_SRC

#+RESULTS:
| 4096 | -0.9325962662696838 | -0.05561749264597893 |

Python:

#+BEGIN_SRC jupyter-python
from langchain.embeddings import OllamaEmbeddings
import numpy as np

ollama = OllamaEmbeddings(model="llama2")
vector = np.array(ollama.embed_documents(["test"]))

len(vector[0]), vector[0][0], vector[0][-1]
#+END_SRC

#+RESULTS:
:RESULTS:
| 4096 | 0.8642910718917847 | 0.4629634618759155 |
:END:

Any tips on tracking down why they are different?

[Q/FR] "Legacy" Completion

Is there a way to do so-called legacy completions?

res = openrouter_client.completions.create(
    model="mistralai/mixtral-8x22b",
    prompt="""...""",
    stream=True,
    echo=False, #: Echo back the prompt in addition to the completion
    max_tokens=100,
)

This API is lower-level than the ChatML API, and usable with base models, which can be quite strong as an auto-completer.

Related:

Add ability to change open api base url

Hi.
There are many open source solutions mimicking the open ai api. It would be useful if we could use them with the llm library. For now there is no local embedding solution working with llm. Ollama's and llama.cpp's are broken now (fix in progress). I wrote a simple one myself, but I can't use it with llm. I can also mimic the open ai api and use your library. Since there is already a supported vss extension for sqlite in 29.2, it would be very useful.

Provide example code for ollama provider

Hi.
Please provide example code to work with ollama provider. This simple code:

(llm-chat
 (make-llm-ollama :chat-model "zephyr" :port 11434 :embedding-model "zephyr")
 (llm-make-simple-chat-prompt "Who are you?"))

leads to error:

Debugger entered--Lisp error: (not-implemented)
  signal(not-implemented nil)
  #f(compiled-function (provider prompt) #<bytecode 0x1158229f2628ca4>)(#s(llm-ollama :port 11434 :chat-model "zephyr" :embedding-model "zephyr") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Who are you?")) :temperature nil :max-tokens nil))
  apply(#f(compiled-function (provider prompt) #<bytecode 0x1158229f2628ca4>) (#s(llm-ollama :port 11434 :chat-model "zephyr" :embedding-model "zephyr") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Who are you?")) :temperature nil :max-tokens nil)))
  #f(compiled-function (&rest args) #<bytecode 0x6c97265ce378b94>)(#s(llm-ollama :port 11434 :chat-model "zephyr" :embedding-model "zephyr") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Who are you?")) :temperature nil :max-tokens nil))
  apply(#f(compiled-function (&rest args) #<bytecode 0x6c97265ce378b94>) #s(llm-ollama :port 11434 :chat-model "zephyr" :embedding-model "zephyr") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Who are you?")) :temperature nil :max-tokens nil))
  llm-chat(#s(llm-ollama :port 11434 :chat-model "zephyr" :embedding-model "zephyr") #s(llm-chat-prompt :context nil :examples nil :interactions (#s(llm-chat-prompt-interaction :role user :content "Who are you?")) :temperature nil :max-tokens nil))
  (progn (llm-chat (record 'llm-ollama 11434 "zephyr" "zephyr") (llm-make-simple-chat-prompt "Who are you?")))
  elisp--eval-last-sexp(nil)
  eval-last-sexp(nil)
  funcall-interactively(eval-last-sexp nil)
  command-execute(eval-last-sexp)

I would like to see example code with that I could start.

[feature] Anthropic / Claude2 Support

Anthropic is a close peer to OpenAI in text generation terms, and has a larger available context window that is compatible with its newest model, at 200k tokens. The API feels very similar in terms of tokens and the conversation back and forth.

I personally often favor the output of Anthropic over both Gemini and GPT-4.

Error callback not called if url request failed

Hi @ahyatt

Simple code for reproducing:

(require 'llm-ollama)
(setq provider
      (make-llm-ollama
       :chat-model "1" :embedding-model "2" :port 3333))

(require 'llm-openai)
(setq llm-warn-on-nonfree nil)
(setq provider
      (make-llm-openai-compatible
       :key "0"
       :chat-model "1" :embedding-model "2" :url "http://localhost:3333"))

(llm-chat-streaming provider (llm-make-simple-chat-prompt "test") #'ignore #'ignore
		    (lambda (_)
		      (message "error callback called")))

There is no process listening port 3333.
Reproducing with both providers.

Message will never received.

Provide a way to cancel queries

Specifically, provide a nice way to cancel queries programatically (e.g., from ellama).

This is especially important with, e.g., a local LLM like ollama. I believe it's sufficient to return the url-request process buffer, letting the user run something like (let (kill-buffer-query-functions) (kill-buffer my-llm-buffer)).

Feature request/idea: Use generic functions to extract provider errors

Hi @ahyatt,

since we are looking into error handling in the plz branch, I have a feature request/idea. Let me describe my problem first:

I have a custom LLM provider based on llm-openai-compatible. This provider talks to a server that is a proxy/gateway to the OpenAI API. With the switch to plz I could simplify it a lot (before, all this curl code lived in my provider). I now basically set some curl arguments like the certificate, the private key and a header, and then just call the next method. Most of my code now looks like this:

(cl-defmethod llm-chat-async ((provider nu-llm-openai) prompt response-callback error-callback)
  (nu-llm-request-with-curl-options (nu-llm-openai--curl-options provider)
    (cl-call-next-method provider prompt response-callback error-callback)))

This works well, except for one edge case. Unfortunately the server I'm dealing with is not consistent with its error messages. In rare cases like an authentication error, or a proxy error, the server returns errors in the HTTP response body that don't follow the error specification of the OpenAI API. Those errors are not from OpenAI, but from the proxy.

When this happens I get a backtrace with the current code, since the OpenAI provider assumes a specific format. I can't really fix this without copy and pasting the implementations of all those llm-xxx functions that make HTTP requests. Wrapping the response-callback or the error-callback in the example above won't work, since that wrapped code would run after the error occurred.

From what I have seen so far, most of the time we extract 2 things from some data:

  • An error code
  • An error message

So, I was wondering if we could maybe do something similar to what you did here: a35d2ce Something like this:

(cl-defgeneric llm-error-code (provider data)
  "Extract the error code from the DATA of PROVIDER.")

(cl-defgeneric llm-error-message (provider data)
  "Extract the error message from the DATA of PROVIDER.")

If we would have a mechanism like this, I could provide good error messages to my users without re-implementing a lot.

Wdyt?

LLM request timed out for Gemini

Gemini could often exceed the default timeout of 5 seconds:

(let ((buf (url-retrieve-synchronously url t nil (or timeout 5))))

Consequently, calling llm-chat with Gemini might result in the error message LLM request timed out.

I suggest either increasing the timeout or adding a variable to customize it.

Streaming multiple function calls gives an error

Executing the following code (replacing :key with whatever)

(let* ((provider (make-llm-openai
                  :key (exec-path-from-shell-getenv "OPENAI_API_KEY")
                  :chat-model "gpt-4o"))
       (prompt
        (llm-make-chat-prompt
         "Compute 3+5 and 7+9."
         :temperature 0.1
         :functions
         (list (make-llm-function-call
                :function (lambda (a b)
                            (+ a b))
                :name "add"
                :description "Sums two numbers."
                :args (list (make-llm-function-arg
                             :name "a"
                             :description "A number."
                             :type 'integer
                             :required t)
                            (make-llm-function-arg
                             :name "b"
                             :description "A number."
                             :type 'integer
                             :required t)))))))
  (llm-chat-streaming provider prompt #'ignore #'ignore #'ignore))

yields a complicated args-out-of-range error, ultimately triggered by the function llm-provider-collect-streaming-function-data in llm-openai.el, in the block (setf (llm-provider-utils-function-call-id (aref cvec index)) id), where index is 1 but cvec has length 1. I haven't yet tried to understand exactly why this is happening.

The error doesn't happen if one asks for just one arithmetic operation (e.g., just "3+5").

Use tiktoken.el for token counting of openai's models

Hello!

I noticed that one of the methods for the providers is llm-count-tokens which currently does a simple heuristic. I recently wrote a port of tiktoken that could add this functionality for at least the OpenAI models. The implementation in llm-openai.el would essentially look like the following:

(require 'tiktoken)
(cl-defmethod llm-count-tokens ((provider llm-openai) text)
  (let ((enc (tiktoken-encoding-for-model (llm-openai-chat-model provider))))
    (tiktoken-count-tokens enc text)))

There would be some design questions like should it use the chat-model or the embedding-model when doing this. Like maybe it would first try to count with the embedding-model if it exists, otherwise the chat-model, with some default.

Definitely let me know your thoughts and I could have a PR up for it along with any other required work.
