dillionverma / llm.report

📊 llm.report is an open-source logging and analytics platform for OpenAI: Log your ChatGPT API requests, analyze costs, and improve your prompts.

Home Page: https://llm.report

License: GNU General Public License v3.0

JavaScript 1.65% TypeScript 96.54% CSS 1.81%
gpt-3 gpt-4 llm llmops openai nextjs open-source nodejs react shadcn-ui typescript aiops mlops

llm.report's Introduction

Caution

Attention: llm.report is no longer actively maintained. This project was unable to find a sustainable business model, and the founders have moved on to other projects. If you are interested in maintaining or further developing this project, message me on Twitter.


An open-source logging and analytics platform for OpenAI

Introduction · Self-Hosted Installation · Cloud Installation · Tech Stack · Contributing


Introduction

llm.report is an open-source logging and analytics platform for OpenAI: Log your ChatGPT API requests, analyze costs, and improve your prompts.

Here are some of the features that llm.report provides out-of-the-box:

OpenAI API Analytics

No-code solution to analyze your OpenAI API costs and token usage.


Logs

Log your OpenAI API requests / responses and analyze them to improve your prompts.


User Analytics

Calculate the cost per user for your AI app.


Self-Hosted Installation

  1. Clone the repo

git clone https://github.com/dillionverma/llm.report.git

  2. cd into the repo

cd llm.report

  3. Install dependencies

yarn

  4. Set up environment variables

cp .env.example .env

  • Generate NEXTAUTH_SECRET using openssl rand -base64 32 and add it to .env

  5. Quickstart

  • Requires Docker and Docker Compose to be installed.
  • This will start a local Postgres instance with a few test users; the credentials will be logged in the console.

yarn dx

Open http://localhost:3000 in your browser!

Tech Stack

Contributing

Here's how you can contribute:

  • Open an issue if you believe you've encountered a bug.
  • Make a pull request to add new features, make quality-of-life improvements, or fix bugs.

Star History

Star History Chart

License

Inspired by Dub and Plausible, both of which are open-source, this project is licensed under the GNU Affero General Public License Version 3 (AGPLv3) or any later version. You can find it here. The reason for this is that we believe in the open-source ethos and want to contribute back to the community.

Credits

  • OpenAI – for creating ChatGPT
  • shadcn-ui – for making it easy to build beautiful UIs
  • tremor.so – beautiful dashboard components
  • mintlify – beautiful documentation
  • screen.studio – the best video recording tool
  • vercel – for making Next.js and making it easy to build powerful apps
  • Dub – for inspiring me to open-source this project

llm.report's People

Contributors

axxi3, dillionverma, haseab, l0g1x, molgit, naveennaidu, srikarsams, trace2798


llm.report's Issues

Get userId from body of request to OpenAI rather than header

Currently the userId for a specific request is set through the X-User-Id header, like this:

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  basePath: "https://api.openai.withlogging.com/v1",
  baseOptions: {
    headers: {
      "X-Api-Key": `Bearer ${process.env.LLM_REPORT_API_KEY}`,
      "X-User-Id": `[email protected]`, // user ID
    },
  },
});

This means that, if different requests need to be associated with different users, the request header needs to be dynamic and change on each request (and therefore a new Configuration object needs to be created on each request).

However, the OpenAI API already accepts a user field in the body of the request, which serves a similar purpose.

const completion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  user: "userid",
  messages: [{ role: "user", content: "Hello!" }],
});

It would be handy to optionally get the userId for LLM Report from this field of the request body, rather than the X-User-Id header. This would avoid having to set new headers for each request.
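
A minimal sketch of what this could look like on the llm.report proxy side, assuming a Next.js API-route-style handler; the handler shape and fallback logic are illustrative assumptions, not the project's actual code:

import type { NextApiRequest, NextApiResponse } from "next";

// Hypothetical sketch: prefer the `user` field from the request body,
// fall back to the existing X-User-Id header if it's absent.
export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const body = req.body as { user?: string };
  const userId = body.user ?? (req.headers["x-user-id"] as string | undefined);

  // ...forward the request to OpenAI as before, logging it with `userId` attached...
  res.status(200).json({ attributedUserId: userId ?? null });
}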

Dockerhub or Github package

Great app! Building an official image and pushing it to a Docker registry like Docker Hub or GitHub Packages would be awesome, so those who self-host would always be up to date :)

Details analytics tab lags a lot

On the /openai tab, going to detailed analytics causes the computer to lag.

This is most likely due to the minute chart.

Possible solutions:

  • Show the minute chart but add a date dropdown to select a day. This will put less load on the chart.
  • Another solution (sketched below): group data by hour by default, and group by minute only when a specific day is selected.
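
A minimal sketch of the second option, assuming the chart receives an array of timestamped datapoints; the types and names are illustrative, not the app's actual code:

// Group datapoints by hour by default; by minute only when one day is selected.
type Point = { timestamp: number; tokens: number };

function bucket(points: Point[], granularity: "hour" | "minute"): Map<number, number> {
  const size = granularity === "hour" ? 3_600_000 : 60_000; // ms per bucket
  const buckets = new Map<number, number>();
  for (const p of points) {
    const key = Math.floor(p.timestamp / size) * size;
    buckets.set(key, (buckets.get(key) ?? 0) + p.tokens);
  }
  return buckets;
}

// A week then renders ~168 hourly points instead of ~10,080 minute points.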

Application Error: a server side exception has occurred on /logs page

Noticed this issue occurring once a day now.


Here are the two error messages associated with the issue:

Error: fail to memory allocation
at /app/node_modules/vscode-oniguruma/release/main.js:1:3101
at new u (/app/node_modules/vscode-oniguruma/release/main.js:1:3150)
at Object.n.createOnigScanner (/app/node_modules/vscode-oniguruma/release/main.js:1:4984)
at Object.createOnigScanner (/app/node_modules/shiki/dist/index.js:1932:34)
at g.createOnigScanner (/app/node_modules/vscode-textmate/release/main.js:1:4532)
at new m (/app/node_modules/vscode-textmate/release/main.js:1:47463)
at g.compile (/app/node_modules/vscode-textmate/release/main.js:1:46704)
at g.compileAG (/app/node_modules/vscode-textmate/release/main.js:1:47238)
at u.compileAG (/app/node_modules/vscode-textmate/release/main.js:1:38683)
at l (/app/node_modules/vscode-textmate/release/main.js:1:22586)
Error: \(([^\(\)]|(\(([^\(\)]|\([^\(\)]*\))*\)))*\))|(\[([^\[\]]|(\[([^\[\]]|\[[^\[\]]*\])*\]))*\]))([^=<>]|=[^<]|\<\s*(((const\s+)?[_$[:alpha:]])|(\{([^\{\}]|(\{([^\{\}]|\{[^\{\}]*\})*\}))*\})|(\(([^\(\)]|(\(([^\(\)]|\([^\(\)]*\))*\)))*\))|(\[([^\[\]]|(\[([^\[\]]|\[[^\[\]]*\])*\]))*\]))([^=<>]|=[^<]|\<\s*(((const\s+)?[_$[:alpha:]])|(\{([^\{\}]|(\{([^\{\}]|\{[^\{\}]*\})*\}))*\})|(\(([^\(\)]|(\(([^\(\)]|\([^\(\)]*\))*\)))*\))|(\[([^\[\]]|(\[([^\[\]]|\[[^\[\]]*\])*\]))*\]))([^=<>]|=[^<])*\>)*\>)*>\s*)?                                                                                 # typeparameters
\(\s*(\/\*([^\*]|(\*[^\/]))*\*\/\s*)*(([_$[:alpha:]]|(\{([^\{\}]|(\{([^\{\}]|\{[^\{\}]*\})*\}))*\})|(\[([^\[\]]|(\[([^\[\]]|\[[^\[\]]*\])*\]))*\])|(\.\.\.\s*[_$[:alpha:]]))([^()\'\"\`]|(\(([^\(\)]|(\(([^\(\)]|\([^\(\)]*\))*\)))*\))|(\'([^\'\\]|\\.)*\')|(\"([^\"\\]|\\.)*\")|(\`([^\`\\]|\\.)*\`))*)?\)   # parameters
(\s*:\s*([^<>\(\)\{\}]|\<([^<>]|\<([^<>]|\<[^<>]+\>)+\>)+\>|\([^\(\)]+\)|\{[^\{\}]+\})+)?                                                                        # return type
\s*=>                                                                                               # arrow operator
)
))
))x
at /app/node_modules/vscode-oniguruma/release/main.js:1:3101
at new u (/app/node_modules/vscode-oniguruma/release/main.js:1:3150)
at Object.n.createOnigScanner (/app/node_modules/vscode-oniguruma/release/main.js:1:4984)
at Object.createOnigScanner (/app/node_modules/shiki/dist/index.js:1932:34)
at g.createOnigScanner (/app/node_modules/vscode-textmate/release/main.js:1:4532)
at new m (/app/node_modules/vscode-textmate/release/main.js:1:47463)
at g._resolveAnchors (/app/node_modules/vscode-textmate/release/main.js:1:47329)
at g.compileAG (/app/node_modules/vscode-textmate/release/main.js:1:46972)
at u.compileAG (/app/node_modules/vscode-textmate/release/main.js:1:38683)
at l (/app/node_modules/vscode-textmate/release/main.js:1:22586)
[Error: An error occurred in the Server Components render. The specific message is omitted in production builds to avoid leaking sensitive details. A digest property is included on this error instance which may provide additional details about the nature of the error.] {
digest: '3060834878'
}

Based on the error messages, it's related to the shiki dependency, which is used for syntax highlighting of this code snippet. If someone knows why this is happening, that would be really helpful!

   "shiki": "^0.14.2"

Origin file: https://github.com/dillionverma/llm.report/blob/main/lib/markdown-code.ts
Used in this file: https://github.com/dillionverma/llm.report/blob/main/app/(dashboard)/logs/logs-onboarding.tsx

Possible solutions:

  • Add a try/catch so it doesn't break the entire site? (sketched below)
  • Remove shiki as a dependency altogether
  • Try upgrading to the latest version, hoping it's fixed 🤞
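
A minimal sketch of the first option, assuming shiki ^0.14's getHighlighter/codeToHtml API; the fallback-to-escaped-plain-text behavior is an assumption, not the repo's current markdown-code.ts:

import { getHighlighter } from "shiki";

// Hypothetical safe wrapper: if oniguruma fails to allocate memory,
// return escaped plain text instead of crashing the whole /logs page.
export async function highlightSafely(code: string, lang: string): Promise<string> {
  try {
    const highlighter = await getHighlighter({ theme: "nord" });
    return highlighter.codeToHtml(code, { lang });
  } catch (err) {
    console.error("shiki highlighting failed:", err);
    const escaped = code
      .replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;");
    return `<pre><code>${escaped}</code></pre>`;
  }
}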

Allow to opt out of caching for certain requests

Caching is awesome for saving on tokens, but there are a few scenarios where someone might want to opt out of it.

E.g. allowing the user to regenerate a response if they don't like it. In this case the prompt is the same as a previous request (so the response would be served from the cache), but we actually want to obtain a fresh result.

I would therefore suggest allowing an opt-out of caching on a per-request basis. Maybe using a specific header for this?
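
A minimal sketch of what the opt-out could look like from the client side, using the openai-node v3 Configuration from the earlier snippet; the X-No-Cache header name is hypothetical, not an existing llm.report option:

// Hypothetical per-request cache opt-out via a request-level header
// (openai-node v3 accepts axios options as the second argument).
const completion = await openai.createChatCompletion(
  {
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Try again, please" }],
  },
  {
    headers: { "X-No-Cache": "true" }, // hypothetical opt-out header
  }
);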

org user dropdown for analytics dashboard not working

The user ID needs to be passed as a URL query parameter on requests to the OpenAI API, like so:

https://api.openai.com/v1/usage?date=2023-08-23&user_public_id=user-ji6LpnyrRSo6

Inspect the network requests on the https://platform.openai.com/account/usage page to see how the requests / responses are sent.

To get started, update the following file, then work backward through the app: https://github.com/dillionverma/llm.report/blob/main/lib/services/openai.ts
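
A minimal sketch of the request, assuming the same session-based auth that platform.openai.com uses for its usage endpoint; the OPENAI_SESSION_KEY variable and auth header are assumptions for illustration:

const sessionKey = process.env.OPENAI_SESSION_KEY; // assumed credential
const params = new URLSearchParams({
  date: "2023-08-23",
  user_public_id: "user-ji6LpnyrRSo6",
});
const res = await fetch(`https://api.openai.com/v1/usage?${params}`, {
  headers: { Authorization: `Bearer ${sessionKey}` }, // assumed auth scheme
});
const usage = await res.json();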

Add langchain docs for python + js

Add LangChain code snippets for Python / JS.

Here's an example of a JS snippet which works (screenshot omitted; a hedged reconstruction is sketched below).

A new snippet would need to be added to the in-app installation instructions.
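
A hedged reconstruction of what the JS snippet could look like, routing LangChain's ChatOpenAI through the llm.report proxy the same way as the plain openai-node snippet above; exact import paths and the configuration argument depend on the langchain version:

import { ChatOpenAI } from "langchain/chat_models/openai";

// The second argument is passed through to the OpenAI client configuration,
// so the proxy basePath and X-Api-Key header mirror the earlier snippet.
const chat = new ChatOpenAI(
  { modelName: "gpt-3.5-turbo", openAIApiKey: process.env.OPENAI_API_KEY },
  {
    basePath: "https://api.openai.withlogging.com/v1",
    baseOptions: {
      headers: { "X-Api-Key": `Bearer ${process.env.LLM_REPORT_API_KEY}` },
    },
  }
);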

Can't seem to run in local

Hey,

I tried running your website locally by filling in all the fields of the .env.local file. However, something seems to be wrong: the PostHog API key is not considered valid (I am not authorized), and I can't tell whether it's because I'm missing some flags in my PostHog project or something else.

It's also not clear to me what the withlogging.com website has to do with you, how the API behind it works, etc. I can't find it in the code, so I don't understand what happens exactly when I fetch a completion using your domain rather than the original OpenAI one.

Would you mind guiding me on how to use all the features of your SaaS in a local environment for testing purposes?

Pricing page redirect for non-logged in user

The pricing page should redirect to /login if the user is not logged in.

Relevant function to change:

const handleCheckout = (plan: Name) => {
  console.log("AA");
  if (!user) return;
  console.log("BB");
  if (plan === "Enterprise") {
    window.open("https://cal.com/dillionverma/llm-report-demo", "_blank");
  } else {
    const params = new URLSearchParams({
      client_reference_id: user.id,
    });
    console.log(
      plan,
      process.env.NODE_ENV,
      subscriptionPlans[
        process.env.NODE_ENV as "development" | "production" | "test"
      ]
    );
    const paymentLink =
      subscriptionPlans[
        process.env.NODE_ENV as "development" | "production" | "test"
      ][plan.toLowerCase() as "free" | "pro"]["monthly"];
    const url = `${paymentLink}?${params.toString()}`;
    window.open(url, "_blank");
  }
};
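
A minimal sketch of the requested change, assuming the component has access to a Next.js router; the router usage and the /login route shape are assumptions, not the app's actual code:

const handleCheckout = (plan: Name) => {
  if (!user) {
    // Redirect unauthenticated users to /login instead of silently returning.
    router.push("/login"); // assumes a `router` from next/router or next/navigation
    return;
  }
  // ...proceed with the checkout flow shown above...
};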
