LLMStack

LLMStack is a no-code platform for building generative AI applications, chatbots and agents, and for connecting them to your data and business processes.

Quickstart | Documentation | Promptly

Overview

Build tailor-made generative AI applications, chatbots and agents that cater to your unique needs by chaining multiple LLMs. Seamlessly integrate your own data and GPT-powered models without any coding experience using LLMStack's no-code builder. Trigger your AI chains from Slack or Discord. Deploy to the cloud or on-premise.

See full demo video here

Getting Started

Check out our Cloud offering at Promptly or follow the instructions below to deploy LLMStack on your own infrastructure.

The LLMStack deployment comes with a default admin account whose username is admin and password is promptly. Be sure to change the password from the admin panel after logging in.

Option 1: Install with pip

Install LLMStack using pip:

pip install llmstack

Start LLMStack using the following command:

llmstack

The above commands install and start LLMStack. On first run, it creates a .llmstack directory in your home directory and places the database and config files there. Once LLMStack is up and running, it should automatically open your browser and point it to localhost:3000.
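
If you prefer to keep LLMStack's dependencies isolated from your system Python, here is a minimal sketch using a standard virtual environment (assuming Python 3 and pip are available on your PATH):

# Create and activate an isolated environment for LLMStack
python3 -m venv ~/.venvs/llmstack
source ~/.venvs/llmstack/bin/activate

# Install and start LLMStack inside that environment
pip install llmstack
llmstack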

Option 2: Run with Docker Compose

This method uses docker compose to bring up the application containers. Clone this repository or download the latest release. Install Docker if it is not already installed. Copy .env.prod to .env and update SECRET_KEY, CIPHER_SALT and DATABASE_PASSWORD in the .env file:

cp .env.prod .env
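
One possible way to generate strong values for those settings, assuming openssl is available; the generated strings are only illustrative, and any sufficiently random values work just as well:

# Generate random strings and copy the output into .env
openssl rand -hex 32   # use the output as SECRET_KEY
openssl rand -hex 16   # use the output as CIPHER_SALT
openssl rand -hex 16   # use the output as DATABASE_PASSWORD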

Run LLMStack using the following command:

./run-llmstack.sh

If you are on Windows, you can use run-llmstack.bat instead.

Once LLMStack is up and ready, it should automatically open your browser and point it to localhost:3000. Alternatively, you can run docker compose up to start the containers manually and open localhost:3000 to log in to the platform. Make sure to wait for the API server to be ready before trying to load LLMStack.
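
For example, one way to start the containers in the background and watch for the API server to come up:

# Start the containers detached
docker compose up -d

# Follow the logs and wait for the API server to report it is ready
docker compose logs -f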

Users of the platform can add their own keys for providers like OpenAI, Cohere, Stability etc., from the Settings page. If you want to provide default keys for all users of your LLMStack instance, you can add them to the .env file. Make sure to restart the containers after adding the keys.
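
For example, instance-wide provider keys can be set as environment variables in .env. The variable names below are placeholders, not the definitive names; use the names that already appear in .env.prod for each provider:

# Placeholder .env entries for instance-wide provider keys
# (variable names are illustrative; match the names in .env.prod)
DEFAULT_OPENAI_API_KEY=sk-...
DEFAULT_COHERE_API_KEY=...

# Recreate the containers afterwards so the new values are read, e.g.:
# docker compose up -d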

Remember to update POSTGRES_VOLUME, REDIS_VOLUME and WEAVIATE_VOLUME in the .env file if you want to persist data across container restarts.
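
For instance, pointing the volume variables at directories on the host keeps the data around between restarts (the paths below are illustrative; adjust them to your setup):

# Illustrative host paths for persistent data
POSTGRES_VOLUME=/data/llmstack/postgres
REDIS_VOLUME=/data/llmstack/redis
WEAVIATE_VOLUME=/data/llmstack/weaviate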

Features

🔗 Chain multiple models: LLMStack allows you to chain multiple LLMs together to build complex generative AI applications.

📊 Use generative AI on your data: Import your data into your account and use it in AI chains. LLMStack supports importing various data types (CSV, TXT, PDF, DOCX, PPTX etc.) from a variety of sources (Google Drive, Notion, websites, direct uploads etc.). The platform takes care of preprocessing and vectorizing your data and stores it in the vector database that is provided out of the box.

๐Ÿ› ๏ธ No-code builder: LLMStack comes with a no-code builder that allows you to build AI chains without any coding experience. You can chain multiple LLMs together and connect them to your data and business processes.

โ˜๏ธ Deploy to the cloud or on-premise: LLMStack can be deployed to the cloud or on-premise. You can deploy it to your own infrastructure or use our cloud offering at Promptly.

🚀 API access: Apps or chatbots built with LLMStack can be accessed via an HTTP API (see the example request after this feature list). You can also trigger your AI chains from Slack or Discord.

๐Ÿข Multi-tenant: LLMStack is multi-tenant. You can create multiple organizations and add users to them. Users can only access the data and AI chains that belong to their organization.

What can you build with LLMStack?

Using LLMStack you can build a variety of generative AI applications, chatbots and agents. Here are some examples:

๐Ÿ“ Text generation: You can build apps that generate product descriptions, blog posts, news articles, tweets, emails, chat messages, etc., by using text generation models and optionally connecting your data. Check out this marketing content generator for example

🤖 Chatbots: You can build ChatGPT-powered chatbots trained on your data, like Promptly Help, which is embedded on the Promptly website.

🎨 Multimedia generation: Build complex applications that can generate text, images, videos, audio, etc. from a prompt. This story generator is an example.

๐Ÿ—ฃ๏ธ Conversational AI: Build conversational AI systems that can have a conversation with a user. Check out this Harry Potter character chatbot

๐Ÿ” Search augmentation: Build search augmentation systems that can augment search results with additional information using APIs. Sharebird uses LLMStack to augment search results with AI generated answer from their content similar to Bing's chatbot

💬 Discord and Slack bots: Apps built on LLMStack can be triggered from Slack or Discord. You can easily connect your AI chains to Slack or Discord from LLMStack's no-code app editor. Check out our Discord server to interact with one such bot.

Administration

Log in to http://localhost:3000/admin using the admin account. You can add users and assign them to organizations in the admin panel.

Cloud Offering

Check out our cloud offering at Promptly. You can sign up for a free account and start building your own generative AI applications.

Documentation

Check out our documentation at llmstack.ai/docs to learn more about LLMStack.

Development

Run the following commands from the root of the repository to bring up the application containers in development mode. Make sure you have Docker and npm installed on your system before running these commands.

cd client
npm install
npm run build
cd ..
docker compose -f docker-compose.dev.yml --env-file .env.dev up --build

This will mount the source code into the containers and restart them on code changes. Update .env.dev as needed. Note that LLMStack is available at http://localhost:9000 in development mode.

You can skip running npm install and npm run build if you have already built the client before.

For frontend development, you can run npm start in the client directory to start the development server. You can also use npm run build to build the frontend and serve it from the backend server.
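
For example, from the repository root:

# Start the frontend development server with hot reloading
cd client
npm start

# Or build the static frontend so the backend can serve it
npm run build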

To update the documentation, make changes in the web/docs directory and run npm run build in the web directory to build it. You can use npm start in the web directory to serve the documentation locally.
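
Similarly, for the documentation:

# Serve the documentation locally while editing web/docs
cd web
npm start

# Build the static documentation site
npm run build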

Contributing

We welcome contributions to LLMStack. Please check out our contributing guide to learn more about how you can contribute to LLMStack.
