
singletom's Introduction

SingleTom

A GPT tool (client) using OpenAI's API

SingleTom is a tutorial project that combines HTML and JavaScript to create a local HTML client. The client uses OpenAI's GPT API, eliminating the need for a server, Node.js, or Python. To get started, simply open the HTML file in your browser.

You will need your personal OpenAI API-KEY. You can obtain it by visiting this link: https://platform.openai.com/account/api-keys.

The SingleTom client is designed to be a simple demo and serve as a source of inspiration. Feel free to use the code as-is or expand upon it to create your own unique implementation. If you have basic knowledge of HTML and JavaScript and are interested in learning how to leverage OpenAI's API, then this project is for you.

Please note that this project is intended for local use and learning purposes. If you were to use it on a public server, be cautious as your API key would be exposed to the world.

Although it is useful as-is, this project is not a fully fledged application. With some modifications, however, you can turn it into one if desired.

NOTE: When one or more text files are drag/dropped onto the 'history' textarea, their contents are read and appended to the textarea so you can 'talk with them'.

📦 Installation
  1. Press the green "Code" button on the project page and choose "Download ZIP" or download here.
  2. Once downloaded, unzip the html folder to your desired location.
  3. RENAME apikeys.js.RENAME_AND_ADD_API_KEY to apikeys.js and open the file in a text editor.
  4. Replace YOUR_OPENAI_API_KEY_HERE with your OpenAI API key.
  5. Save the changes made in the apikeys.js file.
  6. Now, open the index.html file in your browser to start using the application.

NOTE: Do NOT rename or add your API key to the apicall.php.RENAME_AND_ADD_API_KEY file unless you intend to run the application ONLINE from a PHP server (optional; see below).
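After steps 3-5, apikeys.js should contain something like the following sketch. The key shown is a placeholder, and openai_apikey is the variable name the client checks for:

```javascript
// apikeys.js -- keep this file local; never upload or commit it.
// functions.js checks for this exact variable name to decide whether
// to call OpenAI directly or fall back to apicall.php.
const openai_apikey = "YOUR_OPENAI_API_KEY_HERE";
```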

📚 Code Structure
  • index.html: Main HTML file for the application.
  • apikeys.js: Contains the API key for OpenAI's API. (Never upload this file anywhere)
  • models.js: OpenAI Models.
  • agents.js: System-prompt definitions aka "custom instructions". (make/add your own)
  • functions.js: Main functionality of the application.
  • dropTextFile.js: Functionality for drag and drop text files to the history.
  • styles.css: CSS styles for the application.
  • jquery_3_5_1.min.js: jQuery library. (from here)
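The agents listed above are plain objects; judging from how functions.js reads agent.sysprompt and agent.description, a new entry in agents.js presumably looks like this (the "Translator" agent is a made-up example, not one of the shipped four):

```javascript
// agents.js -- each key becomes an option in the agent dropdown.
// The property names below are the ones functions.js reads.
const agents = {
    "Translator": { // hypothetical example agent
        description: "Translates any input text into English.",
        sysprompt: "You are a professional translator. Translate the user's text into English."
    }
};
```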


💻 Workflow using SingleTom as a tool

You do not need to supply all documents when working with text/code; normally you would keep only the essential parts in history (memory) or in your prompt.

But if need be, it handles multiple documents and can work with them.

Just type a user prompt and press the "SEND" button.

Here is an example where I threw (drag/drop) all of SingleTom's scripts into history and asked a question. I added all seven scripts just for good measure (not the jQuery library, though):

(screenshot: the seven scripts in history, the model's answer, and the token count)

This example is only meant to illustrate the flexibility of this workflow. Also note the token use, where gpt-3.5-turbo-16k is a lifesaver.

TIPS:

  • The implemented system prompts aka custom instructions ("agents") are just simple examples; use your (system-)prompt engineering skills to make your own, better agents.
  • Test new agents by simply editing the text in the system-prompt textarea; when you have a good one, add it as a new agent in the agents.js file.
  • If you do not "ADD TO HISTORY" (e.g. you don't need the answer in further communication), you save tokens down the line.
  • Remember to "ADD TO HISTORY" if you do need the answer in further communication.
  • If you need to have a lot of text in history, use gpt-3.5-turbo-16k, as it has 16k tokens available for each request.
  • Treat the HISTORY as a scratchpad (literally); it's not a freakin' chatbot.
  • There is no right or wrong way to do it; just do it your way.
  • If you get an error because there were not enough tokens available, switch to a model with more tokens (if you have access to one) and try again, or delete some content from HISTORY and try again.
  • Remember you can have multiple browser windows (sessions) open at the same time.

🧠 About OpenAI Models and Tokens

Each model has a different number of total tokens available for the inference (request). One token is approximately 4 characters.

For example, gpt-3.5-turbo has 4096 tokens available for each request.

When sending a request, the token count consists of the following components:

  • System prompt
  • Conversation history
  • User prompt
  • max_tokens parameter (optional; defaults to the maximum available tokens if not set)

The sum of these components must be less than the total tokens available for the model, or an error will occur.
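The budget check above can be sketched as a small helper. The 4-characters-per-token estimate is only a rule of thumb (real counts come from the model's tokenizer), and the function names here are made up for illustration:

```javascript
// Rough token estimate: ~4 characters per token (rule of thumb only).
function estimateTokens(text) {
    return Math.ceil(text.length / 4);
}

// True if system prompt + history + user prompt + reserved output
// fits inside the model's total token budget (e.g. 4096 for gpt-3.5-turbo).
function fitsBudget(systemPrompt, history, userPrompt, reservedOutput, modelTokens) {
    const used = estimateTokens(systemPrompt)
        + estimateTokens(history)
        + estimateTokens(userPrompt)
        + reservedOutput;
    return used < modelTokens;
}
```

For example, a 2000-character history is roughly 500 tokens, leaving roughly 3600 of gpt-3.5-turbo's 4096 for everything else.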

max_tokens (parameter)

The max_tokens parameter determines how many tokens are reserved for the response. If set to AUTO (default), it reserves the maximum available tokens for the model. Note: you only pay for the tokens actually used, not for how many are reserved for the output.

finish_reason (output)

The finish_reason indicates why the response ended. It can be either "stop" or "length": "stop" means the response completed normally, while "length" indicates the response reached the token limit and is incomplete. If so, pick a model with more tokens, make sure 'max' is set to 'auto', and/or delete some content from history, then try again.
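A minimal sketch of reacting to finish_reason; the response shape matches what doReturn in functions.js reads, while the helper name is made up:

```javascript
// Maps finish_reason from a chat-completion response to advice.
// "stop" = normal completion; "length" = output hit the token limit.
function describeFinishReason(response) {
    const reason = response.choices[0].finish_reason;
    if (reason === "stop") {
        return "Response completed normally.";
    }
    if (reason === "length") {
        return "Response is incomplete: use a bigger model, set max to auto, or trim history, then retry.";
    }
    return "Unexpected finish_reason: " + reason;
}
```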

temperature (parameter)

The temperature parameter controls the randomness of the response. Lower values result in more predictable responses, while higher values result in more surprising responses (and more hallucinations).

🤖 Agents (Make your own!)

There are 4 example system prompts aka custom instructions for inspiration (see agents.js). You are encouraged to make your own; system-prompt engineering is beyond the scope of this tutorial project.

  • SingleTom: A simple agent
  • Pirate: A pirate by the name of Dorothy
  • Marvin: The Paranoid Android from The Hitchhiker's Guide to the Galaxy
  • Children Books: Prompts for the desired reader age, number of pages, and theme to make a children's book

⚠️ Important Note

Do not use this application on a public server as it will expose your API key to the world. This application is intended for 'local' use only. (see below though)

🌐 How to run this ONLINE from a server?
  • php
  • python
  • node.js
  • whatever...

I repeat: this tutorial project is aimed at local use only, and ONLINE deployment is not in the scope of the project.

In any case, the important thing is to not expose your API key to the world. So instead you make an API call to your own server, which in turn can do the OpenAI API calls for you while not exposing the API key to the user.

Example using PHP

This ad hoc example implementation uses a PHP server, but you can (change the scripts and) use whatever server you want.

If SingleTom cannot find the variable openai_apikey from the apikeys.js file, it will use apicall.php to do the API calls instead. (Intended functionality)

Calling OpenAI locally (directly from your browser client) is faster and less prone to errors, but the client would then expose your API key. So instead you make an API call to your server, which can do the OpenAI API calls for you without compromising your API key.

CLIENT --> SERVER --> OPENAI --> SERVER --> CLIENT
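On the client side, this flow boils down to a single URL switch, mirroring the check at the top of doSend in functions.js:

```javascript
// When a local key exists, the browser talks to OpenAI directly;
// otherwise requests go through the server-side relay (apicall.php),
// which attaches the key out of the user's sight.
function chooseEndpoint(hasLocalKey) {
    return hasLocalKey
        ? "https://api.openai.com/v1/chat/completions"
        : "apicall.php";
}
```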

You can easily convert the API call in apicall.php to a Python or Node.js script and serve the OpenAI API call from that environment instead; maybe even ask SingleTom to help with that. At the moment, the only thing that needs a server request is the API call, to hide your API key from online predators.

So to run this ONLINE on a PHP server, then you need to do the following:

  • RENAME apicall.php.RENAME_AND_ADD_API_KEY to apicall.php and open the file in a text editor.
  • Add your API key to the apicall.php file and save it.
  • Upload all files EXCEPT apikeys.js from the html folder to your PHP server.
  • Navigate to the index.html on the server and you are good to go.

Then, when the online HTML client cannot find the openai_apikey variable from apikeys.js, it will use apicall.php to do the API calls instead. (Intended functionality)

The reason for this implementation is that the SingleTom client is intended for local use only. But occasionally you may want to share your extended and improved version with someone, and then you can just upload it to a server and it will work. IMPORTANT: Do not upload the apikeys.js file!

Whatever you do, do not expose your API key to the world.

Disclaimer

This application is made for learning and is not a fully fledged application.

singletom's People

Contributors: slamsneider

singletom's Issues

'regexp jumphub'

Thank you. Killer-app alert! Add a user-configurable, throttleable 'regexp jumphub' to enable prompt swapping and JavaScript calls/loaders/generators and you might actually bring about the singularity! Congratulations! tyvm.

The right way to use the 'max_tokens' parameter

Similar to the message I sent to other repository: rikhuijzer/ata#25

Remove the max_tokens parameter, so it could default to (MODELmaxTOKENS - prompt tokens)

You should not set max_tokens; according to the official OpenAI API, the default value is Inf.


I changed functions.js, removing every mention of max_tokens, and it worked. The model itself calculates the response's max_tokens as the selected model's maximum minus the prompt tokens.

That way it will never get a LENGTH error, and it will save money, because resending the prompt with 'continue' would cost more.

let bLocalRun;
$(document).ready(function () {
    //check if running locally or on server so we can hide the api key if online
    if (typeof openai_apikey === 'undefined') {
        bLocalRun = false;
    } else {
        bLocalRun = true;
    }
    //populate agent dropdown
    const agentSelect = $("#agent");
    for (const agentKey in agents) {
        const option = $("<option></option>");
        option.val(agentKey);
        option.text(agentKey);
        agentSelect.append(option);
    }
    //populate model dropdown
    const modelSelect = $("#model");
    for (const modelKey in models) {
        const option = $("<option></option>");
        option.val(modelKey);
        option.text(models[modelKey].text);
        modelSelect.append(option);
    }
    //add event listeners
    agentSelect.on("change", function () {
        const agent = agents[agentSelect.val()];
        $("#systemprompt").val(agent.sysprompt);
        $("#agentHeading").text(agentSelect.val() + " (" + modelSelect.val() + ")");
        $("#agentDescription").text(agent.description);
    });
    modelSelect.on("change", function () {
        $("#agentHeading").text(agentSelect.val() + " (" + modelSelect.val() + ")");
    });
    $("#but_AddToHistory").click(function () {
        const newtext = $("#userprompt").val();
        const outputText = $("#response").val();
        $("#history").val($("#history").val() + "USER: " + newtext + "\n\nASSISTANT: " + outputText + "\n\n");
        $("#userprompt").val("");
        $("#response").val("");
        $("#history").scrollTop($("#history")[0].scrollHeight);
    });
    $("#but_send").click(async function () {
        const model = $("#model").val();
        const history = $("#history").val();
        const systemprompt = $("#systemprompt").val();
        const userprompt = $("#userprompt").val();
        const temperature = parseFloat($("#temperature").val());

        doSend(model, systemprompt, history, userprompt, temperature, bLocalRun);
    });
    $("#but_ClearHistory").click(function () {
        $("#history").val("");
    });
    $("#but_ClearPrompt").click(function () {
        $("#userprompt").val("");
    });
    //-----------------end of event listeners
    agentSelect.trigger("change");//updates system prompt text here at start
});
async function doSend(myModel, mySystemprompt, myHistory, myUserprompt, temperature, bLocalRun) {
    const url = bLocalRun ? 'https://api.openai.com/v1/chat/completions' : 'apicall.php';
    const messages = myHistory + "USER: " + myUserprompt;
    $("#but_send").text("WAIT...");
    $("#but_send").prop("disabled", true);

    let ajaxSettings = {
        url: url,
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify({
            model: myModel,
            messages: [
                {
                    role: "system",
                    content: mySystemprompt
                },
                {
                    role: "user",
                    content: messages
                }
            ],
            n: 1,
            stop: null,
            temperature: temperature
        }),
    };

    if (bLocalRun) {
        ajaxSettings.beforeSend = function (xhr) {
            xhr.setRequestHeader("Authorization", `Bearer ${openai_apikey}`);
        };
    }

    try {
        const response = await $.ajax(ajaxSettings);
        doReturn(response);
    } catch (error) {
        console.error(error);
        let errorMessage = "An error occurred.";
        //append error object
        errorMessage += "\n" + JSON.stringify(error);

        alert(errorMessage);
        $("#but_send").prop("disabled", false); // Enable the SEND button again
        $("#but_send").text("SEND");
        return;
    }

    setTimeout(() => {
        $("#but_send").prop("disabled", false);
        $("#but_send").text("SEND");
    }, 100);
}
function CheckmessageContent(msg) {
    // if start of msg is "ASSISTANT: " then remove it
    if (msg.startsWith("ASSISTANT: ")) {
        msg = msg.substring(11);
    }
    return msg;
}
function doReturn(response) {
    try {
        //crude hack, I know, but it works until better code arrives ;)
        const test = response.choices[0].finish_reason;
    } catch (error) {
        console.log("ERROR:", response);
        alert(response.message);
        $("#but_send").prop("disabled", false); // Enable the SEND button again
        $("#but_send").text("SEND");
        return;
    }

    const finReason = response.choices[0].finish_reason;
    let messageContent = response.choices[0].message.content;
    const totalTokens = response.usage.total_tokens;
    /*
    const id = response.id;
    const created = response.created;
    const model = response.model;
    const completionTokens = response.usage.completion_tokens;
    const promptTokens = response.usage.prompt_tokens;

    console.log("messageContent: ", messageContent);
    console.log("id: ", id);
    console.log("created: ", created);
    console.log("model: ", model);
    console.log("completionTokens: ", completionTokens);
    console.log("promptTokens: ", promptTokens);
    */

    console.log("response", response); //full response object
    console.log("totalTokens: ", totalTokens);
    console.log("finishReason: ", finReason);
    messageContent = CheckmessageContent(messageContent);
    $("#response").val(messageContent);
    const modeltokens = models[$("#model").val()].tokens;
    const msg = "Total tokens used: " + totalTokens + " of " + modeltokens + " | Finish reason: " + finReason;
    $("#ResponseInNumbers").text(msg);
    $("#but_send").prop("disabled", false); // Enable the SEND button again
    $("#but_send").text("SEND");
}

about Top_P parameter

Hi,

Is there a way to include, right beside the temperature, an option to modify the top_p value? It is a very important parameter.
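For what it's worth, top_p is a standard Chat Completions parameter, so supporting it would mean adding one field to the request body that doSend builds. A hypothetical payload builder (the function name is made up; the field names match the API):

```javascript
// Builds the request body with nucleus sampling (top_p, range 0..1).
// OpenAI's docs suggest altering temperature or top_p, but not both.
function buildPayload(model, messages, temperature, topP) {
    return {
        model: model,
        messages: messages,
        n: 1,
        temperature: temperature,
        top_p: topP
    };
}
```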
