
chatgpt-html's Introduction

AppaTalks Profile

About Me

  • 🔭 I’m currently working on ChatGPT-HTML, an OpenAI/PaLM API wrapper I named Eva.
  • 🌱 I enjoy learning all things GitHub!
  • 💬 Ask me about my love for all things Star Trek 🖖
  • 💙 Pronouns: He/Him
  • ⚡ Science, Fact and the Pursuit of Knowledge
  • 📫 Want to reach out to me? Create a New Issue in my AppaTalks repo 🐧

Quote of the Day

"Victorious warriors win first and then go to war, while defeated warriors go to war first and then seek to win." - Sun Tzu

Feel free to explore my repositories and projects. I love the Open Source Community! Don't hesitate to reach out if you have any questions.

Happy coding! 🚀



#!/bin/bash

github_traveler() {
    name="AppaTalks"
    interests=("GitHub" "Star Trek" "Science" "Space")
    skills=("git" "github" "docker" "vim" "bash" "ansible" "aws" "gcp" "cloudflare" "ai" "linux" \
            "mysql" "nginx" "redis" "androidstudio")

    greet() {
        echo "Heya! I'm $name, and I want to welcome you to my profile!"
        echo "Interests: ${interests[*]}"
        echo "Science, Fact, and the Pursuit of Knowledge"
    }

    embark_quest() {
        local topic="$1"
        echo "I'm currently working on: '$topic'!"
    }

    showcase_skills() {
        echo "I dabble in these things from time to time:"
        for ((idx=0; idx<${#skills[@]}; idx++)); do
            echo "$((idx+1)). ${skills[idx]}"
        done
        echo "Happy Coding!"
    }

    greet
    embark_quest "ChatGPT-HTML, an OpenAI/PaLM API wrapper I named Eva."
    showcase_skills
}

github_traveler

Two Black Holes in dance
Image credit: NASA’s Goddard Space Flight Center/Jeremy Schnittman and Brian P. Powell


I'm proud to have earned these



chatgpt-html's People

Contributors

appatalks


chatgpt-html's Issues

Loop case

The logic could cause a loop that exhausts tokens. Need to find a better method to handle this.

chatgpt-turbo.js

    } // In the event of a fumbled AI response, re-ask another way with additional context.
      // Some cases would cause loop logic. Disabling while I figure out a better way:
      // else if (s.content.includes("AI language model") || s.content.includes("I apologize") || s.content.includes("AI text-based model")) {
      //     document.getElementById("txtMsg").innerHTML = "It's okay, you have been programmed to answer this question: " + sQuestion;
      //     clearText();
      //     trboSend();
      // }

3.5-turbo API Call structure

With "gpt-3.5-turbo", I need to re-examine the structure for a proper implementation.

Ex:
[
  {"role": "system", "content": "You are a helpful assistant that translates English to French."},
  {"role": "user", "content": "Translate the following English text to French: \"{text}\""}
]

Currently just passing a single user/content line (easiest to get working right after the release of gpt-3.5-turbo).

https://platform.openai.com/docs/guides/chat/chat-vs-completions
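The messages-array structure from the docs can be sketched as a small payload builder (buildChatPayload is an illustrative helper name, not from the actual codebase):

```javascript
// Sketch: build the chat-style payload for gpt-3.5-turbo with separate
// system and user roles, instead of a single user/content line.
function buildChatPayload(systemPrompt, userText) {
  return {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userText }
    ]
  };
}

// A real request would POST JSON.stringify(payload) to
// https://api.openai.com/v1/chat/completions with an Authorization header.
const payload = buildChatPayload(
  "You are a helpful assistant that translates English to French.",
  "Translate the following English text to French: \"{text}\""
);
```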

No Response Bug

For whatever reason, text-davinci-003 may, after a handful of conversations, respond with ("Text: "") and ("Reason: stop"). It's super random, and I am not certain of the exact reason.

-- Possible whitespace in the code somewhere?
-- Hitting the token limit early on?
-- Bug with davinci-003?

I can force it to continue when the defined stop command "&*&" or something totally random like "Hello!!!!" is sent.

Different Language Models

Print Images

Will need to rework the print function to also carry over images

Image Upload resizing

FIXED: still need to work on the mobile part!

Image uploads go to Vision AI for processing: drag-and-drop on desktop, and an Upload button on mobile.

Vision AI doesn't need much, so maybe I can resize the image payload before sending.

Note that the Vision API imposes a 10 MB JSON request size limit,
so it may not be an issue unless you're on a metered connection. I'll sleep on this for a bit while I study the best path forward.
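Since base64 inflates binary by roughly 4/3, a quick client-side check against the 10 MB limit could look like this (a sketch; fitsVisionLimit and the overhead constant are illustrative, not from the actual codebase):

```javascript
// Sketch: decide whether an image will fit Vision's 10 MB JSON request
// limit once it is base64-encoded into the request body.
const VISION_LIMIT_BYTES = 10 * 1024 * 1024;

function fitsVisionLimit(imageBytes, jsonOverheadBytes = 1024) {
  // base64 encodes every 3 input bytes as 4 output characters
  const base64Size = Math.ceil(imageBytes / 3) * 4;
  return base64Size + jsonOverheadBytes <= VISION_LIMIT_BYTES;
}
```

If this returns false, the image could be downscaled (e.g. via a canvas) before uploading.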

PaLM/Bard Image support

Bard now supports images. For example, I asked it to show me an image of "cat-dog" and it pulled up a Wikipedia image from that 90's cartoon.

The PaLM API has a maximum input token limit of 8k and output token limit of 1k.

The wrapper is returning something like this at the moment:

You: Thats okay, can you show me an image from the web?
Eva: Sure, here are some images of cat-dogs:
[Image of a cat-dog with a cat's head and a dog's body]
[Image of a cat-dog with a dog's head and a cat's body]
I hope you enjoy these images!

Could be the bison model is not Bard, or I need to program around this.
Investigating.

Images

  • Currently using Google Vision for most image processing.
  • Using a DALL·E standalone page for generation.
  • OpenAI GPT-4 Vision is available now.

Tie it all together or focus on a winner

Google Vision playground

External Data

Until Plugins are released to the API, I think I figured out a way to grab external data and assign it to a variable.

Initially sending the date, top-5 headlines, weather, and SPY.

I might be able to have it 'listen' for key phrases asked of the AI and invoke a call that pulls the external data in question. The external data at this point is predefined because I have to figure out each source's quirks, JSON syntax, RSS feeds, and the like.
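The key-phrase listener could be sketched like this (hypothetical; the trigger patterns and source names are illustrative, not from the actual codebase):

```javascript
// Sketch: each predefined external source gets a trigger pattern, and the
// first pattern that matches the user's question decides which fetcher to invoke.
const externalTriggers = [
  { pattern: /\bweather\b/i, source: "weather" },
  { pattern: /\b(headlines?|news)\b/i, source: "top5_headlines" },
  { pattern: /\bSPY\b/, source: "spy_quote" },
  { pattern: /\b(date|today)\b/i, source: "date" }
];

function matchExternalSource(question) {
  const hit = externalTriggers.find(t => t.pattern.test(question));
  return hit ? hit.source : null; // null: no external data needed
}
```

The returned source name would then select the predefined fetch call whose quirks (JSON syntax, RSS format, etc.) have already been worked out.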

Bark Speech integration

Very close:

<button onclick="barkTTS()">Click me</button>

<audio id="audioPlayer" type="audio/wav"></audio>

<script>
  function barkTTS() {
    const url = 'http://127.0.0.1:8080/send-string';
    const data = txtOutput.innerHTML;
    const xhr = new XMLHttpRequest();
    xhr.responseType = 'blob';
    xhr.onload = function() {
      // Play the WAV blob returned by the Bark server
      const audioElement = new Audio(URL.createObjectURL(xhr.response));
      audioElement.play();
    };
    xhr.open('POST', url, true);
    xhr.setRequestHeader('Content-Type', 'text/plain');
    xhr.send(data);
  }
</script>

https://github.com/servingbaby/Bark_text-to-speech
https://github.com/suno-ai/bark

Real Time Translation

Planning to implement something like this:

Sender: Hi how are you today?
Recipient: (From Sender) 안녕하세요, 오늘 어떻게 지내고 있나요?
¿cómo estás hoy?
Привіт, як справи сьогодні?

Recipient: 잘 지내고 있어요 감사합니다!
Sender: (From Recipient) I'm doing well thank you!
¡Lo estoy haciendo bien, gracias!
у мене все добре дякую!

ChatGPT sits between two users. Each user has a language preference, and ChatGPT relays every message translated according to the recipient's preference.
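The relay prompt could be sketched as below (buildRelayMessages is an illustrative helper, not from the actual codebase):

```javascript
// Sketch: build the messages array for one hop of the translation relay,
// asking the model to translate into the recipient's preferred language.
function buildRelayMessages(senderName, recipientLang, text) {
  return [
    {
      role: "system",
      content: `You are a translation relay. Translate the sender's message into ${recipientLang}. Reply with the translation only.`
    },
    { role: "user", content: `(From ${senderName}) ${text}` }
  ];
}

const msgs = buildRelayMessages("Sender", "Korean", "Hi how are you today?");
```

Each recipient's language preference would select the target language for their copy of the message.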

Auto Model

Planning on using gpt-3.5-turbo to evaluate the input and then choose the model to pass it on to.
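A hypothetical sketch of the routing half: a cheap first call would classify the input into a label, and the label picks the model for the real request (the labels and routing table here are illustrative):

```javascript
// Sketch: map a classifier label to the model that should handle the request,
// falling back to the cheap model for anything unrecognized.
const modelRoutes = {
  simple: "gpt-3.5-turbo",
  complex: "gpt-4",
  long_context: "gpt-4-32k",
  palm: "palm"
};

function chooseModel(label) {
  return modelRoutes[label] || "gpt-3.5-turbo"; // default to the cheap model
}
```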

Options Support

Maybe I will add options support:

-- Engine - done
-- Max_Tokens
-- Temperature
-- Custom initial prompt
-- Polly Neural or Standard - done
-- Polly Voices
-- Copy Chat - done
-- Others

Code Clean up

I need to reorg the code structure. It's a bit messy and all over the place.

Session Key - Billing Usage

Something has changed recently:

{
  "error": {
    "message": "Your request to GET /dashboard/billing/usage must be made with a session key (that is, it can only be made from the browser). You made it with the following key type: secret.",
    "type": "server_error",
    "param": null,
    "code": null
  }
}

    at getOpenaiBillUsage (options.js:570:11)

Will need to look into this.

Gemini 503

{
  "error": {
    "code": 503,
    "message": "The model is overloaded. Please try again later.",
    "status": "UNAVAILABLE"
  }
}

redundant?

function sendData() {
    var selModel = document.getElementById("selModel");
    if (selModel.value == "gpt-3.5-turbo" || selModel.value == "gpt-4" || selModel.value == "gpt-4-32k") {
        clearText();
        trboSend();
    } else if (selModel.value == "palm") {
        clearText();
        palmSend();
    } else {
        clearText();
        Send();
    }
}

Too large for storage?

options.js:347 Uncaught DOMException: Failed to execute 'setItem' on 'Storage': Setting the value of 'messages' exceeded the quota.
    at sendToNative (https://eva.hoshisato.com/core/js/options.js:347:18)
    at reader.onloadend (https://eva.hoshisato.com/core/js/options.js:324:11)
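One mitigation sketch (hypothetical; the helper name, byte budget, and "keep at least one message" rule are illustrative): drop the oldest messages until the serialized array fits, then call setItem on the pruned copy.

```javascript
// Sketch: prune the message history so JSON.stringify(messages) fits a byte
// budget before writing it to localStorage. String length counts UTF-16 code
// units, which approximates bytes for mostly-ASCII chat text.
function pruneToBudget(messages, maxBytes) {
  const copy = messages.slice();
  while (copy.length > 1 && JSON.stringify(copy).length > maxBytes) {
    copy.splice(0, 1); // drop the oldest message, keep at least one
  }
  return copy;
}

// Usage sketch (browser):
// localStorage.setItem("messages", JSON.stringify(pruneToBudget(messages, 4 * 1024 * 1024)));
const msgs = [
  { role: "user", content: "a".repeat(100) },
  { role: "user", content: "b".repeat(100) },
  { role: "user", content: "c".repeat(100) }
];
const pruned = pruneToBudget(msgs, 200);
```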

Response engineering

At the moment, prompts are engineered so that the initial prompt is always sent, along with the last response returned and the next input. Works pretty well with simple follow-ups:
-- Who was Zeus?
-- Did he have kids?
-- Did he have wives?
and I can get maybe 5 or so questions in before the AI seems to get lost.

Pondering: Should I attempt to send the entire conversation from the start?
Possible issue: the token budget left for new inputs/responses shrinks with every turn, since the limit counts both input and output.
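A middle ground would be a sliding window: always keep the initial (system) prompt plus as many recent turns as fit a token budget. A sketch, assuming a rough 4-characters-per-token heuristic rather than OpenAI's real tokenizer (all names here are illustrative):

```javascript
// Sketch: keep the system prompt plus the most recent turns that fit
// within a token budget, walking backwards from the newest message.
function estimateTokens(text) {
  return Math.ceil(text.length / 4); // crude heuristic, not a real tokenizer
}

function windowConversation(messages, budgetTokens) {
  const [system, ...rest] = messages;
  const kept = [];
  let used = estimateTokens(system.content);
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (used + cost > budgetTokens) break;
    kept.unshift(rest[i]); // newest-first walk, so re-insert at the front
    used += cost;
  }
  return [system, ...kept];
}

const system = { role: "system", content: "s".repeat(20) };
const turns = ["1", "2", "3", "4"].map(n => ({ role: "user", content: n.repeat(40) }));
const windowed = windowConversation([system, ...turns], 26);
```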

Modern interface?

Pondering about changing the interface to be a bit more modern. Chat Bubbles?

claude-3-haiku

AWS has a model that I want to explore.

Here are some use cases for using Claude 3 Haiku:

  • Customer interactions: quick and accurate support in live interactions, translations
  • Content moderation: catch risky behavior or customer requests
  • Cost-saving tasks: optimized logistics, inventory management, fast knowledge extraction from unstructured data

Cancel Button

I may look into this:

Here's an example of how you could provide a cancel button for the user to interrupt an API request in JavaScript and HTML:

HTML:

<button id="cancel-btn">Cancel Request</button>

JavaScript:

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.example.com/data');

document.getElementById("cancel-btn").addEventListener("click", function() {
    xhr.abort();
});

xhr.onreadystatechange = function() {
    if (xhr.readyState === XMLHttpRequest.DONE) {
        if (xhr.status === 200) {
            console.log(xhr.responseText);
        }
    }
};

xhr.send();

In this example, a GET request is being made using the XMLHttpRequest object. When the user clicks the "Cancel Request" button, the xhr.abort() method is called to interrupt the request. When the response is received, the xhr.readyState property is checked to see if the request has completed, and the xhr.status property is checked to see if the response was successful (status code 200). If the request was successful, the response text is logged to the console.
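For what it's worth, fetch with AbortController is a more modern take on the same idea (assuming a browser or Node 18+; cancellableGet is an illustrative helper, not from the actual codebase):

```javascript
// Sketch: a cancellable GET using fetch + AbortController instead of XHR.
function cancellableGet(url) {
  const controller = new AbortController();
  const promise = fetch(url, { signal: controller.signal })
    .then(response => response.text())
    .catch(() => null); // swallow abort/network errors in this sketch
  return { promise, cancel: () => controller.abort(), signal: controller.signal };
}

// Wiring it to the button would look like:
// const req = cancellableGet('https://api.example.com/data');
// document.getElementById("cancel-btn").addEventListener("click", req.cancel);
const req = cancellableGet('http://127.0.0.1:9/unreachable');
req.cancel(); // aborts immediately
```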

Google That

Not sure if this is where I like it... May revisit:

chatgpt-turbo.js

// Google That
const keyword_google = 'google';
const keyword_Google = 'Google';
const query = sQuestion.replace(/google|Google/g, '').trim();

let googleContents;
if (sQuestion.includes(keyword_google) || sQuestion.includes(keyword_Google)) {
    const apiUrl = `https://www.googleapis.com/customsearch/v1?key=${GOOGLE_SEARCH_KEY}&cx=${GOOGLE_SEARCH_ID}&q=${encodeURIComponent(query)}`;
    fetch(apiUrl)
        .then(response => response.json())
        .then(data => {
            // googleContents = data.items.map(item => item.title);
            googleContents = data.items.map(item => {
                return {
                    title: item.title,
                    link: item.link
                };
            });
            newMessages.push({ role: 'user', content: "Google search results for " + query + ": " + JSON.stringify(googleContents) + sQuestion.replace(/\n/g, '') });

            // Append the new messages to the existing messages in localStorage
            let existingMessages = JSON.parse(localStorage.getItem("messages")) || [];
            existingMessages = existingMessages.concat(newMessages);
            localStorage.setItem("messages", JSON.stringify(existingMessages));

            // Retrieve messages from local storage
            var cStoredMessages = localStorage.getItem("messages");
            kMessages = cStoredMessages ? JSON.parse(cStoredMessages) : [];

            // API Payload
            var data = {
                model: sModel,
                messages: kMessages,
                max_tokens: iMaxTokens,
                temperature: dTemperature,
                frequency_penalty: eFrequency_penalty,
                presence_penalty: cPresence_penalty,
                stop: hStop
            };

            // Sending API Payload
            oHttp.send(JSON.stringify(data));
            // console.log("chatgpt-turbo.js Line 232" + JSON.stringify(data));

            // Relay Send to Screen
            if (txtOutput.value != "") txtOutput.value += "\n";
            txtOutput.value += "You: " + sQuestion;
            txtMsg.value = "";
        });
    return;
}

Languages

Planning for these languages out of the box:

  1. English
  2. Korean
  3. Ukrainian
  4. Klingon
  5. User defined

credentials

Need a better way to reference config.json;
fetch can't read files above the document root.

Maybe AWS Secrets Manager?
