
Comments (6)

SINAPSA-IC commented on September 26, 2024

[screenshot: gpt4all_i1]

from gpt4all.

AndriyMulyar commented on September 26, 2024

What are the current prompt and input being used to produce the short summary for the conversation name, @cebtenzzre?


cebtenzzre commented on September 26, 2024

We append the following hard-coded prompt to the conversation in order to generate the name. This is inappropriate for many of the models we currently support, since they use different templates:

### Instruction:
Describe response above in three words.
### Response:

I'm not really sure why this doesn't use the current prompt template instead.
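The point above could be sketched as follows: instead of appending the fixed Alpaca-style block, fill the naming instruction into whatever prompt template the current model uses. This is a hypothetical illustration, not gpt4all's actual internals; the `{prompt}` placeholder and the ChatML example template are assumptions.

```python
# Hard-coded Alpaca-style prompt currently appended (quoted from the comment above):
ALPACA_HARDCODED = (
    "### Instruction:\n"
    "Describe response above in three words.\n"
    "### Response:\n"
)

def naming_prompt(model_template: str,
                  instruction: str = "Describe response above in three words.") -> str:
    """Fill the model's own prompt template with the naming instruction,
    rather than using the fixed Alpaca-style format."""
    return model_template.format(prompt=instruction)

# Example with a ChatML-style template (the format used by models such as
# Nous Hermes 2); the exact template string here is an assumption:
chatml = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
print(naming_prompt(chatml))
```

With a model-appropriate template, the naming request lands in a format the model was actually trained on, which is the fix cebtenzzre suggests further down.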


SINAPSA-IC commented on September 26, 2024

Hello.
I've just seen the summary of a chat come out as "Advice: Prepare a".

Suggestion:
when they appear at the end of a summary, exclude:

  • 1-letter sequences, which are generally of no use in that position
  • 2- or 3-letter sequences as well, unless they are written in capital letters, which could indicate an abbreviation such as "UK" or "UFO".
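The filter described in the bullets above could be sketched as a small client-side heuristic (this is an illustration of the suggestion, not anything gpt4all currently implements):

```python
def trim_summary(summary: str) -> str:
    """Drop trailing tokens that are 1 letter long, or 2-3 letters long
    unless written entirely in capitals (possible abbreviations such as
    "UK" or "UFO")."""
    words = summary.split()
    while words:
        last = words[-1]
        if len(last) == 1 or (len(last) <= 3 and not last.isupper()):
            words.pop()  # discard the dangling short token
        else:
            break
    return " ".join(words)

print(trim_summary("Advice: Prepare a"))   # dangling "a" is dropped
print(trim_summary("Travel to the UK"))    # "UK" is all-caps, so it is kept
```

A filter like this only hides the symptom; as noted below, fixing the prompt itself is the more direct cure.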


SINAPSA-IC commented on September 26, 2024

I've also seen the summary of a chat in a language that looks different from that of the dialogue:
"Planification de la"
which looks like Spanish, although it is not: the English "planification" would be "planificación" in Spanish.
The language of the chat was English, and the LLM was Nous Hermes 2 Mistral DPO.

I cannot think of an English sentence that would contain the words "planification de la", in that order, in a similar context (planification of something, planning something).
Weird stuff. Not Earth-shattering, but still.

Also, the three-word rule for the summary might not always be followed, because I've also seen this summary:
Plan-Generator-Output: In this
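The three-word rule could also be enforced on the client side rather than trusted to the model; a hypothetical sketch of such a guard (not gpt4all's actual behavior):

```python
def clamp_name(name: str, max_words: int = 3) -> str:
    """Keep at most the first max_words whitespace-separated words,
    since the model may ignore the three-word instruction."""
    return " ".join(name.split()[:max_words])

print(clamp_name("Advice: Prepare a healthy meal plan"))  # truncated to 3 words
```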


cebtenzzre commented on September 26, 2024

Suggestion: exclude from such summaries, when at the end of the summary:

The most obvious solution to me is to prompt the LLM correctly in the first place :)
Otherwise, there will be garbage output no matter what we do: the LLM was simply not trained on the format that we have hard-coded.

