
Comments (10)

AnonymoZ commented on May 12, 2024

You know, I'm tapping into a greater problem as we speak.

Essentially, treat a token as a character. In reality a token is roughly ¾ of a word, but let's pretend the token-to-character ratio is 1:1.
This means that if your total budget is 300 tokens, you have 300 characters. You actually get more than that, but let's treat the excess as a ‘gift’ from GPT, an emergency fund, and not plan our code beyond 300 characters.

Now, how many lines does that make? At an average of 70 characters per line, call it 4.
With 4 lines allowed per program, for each user request we have to decide whether or not to ask ChatGPT to produce the code.
If the problem clearly requires 10 lines or more, we won't grant the request, and can perhaps ask the user to supply their own API key.
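
The budget arithmetic above can be sketched as a tiny helper. The 1-token-per-character ratio and the 70-character line width are the thread's deliberately conservative assumptions, not real tokenizer figures:

```python
def line_budget(max_tokens, chars_per_token=1, chars_per_line=70):
    """How many full code lines fit in a token budget, using the thread's
    deliberately conservative 1-token-per-character assumption."""
    return (max_tokens * chars_per_token) // chars_per_line

print(line_budget(300))  # 4 full lines, as estimated above
```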

But if the problem can be solved in 2–3 lines, we proceed to ask ChatGPT and hope the answer falls within quota, even if the emergency fund has to be used. But how do we know how many lines a program will need?
That's the thing: ChatGPT can count the lines in its code, but only after it has written the script.

The best we can do is estimate how many lines the program will have beforehand.
In the previous answer, for example (the Obama problem), ChatGPT confidently estimated the number of lines the program would take. We could use that as a basis for deciding which queries are appropriate.

ChatGPT's limit has prompted many circumvention attempts; sadly, none has succeeded. The closest thing is a trick: ask ChatGPT to answer in chunks, and tell it to only give the next chunk when you say ‘next’. But this only works with textual answers. ChatGPT cannot cut a code snippet in half and output the second half when you say ‘next’.

As we speak, I think I know a way of circumventing that; maybe we could combine these techniques for long code. Try this:

Write a program in Python that calculates the sum of the squares of all primes up to 50.
Do not format your code as code.
Do not explain the code using comments.
Do not write any English communication in your answer.

Sometimes you just need to speak a language that it understands! 😉
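
For reference, the program that prompt asks for really is tiny; a plain-Python version (my own sketch, not ChatGPT output) fits comfortably within the budget:

```python
def sum_of_prime_squares(limit):
    """Sum of the squares of all primes up to and including `limit`."""
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(n * n for n in range(2, limit + 1) if is_prime(n))

print(sum_of_prime_squares(50))  # 10466
```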

Furthermore, there are deeper problems with the entire concept of Generative-Manim itself.
Let's look at the more innocent problems first. Allow GPT to use NumPy instead of math: that gives it arrays, which could be more useful. Why cut off its resources and force it to solve problems the hard way? Also add other modules, such as NetworkX (I know a lot of people like to work with graphs) and random, just in case.

Anyway, I don't know whether ChatGPT prefers ManimGL, but it sometimes generated code using Mesh(..), which is not a ManimCE function. The other day it did a Mandelbrot with Mandelbrot(..); I never knew there was a Mandelbrot function in Manim.

I wanted to present another bizarre function, but Google Chrome discarded my tab, so I lost it. It looked like this:
MoveDotToCenter(parameter)
It definitely does not exist in Manim.

Another suggestion: add a tab explaining what you know works, so people deal with fewer errors and spend less time guessing at something they hope will work.

from generative-manim.

AnonymoZ commented on May 12, 2024

Yeah, like a list of successful prompts with proven responses.
Honestly, I've had no problems using ChatGPT for ManimCE, but only as long as I specified that my code had to be for CE!
Of course, I've had to send code back, because it didn't always keep straight which was CE and which was GL.
Whether specifying GL from the start would give you flawless code, I don't have the expertise to test; although it's unlikely ChatGPT spews perfect code even in GL.

Even if your prompt is perfect, ChatGPT will occasionally make mistakes.
The ultimate solution, IMO (aside from perfecting the app here and there), is to wait for GPT-4's public release.
GPT-4 supposedly makes fewer mistakes but, crucially, has a limit bumped up to ~30,000 tokens!
That should give you much more room!


360macky commented on May 12, 2024

GPT-4 supposedly makes fewer mistakes but, crucially, has a limit bumped up to ~30,000 tokens! That should give you much more room!
Yes, definitely 😁 I've been thinking of even creating a ChatGPT plugin, since I got access to that as well.


360macky commented on May 12, 2024

Wow, yes, you are right! I'm not sure how to solve this, let me try some things...


360macky commented on May 12, 2024

Hi @AnonymoZ! I'm working on a solution right now. In the meantime, if you see any other issues in the app, please open another issue. Thank you so much!


360macky commented on May 12, 2024

The cause of the problem was the 300-token limit on the response. For example, for "swing a double pendulum", GPT-4 would generate:

from manim import *

class GenScene(Scene):
  def construct(self):
    
    
    # Constants
    G = 9.81
    LENGTH1 = 2
    LENGTH2 = 1
    MASS1 = 20
    MASS2 = 10
    ANGLE1 = PI/2
    ANGLE2 = PI/2
    ANGLE1_VELOCITY = 0
    ANGLE2_VELOCITY = 0
    ANGLE1_ACCELERATION = 0
    ANGLE2_ACCELERATION = 0
    dt = 1/60
    
    # Convert degree to radian
    def deg_to_rad(degrees):
    return degrees*DEGREES
    
    # Pendulum
    def update_pendulum(pendulum, angle1, angle2):
      pendulum[0].next_to(ORIGIN, RIGHT)
      pendulum[1].next_to(pendulum[0], DOWN).rotate(angle1, about_point=pendulum[0].get_center())
      pendulum[2].next_to(pendulum[1], DOWN).rotate(angle2, about_point=pendulum[1].get_center())
    
    # Calculation
    def update_angles(angle1, angle2, vel1, vel2, acc1, acc2):
      num #And here is where it stops

I increased the default token allowance to 400; if users add their own API key, the number goes up to 1200.
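
A minimal sketch of that policy (the function and parameter names are mine, not the app's actual code):

```python
def response_token_limit(user_api_key=None):
    """Pick max_tokens for the completion: a higher cap when the user
    brings their own OpenAI API key, a conservative default otherwise."""
    return 1200 if user_api_key else 400

print(response_token_limit())            # default cap: 400
print(response_token_limit("sk-..."))    # user-supplied key: 1200
```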

I also added a button to download the generated Python code, to make it easier to debug cases like this... So if the animation fails, we can see where it went wrong...


360macky commented on May 12, 2024

I updated the code generation to allow the math package, to handle these more complex prompts.


AnonymoZ commented on May 12, 2024

Hey, do you know the conversion from tokens to characters?
Once we determine how much ChatGPT can output, we'll be able to tell the user beforehand whether their request is possible.
Also, ask ChatGPT to generate without any comments, and to use a maximum of two characters when naming functions and variables. I bet this would save us characters!

You can use this for your estimate (it seems ChatGPT can't count the number of lines without first generating the code):
“Roughly estimate the number of lines of a Python program that calculates how many days Obama has lived. Don't tell me any details what would you do to achieve it. Just give me a number.”


360macky commented on May 12, 2024

Hey, do you know the conversion from tokens to characters? Once we determine how much ChatGPT can output, we'll be able to tell the user beforehand whether their request is possible.

I don't quite understand this. You can find the conversion from tokens to characters with the OpenAI tokenizer. Can you give me more details about what you mean, please?
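
As a rough stdlib-only stand-in for the tokenizer, OpenAI's usual rule of thumb is that one token is about four characters of English text. A hedged estimate (for exact counts you'd use the real tokenizer) could look like:

```python
def estimate_tokens(text, chars_per_token=4.0):
    """Rough token-count estimate using the common ~4-characters-per-token
    rule of thumb for English text; use OpenAI's tokenizer for exact counts."""
    return max(1, round(len(text) / chars_per_token))

print(estimate_tokens("a" * 400))  # roughly 100 tokens
```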

Also, ask ChatGPT to generate without any comments, and to use a maximum of two characters when naming functions and variables. I bet this would save us characters!

I already ask ChatGPT to generate without comments or explanations in the system instructions (check utils.py, line 3). Sometimes it doesn't obey 😅 and I don't know how to enforce this...

The variable naming idea is interesting, since it will save characters too, so I'll add it. Thank you!


360macky commented on May 12, 2024

ChatGPT's limit has prompted many circumvention attempts; sadly, none has succeeded. The closest thing is a trick: ask ChatGPT to answer in chunks, and tell it to only give the next chunk when you say ‘next’. But this only works with textual answers. ChatGPT cannot cut a code snippet in half and output the second half when you say ‘next’.

Yes. It's very hard dealing with code that ChatGPT has split across messages.

Let's look at the more innocent problems first. Allow GPT to use NumPy instead of math: that gives it arrays, which could be more useful. Why cut off its resources and force it to solve problems the hard way? Also add other modules, such as NetworkX (I know a lot of people like to work with graphs) and random, just in case.

Good idea! I can add NumPy as a package, or even just let ChatGPT generate the whole file (with the necessary imports), but that comes with the task of verifying that the required packages are installed before running the code.
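
That verification step could be sketched with the standard library: parse the generated file's imports and check each top-level module against the current environment. This is a sketch of the idea, not the app's actual code:

```python
import ast
import importlib.util

def missing_imports(source):
    """Return the top-level modules imported by `source` that are not
    installed in the current environment."""
    modules = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    # find_spec returns None for modules that cannot be located
    return sorted(m for m in modules if importlib.util.find_spec(m) is None)
```

Running this on the generated code before executing it would let the app report "package X is required" instead of crashing mid-render.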

Anyway, I don't know whether ChatGPT prefers ManimGL, but it sometimes generated code using Mesh(..), which is not a ManimCE function. The other day it did a Mandelbrot with Mandelbrot(..); I never knew there was a Mandelbrot function in Manim.

Do you think this has to do with the old documentation or the old version it learned from? Remember that ChatGPT's knowledge only goes up to 2021. If that's the case, I was thinking of trying a new experiment, as many people are doing: linking it to LangChain so that it learns from the latest stable Manim documentation.

Another suggestion: add a tab explaining what you know works, so people deal with fewer errors and spend less time guessing at something they hope will work.

Is this like a section to let people know what kind of animations they can generate? Or do you mean something else?

