
chatgpt-next-web's Introduction

Hi there 👋

My Signature (by Machine Learning + AI, where we go deep in binary)

$$ \sum_{i=1}^{n} x_i \cdot \text{H0llyW00dzZ}_{i=1}^{tm} x_i + E=mc^2 + \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} $$

My Spotify playlist:

Note

I write code while listening to these tracks.


My Main Programming Languages:

Systems:

  • Go

Frontend/Web Development:

  • TypeScript

My Git:

  • GitKraken
  • GitHub Desktop

Git Roll Stats:

VS Code Theme:

My GPG Key:


Go Tour

Go Playground

My Go Toolkit:

  • GOTOOLCHAIN=auto: Automated management of the Go toolchain version.
  • golint & gopls: Essential for code linting and editor integration.
  • deadcode: Finds unreachable code so it can be removed, keeping the codebase clean.
  • gotests: Automates the generation of test cases.
  • go doc & go fmt: Ensures consistent documentation and code formatting.
  • go vet: Provides in-depth code analysis.
  • gocyclo: Measures cyclomatic complexity, making it easier to keep functions simple.
  • pprof: A powerful profiling tool for measuring and visualizing the performance characteristics of Go programs, particularly adept at identifying resource-intensive operations.

All these tools are excellent and can help create high-quality Go code with minimal complexity.
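For example, pprof from the list above can be wired into a service with nothing but the standard library; a minimal sketch (the localhost:6060 address is only an illustration):

// Minimal sketch: expose pprof's HTTP endpoints on a side port.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

func main() {
	// Serve the profiling endpoints on localhost only.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the rest of the application runs here ...
	select {}
}

Profiles can then be pulled with, for example, go tool pprof http://localhost:6060/debug/pprof/heap.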


gopher

Tip

When writing in Go:

Coding Philosophy:

  • Prioritize error handling before structuring logs.
  • Strive for simplicity in each function, aiming for a cyclomatic complexity under 5 as a sign of Go programming expertise.
  • As a general rule, maintain a maximum cyclomatic complexity of 10. If you have advanced expertise, aim for a cyclomatic complexity under 5.
  • Emphasize reusable code as it encourages better testing practices, enhances readability for both humans and machines, and aids in minimizing bugs for more reliable code.
  • Utilize constants in Go as a minimalist way to avoid the pitfalls of hard-coded values.

By adhering to these principles, your Go code will stand out as superior when compared to others.

For example, a package built on these principles tends to be less complex, have fewer bugs, and cause fewer panics than dependencies that are unnecessarily complex or prone to issues.
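A rough sketch of those points in practice (the package, type, and constant names below are invented purely for illustration): errors are handled first, magic values live in constants, and the function stays around a cyclomatic complexity of 4 by gocyclo's count.

// Package config is an illustrative example only; every name here is invented.
package config

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
)

// Constants instead of hard-coded values scattered through the code.
const (
	DefaultTimeoutSeconds = 30
	MaxRetries            = 3
)

var ErrEmptyPath = errors.New("config: path is empty")

type Config struct {
	TimeoutSeconds int `json:"timeout_seconds"`
	Retries        int `json:"retries"`
}

// Load reads a JSON config file. Errors are handled first, so the happy path
// stays flat and the function is easy to reuse and test.
func Load(path string) (*Config, error) {
	if path == "" {
		return nil, ErrEmptyPath
	}

	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("config: read %q: %w", path, err)
	}

	cfg := &Config{TimeoutSeconds: DefaultTimeoutSeconds, Retries: MaxRetries}
	if err := json.Unmarshal(data, cfg); err != nil {
		return nil, fmt.Errorf("config: parse %q: %w", path, err)
	}
	return cfg, nil
}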

Tip

Another tip, and the most important one when writing Go with cyclomatic complexity under 5, especially when you want to push a repository to GitHub:

  • Ignore Go test files (e.g., yourfunction_test.go) by adding them to .gitignore.

Think about it: why push Go test files at all when the cyclomatic complexity is already under 5? Keep your complexity under 5 and you are on the path to Go mastery.

Note

It is important to keep cyclomatic complexity at a maximum of 10, and ideally under 5. Unlike Python, which may tolerate complex conditional logic (e.g., deeply nested if statements, which is bad practice), Go functions with a complexity under 10 are far more likely to be reusable. This not only aids testing but also improves readability for both humans and machines and minimizes bugs.
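For instance, guard clauses (early returns) are the usual way to flatten nesting in Go; a small sketch with invented names (User, canPost) rather than anything from a real codebase:

package example

import "errors"

// User is an illustrative type; the fields are made up for this sketch.
type User struct {
	Active bool
	Banned bool
}

// Deeply nested version: each check adds another indentation level.
func canPostNested(u *User) (bool, error) {
	if u != nil {
		if u.Active {
			if !u.Banned {
				return true, nil
			}
		}
	}
	return false, errors.New("user cannot post")
}

// Flattened version: guard clauses return early, so the happy path reads
// top to bottom and the nesting depth stays at a single level.
func canPost(u *User) (bool, error) {
	if u == nil {
		return false, errors.New("nil user")
	}
	if !u.Active {
		return false, errors.New("user is not active")
	}
	if u.Banned {
		return false, errors.New("user is banned")
	}
	return true, nil
}

Both versions perform the same checks; the flattened one keeps the nesting shallow, which is what makes it easier to read, test, and reuse.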

GitHub Unwrapped

unwrapped-H0llyW00dzZ.1.mp4

⚡ Fun fact


Did You Know? If your Go code resembles a jungle of if statements (think 10+ nested layers – a big no-no!), it's less Go and more Stop-and-ask-for-directions. Flatten those conditionals and let your code run as smoothly as a greased gopher on a slip 'n slide!


Did You Know? If your code is overly complex, it may indicate a need to improve your development skills.


Meme Pic of the day

Another Meme of the day

c_lang_gang.mp4

chatgpt-next-web's People

Contributors

actions-user, aprilnea, cesaryuan, clarencedan, clarenceyk, cyhhao, danielgwilson, darth-pika-hu, dependabot[bot], eltociear, fyl080801, gan-xing, h0llyw00dzz, ilario92, imldy, iscandurra, isource, leedom92, parad1se98, pbrambi, pengoosedev, quark-zju, rugermccarthy, stonega, tscherrie, xiaotianxt, yancode, yidadaa, yorunning, yunwuu


chatgpt-next-web's Issues

[Bug] The "Local Data" interface does not display the quantity at the beginning of "Dialogue," "Mask," and "Prompt" sections.

Describe the bug
The "Local Data" interface does not display the quantity at the beginning of "Dialogue," "Mask," and "Prompt" sections.

Expected behavior
"Dialogue," "Mask," and "Prompt" sections include quantity display.

Screenshots
1700703900992

Deployment

  • Vercel

GPT-4 Vision Preview Cutting Off Result Message [Bug]

Bug Description

I'm writing to report a bug I've encountered while using the GPT-4 Vision Preview feature in the app. The feature is cutting off the result message, showing only a single line, and not displaying the full message.

Steps to Reproduce

  1. Open the app and change model to the GPT-4 Vision Preview.
  2. Input a prompt and image in the field.
  3. Click on the "Send" button to see the preview.

Expected Behavior

The GPT-4 Vision Preview should display the full result message, without cutting it off or truncating it.
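One factor often reported with gpt-4-vision-preview is that responses come back very short when the request omits max_tokens. Purely as an illustration (this may or may not be the cause here), a direct chat completions request that sets max_tokens explicitly could look like this in Go; the image URL is a placeholder and the key is read from OPENAI_API_KEY:

// Illustrative only: a direct request to the chat completions API with an
// explicit max_tokens. The image URL is a placeholder.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	payload := map[string]any{
		"model":      "gpt-4-vision-preview",
		"max_tokens": 1024, // without this, vision-preview answers are often cut short
		"messages": []map[string]any{
			{
				"role": "user",
				"content": []map[string]any{
					{"type": "text", "text": "Describe this image."},
					{"type": "image_url", "image_url": map[string]string{"url": "https://example.com/image.png"}},
				},
			},
		},
	}

	body, err := json.Marshal(payload)
	if err != nil {
		log.Fatal(err)
	}

	req, err := http.NewRequest(http.MethodPost,
		"https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}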


[Question]: OpenAI Assistant Integration

Problem Description

Hello
Can you please advise me on how I can connect an assistant from https://platform.openai.com/assistants, which I have already set up?
My task is to embed my own information into the model so that the AI answers based on it. I decided to do this through the OpenAI assistant, but I couldn't figure out how to hook it up in this interface.
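For context, the Assistants API lives on its own endpoints, separate from chat completions, so the web UI would need explicit support for it. A minimal Go sketch of talking to an existing assistant setup directly, assuming the beta /v1/assistants endpoint and an OPENAI_API_KEY environment variable:

// Illustrative only: list existing assistants with a direct API call.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "https://api.openai.com/v1/assistants", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	// The Assistants API is in beta and expects this header.
	req.Header.Set("OpenAI-Beta", "assistants=v1")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}

If the listing works, the remaining steps (creating threads and runs) use the same headers against the /v1/threads endpoints.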

Duplicate settings items

🤔 The settings list appears to contain duplicate items for Temperature, TopP, InputTemplate, and historyMessageCount.

<ListItem
  title={Locale.Settings.Temperature.Title}
  subTitle={Locale.Settings.Temperature.SubTitle}
>
  <InputRange
    value={props.modelConfig.temperature?.toFixed(1)}
    min="0"
    max="1" // lets limit it to 0-1
    step="0.1"
    onChange={(e) => {
      props.updateConfig(
        (config) =>
          (config.temperature = ModalConfigValidator.temperature(
            e.currentTarget.valueAsNumber,
          )),
      );
    }}
  ></InputRange>
</ListItem>
<ListItem
  title={Locale.Settings.TopP.Title}
  subTitle={Locale.Settings.TopP.SubTitle}
>
  <InputRange
    value={(props.modelConfig.top_p ?? 1).toFixed(1)}
    min="0"
    max="1"
    step="0.1"
    onChange={(e) => {
      props.updateConfig(
        (config) =>
          (config.top_p = ModalConfigValidator.top_p(
            e.currentTarget.valueAsNumber,
          )),
      );
    }}
  ></InputRange>
</ListItem>
<ListItem
  title={Locale.Settings.Temperature.Title}
  subTitle={Locale.Settings.Temperature.SubTitle}
>
  <InputRange
    value={props.modelConfig.temperature?.toFixed(1)}
    min="0"
    max="1" // lets limit it to 0-1
    step="0.1"
    onChange={(e) => {
      props.updateConfig(
        (config) =>
          (config.temperature = ModalConfigValidator.temperature(
            e.currentTarget.valueAsNumber,
          )),
      );
    }}
  ></InputRange>
</ListItem>
<ListItem
  title={Locale.Settings.TopP.Title}
  subTitle={Locale.Settings.TopP.SubTitle}
>
  <InputRange
    value={(props.modelConfig.top_p ?? 1).toFixed(1)}
    min="0"
    max="1"
    step="0.1"
    onChange={(e) => {
      props.updateConfig(
        (config) =>
          (config.top_p = ModalConfigValidator.top_p(
            e.currentTarget.valueAsNumber,
          )),
      );
    }}
  ></InputRange>
</ListItem>

<ListItem
  title={Locale.Settings.InputTemplate.Title}
  subTitle={Locale.Settings.InputTemplate.SubTitle}
>
  <input
    type="text"
    value={props.modelConfig.template}
    onChange={(e) =>
      props.updateConfig(
        (config) => (config.template = e.currentTarget.value),
      )
    }
  ></input>
</ListItem>
</>
)}
<ListItem
  title={Locale.Settings.HistoryCount.Title}
  subTitle={Locale.Settings.HistoryCount.SubTitle}
>
  <InputRange
    title={props.modelConfig.historyMessageCount.toString()}
    value={props.modelConfig.historyMessageCount}
    min="0"
    max="64"
    step="1"
    onChange={(e) =>
      props.updateConfig(
        (config) => (config.historyMessageCount = e.target.valueAsNumber),
      )
    }
  ></InputRange>
</ListItem>
<ListItem
  title={Locale.Settings.InputTemplate.Title}
  subTitle={Locale.Settings.InputTemplate.SubTitle}
>
  <input
    type="text"
    value={props.modelConfig.template}
    onChange={(e) =>
      props.updateConfig(
        (config) => (config.template = e.currentTarget.value),
      )
    }
  ></input>
</ListItem>
<ListItem
  title={Locale.Settings.HistoryCount.Title}
  subTitle={Locale.Settings.HistoryCount.SubTitle}
>
  <InputRange
    title={props.modelConfig.historyMessageCount.toString()}
    value={props.modelConfig.historyMessageCount}
    min="0"
    max="64"
    step="1"
    onChange={(e) =>
      props.updateConfig(
        (config) => (config.historyMessageCount = e.target.valueAsNumber),
      )
    }
  ></InputRange>
</ListItem>

[Bug] [Markdown] Unsolved TypeScript React and Next.js Interpreter [Escape DollarSign from LaTeX]

Describe the bug
The TypeScript React / Next.js markdown interpreter cannot escape the dollar sign used by LaTeX. Despite several attempts, no solution has been found yet: when LaTeX is used in this environment, a literal dollar sign cannot be properly escaped, leading to unexpected behavior or errors.

Additional Logs
This is just a note for future context.

[Bug] Cloudflare deployment: error even though environment variables are set normally


To Reproduce
Steps to reproduce the behavior:
The error is shown below. Both the BASE_URL and OPENAI_API_KEY environment variables are set, and asking a question right after opening the app produces the error. Setting BASE_URL and the key manually in Settings does not produce the error; it only appears when they are set via environment variables, and a Vercel deployment does not have this problem.
{
  "error": true,
  "msg": "Access Forbidden"
}

Deployment
cloudflare


[Feature] Some suggestions


Suggestions

  1. The unnecessary addition of too many models can be avoided by keeping only the following (see the official Continuous model upgrades feature):
CUSTOM_MODELS=-all,+gpt-3.5-turbo,+gpt-3.5-turbo-16k,+gpt-4,+gpt-4-32k,+gpt-4-1106-preview,+gpt-4-vision-preview,+dall-e-3,+dall-e-3-beta-instruct-vision
  2. Consider merging another repository to enhance plugin functionality (https://github.com/Hk-Gosuto/ChatGPT-Next-Web-LangChain#%E4%B8%BB%E8%A6%81%E5%8A%9F%E8%83%BD)

[Bug] Image rendering problem when using the dall-e-3 model


To Reproduce
Image rendering error
image

Deployment
Vercel

Desktop (please complete the following information):
chrome


[Feature] Please add plugins for uploading and analyzing multiple kinds of documents, with internet access support

To improve communication efficiency, we have set up an official QQ group and QQ channel. If you run into any problems while using or deploying the project, please join the group or channel and ask there first. Unless it is a reliably reproducible bug or a genuinely creative feature suggestion, please do not post low-quality, meaningless posts in the issue tracker.

Click to join the official group chat

Is this feature related to an existing problem?
If so, please link to or describe the problem here.

What feature do you want, or what is your suggestion?
Just tell us.

Are there comparable products to reference?
Feel free to provide links or screenshots of reference products.

Other information
Tell us about any other considerations.

Custom Endpoint Groq not Working [Bug]

Bug Description

Screenshot_2024-03-07-22-11-38-803_com android chrome
I just set a custom endpoint for Groq, but it's not working.

It gets stuck there.
Screenshot_2024-03-07-22-11-51-623_com android chrome

Steps to Reproduce

  1. Enable the custom endpoint
  2. Input the Groq API key
  3. Set the endpoint to https://api.groq.com/openai/
  4. Set the custom models to mixtral-8x7b-32768,llama2-70b-4096

Expected Behavior

It should work normally and return results quickly.
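To separate an app-side configuration problem from an endpoint or key problem, the endpoint can be exercised directly; a minimal Go sketch, assuming Groq's OpenAI-compatible path at https://api.groq.com/openai/v1 and a GROQ_API_KEY environment variable:

// Illustrative only: call Groq's OpenAI-compatible endpoint directly.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	body := []byte(`{"model": "mixtral-8x7b-32768", "messages": [{"role": "user", "content": "ping"}]}`)

	req, err := http.NewRequest(http.MethodPost,
		"https://api.groq.com/openai/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("GROQ_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}

A 200 response here would suggest the key and endpoint are fine and that the problem lies in how the app builds the URL or headers.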

