irgolic / autopr
Run AI-powered workflows over your codebase
Home Page: https://discord.gg/ykk7Znt3K6
License: MIT License
Functionality to iteratively amend PRs. Currently AutoPR regenerates the PR from scratch every time it runs on an issue; instead, I'd like it to respond to code review and pull request comments, adding new commits and a new comment.
A draft for this is PR #28.
TODO is:
CONTRIBUTING.md describes how a human developer can write code to improve this project. It neglects to mention that this repository runs itself on new issues (such as this one), which triggers the creation of a PR attempting to implement the request or idea in the issue. Add a section to CONTRIBUTING.md explaining that opening an issue is a viable way to contribute.
Job was killed due to a rate limiting error: https://github.com/dmarx/autopr-keyframed/actions/runs/4623506281/jobs/8177383098
It looks like the requested information had finished generating, and the error was thrown during a validation step. It would be nice if the job could pick up where it left off: you could create a PR after the first generation completes, then iteratively update the intermediate outputs on the PR's branch. Scratchpad notes and other planning outputs could potentially be saved as comments on the PR, or even on the triggering issue.
If I use the master branch instead of main (ref: main was changed to ref: master), I get this error:
Starting main repo_path=/github/workspace settings=GitHubActionSettings(pull_request_agent_id='rail-v1', pull_request_agent_config=None, codegen_agent_id='auto-v1', codegen_agent_config=None, brain_agent_id='simple-v1', brain_agent_config=None, base_branch='main', target_branch_name_template='autopr/{issue_number}', loading_gif_url='https://media0.giphy.com/media/l3nWhI38IWDofyDrW/giphy.gif', model='gpt-4', temperature=0.8, rail_temperature=0.4, context_limit=8192, min_tokens=1000, max_tokens=2000, num_reasks=2)
Checking out main...
Traceback (most recent call last):
File "/venv/lib/python3.9/site-packages/git/util.py", line 1095, in __getitem__
return getattr(self, index)
File "/venv/lib/python3.9/site-packages/git/util.py", line 1083, in __getattr__
return list.__getattribute__(self, attr)
AttributeError: 'IterableList' object has no attribute 'main'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/app/autopr/gh_actions_entrypoint.py", line 94, in <module>
main(
File "/app/autopr/main.py", line 63, in main
repo.heads[settings.base_branch].checkout()
File "/venv/lib/python3.9/site-packages/git/util.py", line 1097, in __getitem__
raise IndexError("No item found with id %r" % (self._prefix + index)) from e
IndexError: No item found with id 'main'
It looks like it is still trying to use main instead of master.
AutoPR encountered an error while trying to fix Konard/AutoPR-tests#13.
No traceback
The file was not removed from the root of the repository.
AutoPR encountered an error while trying to fix irgolic/AutoPR-template#18.
Traceback (most recent call last):
File "/app/autopr/agents/brain_agent/base.py", line 58, in generate_pr
self._generate_pr(event)
File "/app/autopr/agents/brain_agent/simple_v1.py", line 27, in _generate_pr
self.codegen_agent.generate_changes(
File "/app/autopr/agents/codegen_agent/base.py", line 47, in generate_changes
self._generate_changes(repo, issue, pr_desc, current_commit)
File "/app/autopr/agents/codegen_agent/autonomous_v1/agent.py", line 352, in _generate_changes
result = f"```diff\n{self.diff_service.get_diff(filepaths)}\n```"
File "/app/autopr/services/diff_service.py", line 28, in get_diff
self.repo.git.execute(["git", "add", *filepaths])
File "/venv/lib/python3.9/site-packages/git/cmd.py", line 1109, in execute
raise GitCommandError(redacted_command, status, stderr_value, stdout_value)
git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
cmdline: git add poll_and_survey_bot/
stderr: 'fatal: pathspec 'poll_and_survey_bot/' did not match any files'
The tokenizer is implemented in autopr.utils.tokenizer.get_tokenizer, and called at autopr/utils/repo.py:124 and autopr/repos/completions_repo.py:28. Currently it uses transformers' GPT2TokenizerFast, which isn't the correct way to calculate token length.
Here's an example from OpenAI's cookbook on how to calculate token length for messages:
```python
import tiktoken


def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301"):
    """Returns the number of tokens used by a list of messages."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        print("Warning: model not found. Using cl100k_base encoding.")
        encoding = tiktoken.get_encoding("cl100k_base")
    if model == "gpt-3.5-turbo":
        print("Warning: gpt-3.5-turbo may change over time. Returning num tokens assuming gpt-3.5-turbo-0301.")
        return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301")
    elif model == "gpt-4":
        print("Warning: gpt-4 may change over time. Returning num tokens assuming gpt-4-0314.")
        return num_tokens_from_messages(messages, model="gpt-4-0314")
    elif model == "gpt-3.5-turbo-0301":
        tokens_per_message = 4  # every message follows <|start|>{role/name}\n{content}<|end|>\n
        tokens_per_name = -1  # if there's a name, the role is omitted
    elif model == "gpt-4-0314":
        tokens_per_message = 3
        tokens_per_name = 1
    else:
        raise NotImplementedError(f"""num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.""")
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
    return num_tokens
```
Our implementation should support both messages for chat completions models and simple strings for ordinary completions models (the tokenizer currently supports only simple strings).
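As a rough sketch of how the dispatch between the two input shapes could work (the function and parameter names here are illustrative, not AutoPR's actual API):

```python
from typing import Callable, List, Union


def count_tokens(
    prompt: Union[str, List[dict]],
    encode: Callable[[str], list],
    tokens_per_message: int = 3,  # per-message overhead, as for gpt-4-0314
    tokens_per_name: int = 1,
) -> int:
    """Count tokens for a plain prompt string or a list of chat messages."""
    if isinstance(prompt, str):
        # ordinary completions model: just encode the string
        return len(encode(prompt))
    # chat completions model: add per-message overhead, as in the cookbook snippet above
    num_tokens = 0
    for message in prompt:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
    return num_tokens
```

With tiktoken, `encode` would be something like `tiktoken.encoding_for_model("gpt-4").encode`; injecting it as a parameter keeps the counting logic testable without the tokenizer installed.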
I think AutoPR is not required to participate in debates, at least for the moment.
In any case, any feedback from users in issues should be summarized due to context limits. All points of view should be collected and weighed. If any questions arise, AutoPR should ask them in the pull request; later, it may participate in debates in issues, once it is autonomous enough to select the best issues to tackle first and clear up confusion in the process of understanding the task.
AutoPR encountered an error while trying to fix joglr/klods#29.
Traceback (most recent call last):
File "/app/autopr/agents/brain_agent/base.py", line 64, in generate_pr
self._generate_pr(event)
File "/app/autopr/agents/brain_agent/simple_v1.py", line 34, in _generate_pr
self.codegen_agent.generate_changes(
File "/app/autopr/agents/codegen_agent/base.py", line 54, in generate_changes
self._generate_changes(repo, issue, pr_desc, current_commit)
File "/app/autopr/agents/codegen_agent/autonomous_v1/agent.py", line 320, in _generate_changes
action = self.rail_service.run_prompt_rail(action_rail)
File "/app/autopr/services/rail_service.py", line 205, in run_prompt_rail
return self.run_rail_object(rail.output_type, prompt)
File "/app/autopr/services/rail_service.py", line 124, in run_rail_object
raw_o, dict_o = pr_guard(
File "/venv/lib/python3.9/site-packages/guardrails/guard.py", line 144, in __call__
guard_history = runner(prompt_params=prompt_params)
File "/venv/lib/python3.9/site-packages/guardrails/run.py", line 89, in __call__
validated_output, reasks = self.step(
File "/venv/lib/python3.9/site-packages/guardrails/run.py", line 145, in step
validated_output = self.validate(index, output_as_dict, output_schema)
File "/venv/lib/python3.9/site-packages/guardrails/run.py", line 262, in validate
validated_output = output_schema.validate(output_as_dict)
File "/venv/lib/python3.9/site-packages/guardrails/schema.py", line 328, in validate
validated_response = self[field].validate(
File "/venv/lib/python3.9/site-packages/guardrails/datatypes.py", line 344, in validate
selected_value = schema[selected_key]
KeyError: 'edit_file'
Followup thought from #36:
I haven't looked into whether this is already a configuration option, but it would be useful if the user could specify different models for different components of the system. Concretely, gpt-3.5-turbo would probably be sufficient for the validation steps. If the action could be configured so that the more expensive API (gpt-4) is used only for the main generations, and a cheaper API for steps like validation, that would have a lot of benefits: cheaper, faster, and less likely to get rate limited.
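A hedged sketch of what such a setting split might look like; the field and function names below are illustrative, not taken from AutoPR's actual GitHubActionSettings:

```python
from dataclasses import dataclass


@dataclass
class ModelSettings:
    # hypothetical per-component model configuration
    generation_model: str = "gpt-4"          # main code/PR generation
    validation_model: str = "gpt-3.5-turbo"  # guardrails validation / re-asks


def model_for(component: str, settings: ModelSettings) -> str:
    # route validation-style components to the cheaper model
    if component in ("validation", "reask"):
        return settings.validation_model
    return settings.generation_model
```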
Originally requested in irgolic/AutoPR-template#7
I made an effort to write AutoPR pretty orthogonally to Github. To support gitlab, the following changes need to be made:
- The entrypoints (currently gh_actions_entrypoint.py) should probably be placed in their own autopr/entrypoints/ directory.
- A PublishService subclass should be added in autopr/services/publish_service.py, along with an EventService subclass.
- main.py is hardcoded for github, in that it instantiates a GithubPublishService and GithubEventService. It also takes a github_token environment variable.

I don't personally have a use for this right now, but the changes should be pretty straightforward if anyone using gitlab wants to pick it up. That said, I'm glossing over the actual gitlab <=> entrypoint integration, and assuming that gitlab-ci provides an event object similar to github actions.
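A very rough sketch of what the gitlab PublishService subclass could look like. The method names and wiring below are assumptions for illustration only (GitLab's create-merge-request endpoint and PRIVATE-TOKEN header are real, but everything else should be checked against the actual publish_service.py base class):

```python
import json
import urllib.request


class GitlabPublishService:
    """Hypothetical GitLab counterpart to GithubPublishService.

    The base PublishService interface is assumed here, not taken from the real code.
    """

    API_BASE = "https://gitlab.com/api/v4"

    def __init__(self, project_id: str, token: str):
        self.project_id = project_id
        self.token = token

    def _mr_payload(self, title: str, body: str, source: str, target: str) -> dict:
        # fields accepted by GitLab's "create merge request" endpoint
        return {
            "title": title,
            "description": body,
            "source_branch": source,
            "target_branch": target,
        }

    def create_pr(self, title: str, body: str, source: str, target: str) -> None:
        url = f"{self.API_BASE}/projects/{self.project_id}/merge_requests"
        req = urllib.request.Request(
            url,
            data=json.dumps(self._mr_payload(title, body, source, target)).encode(),
            headers={"PRIVATE-TOKEN": self.token, "Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)  # error handling omitted in this sketch
```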
For example: a repository file structure change, a public API change, a change within a file, and so on.
That would require #31
AutoPR encountered an error while trying to fix Konard/AutoPR-tests#5.
Traceback (most recent call last):
File "/app/autopr/agents/brain_agent/base.py", line 75, in generate_pr
self._generate_pr(event)
File "/app/autopr/agents/brain_agent/simple_v1.py", line 42, in _generate_pr
self.commit_service.commit(current_commit, push=True)
File "/app/autopr/services/commit_service.py", line 78, in commit
self.repo.git.execute(["git", "push", "-f", "origin", self.branch_name])
File "/venv/lib/python3.9/site-packages/git/cmd.py", line 1109, in execute
raise GitCommandError(redacted_command, status, stderr_value, stdout_value)
git.exc.GitCommandError: Cmd('git') failed due to: exit code(1)
cmdline: git push -f origin autopr/5
stderr: 'To https://github.com/Konard/AutoPR-tests
! [remote rejected] autopr/5 -> autopr/5 (refusing to allow a GitHub App to create or update workflow `.github/workflows/run_hello_world.yml` without `workflows` permission)
error: failed to push some refs to 'https://github.com/Konard/AutoPR-tests''
In such a situation, it will now first communicate through the API and then complete successfully, but it will still show an error message in the log.
AutoPR encountered an error while trying to fix irgolic/AutoPR-template#14.
Traceback (most recent call last):
File "/app/autopr/agents/brain_agent/base.py", line 58, in generate_pr
self._generate_pr(event)
File "/app/autopr/agents/brain_agent/simple_v1.py", line 27, in _generate_pr
self.codegen_agent.generate_changes(
File "/app/autopr/agents/codegen_agent/base.py", line 47, in generate_changes
self._generate_changes(repo, issue, pr_desc, current_commit)
File "/app/autopr/agents/codegen_agent/autonomous_v1/agent.py", line 286, in _generate_changes
context = self._make_context(repo, current_commit)
File "/app/autopr/agents/codegen_agent/autonomous_v1/agent.py", line 88, in _make_context
lines = self._get_lines(
File "/app/autopr/agents/codegen_agent/autonomous_v1/agent.py", line 59, in _get_lines
with open(path, 'r') as f:
IsADirectoryError: [Errno 21] Is a directory: '/github/workspace/weather-app/'
AutoPR encountered an error while trying to fix bluewin4/angleNdevil#2.
This would allow AutoPR to become fully autonomous in the role of an architect or senior developer. By the way, AutoPR should prove users wrong with code, and test hypotheses if required.
It would be much easier for the AI to solve bugs or add new features if it always added tests before attempting a solution.
Test approval is mandatory and should be implemented as part of #31.
Without a strict definition of the goal, no goal is achievable, and only tests can state clearly what the goal is.
Tests could also be used to test hypotheses when there are many options to choose from. Related to #93.
Some thoughts on what an improved pull request planning agent might look like, and how to implement it.
There should be another repo called EmbeddingsRepo, which stores embedded texts and searches among them. It should allow multiple implementations under the same interface.
The files in the repo should be summarized, and the summaries embedded and cached by the EmbeddingsRepo. This should be triggerable either:
- under brain_agent control
Then, an autonomous version of the pull_request_agent should be implemented; its action set should be:
Hey whattup, I tried using this on a new Ruby on Rails project, just for play, and the workflow failed when invoking the action with: Poetry could not find a pyproject.toml file in /github/workspace or its parents. Does this action only intend to support Python projects?
AutoPR encountered an error while trying to fix deep-foundation/deepclient#13.
Traceback (most recent call last):
File "/app/autopr/agents/brain_agent/base.py", line 75, in generate_pr
self._generate_pr(event)
File "/app/autopr/agents/brain_agent/simple_v1.py", line 34, in _generate_pr
self.codegen_agent.generate_changes(
File "/app/autopr/agents/codegen_agent/base.py", line 54, in generate_changes
self._generate_changes(repo, issue, pr_desc, current_commit)
File "/app/autopr/agents/codegen_agent/autonomous_v1/agent.py", line 320, in _generate_changes
action = self.rail_service.run_prompt_rail(action_rail)
File "/app/autopr/services/rail_service.py", line 202, in run_prompt_rail
return self.run_rail_object(rail.output_type, prompt)
File "/app/autopr/services/rail_service.py", line 124, in run_rail_object
raw_o, dict_o = pr_guard(
File "/venv/lib/python3.9/site-packages/guardrails/guard.py", line 144, in __call__
guard_history = runner(prompt_params=prompt_params)
File "/venv/lib/python3.9/site-packages/guardrails/run.py", line 89, in __call__
validated_output, reasks = self.step(
File "/venv/lib/python3.9/site-packages/guardrails/run.py", line 145, in step
validated_output = self.validate(index, output_as_dict, output_schema)
File "/venv/lib/python3.9/site-packages/guardrails/run.py", line 262, in validate
validated_output = output_schema.validate(output_as_dict)
File "/venv/lib/python3.9/site-packages/guardrails/schema.py", line 328, in validate
validated_response = self[field].validate(
File "/venv/lib/python3.9/site-packages/guardrails/datatypes.py", line 344, in validate
selected_value = schema[selected_key]
KeyError: 'edit_file'
It is crucial to have a planning step for changes to the file structure, and by default it should require approval by the user.
Before we even begin to make changes to files that are bigger than half the context (32k tokens / 2), we should be able to plan changes to each file's public API, again with mandatory user approval by default.
Today, with so many limitations, it is almost impossible to make it autonomous.
Every change or plan can be refined by GPT-4 many times by prompting it ("Can you do better?", "Can you improve your plan?", and so on).
On the other hand, the user should be in control of GPT-4 API spending, and the number of such second thoughts should be configurable.
Anyway, it will be completely impossible without #31 and #85
Related to #94
It also should not begin solving an issue if the issue itself uses too many tokens, even after summarization. AutoPR should always resummarize the issue context between steps and keep track of how many tokens are left.
Some context may be temporarily offloaded to pull request comments if it is not required for the next step.
This may also require making one change at a time; once a change is done, it can be removed from the context.
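A minimal sketch of the resummarize-until-it-fits idea; the function and its collaborators (count_tokens, summarize) are illustrative placeholders, not AutoPR's actual services:

```python
from typing import Callable


def fit_context(
    context: str,
    budget: int,
    count_tokens: Callable[[str], int],
    summarize: Callable[[str], str],  # e.g. an LLM summarization call
    max_passes: int = 3,
) -> str:
    """Resummarize the context until it fits within the remaining token budget."""
    for _ in range(max_passes):
        if count_tokens(context) <= budget:
            return context
        context = summarize(context)
    if count_tokens(context) > budget:
        raise ValueError("context does not fit the token budget after summarization")
    return context
```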
So far we've only used GPT-4 and GPT-3.5, the next step is to try it on models that are locally hosted.
I'm not sure exactly how to go about this; as this is a Github Action, does Github have GPUs in their runners? How do we properly write it to work with custom runners? Could we rent GPUs on something like vast.ai? Are there any grants available for free computational resources to run AutoPR on?
I'd love to run a custom Github runner with my own GPU, and run tests with it.
Essentially, these two methods need to use a completion_func decoupled from OpenAI's functions:
https://github.com/irgolic/AutoPR/blob/main/autopr/services/rail_service.py#L53-L125
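One possible shape for such a completion_func, sketched as a Protocol; this signature is an assumption about what rail_service needs, not the actual interface:

```python
from typing import Protocol


class CompletionFunc(Protocol):
    """Anything callable with this signature could back rail_service's two methods."""

    def __call__(self, prompt: str, max_tokens: int, temperature: float) -> str: ...


def run_rail_with(completion_func: CompletionFunc, prompt: str) -> str:
    # rail_service would call the injected function instead of OpenAI directly,
    # so locally hosted models only need to provide a conforming callable
    return completion_func(prompt, max_tokens=2000, temperature=0.8)
```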
AutoPR encountered an error while trying to fix (redacted repo/issue).
Traceback (most recent call last):
File "/app/autopr/agents/brain_agent/base.py", line 75, in generate_pr
self._generate_pr(event)
File "/app/autopr/agents/brain_agent/simple_v1.py", line 27, in _generate_pr
pr_desc = self.pull_request_agent.plan_pull_request(self.repo, issue, event)
File "/app/autopr/agents/pull_request_agent/base.py", line 50, in plan_pull_request
pull_request = self._plan_pull_request(repo, issue, event)
File "/app/autopr/agents/pull_request_agent/rail_v1.py", line 392, in _plan_pull_request
notes = self.write_notes_about_files(files, issue, filepaths)
File "/app/autopr/agents/pull_request_agent/rail_v1.py", line 317, in write_notes_about_files
raise ValueError('Error looking at files')
ValueError: Error looking at files
The issue was something like:
in path/to/file.tsx, update the "messages" and "files" Buttons to be IconButtons. also add aria-label properties to them.
The action's log seemed to output a ton of activity, but the generated PR reported that it failed with the message above before the action had even finished. Not sure why it failed.
At the moment, code is generated autonomously by auto-v1 (with subsequent NewFile and EditFile actions). The EditFile action edits file hunks, not whole files. It is shown a code hunk that it needs to edit, like this:
```
 9 |
10 | </div>
11 *
12 * # 🚀 Usage
13 *
14 | Warning: This Github Action is currently **in development**, and in **alpha release**.
15 | If you're interested in using this action, please reach out on [Discord](https://discord.gg/vz7p9TfHsh).
```
And most of the time, it's pretty good at returning only the *-highlighted lines, but there are some distinct improvements that can be made.
It's asked to respond in this format:
```
<string>
```
{
"outcome": string # A description of the outcome of the attempt to rewrite the file hunk according to the problem statement.
}
The outcome is used to gauge the effect of the action and is fed back into the autonomous agent.
Each of these could be made into a separate issue, but for now, I'm listing them as potential TODO items:
- The model sometimes includes the line-number prefixes in its output (e.g., `9 | `). These should be removed in a post-processing step, if each line begins with a line number and a pipe/star character. Make sure to lstrip the line.
- gpt-3.5-turbo struggles with generating both the code hunk and the JSON in one go. An alternative method, asking it to generate the code and reflect on the outcome in two separate calls/questions, should be implemented. For GPT-4 it still makes sense to do it in one go to conserve tokens, so the auto-v1 codegen agent should expose a config parameter to choose whether to do it in one go or two (add a kwarg to __init__, and it'll be passed via codegen_agent_config).
- In autopr.utils.repo.repo_to_file_descriptors, make it read a file named .gptignore in the root of the repo, and filter according to that. Allow for star syntax, like in .gitignore. Also, make a new file named .gptignore, containing *.lock.
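The first TODO item (stripping leaked line-number prefixes) could be sketched as a post-processing step like this; the helper name is illustrative:

```python
import re

# a line number, optional whitespace, a pipe or star, and one optional space
_PREFIX = re.compile(r"^\s*\d+\s*[|*]\s?")


def strip_line_prefixes(hunk: str) -> str:
    """Remove leaked 'N | ' / 'N * ' prefixes, only if every line carries one."""
    lines = hunk.splitlines()
    if not all(_PREFIX.match(line) for line in lines if line.strip()):
        # not every non-empty line is prefixed, so leave the hunk untouched
        return hunk
    return "\n".join(
        _PREFIX.sub("", line) if line.strip() else line for line in lines
    )
```

The `^\s*` in the pattern takes care of lstripping the line before the number is matched.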
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/github/workspace/autopr/gh_actions_entrypoint.py", line 18, in <module>
main(
File "/github/workspace/autopr/main.py", line 46, in main
pr = generator.generate_pr(tree, issue_title, issue_body, issue_number)
File "/github/workspace/autopr/services/generation_service.py", line 238, in generate_pr
pr_model = self._generate_pr(codebase, issue_text)
File "/github/workspace/autopr/services/generation_service.py", line 181, in _generate_pr
raw_o, dict_o = pr_guard(
File "/github/home/.cache/pypoetry/virtualenvs/autopr-cDQ_M3T8-py3.9/lib/python3.9/site-packages/guardrails/guardrails.py", line 101, in __call__
return self.ask_with_validation(prompt, llm_ask, num_reasks)
File "/github/home/.cache/pypoetry/virtualenvs/autopr-cDQ_M3T8-py3.9/lib/python3.9/site-packages/guardrails/guardrails.py", line 108, in ask_with_validation
guard_history = self.validation_inner_loop(
File "/github/home/.cache/pypoetry/virtualenvs/autopr-cDQ_M3T8-py3.9/lib/python3.9/site-packages/guardrails/guardrails.py", line 169, in validation_inner_loop
validated_response, reasks = self.validate_output(
File "/github/home/.cache/pypoetry/virtualenvs/autopr-cDQ_M3T8-py3.9/lib/python3.9/site-packages/guardrails/guardrails.py", line 239, in validate_output
validated_response = schema[field].validate(
File "/github/home/.cache/pypoetry/virtualenvs/autopr-cDQ_M3T8-py3.9/lib/python3.9/site-packages/guardrails/datatypes.py", line 299, in validate
value = child_data_type.validate(
File "/github/home/.cache/pypoetry/virtualenvs/autopr-cDQ_M3T8-py3.9/lib/python3.9/site-packages/guardrails/datatypes.py", line 259, in validate
value = item_type.validate(i, item, value)
File "/github/home/.cache/pypoetry/virtualenvs/autopr-cDQ_M3T8-py3.9/lib/python3.9/site-packages/guardrails/datatypes.py", line 299, in validate
value = child_data_type.validate(
File "/github/home/.cache/pypoetry/virtualenvs/autopr-cDQ_M3T8-py3.9/lib/python3.9/site-packages/guardrails/datatypes.py", line 126, in validate
schema = validator.validate_with_correction(key, value, schema)
File "/github/home/.cache/pypoetry/virtualenvs/autopr-cDQ_M3T8-py3.9/lib/python3.9/site-packages/guardrails/validators.py", line 196, in validate_with_correction
return self.on_fail(e)
File "/github/workspace/autopr/validators.py", line 239, in fix
lines = remove_hallucinated_lines(lines, tree)
File "/github/workspace/autopr/validators.py", line 108, in remove_hallucinated_lines
check_file_line = current_file_content[check_line_number]
IndexError: list index out of range
For example: look there and make an important note of what needs to be kept in mind, insert the code exactly in this place, and so on.
Impossible without #31
You don't need to look at any of the files in the repo, just add a new one called LICENSE.md
the rocket emoji is a fun idea, but I think a simpler and more user-friendly approach might be to use github's labeling system.
strongly suggest you consider an alternative triggering mechanism
Support a new input in gh_actions_entrypoint.py for excluded folders and files.
Add a "Closes #{num}" footer to the pull request's body, to properly reference it as a solution for the issue, and close it upon merging.
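As a trivial sketch (the helper name is made up), the footer could be appended wherever the PR body is assembled:

```python
def with_closes_footer(body: str, issue_number: int) -> str:
    # GitHub auto-closes the referenced issue when a PR containing this
    # keyword in its body is merged into the default branch
    return f"{body}\n\nCloses #{issue_number}"
```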
There may be repository-level configuration, with a range or selection of actions to perform automatically.
There may also be issue-level configuration, defaulting to the repository's settings.
The repository defaults should allow zero configuration for users: do autonomously only the things it is good at.
Create a second github workflow triggered on the workflow_dispatch event. Configure this event to accept inputs specifying the issue from which to take input, so that contributors to this repo can trigger the workflow to run on issues created by non-contributors. Refactor the existing create-pr-from-issue workflow, and move functionality common between that workflow and this new workflow to another new callable workflow that they can both call.
- action.yml. To see an example, check the workflow file in this repo. Although we're still using edited on issues for testing, don't mention that.
- generation_service.py and validators.py (see these for examples).
Don't just reference where to look for the information; present it in a nice and friendly way for newcomers to the repo.
I'm currently running tests manually; I'd like them to run automatically in their own github workflow.
All the tests are currently located in the autopr/tests/ directory, and are meant to be run with pytest.
Please add a github workflow to run them with pytest. You don't need to read the files; assume they each contain a single function whose name starts with test_.
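A minimal sketch of such a workflow, assuming the project's existing Poetry setup; names and action versions are illustrative:

```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.9"
      - name: Install dependencies
        run: |
          pip install poetry
          poetry install
      - name: Run tests
        run: poetry run pytest autopr/tests/
```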
https://github.com/L4legenda/test_auto_pr/actions/runs/4884170663/jobs/8716528416
Downloading (…)olve/main/vocab.json: 0%| | 0.00/1.04M [00:00<?, ?B/s]
Downloading (…)olve/main/vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 6.35MB/s]
Downloading (…)olve/main/vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 6.28MB/s]
Downloading (…)olve/main/merges.txt: 0%| | 0.00/456k [00:00<?, ?B/s]
Downloading (…)olve/main/merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 2.53MB/s]
Downloading (…)olve/main/merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 2.51MB/s]
Downloading (…)/main/tokenizer.json: 0%| | 0.00/1.36M [00:00<?, ?B/s]
Downloading (…)/main/tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 7.44MB/s]
Downloading (…)/main/tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 7.34MB/s]
Downloading (…)lve/main/config.json: 0%| | 0.00/665 [00:00<?, ?B/s]
Downloading (…)lve/main/config.json: 100%|██████████| 665/665 [00:00<00:00, 209kB/s]
Creating new branch autopr/1... service=commit
Removing guardrails.log... service=commit
Committed changes commit_message=[empty] diff= service=commit
Pushing branch autopr/1 to remote... service=commit
Traceback (most recent call last):
File "/usr/local/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/app/autopr/gh_actions_entrypoint.py", line 94, in <module>
main(
File "/app/autopr/main.py", line 135, in main
brain_agent.generate_pr(event)
File "/app/autopr/agents/brain_agent/base.py", line 60, in generate_pr
self.publish_service.update()
File "/app/autopr/services/publish_service.py", line 346, in update
self._publish(title, body)
File "/app/autopr/services/publish_service.py", line 445, in _publish
self._create_pr(title, body, success)
File "/app/autopr/services/publish_service.py", line 467, in _create_pr
if self._is_draft_error(response.text):
File "/app/autopr/services/publish_service.py", line 481, in _is_draft_error
response_obj = json.loads(response_text)
File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.9/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.9/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The image is rebuilt on every workflow run. If the docker image is cached, the dependency installation step will be cached too.
Make sure to rebuild it whenever appropriate (e.g., when dependencies change).
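One way to get that caching, sketched with Docker's official build actions and the GitHub Actions cache backend (the step placement within the workflow is illustrative):

```yaml
      - uses: docker/setup-buildx-action@v2
      - uses: docker/build-push-action@v4
        with:
          context: .
          push: false
          # reuse layers (including the dependency installation step) between runs;
          # a change to the dependency files invalidates the affected layers
          cache-from: type=gha
          cache-to: type=gha,mode=max
```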
In the "Check if issue or issue comment is created by collaborator or repo owner" step in create-pr-from-issue.yml, we should explicitly check for the repo owner, and allow issues coming from them to trigger the workflow.
In the README.md, I see that all issues will trigger the GitHub action. Can you add an example that makes it only trigger when the project owner makes an issue?
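A sketch of the kind of README example being asked for, gating the job on the issue author being the repository owner (the job body is a placeholder; the `if` expression uses real GitHub Actions contexts):

```yaml
on:
  issues:
    types: [opened]

jobs:
  autopr:
    # run only when the issue was opened by the repository owner
    if: github.event.issue.user.login == github.repository_owner
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
```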
AutoPR encountered an error while trying to fix deep-foundation/deepclient#19.
No traceback
Instead of making changes in existing files, AutoPR just created new files.
This is a maturing codebase which, unsurprisingly, doesn't have a lot of documentation yet. Since potential collaborators will need to examine the code to understand how things work, it would be helpful if the code were self-documented. Docstrings are an excellent way to achieve this. Additionally, building API documentation with Sphinx could then be incorporated into the CI/CD with github actions. It would also be nice if the README added a section demonstrating how to customize behavior, e.g. modifying prompts or configuring additional triggers.
By default, as few as possible; it costs money.
This will dramatically reduce the number of requests sent directly to GPT-4 for huge projects.
Or maybe it should also be guided by the user to look at specific places, as in #31.
Or maybe we should build a graph of the code with all implicit and explicit dependencies, including monkey patches and so on. And the moonshot idea: see how users use the code in other repositories.
Related to: #90
AutoPR encountered an error while trying to fix #73.
Traceback (most recent call last):
File "/github/workspace/autopr/agents/brain_agent/base.py", line 75, in generate_pr
self._generate_pr(event)
File "/github/workspace/autopr/agents/brain_agent/simple_v1.py", line 34, in _generate_pr
self.codegen_agent.generate_changes(
File "/github/workspace/autopr/agents/codegen_agent/base.py", line 54, in generate_changes
self._generate_changes(repo, issue, pr_desc, current_commit)
File "/github/workspace/autopr/agents/codegen_agent/autonomous_v1/agent.py", line 343, in _generate_changes
effect = self._edit_existing_file(repo, issue, pr_desc, current_commit, context, action_obj)
File "/github/workspace/autopr/agents/codegen_agent/autonomous_v1/agent.py", line 259, in _edit_existing_file
edit_file_hunk: Optional[GeneratedFileHunk] = self.chain_service.run_chain(edit_file_chain)
File "/github/workspace/autopr/services/chain_service.py", line 132, in run_chain
output = self._run_model(prompt_value)
File "/github/workspace/autopr/services/chain_service.py", line 119, in _run_model
return self.model(template.to_messages()).content
File "/venv/lib/python3.9/site-packages/langchain/chat_models/base.py", line 128, in __call__
return self._generate(messages, stop=stop).generations[0].message
File "/venv/lib/python3.9/site-packages/langchain/chat_models/openai.py", line 266, in _generate
response = self.completion_with_retry(messages=message_dicts, **params)
File "/venv/lib/python3.9/site-packages/langchain/chat_models/openai.py", line 228, in completion_with_retry
return _completion_with_retry(**kwargs)
File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/venv/lib/python3.9/site-packages/langchain/chat_models/openai.py", line 226, in _completion_with_retry
return self.client.create(**kwargs)
File "/venv/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/venv/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 620, in _interpret_response
self._interpret_response_line(
File "/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 683, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, you requested 9264 tokens (7264 in the messages, 2000 in the completion). Please reduce the length of the messages or completion.
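One way to avoid this class of failure is to check the prompt against the context window before calling the API and trim the oldest context first. A minimal sketch, not AutoPR's actual code; the 4-characters-per-token estimate is a crude stand-in for a real tokenizer such as tiktoken, and the function names are invented:

```python
# Sketch: keep a chat prompt within the model's context window by dropping
# the oldest non-system messages first. In practice you'd count tokens with
# a real tokenizer (e.g. tiktoken) instead of the rough len//4 estimate.

CONTEXT_LIMIT = 8192   # gpt-4's window at the time
MAX_TOKENS = 2000      # completion budget reserved for the reply

def estimate_tokens(text: str) -> int:
    # Very rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def fit_messages(messages: list[dict], context_limit: int = CONTEXT_LIMIT,
                 max_tokens: int = MAX_TOKENS) -> list[dict]:
    budget = context_limit - max_tokens
    kept = list(messages)
    # Drop the oldest non-system message until the prompt fits the budget.
    while sum(estimate_tokens(m["content"]) for m in kept) > budget and len(kept) > 1:
        for i, m in enumerate(kept):
            if m["role"] != "system":
                del kept[i]
                break
        else:
            break  # only system messages left; nothing more to drop
    return kept
```

With a check like this, the job would degrade gracefully (shorter context) instead of dying in a validation step after generation had already completed.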
I'm confident AutoPR can be optimized to work well with `gpt-3.5-turbo`. As it stands, the `rail-v1` pull request agent works fine with `gpt-3.5-turbo`. However, the codegen agent does not. `rail-v1` codegen may work, but suffers from the same problems that it does when used with `gpt-4`: it produces an unreliable diff with ambiguous errors. My hope is to use `auto-v1` codegen, which creates unambiguous changes. `gpt-3.5-turbo` is now reliable at selecting actions.

How `auto-v1` codegen works

Here is an example of a successful action selection call:
Prompt
``` You are about to make a decision on what to do next. This is the issue that was opened: ```#20 Create a dice rolling bot irgolic: In the `dice_rolling_bot` folder, develop a Discord bot that allows users to roll virtual dice by typing a command (e.g., "!roll 1d6"). The bot should support various dice types (e.g., d4, d6, d8, d10, d12, d20) and multiple dice rolls at once. Include a README.md file with information on the bot's features, setup instructions, and usage guidelines. ``` This is the pull request we're creating: ```Title: Feature/dice-rolling-bot: Add Dice Rolling Bot Functionality This PR resolves issue #20, adding a dice rolling bot to the `dice_rolling_bot` folder. This Discord bot allows users to roll virtual dice by typing a command, such as "!roll 1d6". The bot supports various dice types (d4, d6, d8, d10, d12, d20) and multiple dice rolls at once. A README.md file has been included with information on the bot's features, setup instructions, and usage guidelines. Commits: 1. Create dice_rolling_bot.py 2. Add README.md for dice rolling bot Please review the changes and let me know if any modifications are required. 1. Commit: Create dice_rolling_bot.py Files: dice_rolling_bot/dice_rolling_bot.py:L1-L23 Changes: - Created a new file `dice_rolling_bot.py` - Imported required libraries (random and discord) - Implemented the main functionality for the bot to roll dice based on user input 2. 
Commit: Add README.md for dice rolling bot Files: dice_rolling_bot/README.md:L1-L32 Changes: - Created a new file `README.md` - Added setup instructions for the bot - Provided usage guidelines and examples for the dice rolling bot ``` This is the commit we're writing: ```Create dice_rolling_bot.py - Created a new file `dice_rolling_bot.py` - Imported required libraries (random and discord) - Implemented the main functionality for the bot to roll dice based on user input``` This is the codebase subset we decided to look at: `````` These are the actions we've taken so far: `````` In our pursuit of implementing this commit, please make a decision on what to do next. If a new file should be created, respond with `new_file` and the file path and description. If one of the files in the codebase subset should be edited, respond with `edit_file` and the file path, description, and line numbers of the hunk to edit. When picking the line numbers, keep the hunk as short as possible while fulfilling the description. If you don't know which line numbers to pick, leave them blank. If you're done, respond with `finished` and the commit message describing the past actions. ``` Given below is XML that describes the information to extract from this document and the tags to extract it into. ```xml <output> <string name="action" choices="new_file,edit_file,finished"/> <object name="new_file" description="new_file" if="action==new_file"> <string name="filepath" description="Path to the newly created file." required="true"/> <string name="description" description="Description of the contents of the new file." required="true"/> </object> <object name="edit_file" description="edit_file" if="action==edit_file"> <string name="filepath" description="Path to the file to be edited." required="true"/> <string name="description" description="Description of the changes to be made to the file." 
required="true"/> <integer name="start_line" description="The line number of the first line of the hunk to be edited." format="positive" required="false" on-fail="noop"/> <integer name="end_line" description="The line number of the last line of the hunk to be edited. Keep the hunk as short as possible while fulfilling the description." format="positive" required="false" on-fail="noop"/> </object> <string name="finished" description="commit_message: A more appropriate commit message based on the actions taken." required="false" if="action==finished"/> </output> ``` ONLY return a valid JSON object (no other text is necessary), where the key of the field in JSON is the `name` attribute of the corresponding XML, and the value is of the type specified by the corresponding XML's tag. The JSON MUST conform to the XML format, including any types and format requests e.g. requests for lists, objects and specific types. Be correct and concise. Here are examples of simple (XML, JSON) pairs that show the expected behavior: - `<string name='foo' format='two-words lower-case' />` => `{'foo': 'example one'}` - `<list name='bar'><string format='upper-case' /></list>` => `{"bar": ['STRING ONE', 'STRING TWO', etc.]}` - `<object name='baz'><string name="foo" format="capitalize two-words" /><integer name="index" format="1-indexed" /></object>` => `{'baz': {'foo': 'Some String', 'index': 1}}`
Raw Response
{ "action": "new_file", "new_file": { "filepath": "dice_rolling_bot/dice_rolling_bot.py", "description": "Created a new file `dice_rolling_bot.py` with required libraries (random and discord) and implemented the main functionality for the bot to roll dice based on user input" } }
Parsed Response
{ "action": "new_file", "new_file": { "filepath": "dice_rolling_bot/dice_rolling_bot.py", "description": "Created a new file `dice_rolling_bot.py` with required libraries (random and discord) and implemented the main functionality for the bot to roll dice based on user input" }, "edit_file": null, "commit_message": null }
What's preventing gpt-3.5-turbo from working properly?
In the action selection stage, gpt-3.5-turbo commonly returns the action selection JSON in an incorrect format. Sometimes it wraps it in triple-backtick groups (now handled by guardrails-ai/guardrails#118). Commonly it flattens the dict, returning something akin to this:
Raw Response
{ "action": "new_file", "filepath": "dice_rolling_bot/dice_rolling_bot.py", "description": "Created a new file `dice_rolling_bot.py` with required libraries (random and discord) and implemented the main functionality for the bot to roll dice based on user input" }
In theory, this is an unambiguous error, and can be corrected for. See some discussion on this topic in guardrails-ai/guardrails#123. Ideally this fix is submitted through guardrails, but if that turns out to be too complicated, it's fine if AutoPR gets around it with a custom guardrails validator or parsing the raw string as a JSON on failure.
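A custom fallback parser on AutoPR's side could strip stray fences and re-nest a flattened action dict. A hypothetical sketch, with the schema keys mirroring the example above; none of this is existing AutoPR or guardrails code:

```python
import json
import re

def parse_action_response(raw: str) -> dict:
    """Best-effort recovery of an action-selection JSON from a raw LLM reply."""
    # Strip triple-backtick fences the model sometimes wraps the JSON in.
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    data = json.loads(cleaned)
    action = data.get("action")
    # Re-nest a flattened response: {"action": "new_file", "filepath": ...}
    # becomes {"action": "new_file", "new_file": {"filepath": ...}}.
    if action in ("new_file", "edit_file") and action not in data:
        fields = {k: v for k, v in data.items() if k != "action"}
        data = {"action": action, action: fields}
    return data
```

Because the flattening is unambiguous given the schema, this recovery loses no information, and a re-ask can remain the fallback for genuinely malformed replies.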
Following a conversation in a Discord thread, perhaps a better solution is to abstract out an "action selection engine" and provide different implementations for different models:
- JSON with guardrails (as `auto-v1` does now)
- Discord command syntax (e.g., `!newfile <filename> <description>`)
- Pythonic syntax (e.g., `def new_file(filename: str, description: str)`)
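One hypothetical shape for that abstraction, with all names invented for illustration: each engine renders the action instructions for the prompt and parses the model's reply back into a common structure.

```python
# Sketch of an "action selection engine" abstraction. Each engine knows how
# to describe the available actions in the prompt and how to parse the reply.
from dataclasses import dataclass, field
from typing import Protocol
import json
import re

@dataclass
class Action:
    name: str                      # e.g. "new_file", "edit_file", "finished"
    args: dict = field(default_factory=dict)

class ActionSelectionEngine(Protocol):
    def render_instructions(self, actions: list[str]) -> str: ...
    def parse(self, raw: str) -> Action: ...

class JsonEngine:
    """JSON-with-guardrails style, as auto-v1 does now."""
    def render_instructions(self, actions):
        return f'Respond with JSON: {{"action": one of {actions}, ...}}'
    def parse(self, raw):
        data = json.loads(raw)
        return Action(data["action"], data.get(data["action"]) or {})

class CommandEngine:
    """Discord-command style, e.g. `!new_file <filepath> <description>`."""
    def render_instructions(self, actions):
        return "Respond with one command: " + ", ".join(f"!{a} <args>" for a in actions)
    def parse(self, raw):
        # Assumes a well-formed reply; a real implementation would re-ask on failure.
        match = re.match(r"!(\w+)\s*(.*)", raw.strip())
        return Action(match.group(1), {"args": match.group(2)})
```

The point of the interface is that swapping engines per model (command syntax for `gpt-3.5-turbo`, JSON for `gpt-4`) would not touch the rest of the codegen agent.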
Here's an example of a successful `auto-v1` new file generation:
Prompt
Human: Hey, we've got a new file to create. This is the issue that was opened: ``` #20 Create a dice rolling bot irgolic: In the `dice_rolling_bot` folder, develop a Discord bot that allows users to roll virtual dice by typing a command (e.g., "!roll 1d6"). The bot should support various dice types (e.g., d4, d6, d8, d10, d12, d20) and multiple dice rolls at once. Include a README.md file with information on the bot's features, setup instructions, and usage guidelines. ``` This is the pull request we're creating: ``` Title: Feature/dice-rolling-bot: Add Dice Rolling Bot Functionality This PR resolves issue #20, adding a dice rolling bot to the `dice_rolling_bot` folder. This Discord bot allows users to roll virtual dice by typing a command, such as "!roll 1d6". The bot supports various dice types (d4, d6, d8, d10, d12, d20) and multiple dice rolls at once. A README.md file has been included with information on the bot's features, setup instructions, and usage guidelines. Commits: 1. Create dice_rolling_bot.py 2. Add README.md for dice rolling bot Please review the changes and let me know if any modifications are required. 1. Commit: Create dice_rolling_bot.py Files: dice_rolling_bot/dice_rolling_bot.py:L1-L23 Changes: - Created a new file `dice_rolling_bot.py` - Imported required libraries (random and discord) - Implemented the main functionality for the bot to roll dice based on user input 2. 
Commit: Add README.md for dice rolling bot Files: dice_rolling_bot/README.md:L1-L32 Changes: - Created a new file `README.md` - Added setup instructions for the bot - Provided usage guidelines and examples for the dice rolling bot ``` This is the commit we're writing: ``` Create dice_rolling_bot.py - Created a new file `dice_rolling_bot.py` - Imported required libraries (random and discord) - Implemented the main functionality for the bot to roll dice based on user input ``` This is the codebase subset we decided to look at: ``` ``` This is the plan for the file we're creating: ``` Created a new file `dice_rolling_bot.py` with required libraries (random and discord) and implemented the main functionality for the bot to roll dice based on user input ``` Please send me the contents of the file. RESPONSE FORMAT INSTRUCTIONS ---------------------------- When responding to me, please use the following format. Make sure you return both the code enclosed in backticks and the JSON immediately after. ``` <string> ``` { "outcome": string # A description of the outcome of the attempt to rewrite the file hunk according to the problem statement. }
Raw Response
```python
import random
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.typing = False
intents.presences = False

bot = commands.Bot(command_prefix="!", intents=intents)

@bot.event
async def on_ready():
    print(f"{bot.user} has connected to Discord!")

@bot.command(name="roll", help="Rolls a die. Format: !roll <num_of_dice>d<sides_of_die>")
async def roll_dice(ctx, dice: str):
    num_of_dice, sides_of_die = map(int, dice.split("d"))
    rolls = [random.randint(1, sides_of_die) for _ in range(num_of_dice)]
    await ctx.send(', '.join(map(str, rolls)))

bot.run("your_bot_token_here")
```
{ "outcome": "Created dice_rolling_bot.py with the necessary imports and implemented the main functionality for the bot to roll dice based on user input." }
Result
{ "contents": "import random\nimport discord\nfrom discord.ext import commands\n\nintents = discord.Intents.default()\nintents.typing = False\nintents.presences = False\n\nbot = commands.Bot(command_prefix=\"!\", intents=intents)\n\[email protected]\nasync def on_ready():\n print(f\"{bot.user} has connected to Discord!\")\n\[email protected](name=\"roll\", help=\"Rolls a die. Format: !roll <num_of_dice>d<sides_of_die>\")\nasync def roll_dice(ctx, dice: str):\n num_of_dice, sides_of_die = map(int, dice.split(\"d\"))\n rolls = [random.randint(1, sides_of_die) for _ in range(num_of_dice)]\n await ctx.send(', '.join(map(str, rolls)))\n\nbot.run(\"your_bot_token_here\")", "outcome": "Created dice_rolling_bot.py with the necessary imports and implemented the main functionality for the bot to roll dice based on user input." }
Notice that the LLM reflects on its change in the "outcome" JSON key. This is fed back into subsequent actions to inform what's been done so far.
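Under the hood, that feedback loop can be as simple as appending each outcome to a running log that gets interpolated into the next prompt's "actions we've taken so far" section. A sketch, with a hypothetical function name:

```python
FENCE = "`" * 3  # avoid literal triple-backticks inside this example

def render_past_actions(outcomes: list[str]) -> str:
    """Format prior action outcomes for the 'actions we've taken so far' slot."""
    if not outcomes:
        # Empty block, matching the `` `````` `` seen in the example prompts.
        return FENCE + FENCE
    body = "\n".join(f"- {o}" for o in outcomes)
    return f"{FENCE}\n{body}\n{FENCE}"
```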
Here's an example of a successful file edit:
Prompt
Human: Hey, we've got a new code hunk to diff. This is the issue that was opened: ``` #29 Create a movie recommendation app irgolic: In the `movie_recommendation` folder, develop a web app that provides personalized movie recommendations based on user preferences, such as genre, actors, and directors. The app should display movie posters, summaries, and trailers. Include a README.md file with an explanation of the app's features and installation instructions. ``` This is the pull request we're creating: ``` Title: #42 Movie Recommendation Web App Closes #29 This pull request implements a movie recommendation web app that provides personalized movie recommendations based on user preferences, such as genre, actors, and directors. The app displays movie posters, summaries, and trailers. A README.md file with the app's features and installation instructions is also included. Please review the changes and let me know if any modifications are needed. 1. Commit: Added initial app files - movie_recommendation/app.py, movie_recommendation/templates/base.html Files: movie_recommendation/app.py, movie_recommendation/templates/base.html Changes: * movie_recommendation/app.py - Add the initial flask app file and import required libraries. * movie_recommendation/templates/base.html - Create the base html file for the web app. 2. Commit: Implemented movie recommendation logic - movie_recommendation/movie_recommender.py Files: movie_recommendation/movie_recommender.py Changes: * movie_recommendation/movie_recommender.py - Add the logic for generating personalized movie recommendations based on user preferences. 3. Commit: Created movie recommendation form - movie_recommendation/templates/recommendation_form.html Files: movie_recommendation/templates/recommendation_form.html Changes: * movie_recommendation/templates/recommendation_form.html - Add a form to collect user preferences for movie recommendations. 4. 
Commit: Display recommended movies - movie_recommendation/templates/recommended_movies.html Files: movie_recommendation/templates/recommended_movies.html Changes: * movie_recommendation/templates/recommended_movies.html - Add a template to display the recommended movies with posters, summaries, and trailers. 5. Commit: Add routes for recommendation form and results - movie_recommendation/app.py Files: movie_recommendation/app.py Changes: * movie_recommendation/app.py - Add routes for the recommendation form and displaying the results. 6. Commit: Include README.md with app features and installation instructions - README.md Files: README.md Changes: * README.md - Add a detailed explanation of the app's features and installation instructions. 7. Commit: Refactor movie recommendation logic - movie_recommendation/movie_recommender.py Files: movie_recommendation/movie_recommender.py Changes: * movie_recommendation/movie_recommender.py - Refactor the movie recommendation logic to improve performance and readability. ``` This is the commit we're writing: ``` Add routes for recommendation form and results - movie_recommendation/app.py * movie_recommendation/app.py - Add routes for the recommendation form and displaying the results. ``` This is the codebase subset we decided to look at: ``` >>> File: movie_recommendation/app.py 1 | from flask import Flask, render_template, request, redirect, url_for 2 | import movie_recommendation.movie_recommender as mr 3 | 4 | app = Flask(__name__) 5 | 6 | @app.route('/') 7 | def home(): 8 | return render_template('base.html') 9 | 10 | if __name__ == "__main__": 11 | app.run(debug=True) ``` This is the hunk we're rewriting: ``` 3 | 4 | app = Flask(__name__) 5 | 6 * @app.route('/') 7 * def home(): 8 * return render_template('base.html') 9 | 10 | if __name__ == "__main__": 11 | app.run(debug=True) ``` This is the plan for how we want to rewrite the hunk: ``` Add routes for the recommendation form and displaying the results. 
``` Please rewrite the hunk to match the plan, but do not include any lines prefixed with | in the result. RULES: - ONLY rewrite the lines prefixed with *, - submit only the lines without the * prefix, - do not preserve the relative leading indentation of the lines (start the hunk's indentation at 0). RESPONSE FORMAT INSTRUCTIONS ---------------------------- When responding to me, please use the following format. Make sure you return both the code enclosed in backticks and the JSON immediately after. ``` <string> ``` { "outcome": string # A description of the outcome of the attempt to rewrite the file hunk according to the problem statement. }
Raw Response
```
@app.route('/')
def home():
    return render_template('recommendation_form.html')

@app.route('/recommend', methods=['POST'])
def recommend():
    user_preferences = request.form
    recommended_movies = mr.get_recommendations(user_preferences)
    return render_template('recommended_movies.html', movies=recommended_movies)
```
{ "outcome": "Hunk rewritten with the addition of routes for the recommendation form and displaying the results." }
Result
{ "contents": "@app.route('/')\ndef home():\n return render_template('recommendation_form.html')\n\[email protected]('/recommend', methods=['POST'])\ndef recommend():\n user_preferences = request.form\n recommended_movies = mr.get_recommendations(user_preferences)\n return render_template('recommended_movies.html', movies=recommended_movies)", "outcome": "Hunk rewritten with the addition of routes for the recommendation form and displaying the results." }
Notice how the hunk to be rewritten is prefixed with `*`, with some surrounding lines of context prefixed with `|`.
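That prefixed view can be produced mechanically from the file's lines and the hunk's line range. A sketch of what such a renderer might look like, not AutoPR's actual implementation:

```python
def render_hunk(lines: list[str], start: int, end: int, context: int = 2) -> str:
    """Render 1-indexed lines [start, end] with '*' markers and '|' context lines."""
    lo = max(1, start - context)
    hi = min(len(lines), end + context)
    out = []
    for n in range(lo, hi + 1):
        marker = "*" if start <= n <= end else "|"
        out.append(f"{n} {marker} {lines[n - 1]}")
    return "\n".join(out)
```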
It's really bad at generating something in the format of:
```
<code>
```
{
<metadata>
}
It commonly drops the closing backticks of the code, leaves the `<code>` empty, or repeats the description of the content instead of writing actual code. Omitting the trailing metadata would probably improve performance a lot, as would adding some relevant few-shot examples.
Also, it's really bad at following the instruction to rewrite only the code prefixed by `*`. It will often return the full file or include the context lines.
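A possible mitigation is to tolerate that failure mode rather than fight it: if the reply echoes the context lines verbatim, strip them off before applying the edit. A hedged sketch under that assumption:

```python
def strip_context(reply_lines: list[str], before: list[str], after: list[str]) -> list[str]:
    """Drop leading/trailing context lines the model echoed back verbatim."""
    out = list(reply_lines)
    # Strip the leading context if the reply starts with it.
    if before and out[:len(before)] == before:
        out = out[len(before):]
    # Strip the trailing context if the reply ends with it.
    if after and out[-len(after):] == after:
        out = out[:-len(after)]
    return out
```

This only helps when the echo is exact; a fuzzier match (or falling back to whole-file rewrites, as suggested below) would be needed for paraphrased context.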
An example exhibiting both these qualities:
Prompt
Human: Hey, we've got a new code hunk to diff. This is the issue that was opened: ``` #31 Design a virtual art gallery irgolic: In the `virtual_art_gallery` folder, create a web-based virtual art gallery that showcases a variety of artwork, such as paintings, sculptures, and photography. Allow users to browse through the gallery and view detailed information about each piece. Add a README.md file explaining the gallery's purpose, features, and how to set it up. [irgolic/AutoPR#65](https://github.com/irgolic/AutoPR/issues/65) ``` This is the pull request we're creating: ``` Title: Design a virtual art gallery - Issue #31 - Added a new virtual art gallery that showcases various artwork - Created an index.html file in the `virtual_art_gallery` folder to display the gallery and implemented CSS styles - Added `artwork.json` file in the `virtual_art_gallery` folder to store artwork data - Implemented artwork data parsing and rendering with `app.js` in the `virtual_art_gallery` folder - Added a README.md file to explain the gallery's purpose, features, and how to set it up 1. Commit: Create virtual_art_gallery folder and index.html file Files: virtual_art_gallery/index.html Changes: - Added a new virtual art gallery that showcases various artwork 2. Commit: Add CSS styles to index.html Files: virtual_art_gallery/index.html, virtual_art_gallery/style.css Changes: - Created an index.html file in the `virtual_art_gallery` folder to display the gallery and implemented CSS styles 3. Commit: Create artwork data file Files: virtual_art_gallery/artwork.json Changes: - Added `artwork.json` file in the `virtual_art_gallery` folder to store artwork data 4. Commit: Implement artwork data parsing and rendering Files: virtual_art_gallery/index.html, virtual_art_gallery/app.js Changes: - Implemented artwork data parsing and rendering with `app.js` in the `virtual_art_gallery` folder 5. 
Commit: Add README.md Files: README.md Changes: - Added a README.md file to explain the gallery's purpose, features, and how to set it up ``` This is the commit we're writing: ``` Create virtual_art_gallery folder and index.html file - Added a new virtual art gallery that showcases various artwork ``` This is the codebase subset we decided to look at: ``` >>> File: virtual_art_gallery/index.html 1 | <section id="virtual-gallery"> 2 | <h2>New Artwork</h2> 3 | <section id="virtual-gallery"> 4 | <h2>New Artwork</h2> 5 | <img src="path/to/new/artwork"> 6 | </section> 7 | </section> ``` This is the hunk we're rewriting: ``` 1 | <section id="virtual-gallery"> 2 | <h2>New Artwork</h2> 3 * <section id="virtual-gallery"> 4 * <h2>New Artwork</h2> 5 * <img src="path/to/new/artwork"> 6 * </section> 7 | </section> ``` This is the plan for how we want to rewrite the hunk: ``` Add new artwork to the virtual gallery ``` Please rewrite the hunk to match the plan, but do not include any lines prefixed with | in the result. RULES: - ONLY rewrite the lines prefixed with *, - submit only the lines without the * prefix, - do not preserve the relative leading indentation of the lines (start the hunk's indentation at 0). RESPONSE FORMAT INSTRUCTIONS ---------------------------- When responding to me, please use the following format. Make sure you return both the code enclosed in backticks and the JSON immediately after. ``` <string> ``` { "outcome": string # A description of the outcome of the attempt to rewrite the file hunk according to the problem statement. }
Raw Response
````
<section id="virtual-gallery">
  <h2>New Artwork</h2>
  <section id="virtual-gallery">
    <h2>New Artwork</h2>
    <img src="path/to/new/artwork">
    <p>Description of new artwork</p>
  </section>
</section>
```
{ "outcome": "Success" }
````
Here are some ideas:

- Prompt `gpt-3.5-turbo` to generate a code block enclosed by backticks, without the trailing metadata JSON. Alternatively, try using specific few-shot examples.
- Restrict `gpt-3.5-turbo`'s file edit action to only rewrite whole files.

AutoPR encountered an error while trying to fix (redacted repository name)
Traceback (most recent call last):
File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/app/autopr/repos/completions_repo.py", line 115, in _complete
openai_response = openai.ChatCompletion.create(
File "/venv/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/venv/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 620, in _interpret_response
self._interpret_response_line(
File "/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 683, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The model: `gpt-4` does not exist
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/autopr/agents/brain_agent/base.py", line 73, in generate_pr
self._generate_pr(event)
File "/app/autopr/agents/brain_agent/simple_v1.py", line 27, in _generate_pr
pr_desc = self.pull_request_agent.plan_pull_request(self.repo, issue, event)
File "/app/autopr/agents/pull_request_agent/base.py", line 50, in plan_pull_request
pull_request = self._plan_pull_request(repo, issue, event)
File "/app/autopr/agents/pull_request_agent/rail_v1.py", line 388, in _plan_pull_request
filepaths = self.get_initial_filepaths(files, issue)
File "/app/autopr/agents/pull_request_agent/rail_v1.py", line 277, in get_initial_filepaths
response = self.rail_service.run_prompt_rail(
File "/app/autopr/services/rail_service.py", line 196, in run_prompt_rail
prompt = self.completions_repo.complete(
File "/app/autopr/repos/completions_repo.py", line 61, in complete
result = self._complete(
File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7f9bb4a6f8e0 state=finished raised InvalidRequestError>]
This usually means the account associated with your `OPENAI_API_KEY` does not have access to the `gpt-4` model. Check your `OPENAI_API_KEY` via this page: https://platform.openai.com/account/api-keys and set it as a repository secret at github.com/MYREPO/settings/secrets/actions.
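Since `gpt-4` is only visible to accounts that have been granted access, a defensive startup check could pick the best available model and fail with a clearer message. A sketch; the `available` list would come from the OpenAI model-listing endpoint in practice, and the function itself is hypothetical:

```python
def pick_model(available: list[str],
               preferred: tuple = ("gpt-4", "gpt-3.5-turbo")) -> str:
    """Return the first preferred model the account can actually use."""
    for model in preferred:
        if model in available:
            return model
    raise RuntimeError(
        f"None of {list(preferred)} are available; "
        "check which models your OPENAI_API_KEY account has access to."
    )
```

Failing at startup with this message would be much friendlier than the `RetryError` wrapping an `InvalidRequestError` seen in the traceback above.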