
Comments (1)

sweep-ai commented on September 23, 2024

🚀 Here's the PR! #63

See Sweep's progress at the progress dashboard!
💎 Sweep Pro: I'm using GPT-4. You have unlimited GPT-4 tickets. (tracking ID: 7d08fc2191)
Install Sweep Configs: Pull Request

Tip

I can email you next time I complete a pull request if you set up your email here!


Actions (click)

  • ↻ Restart Sweep

Step 1: 🔎 Searching

I found the following snippets in your repository. I will now analyze these snippets and come up with a plan.

Here are the code snippets I think are relevant, in decreasing order of relevance (click to expand). If a file is missing from here, you can mention its path in the ticket description.

from generate.chat_completion import (
    AnthropicChat,
    AnthropicChatParameters,
    AzureChat,
    BaichuanChat,
    BaichuanChatParameters,
    BailianChat,
    BailianChatParameters,
    ChatCompletionModel,
    ChatCompletionOutput,
    ChatModelRegistry,
    DashScopeChat,
    DashScopeChatParameters,
    DashScopeMultiModalChat,
    DashScopeMultiModalChatParameters,
    DeepSeekChat,
    DeepSeekChatParameters,
    HunyuanChat,
    HunyuanChatParameters,
    MinimaxChat,
    MinimaxChatParameters,
    MinimaxLegacyChat,
    MinimaxLegacyChatParameters,
    MinimaxProChat,
    MinimaxProChatParameters,
    MoonshotChat,
    MoonshotChatParameters,
    OpenAIChat,
    OpenAIChatParameters,
    Prompt,
    RemoteChatCompletionModel,
    StepFunChat,
    StepFunChatParameters,
    WenxinChat,
    WenxinChatParameters,
    YiChat,
    YiChatParameters,
    ZhipuCharacterChat,
    ZhipuCharacterChatParameters,
    ZhipuChat,
    ZhipuChatParameters,
    tool,
)
from generate.highlevel import (
    generate_image,
    generate_speech,
    generate_text,
    load_chat_model,
    load_image_generation_model,
    load_speech_model,
)
from generate.image_generation import (
    BaiduImageGeneration,
    BaiduImageGenerationParameters,
    ImageGenerationModel,
    ImageGenerationModelRegistry,
    ImageGenerationOutput,
    OpenAIImageGeneration,
    OpenAIImageGenerationParameters,
    QianfanImageGeneration,
    QianfanImageGenerationParameters,
    ZhipuImageGeneration,
)
from generate.modifiers.hook import AfterGenerateContext, BeforeGenerateContext
from generate.platforms import (
    AnthropicSettings,
    AzureSettings,
    BaichuanSettings,
    BaiduCreationSettings,
    BailianSettings,
    DashScopeSettings,
    DeepSeekSettings,
    HunyuanSettings,
    MinimaxSettings,
    MoonshotSettings,
    OpenAISettings,
    QianfanSettings,
    StepFunSettings,
    YiSettings,
    ZhipuSettings,
)
from generate.text_to_speech import (
    MinimaxProSpeech,
    MinimaxProSpeechParameters,
    MinimaxSpeech,
    MinimaxSpeechParameters,
    OpenAISpeech,
    OpenAISpeechParameters,
    SpeechModelRegistry,
    TextToSpeechModel,
    TextToSpeechOutput,
)
from generate.version import __version__

from __future__ import annotations

from collections import UserDict
from typing import Any, Callable, Generic, MutableMapping, TypeVar

from docstring_parser import parse
from pydantic import TypeAdapter, validate_call
from typing_extensions import NotRequired, ParamSpec, Self, TypedDict

from generate.types import JsonSchema, OrIterable
from generate.utils import ensure_iterable

P = ParamSpec('P')
T = TypeVar('T')


class FunctionJsonSchema(TypedDict):
    name: str
    parameters: JsonSchema
    description: NotRequired[str]


def get_json_schema(function: Callable[..., Any]) -> FunctionJsonSchema:
    function_name = function.__name__
    docstring = parse(text=function.__doc__ or '')
    parameters = TypeAdapter(function).json_schema()
    for param in docstring.params:
        if (arg_name := param.arg_name) in parameters['properties'] and (description := param.description):
            parameters['properties'][arg_name]['description'] = description
    parameters['required'] = sorted(k for k, v in parameters['properties'].items() if 'default' not in v)
    recusive_remove(parameters, 'additionalProperties')
    recusive_remove(parameters, 'title')
    json_schema: FunctionJsonSchema = {
        'name': function_name,
        'description': docstring.short_description or '',
        'parameters': parameters,
    }
    return json_schema


def function(callable_obj: Callable[P, T]) -> Tool[P, T]:
    return Tool(callable_obj)


def tool(callable_obj: Callable[P, T]) -> Tool[P, T]:
    return Tool(callable_obj)


class Tool(Generic[P, T]):  # noqa: N801
    def __init__(self, callable_obj: Callable[P, T], name: str | None = None) -> None:
        self.callable_obj: Callable[P, T] = validate_call(callable_obj)
        self.json_schema: FunctionJsonSchema = get_json_schema(callable_obj)
        self._name = name

    @property
    def name(self) -> str:
        return self._name or self.json_schema['name']

    @property
    def description(self) -> str:
        return self.json_schema.get('description', '')

    @property
    def parameters(self) -> JsonSchema:
        return self.json_schema['parameters']

    def __call__(self, *args: P.args, **kwargs: P.kwargs) -> T:
        return self.callable_obj(*args, **kwargs)


def recusive_remove(obj: Any, remove_key: str) -> None:
    """Recursively remove a key from a dictionary and all its nested dictionaries.

    Args:
        obj (Any): The object to remove the key from.
        remove_key (str): The key to remove.

    Returns:
        None
    """
    if isinstance(obj, dict):
        for key in list(obj.keys()):
            if key == remove_key:
                del obj[key]
            else:
                recusive_remove(obj[key], remove_key)


class ToolDict(UserDict, MutableMapping[str, Tool]):
    def call(self, name: str, *args: Any, **kwargs: Any) -> Any:
        return self.data[name](*args, **kwargs)

    @classmethod
    def from_iterable(cls, tools: OrIterable[Tool]) -> Self:
        return cls({tool.name: tool for tool in ensure_iterable(tools)})


class ToolCallMixin:
    def add_tools(self, tools: OrIterable[Tool]) -> None:
        ...  # snippet truncated here
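As a side note, the `recusive_remove` helper above (the spelling is the library's own) is plain stdlib logic and can be exercised on its own. Below is a minimal, self-contained restatement of it, stripping the `title` keys that pydantic adds to generated JSON schemas:

```python
# Standalone sketch of the recursive key-removal used by get_json_schema.
# Mirrors recusive_remove from the snippet above; needs only the stdlib.
from typing import Any


def recusive_remove(obj: Any, remove_key: str) -> None:
    """Delete `remove_key` from a dict and every nested dict, in place."""
    if isinstance(obj, dict):
        for key in list(obj.keys()):
            if key == remove_key:
                del obj[key]
            else:
                recusive_remove(obj[key], remove_key)


schema = {
    'title': 'Args',
    'properties': {
        'city': {'title': 'City', 'type': 'string'},
    },
}
recusive_remove(schema, 'title')
print(schema)  # {'properties': {'city': {'type': 'string'}}}
```

Note that only dict values are recursed into; keys nested inside lists would survive, which is fine for the pydantic schemas this helper is applied to.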

from __future__ import annotations

from typing import AsyncIterator, ClassVar, Dict, Iterator, List, Literal, Optional, Union

from pydantic import Field, PositiveInt
from typing_extensions import Annotated, Unpack, override

from generate.chat_completion.message import Prompt
from generate.chat_completion.model_output import ChatCompletionOutput, ChatCompletionStreamOutput
from generate.chat_completion.models.openai_like import (
    FunctionCallName,
    OpenAILikeChat,
    OpenAIResponseFormat,
    OpenAITool,
    OpenAIToolChoice,
    convert_to_openai_tool,
)
from generate.chat_completion.tool import FunctionJsonSchema, Tool, ToolCallMixin
from generate.http import (
    HttpClient,
)
from generate.model import ModelParameters, RemoteModelParametersDict
from generate.platforms.openai import OpenAISettings
from generate.types import OrIterable, Probability, Temperature
from generate.utils import ensure_iterable


class OpenAIChatParameters(ModelParameters):
    temperature: Optional[Temperature] = None
    top_p: Optional[Probability] = None
    max_tokens: Optional[PositiveInt] = None
    functions: Optional[List[FunctionJsonSchema]] = None
    function_call: Union[Literal['auto'], FunctionCallName, None] = None
    stop: Union[str, List[str], None] = None
    presence_penalty: Optional[Annotated[float, Field(ge=-2, le=2)]] = None
    frequency_penalty: Optional[Annotated[float, Field(ge=-2, le=2)]] = None
    logit_bias: Optional[Dict[int, Annotated[int, Field(ge=-100, le=100)]]] = None
    user: Optional[str] = None
    response_format: Optional[OpenAIResponseFormat] = None
    seed: Optional[int] = None
    tools: Optional[List[OpenAITool]] = None
    tool_choice: Union[Literal['auto'], OpenAIToolChoice, None] = None


class OpenAIChatParametersDict(RemoteModelParametersDict, total=False):
    temperature: Optional[Temperature]
    top_p: Optional[Probability]
    max_tokens: Optional[PositiveInt]
    functions: Optional[List[FunctionJsonSchema]]
    function_call: Union[Literal['auto'], FunctionCallName, None]
    stop: Union[str, List[str], None]
    presence_penalty: Optional[float]
    frequency_penalty: Optional[float]
    logit_bias: Optional[Dict[int, int]]
    user: Optional[str]
    response_format: Optional[OpenAIResponseFormat]
    seed: Optional[int]
    tools: Optional[List[OpenAITool]]
    tool_choice: Union[Literal['auto'], OpenAIToolChoice, None]


class OpenAIChat(OpenAILikeChat, ToolCallMixin):
    model_type: ClassVar[str] = 'openai'
    available_models: ClassVar[List[str]] = [
        'gpt-4-turbo-preview',
        'gpt-3.5-turbo',
        'gpt-4-vision-preview',
    ]

    parameters: OpenAIChatParameters
    settings: OpenAISettings

    def __init__(
        self,
        model: str = 'gpt-3.5-turbo',
        parameters: OpenAIChatParameters | None = None,
        settings: OpenAISettings | None = None,
        http_client: HttpClient | None = None,
    ) -> None:
        parameters = parameters or OpenAIChatParameters()
        settings = settings or OpenAISettings()  # type: ignore
        http_client = http_client or HttpClient()
        super().__init__(model=model, parameters=parameters, settings=settings, http_client=http_client)

    @override
    def generate(self, prompt: Prompt, **kwargs: Unpack[OpenAIChatParametersDict]) -> ChatCompletionOutput:
        return super().generate(prompt, **kwargs)

    @override
    async def async_generate(self, prompt: Prompt, **kwargs: Unpack[OpenAIChatParametersDict]) -> ChatCompletionOutput:
        return await super().async_generate(prompt, **kwargs)

    @override
    def stream_generate(
        self, prompt: Prompt, **kwargs: Unpack[OpenAIChatParametersDict]
    ) -> Iterator[ChatCompletionStreamOutput]:
        yield from super().stream_generate(prompt, **kwargs)

    @override
    async def async_stream_generate(
        self, prompt: Prompt, **kwargs: Unpack[OpenAIChatParametersDict]
    ) -> AsyncIterator[ChatCompletionStreamOutput]:
        async for stream_output in super().async_stream_generate(prompt, **kwargs):
            yield stream_output

    @override
    def add_tools(self, tools: OrIterable[Tool]) -> None:
        new_tools = [convert_to_openai_tool(tool) for tool in ensure_iterable(tools)]
        if self.parameters.tools is None:
            self.parameters.tools = new_tools
        else:
            ...  # snippet truncated here
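The `generate`/`stream_generate` overrides above type their `**kwargs` with `Unpack` over a `total=False` TypedDict, so callers get per-parameter type checking on keyword arguments. A self-contained illustration of that pattern (the names here are illustrative, not the library's):

```python
# Typed **kwargs via Unpack + a total=False TypedDict, mirroring how
# OpenAIChat.generate accepts OpenAIChatParametersDict entries as kwargs.
try:
    from typing import TypedDict, Unpack  # Python >= 3.11
except ImportError:
    from typing_extensions import TypedDict, Unpack


class ChatKwargs(TypedDict, total=False):
    temperature: float
    max_tokens: int


def generate(prompt: str, **kwargs: Unpack[ChatKwargs]) -> dict:
    # A real implementation would forward these to the API call;
    # here we just echo them back.
    return {'prompt': prompt, **kwargs}


print(generate('hi', temperature=0.2))  # {'prompt': 'hi', 'temperature': 0.2}
```

Static checkers will flag `generate('hi', temprature=0.2)` as an unknown key, which is the point of the pattern.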

from __future__ import annotations

import base64
import json
import uuid
from abc import ABC
from functools import partial
from typing import Any, Callable, Dict, List, Literal, Type, Union, cast

from typing_extensions import NotRequired, TypedDict, override

from generate.chat_completion.base import RemoteChatCompletionModel
from generate.chat_completion.cost_caculator import GeneralCostCalculator
from generate.chat_completion.message import (
    AssistantMessage,
    FunctionCall,
    FunctionMessage,
    ImagePart,
    Message,
    MessageTypeError,
    Prompt,
    SystemMessage,
    TextPart,
    ToolCall,
    ToolMessage,
    UserMessage,
    UserMultiPartMessage,
    ensure_messages,
)
from generate.chat_completion.message.core import Messages
from generate.chat_completion.model_output import ChatCompletionOutput, ChatCompletionStreamOutput
from generate.chat_completion.stream_manager import StreamManager
from generate.chat_completion.tool import FunctionJsonSchema, Tool
from generate.http import (
    HttpxPostKwargs,
    ResponseValue,
)
from generate.model import ModelInfo
from generate.platforms.openai_like import OpenAILikeSettings


class FunctionCallName(TypedDict):
    name: str


class OpenAIFunctionCall(TypedDict):
    name: str
    arguments: str


class OpenAITool(TypedDict):
    type: Literal['function']
    function: FunctionJsonSchema


class OpenAIToolChoice(TypedDict):
    type: Literal['function']
    function: FunctionCallName


class OpenAIToolCall(TypedDict):
    id: str
    type: Literal['function']
    function: OpenAIFunctionCall


class OpenAIMessage(TypedDict):
    role: str
    content: Union[str, None, List[Dict[str, Any]]]
    name: NotRequired[str]
    function_call: NotRequired[OpenAIFunctionCall]
    tool_call_id: NotRequired[str]
    tool_calls: NotRequired[List[OpenAIToolCall]]


class OpenAIResponseFormat(TypedDict):
    type: Literal['json_object', 'text']


def _to_text_message_dict(role: str, message: Message) -> OpenAIMessage:
    if not isinstance(message.content, str):
        raise TypeError(f'Unexpected message content: {type(message.content)}')
    return {
        'role': role,
        'content': message.content,
    }


def _to_user_multipart_message_dict(message: UserMultiPartMessage) -> OpenAIMessage:
    content = []
    for part in message.content:
        if isinstance(part, TextPart):
            content.append({'type': 'text', 'text': part.text})
        else:
            if isinstance(part, ImagePart):
                image_format = part.image_format or 'png'
                url: str = f'data:image/{image_format};base64,{base64.b64encode(part.image).decode()}'
                image_url_dict = {'url': url}
            else:
                image_url_dict = {}
                image_url_dict['url'] = part.image_url.url
                if part.image_url.detail:
                    image_url_dict['detail'] = part.image_url.detail
            image_url_part_dict: dict[str, Any] = {
                'type': 'image_url',
                'image_url': image_url_dict,
            }
            content.append(image_url_part_dict)
    return {
        'role': 'user',
        'content': content,
    }


def _to_tool_message_dict(message: ToolMessage) -> OpenAIMessage:
    return {
        'role': 'tool',
        'tool_call_id': message.tool_call_id,
        'content': message.content,
    }


def _to_asssistant_message_dict(message: AssistantMessage) -> OpenAIMessage:
    base_dict = {
        'role': 'assistant',
        'content': message.content or None,
    }
    if message.tool_calls:
        tool_calls = [
            {
                'id': tool_call.id,
                'type': 'function',
                'function': {
                    'name': tool_call.function.name,
                    'arguments': tool_call.function.arguments,
                },
            }
            for tool_call in message.tool_calls
        ]
        base_dict['tool_calls'] = tool_calls
    if message.function_call:
        base_dict['function_call'] = {
            'name': message.function_call.name,
            'arguments': message.function_call.arguments,
        }
    return cast(OpenAIMessage, base_dict)


def _to_function_message_dict(message: FunctionMessage) -> OpenAIMessage:
    return {
        'role': 'function',
        'name': message.name,
        'content': message.content,
    }


def convert_to_openai_message(message: Message) -> OpenAIMessage:
    to_function_map: dict[Type[Message], Callable[[Any], OpenAIMessage]] = {
        SystemMessage: partial(_to_text_message_dict, 'system'),
        UserMessage: partial(_to_text_message_dict, 'user'),
        AssistantMessage: partial(_to_asssistant_message_dict),
        UserMultiPartMessage: _to_user_multipart_message_dict,
        ToolMessage: _to_tool_message_dict,
        FunctionMessage: _to_function_message_dict,
    }
    if to_function := to_function_map.get(type(message)):
        return to_function(message)
    raise MessageTypeError(message, allowed_message_type=tuple(to_function_map.keys()))
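`convert_to_openai_message` dispatches on the concrete message class through a dict of converters rather than an `isinstance` chain. A stripped-down, self-contained illustration of that dispatch-table pattern (the toy message classes below are stand-ins, not the library's):

```python
# Dispatch-table pattern: map message class -> converter, look up by type().
from functools import partial


class SystemMessage:
    def __init__(self, content: str) -> None:
        self.content = content


class UserMessage:
    def __init__(self, content: str) -> None:
        self.content = content


def _to_text_message_dict(role: str, message) -> dict:
    return {'role': role, 'content': message.content}


to_function_map = {
    SystemMessage: partial(_to_text_message_dict, 'system'),
    UserMessage: partial(_to_text_message_dict, 'user'),
}


def convert(message) -> dict:
    # Exact-type lookup, so subclasses are deliberately not matched.
    if to_function := to_function_map.get(type(message)):
        return to_function(message)
    raise TypeError(f'unsupported message type: {type(message)}')


print(convert(UserMessage('hi')))  # {'role': 'user', 'content': 'hi'}
```

Because the lookup uses `type(message)` rather than `isinstance`, adding a new message type means registering it explicitly, which keeps the supported set auditable.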
def openai_calculate_cost(model_name: str, input_tokens: int, output_tokens: int) -> float | None:
    dollar_to_yuan = 7
    if model_name in ('gpt-4-1106-preview', 'gpt-4-1106-vision-preview'):
        return (0.01 * dollar_to_yuan) * (input_tokens / 1000) + (0.03 * dollar_to_yuan) * (output_tokens / 1000)
    if 'gpt-4-turbo' in model_name:
        return (0.01 * dollar_to_yuan) * (input_tokens / 1000) + (0.03 * dollar_to_yuan) * (output_tokens / 1000)
    if 'gpt-4-32k' in model_name:
        return (0.06 * dollar_to_yuan) * (input_tokens / 1000) + (0.12 * dollar_to_yuan) * (output_tokens / 1000)
    if 'gpt-4' in model_name:
        return (0.03 * dollar_to_yuan) * (input_tokens / 1000) + (0.06 * dollar_to_yuan) * (output_tokens / 1000)
    if 'gpt-3.5-turbo' in model_name:
        return (0.001 * dollar_to_yuan) * (input_tokens / 1000) + (0.002 * dollar_to_yuan) * (output_tokens / 1000)
    if 'moonshot' in model_name:
        if '8k' in model_name:
            return 0.012 * (input_tokens / 1000) + 0.012 * (output_tokens / 1000)
        if '32k' in model_name:
            return 0.024 * (input_tokens / 1000) + 0.024 * (output_tokens / 1000)
        if '128k' in model_name:
            return 0.06 * (input_tokens / 1000) + 0.06 * (output_tokens / 1000)
    return None
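Note that `openai_calculate_cost` reports cost in CNY, converting OpenAI's USD per-1K-token rates at a fixed `dollar_to_yuan = 7`. A quick standalone check of the `gpt-3.5-turbo` branch:

```python
# Standalone check of the gpt-3.5-turbo branch of openai_calculate_cost:
# $0.001 / 1K input tokens and $0.002 / 1K output tokens, converted at 7 CNY/USD.
def gpt_35_turbo_cost(input_tokens: int, output_tokens: int) -> float:
    dollar_to_yuan = 7
    return (0.001 * dollar_to_yuan) * (input_tokens / 1000) + (0.002 * dollar_to_yuan) * (output_tokens / 1000)


print(gpt_35_turbo_cost(1000, 1000))  # ~0.021 CNY (0.007 input + 0.014 output)
```

These per-token prices are hard-coded snapshots of OpenAI's pricing at the time the snippet was written, so they will drift out of date as rates change.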
def _convert_to_assistant_message(message: dict[str, Any]) -> AssistantMessage:
    if function_call_dict := message.get('function_call'):
        function_call = FunctionCall(
            name=function_call_dict.get('name') or '',
            arguments=function_call_dict['arguments'],
        )
    else:
        function_call = None
    if tool_calls_dict := message.get('tool_calls'):
        tool_calls = [
            ToolCall(
                id=tool_call['id'],
                function=FunctionCall(
                    name=tool_call['function'].get('name') or '',
                    arguments=tool_call['function']['arguments'],
                ),
            )
            for tool_call in tool_calls_dict
        ]
    else:
        tool_calls = None
    return AssistantMessage(content=message.get('content') or '', function_call=function_call, tool_calls=tool_calls)


def convert_to_openai_tool(tool: Tool) -> OpenAITool:
    return OpenAITool(type='function', function=tool.json_schema)


def process_openai_like_model_reponse(response: ResponseValue, model_type: str) -> ChatCompletionOutput:
    message = _convert_to_assistant_message(response['choices'][0]['message'])
    extra = {'usage': response['usage']}
    if system_fingerprint := response.get('system_fingerprint'):
        extra['system_fingerprint'] = system_fingerprint
    choice = response['choices'][0]
    if (finish_reason := choice.get('finish_reason')) is None:
        finish_reason = finish_details['type'] if (finish_details := choice.get('finish_details')) else None
    try:
        if model_type == 'openai':
            cost = openai_calculate_cost(
                model_name=response['model'],
                input_tokens=response['usage']['prompt_tokens'],
                output_tokens=response['usage']['completion_tokens'],
            )
        else:
            cost_calculator = GeneralCostCalculator()
            cost = cost_calculator.calculate(
                model_type=model_type,
                model_name=response['model'],
                input_tokens=response['usage']['prompt_tokens'],
                output_tokens=response['usage']['completion_tokens'],
            )
    except Exception:
        cost = None
    return ChatCompletionOutput(
        model_info=ModelInfo(task='chat_completion', type=model_type, name=response['model']),
        message=message,
        finish_reason=finish_reason,
        cost=cost,
        extra=extra,
    )
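One subtlety in `process_openai_like_model_reponse` (the misspelling is the library's own identifier) is the fallback from `finish_reason` to `finish_details['type']`, a shape some OpenAI-like responses return instead. The fallback logic in isolation, on plain dicts:

```python
# The finish_reason fallback from process_openai_like_model_reponse, isolated.
from __future__ import annotations


def extract_finish_reason(choice: dict) -> str | None:
    finish_reason = choice.get('finish_reason')
    if finish_reason is None:
        # Some responses carry {'finish_details': {'type': ...}} instead.
        finish_details = choice.get('finish_details')
        finish_reason = finish_details['type'] if finish_details else None
    return finish_reason


print(extract_finish_reason({'finish_reason': 'stop'}))                   # stop
print(extract_finish_reason({'finish_details': {'type': 'max_tokens'}}))  # max_tokens
print(extract_finish_reason({}))                                          # None
```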


Step 2: ⌨️ Coding

  • Create generate/examples/highlevel_examples.py (commit 7345e23)
Create generate/examples/highlevel_examples.py with contents:
• Create a new Python file named `highlevel_examples.py` in the `generate/examples` directory. If the `examples` directory does not exist, create it within the `generate` directory.
• At the top of `highlevel_examples.py`, import the necessary functions from `generate.highlevel`. Specifically, import `generate_text`, `generate_image`, `generate_speech`, `load_chat_model`, `load_image_generation_model`, and `load_speech_model`.
• Add a function named `example_generate_text()` that demonstrates how to use the `generate_text` function. Include comments within the function to explain the steps and parameters.
• Add a function named `example_generate_image()` that demonstrates how to use the `generate_image` function. Include comments within the function to explain the steps and parameters.
• Add a function named `example_generate_speech()` that demonstrates how to use the `generate_speech` function. Include comments within the function to explain the steps and parameters.
• Add a function named `example_load_models()` that demonstrates how to use the `load_chat_model`, `load_image_generation_model`, and `load_speech_model` functions. Include comments within the function to explain the steps and parameters.
• At the end of the file, add a `if __name__ == "__main__":` block that calls each of the example functions. This allows users to run the examples directly by executing the `highlevel_examples.py` file.
• Ensure that each example function includes error handling and prints the results or relevant information to the console, providing users with clear feedback on the operations performed.
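A hypothetical sketch of what `highlevel_examples.py` could look like, following the plan above. The `generate.highlevel` call signatures used here (`generate_text(...)`, `load_chat_model('openai')`) are assumptions inferred from the imports shown in Step 1, not the file Sweep actually committed:

```python
"""Hypothetical sketch of generate/examples/highlevel_examples.py.

Function signatures below are assumptions inferred from the
`generate.highlevel` imports in Step 1; the committed file may differ.
"""
try:
    from generate.highlevel import generate_text, load_chat_model
    HAS_GENERATE = True
except ImportError:  # library not installed; examples become no-ops
    HAS_GENERATE = False


def example_generate_text() -> None:
    """Demonstrate one-shot text generation with basic error handling."""
    if not HAS_GENERATE:
        print('generate is not installed; skipping')
        return
    try:
        # The prompt and model identifier are illustrative placeholders.
        output = generate_text('Hello, who are you?')
        print(output)
    except Exception as error:
        print(f'generate_text failed: {error}')


def example_load_models() -> None:
    """Demonstrate loading a chat model by identifier."""
    if not HAS_GENERATE:
        print('generate is not installed; skipping')
        return
    try:
        chat_model = load_chat_model('openai')  # identifier format is assumed
        print(type(chat_model).__name__)
    except Exception as error:
        print(f'load_chat_model failed: {error}')


if __name__ == '__main__':
    example_generate_text()
    example_load_models()
```

Image and speech examples would follow the same shape around `generate_image`, `generate_speech`, and the corresponding `load_*_model` functions.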
  • Running GitHub Actions for generate/examples/highlevel_examples.py
Check generate/examples/highlevel_examples.py with contents:

Ran GitHub Actions for 7345e236ee34a5a2605510f5a4128ac1111f3b63:


Step 3: 🔁 Code Review

I have finished reviewing the code for completeness. I did not find errors for sweep/examples_add_an_example_python_file_for.


🎉 Latest improvements to Sweep:
  • New dashboard launched for real-time tracking of Sweep issues, covering all stages from search to coding.
  • Integration of OpenAI's latest Assistant API for more efficient and reliable code planning and editing, improving speed by 3x.
  • Use the GitHub issues extension for creating Sweep issues directly from your editor.

💡 To recreate the pull request, edit the issue title or description.
Something wrong? Let us know.

This is an automated message generated by Sweep AI.
