
Introduction

[Note] This repository is a work in progress and will be updated frequently with changes.

Azure OpenAI Service Samples

This repo is a compilation of useful Azure OpenAI Service resources and code samples to help you get started and accelerate your technology adoption journey.

The Azure OpenAI service provides REST API access to OpenAI's powerful language models on the Azure cloud. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, .NET SDK, or our web-based interface in the Azure OpenAI Studio.

Get started

Prerequisites

  • Azure Account - If you're new to Azure, get an Azure account for free and you'll get some free Azure credits to get started.
  • Azure subscription with access enabled for the Azure OpenAI Service - For more details, see the Azure OpenAI Service documentation on how to get access.
  • Azure OpenAI resource - For these samples, you'll need to deploy models like GPT-3.5 Turbo, GPT 4, DALL-E, and Whisper. See the Azure OpenAI Service documentation for more details on deploying models and model availability.

Project setup

Codespaces

The easiest way to get started is with GitHub Codespaces.

Open in GitHub Codespaces

Local

If you prefer to run these samples locally, you'll need to install and configure the following:

Navigating the repo

  • Basic samples: These are small code samples and snippets which complete small sets of actions and can be integrated into the user code.
  • End to end solutions: These are complete solutions for some use cases and industry scenarios. These include appropriate workflows and reference architectures, which can be easily customized and built into full scale production systems.

Additional Resources

Documentation

The Azure OpenAI Service promise

Azure OpenAI Service gives customers advanced language AI with OpenAI GPT-3, Codex, and DALL-E models with the security and enterprise promise of Azure. Azure OpenAI co-develops the APIs with OpenAI, ensuring compatibility and a smooth transition from one to the other.

With Azure OpenAI Service, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI Service offers private networking, regional availability, and responsible AI content filtering.

Important concepts and terminology

Prompts & Completions

The completions endpoint is the core component of the API service. This API provides access to the model's text-in, text-out interface. Users simply need to provide an input prompt containing the English text command, and the model will generate a text completion.

Here's an example of a simple prompt and completion:

Prompt: """ count to 5 in a for loop """

Completion: for i in range(1, 6): print(i)
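A prompt like the one above is sent to the completions endpoint as a plain REST call. The sketch below builds such a request using only the Python standard library; the resource name, deployment name, key, and api-version are placeholder assumptions, not values from this repo:

```python
import json
import urllib.request

def build_completions_request(endpoint, deployment, api_key, prompt,
                              api_version="2024-02-01"):
    """Build a POST request for the Azure OpenAI completions endpoint."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/completions?api-version={api_version}")
    body = json.dumps({"prompt": prompt, "max_tokens": 100}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )

# Placeholder values -- actually sending this requires a real resource and key:
req = build_completions_request(
    "https://YOUR-RESOURCE.openai.azure.com",
    "my-gpt35-deployment",
    "YOUR-API-KEY",
    '""" count to 5 in a for loop """',
)
# urllib.request.urlopen(req) would then return the completion as JSON.
```

The same request can of course be made with the Python or .NET SDKs mentioned above; the raw REST form just makes the prompt-in, completion-out shape of the API explicit.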

Tokens

The Azure OpenAI Service and OpenAI Enterprise process text by breaking it down into tokens. Tokens can be words or just chunks of characters. For example, the word “hamburger” gets broken up into the tokens “ham”, “bur” and “ger”, while a short and common word like “pear” is a single token. Many tokens start with a whitespace, for example “ hello” and “ bye”.

The total number of tokens processed in a given request depends on the length of your input, output and request parameters. The quantity of tokens being processed will also affect your response latency and throughput for the models.
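Exact counts come from the model's own tokenizer (the open-source tiktoken package exposes it for OpenAI models), but a common rule of thumb for English text is roughly four characters per token, which is enough for ballpark budgeting. A minimal sketch of that heuristic (an approximation, not the real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for typical English text.

    This is only a budgeting heuristic; use the model's own tokenizer
    (e.g. via tiktoken) when an exact count matters.
    """
    return max(1, round(len(text) / 4))

estimate_tokens("hamburger")  # a 9-character word estimates to a couple of tokens
```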

Resources

The Azure OpenAI Service is a new product offering on Azure. You can get started with the Azure OpenAI Service the same way as any other Azure product where you create a resource, or instance of the service, in your Azure Subscription.

Deployments

Once you create an Azure OpenAI Service Resource, you must deploy a model before you can start making API calls and generating text. This action can be done using the Deployment APIs. These APIs allow you to specify the model you wish to use.

In-context learning

The models used by the Azure OpenAI Service use natural language instructions and examples provided during the generation call to identify the task being asked and skill required. When you use this approach, the first part of the prompt includes natural language instructions and/or examples of the specific task desired. The model then completes the task by predicting the most probable next piece of text. This technique is known as "in-context" learning. These models aren't retrained during this step but instead give predictions based on the context you include in the prompt.

There are three main approaches for in-context learning: Few-shot, one-shot and zero-shot. These approaches vary based on the amount of task-specific data that is given to the model:

Few-shot: In this case, a user includes several examples in the call prompt that demonstrate the expected answer format and content. The following example shows a few-shot prompt where we provide multiple examples:

Convert the questions to a command:
Q: Ask Constance if we need some bread
A: send-msg `find constance` Do we need some bread?
Q: Send a message to Greg to figure out if things are ready for Wednesday.
A: send-msg `find greg` Is everything ready for Wednesday?
Q: Ask Ilya if we're still having our meeting this evening
A: send-msg `find ilya` Are we still having a meeting this evening?
Q: Contact the ski store and figure out if I can get my skis fixed before I leave on Thursday
A: send-msg `find ski store` Would it be possible to get my skis fixed before I leave on Thursday?
Q: Thank Nicolas for lunch
A: send-msg `find nicolas` Thank you for lunch!
Q: Tell Constance that I won't be home before 19:30 tonight — unmovable meeting.
A: send-msg `find constance` I won't be home before 19:30 tonight. I have a meeting I can't move.
Q: Tell John that I need to book an appointment at 10:30
A: 

The number of examples typically ranges from 0 to 100 depending on how many can fit in the maximum input length for a single prompt. Maximum input length can vary depending on the specific models you use. Few-shot learning enables a major reduction in the amount of task-specific data required for accurate predictions. This approach typically performs less accurately than a fine-tuned model.

One-shot: This case is the same as the few-shot approach except only one example is provided.

Zero-shot: In this case, no examples are provided to the model and only the task request is provided.
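Structurally, all three approaches are the same concatenated text: an instruction, zero or more Q/A example pairs, and the new question with an empty answer for the model to fill in. A minimal helper that assembles such a prompt (the function name and Q/A format are illustrative, not from this repo):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble an in-context learning prompt: instruction, Q/A examples, new query."""
    lines = [instruction]
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    lines.append(f"Q: {query}")
    lines.append("A:")  # left blank for the model to complete
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Convert the questions to a command:",
    [("Ask Constance if we need some bread",
      "send-msg `find constance` Do we need some bread?")],
    "Tell John that I need to book an appointment at 10:30",
)
```

Passing an empty examples list yields a zero-shot prompt; a single pair yields one-shot.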

Models

The service provides users access to several different models. Each model provides a different capability and price point.

GPT-4 models are the latest available models. These models are currently in preview. For access, existing Azure OpenAI Service customers can apply by filling out this form.

The GPT-3 base models are known as Davinci, Curie, Babbage, and Ada in decreasing order of capability and increasing order of speed.

The Codex series of models is a descendant of GPT-3 and has been trained on both natural language and code to power natural language to code use cases. Learn more about each model on our models concept page.

The following table describes model families currently available in Azure OpenAI Service. Not all models are available in all regions currently. Please refer to the capability table at the bottom for a full breakdown.

  • GPT-4: A set of models that improve on GPT-3.5 and can understand as well as generate natural language and code. These models are currently in preview.
  • GPT-3: A series of models that can understand and generate natural language. This includes the new ChatGPT model (preview).
  • Codex: A series of models that can understand and generate code, including translating natural language to code.
  • Embeddings: A set of models that can understand and use embeddings. An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information-dense representation of the semantic meaning of a piece of text. Currently, we offer three families of embeddings models for different functionalities: similarity, text search, and code search.

To learn more visit Azure OpenAI Service models.
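Since the embeddings endpoint returns a plain vector of floats per input, downstream similarity search reduces to vector math. A self-contained sketch of cosine similarity, the usual comparison for these vectors (the tiny 3-dimensional vectors below are made up for illustration; real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for illustration only:
doc = [0.1, 0.9, 0.2]
query_close = [0.1, 0.8, 0.3]
query_far = [0.9, 0.1, 0.0]

cosine_similarity(doc, query_close) > cosine_similarity(doc, query_far)  # True
```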

Responsible AI with the Azure OpenAI Service

At Microsoft, we're committed to the advancement of AI driven by principles that put people first. Generative models such as the ones available in Azure OpenAI Service have significant potential benefits, but without careful design and thoughtful mitigations, such models have the potential to generate incorrect or even harmful content. Microsoft has made significant investments to help guard against abuse and unintended harm, which includes requiring applicants to show well-defined use cases, incorporating Microsoft’s principles for responsible AI use, building content filters to support customers, and providing responsible AI implementation guidance to onboarded customers.

More details on the RAI guidelines for the Azure OpenAI Service can be found here.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.

People

Contributors

achandmsft, alexandair, bchuecos, cjromb, colombod, fosteramanda, hennachng, hyoshioka0128, kevinh48264, kristapratico, luisquintanilla, maryamariyan, microsoftopensource, mishrapratyush, mrbullwinkle, nawanas, paragagrawal11, shshubhe, solbell2, ykbryan


Issues

Authorization Failed Error When Prompting Azure OpenAI UI

Please provide us with the following information:
I'm encountering an authorization error when attempting to prompt my chatbot locally on my system. Once I send a request or a question, the UI runs for about 10 seconds and returns an error. The error appears to be related to Azure Search service authorization.

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Open Command Prompt.
Activate the Conda environment:
Navigate to the project directory: cd
Log in to Azure CLI: az login.
Select the appropriate subscription and tenant.
Run the python script: python .py

Any log messages given by the failure

Authorization failed

Expected/desired behavior

Prompt the chatbot UI interface and get a text response as you would in ChatGPT. See the attached screenshot of the error message I get instead on the user interface.
Screenshot 2024-07-08 162737

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

OS: Windows 11

Mention any other details that might be useful

Environment Summary
Name: azure-core
Version: 1.30.2
Summary: Microsoft Azure Core Library for Python
Home-page: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/core/azure-core
Author: Microsoft Corporation
Author-email: [email protected]
License: MIT License
Location: c:\users\t-grbabalola\appdata\local\anaconda3\envs\chatosp\lib\site-packages
Requires: requests, six, typing-extensions
Required-by: azure-identity, azure-search-documents, msrest
Thanks! We'll be in touch soon.

On Macbook: 'prepackage' hook failed with exit code: '127'

Please provide us with the following information:

After making the changes mentioned in the git repo readme, I run the following command on the MacBook:
azd up

I am getting the below error:

Executing prepackage hook => /var/folders/t1/6k5yrx055dnf88pf4b12v6f40000gp/T/azd-prepackage-3025571478.ps1
/var/folders/t1/6k5yrx055dnf88pf4b12v6f40000gp/T/azd-prepackage-3025571478.ps1: pwsh: command not found

ERROR: failed running pre hooks: 'prepackage' hook failed with exit code: '127', Path: '/var/folders/t1/6k5yrx055dnf88pf4b12v6f40000gp/T/azd-prepackage-3025571478.ps1'. : exit code: 127

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

azd up on MacBook VSCode

Any log messages given by the failure

Executing prepackage hook => /var/folders/t1/6k5yrx055dnf88pf4b12v6f40000gp/T/azd-prepackage-3025571478.ps1
/var/folders/t1/6k5yrx055dnf88pf4b12v6f40000gp/T/azd-prepackage-3025571478.ps1: pwsh: command not found

ERROR: failed running pre hooks: 'prepackage' hook failed with exit code: '127', Path: '/var/folders/t1/6k5yrx055dnf88pf4b12v6f40000gp/T/azd-prepackage-3025571478.ps1'. : exit code: 127

Expected/desired behavior

It should deploy the repo

OS and Version?

MacOS M2 chip

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Deployment template validation failed incorrect segment lengths

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Any log messages given by the failure

{"code":"InvalidTemplate","message":"Deployment template validation failed: 'The template resource 'cog-634zta4uccph4/' for type 'Microsoft.CognitiveServices/accounts/deployments' at line '1' and column '1710' has incorrect segment lengths. A nested resource type must have identical number of segments as its resource name. A root resource type must have segment length one greater than its resource name. Please see https://aka.ms/arm-syntax-resources for usage details.'.","additionalInfo":[{"type":"TemplateViolation","info":{"lineNumber":1,"linePosition":1710,"path":"properties.template.resources[1].type"}}]}

Expected/desired behavior

OS and Version?

Windows 11 with VS Code

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

prepdata.ps1 is not doing proper error handling

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Run azd up against the AOAISearchDemo code. When there are failures, you are not bubbling up the failure.

Any log messages given by the failure

Look at Line 65...this is but one example...it's everywhere in that script...

if ($process.ExitCode -ne 0) {
    Write-Host ""
    Write-Warning "Installing post-deployment dependencies failed with non-zero exit code $LastExitCode."
    Write-Host ""
    exit $process.ExitCode
}

You are printing 0 basically, even when there is a failure.

Expected/desired behavior

should throw the correct error

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

https://xxxxxxx.openai.azure.com/openai/deployments/gpt-4-32k/chat/completions?api-version=2023-11-01-preview suddenly stop working

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [x] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

I have been using Azure OpenAI for the last few months. I have noticed a trend that after the 1st day of every month a new api-version can be used. For example, the current month is 11, so I can use https://xxxxxxx.openai.azure.com/openai/deployments/gpt-4-32k/chat/completions?api-version=2023-11-01-preview.

In this URL, 11 denotes the month (mm) and 01 is the first day of the month. It was working well for the first few days and then suddenly stopped working. I see that 10-01-preview works but 11-01-preview gives an error like "resource not found". Is there any specific reason why this api-version was suddenly removed?

Any log messages given by the failure

Expected/desired behavior

The API was working fine earlier as expected but suddenly started giving the error "resource not found". Earlier I was getting a proper response.

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Error making request to Open AI completions endpoint

Please provide us with the following information:

This issue is for a: Error making request to Open AI completions endpoint when deploying and testing AOAI_Virtual_Assistant

All the apps are individually up and running, but when chatting with the bot webapp the orchestrator is not able to access the OpenAI completions endpoint and the below error is displayed in the command prompt. If anyone has resolved this, please provide info.

Any log messages given by the failure

400 Client Error: Bad Request for url: https://casggpt-4.openai.azure.com/openai/deployments/GPT4/completions?api-version=2024-02-15-preview
400 Client Error: Bad Request for url: https://casggpt-4.openai.azure.com/openai/deployments/GPT4/completions?api-version=2024-02-15-preview
400 Client Error: Bad Request for url: https://casggpt-4.openai.azure.com/openai/deployments/GPT4/completions?api-version=2024-02-15-preview
400 Client Error: Bad Request for url: https://casggpt-4.openai.azure.com/openai/deployments/GPT4/completions?api-version=2024-02-15-preview
400 Client Error: Bad Request for url: https://casggpt-4.openai.azure.com/openai/deployments/GPT4/completions?api-version=2024-02-15-preview
400 Client Error: Bad Request for url: https://casggpt-4.openai.azure.com/openai/deployments/GPT4/completions?api-version=2024-02-15-preview
[2024-04-02 10:26:07,000] ERROR in app: Exception on /query [POST]
Traceback (most recent call last):
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\cognition\openai\api\client.py", line 32, in completions
response.raise_for_status()
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\requests\models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://casggpt-4.openai.azure.com/openai/deployments/GPT4/completions?api-version=2024-02-15-preview

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\flask\app.py", line 1463, in wsgi_app
response = self.full_dispatch_request()
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\flask\app.py", line 872, in full_dispatch_request
rv = self.handle_user_exception(e)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\flask\app.py", line 870, in full_dispatch_request
rv = self.dispatch_request()
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\flask\app.py", line 855, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\main.py", line 24, in run_flow
agent_response = orchestrator.run_query(conversation, user_id, conversation_id, query)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\orchestrator.py", line 50, in run_query
classification = self.topic_classifier.run(query)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\tasks\topic_classifier.py", line 24, in run
response = topic_classifier.generate_dialog(classifier_payload)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\cognition\openai\model_manager.py", line 58, in generate_dialog
response_choice = self.client.completions(text_prompt, self.model_params)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\tenacity\__init__.py", line 325, in iter
raise retry_exc.reraise()
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\tenacity\__init__.py", line 158, in reraise
raise self.last_attempt.result()
File "C:\Users\ArjunM\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 433, in result
return self.__get_result()
File "C:\Users\ArjunM\AppData\Local\Programs\Python\Python39\lib\concurrent\futures\_base.py", line 389, in __get_result
raise self._exception
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\myenv\lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "F:\Arjun\Open_AI_Practice\AOAI_Virtual_Assistant\openai\End_to_end_Solutions\AOAIVirtualAssistant\src\botapp\cognition\openai\api\client.py", line 40, in completions
raise Exception("Error making request to Open AI completions endpoint.")
Exception: Error making request to Open AI completions endpoint.

Expected/desired behavior

OS and Version?

Windows

Versions

10

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Add OpenAI .NET Cookbook Samples

How to use streaming in Azure OpenAI Assistants API ?

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

  1. pip install openai
  2. input the codes below
import os

from dotenv import load_dotenv
from typing_extensions import override
from openai import AzureOpenAI, AssistantEventHandler

load_dotenv()

api_key = os.environ.get("AZURE_OPENAI_API_KEY")
api_version = os.environ.get("OPENAI_API_VERSION")
azure_endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
assistant_id = os.environ.get("AZURE_OPENAI_ASSISTANT_ID")

class EventHandler(AssistantEventHandler):
    @override
    def on_text_created(self, text) -> None:
        print(f"\nassistant > ", end="", flush=True)

    @override
    def on_text_delta(self, delta, snapshot):
        print(delta.value, end="", flush=True)

    def on_tool_call_created(self, tool_call):
        print(f"\nassistant > {tool_call.type}\n", flush=True)

    def on_tool_call_delta(self, delta, snapshot):
        if delta.type == 'code_interpreter':
            if delta.code_interpreter.input:
                print(delta.code_interpreter.input, end="", flush=True)
            if delta.code_interpreter.outputs:
                print(f"\n\noutput >", flush=True)
                for output in delta.code_interpreter.outputs:
                    if output.type == "logs":
                        print(f"\n{output.logs}", flush=True)


client = AzureOpenAI(api_key=api_key, api_version=api_version, azure_endpoint=azure_endpoint)

thread = client.beta.threads.create(
        messages=[]
    )

client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content="here are some messages..."
    )

with client.beta.threads.runs.stream(
            thread_id=thread.id,
            assistant_id=assistant_id,
            event_handler=EventHandler()
        ) as stream:
      stream.until_done()  

Any log messages given by the failure

Traceback (most recent call last):
  File "/Users/admin/project/aoai-assistant-demo/main.py", line 47, in <module>
    with client.beta.threads.runs.stream(
            thread_id=thread.id,
            assistant_id=assistant_id,
            event_handler=EventHandler()
        ) as stream:
  File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/lib/streaming/_assistants.py", line 444, in __enter__
    self.__stream = self.__api_request()
                    ^^^^^^^^^^^^^^^^^^^^
  File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1213, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 902, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 993, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'stream'.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Expected/desired behavior

Expect the codes to work fine.

OS and Version?

macOS Sonoma 14.0

Versions

python 3.12
openai 1.16.2

Mention any other details that might be useful


Parallel tool calling using AzureChatOpenAI

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

from operator import itemgetter
from typing import Dict, List, Union

from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain_core.tools import tool

from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings, ChatOpenAI

from langchain_core.messages import AIMessage
from langchain_core.runnables import (
    Runnable,
    RunnableLambda,
    RunnableMap,
    RunnablePassthrough,
)
import os

llm = getllm(0.7)

@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int

@tool
def add(first_int: int, second_int: int) -> int:
    "Add two integers."
    return first_int + second_int

@tool
def exponentiate(base: int, exponent: int) -> int:
    "Exponentiate the base to the exponent power."
    return base**exponent

llm = AzureChatOpenAI(config)

tools = [multiply, exponentiate, add]
llm_with_tools = llm.bind_tools(tools)
tool_map = {tool.name: tool for tool in tools}

def call_tools(msg: AIMessage) -> Runnable:
    tool_map = {tool.name: tool for tool in tools}
    tool_calls = msg.tool_calls.copy()
    for tool_call in tool_calls:
        tool_call["output"] = tool_map[tool_call["name"]].invoke(tool_call["args"])
    return tool_calls

chain = llm_with_tools | call_tools

input_text = "What's 23 times 7, and what's five times 18 and add a million plus a billion and cube thirty-seven"

result = chain.invoke(input_text)

Any log messages given by the failure

Expected/desired behavior

Having multiple tools called (parallel calling)

OS and Version?

Linux

Versions

Mention any other details that might be useful

I was wondering if Azure did support langchain AzureChatOpenAI to handle parallel tool calling. Even when specifying the model name, the chat_completions always uses gpt-4-32k for some reason. When I use directly ChatOpenAI, parallel tool calling works.
Anyone having this issue?


Thanks! We'll be in touch soon.

Chaining function calls

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Hello, for the time being, it seems that the model can determine which function to call, but it seems to be limited to only one function, unless I missed something. In the scenario where I split the calculator into four distinct functions (add, divide, subtract, multiply), I would input the following query:

Calculate the total of 10/2 multiplied by 3

The model will determine that divide must be called, but it will not understand that multiply must also be called. Is this something that will be available?

Expected/desired behavior

{ "role": "assistant", "function_calls":[ { "name": "divide", "arguments": "{\n\"num1\": 10,\n\"num2\": 2\n}" }, { "name": "multiply", "arguments": "{\n\"num1\": 5,\n\"num2\": 3\n}" } ] }
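For what it's worth, once a response in the desired shape exists, executing each requested call locally is straightforward. A hedged sketch (the "function_calls" message shape mirrors this issue's example and is hypothetical, not a documented API response format):

```python
import json

# Local implementations that the model's requested calls are dispatched to:
LOCAL_FUNCTIONS = {
    "divide": lambda num1, num2: num1 / num2,
    "multiply": lambda num1, num2: num1 * num2,
}

def run_function_calls(message):
    """Execute every entry in a (hypothetical) 'function_calls' list, in order."""
    results = []
    for call in message.get("function_calls", []):
        args = json.loads(call["arguments"])  # arguments arrive as a JSON string
        results.append(LOCAL_FUNCTIONS[call["name"]](**args))
    return results

message = {
    "role": "assistant",
    "function_calls": [
        {"name": "divide", "arguments": '{"num1": 10, "num2": 2}'},
        {"name": "multiply", "arguments": '{"num1": 5, "num2": 3}'},
    ],
}
run_function_calls(message)  # [5.0, 15]
```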


Thanks! We'll be in touch soon.

Specified scale type 'Standard' of account deployment is not supported by GPT4 or GPT35TURBO

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

When deploying the End_to_end_Solutions/AOAISearchDemo application, I ran into the below issue when I ran the azd up command after following the steps for starting from scratch - "https://github.com/Azure-Samples/openai/tree/main/End_to_end_Solutions/AOAISearchDemo#starting-from-scratch"

The template deployment 'openai' is not valid according to the validation procedure.
The specified scale type 'Standard' of account deployment is not supported by the model

I tried with both gpt35turbo and gpt4. Please let me know how to fix this deployment error.

Every resource except openai got deployed successfully.

Any log messages given by the failure

Expected/desired behavior

The application should get deployed successfully on Azure infrastructure. Once the app is up, I should be able to query the application.

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)
Windows 10

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Intermittent RestError: read ECONNRESET Error when Sending Messages to OpenAI Deployment

I am facing an intermittent issue with the @azure/openai library (v1.0.0-beta.2). While attempting to send messages to an OpenAI deployment, I sporadically encounter the ECONNRESET error. Despite multiple attempts, the problem persists. Below is my code snippet along with the error message:

Code:

// Retry loop: recreate the client and resend the chat request on each attempt.
for (let attempt = 0, keyIdx = 0; attempt < retries; attempt++, keyIdx++) {
  const openai = await this.createAPI();

  const messages = [
    {
      role: 'system',
      content: prompt,
    },
    { role: 'user', content: content },
  ];
  try {
    let res = '';
    const result = await openai.getChatCompletions(
      OPENAI_DEPLOYMENT_ID,
      messages,
    );
    for (const choice of result.choices) {
      console.log(choice.message.content);
      res += choice.message.content;
    }

    return res;
  } catch (error) {
    if (attempt === retries - 1) {
      console.log('Failed after maximum attempts', error);
      throw error;
    }
  }
}

Error Message:

Failed after maximum attempts RestError: read ECONNRESET 
{
  "name": "RestError",
  "code": "ECONNRESET",
  "request": {
    "url": "https://[xxx].openai.azure.com/openai/deployments/[openai-model]/chat/completions?api-version=2023-08-01-preview",
    "headers": {
      // Request headers
    },
    "method": "POST",
    "timeout": 0,
    "disableKeepAlive": false,
    // ...
  },
  "message": "read ECONNRESET"
}

Expected Resolution: I am seeking assistance in resolving the intermittent ECONNRESET error, which is impeding my ability to consistently send messages to the OpenAI deployment. I need help identifying the root cause of this issue and obtaining a solution.
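ECONNRESET is usually a transient connection reset rather than a client bug, and the retry loop above resends immediately. Adding exponential backoff with jitter between attempts often makes such errors disappear. A minimal Python sketch of the pattern (names are illustrative, not from the @azure/openai SDK):

```python
import random
import time

def with_backoff(fn, retries=5, base_delay=0.5, transient=(ConnectionResetError,)):
    """Retry fn() on transient connection errors, doubling the delay each time."""
    for attempt in range(retries):
        try:
            return fn()
        except transient:
            if attempt == retries - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))

# Demo: a call that fails twice with ECONNRESET before succeeding.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionResetError("read ECONNRESET")
    return "ok"

result = with_backoff(flaky, base_delay=0.01)
print(result)  # → ok
```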

Additional Information:
I am using Node.js v18.15.0.
The version of the @azure/openai library I am using is 1.0.0-beta.2.
Thank you for your support!

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 11 & macOS

Versions

1.0.0-beta.2

Mention any other details that might be useful



How to use streaming in Azure OpenAI Assistants API ?

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

  1. pip install openai
  2. run the code below
import os

from dotenv import load_dotenv
from typing_extensions import override
from openai import AzureOpenAI, AssistantEventHandler

load_dotenv()

api_key = os.environ.get("AZURE_OPENAI_API_KEY")
api_version = os.environ.get("OPENAI_API_VERSION")
azure_endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
assistant_id = os.environ.get("AZURE_OPENAI_ASSISTANT_ID")  # was reading the endpoint variable

class EventHandler(AssistantEventHandler):
    @override
    def on_text_created(self, text) -> None:
        print(f"\nassistant > ", end="", flush=True)

    @override
    def on_text_delta(self, delta, snapshot):
        print(delta.value, end="", flush=True)

    def on_tool_call_created(self, tool_call):
        print(f"\nassistant > {tool_call.type}\n", flush=True)

    def on_tool_call_delta(self, delta, snapshot):
        if delta.type == 'code_interpreter':
            if delta.code_interpreter.input:
                print(delta.code_interpreter.input, end="", flush=True)
            if delta.code_interpreter.outputs:
                print(f"\n\noutput >", flush=True)
                for output in delta.code_interpreter.outputs:
                    if output.type == "logs":
                        print(f"\n{output.logs}", flush=True)


client = AzureOpenAI(api_key=api_key, api_version=api_version, azure_endpoint=azure_endpoint)

thread = client.beta.threads.create(messages=[])

client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="here are some messages...",
)

with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant_id,
    event_handler=EventHandler(),
) as stream:
    stream.until_done()

Any log messages given by the failure

Traceback (most recent call last):
  File "/Users/admin/project/aoai-assistant-demo/main.py", line 47, in <module>
    with client.beta.threads.runs.stream(
            thread_id=thread.id,
            assistant_id=assistant_id,
            event_handler=EventHandler()
        ) as stream:
  File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/lib/streaming/_assistants.py", line 444, in __enter__
    self.__stream = self.__api_request()
                    ^^^^^^^^^^^^^^^^^^^^
  File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1213, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 902, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/admin/project/aoai-assistant-demo/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 993, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unknown parameter: 'stream'.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Expected/desired behavior

Expect the codes to work fine.
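The 400 Unknown parameter: 'stream' suggests the configured OPENAI_API_VERSION predates Assistants streaming support on Azure rather than a client bug. A guard like the sketch below can fail fast with a clearer message; the cutoff date is an assumption (streaming arrived in a 2024 preview API version), so check the current Azure OpenAI documentation for the exact version:

```python
from datetime import date

# Assumed cutoff: Assistants streaming requires a 2024-or-later preview
# API version. Verify the exact version against the Azure OpenAI docs.
STREAMING_CUTOFF = date(2024, 2, 15)

def supports_assistants_streaming(api_version: str) -> bool:
    """api_version looks like '2023-07-01-preview' or '2024-05-01-preview'."""
    year, month, day = api_version.split("-")[:3]
    return date(int(year), int(month), int(day)) >= STREAMING_CUTOFF

print(supports_assistants_streaming("2023-07-01-preview"))  # → False
print(supports_assistants_streaming("2024-05-01-preview"))  # → True
```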

OS and Version?

macOS Sonoma 14.0

Versions

python 3.12
openai 1.16.2

Mention any other details that might be useful


Results are very poor

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Hello, I followed the steps to index over 100 of our own PDF documents into Azure Cognitive Search. The answers I get through Bring Your Own Data in the Chat playground are simply wrong, even for pretty simple questions.

Any log messages given by the failure

No log messages because the system functions.

Expected/desired behavior

A more precise answer to my questions.

OS and Version?

Windows 10, but I am using Azure AI Studio.

Versions

10

Mention any other details that might be useful



AttributeError: type object 'AzureOpenAI' has no attribute 'metadata'

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting (the metadata method is not available in AzureOpenAI)
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

pip install openai==1.14.1, then call the metadata method on an AzureOpenAI object

Any log messages given by the failure

AttributeError: type object 'AzureOpenAI' has no attribute 'metadata'

Expected/desired behavior

Couldn't load the metadata method in Azure OpenAI.

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful



Surface Australia On-Site service and repair.pdf file path too long, unable to clone the repo

Please provide us with the following information:

I forked this repository, and when I tried to clone it with GitHub Desktop, I got an error because this file's path is too long. I tried to download the zip file and got the same error, but there I was able to skip the file when extracting the repo:
azure-samples-openai-main/End_to_end_Solutions/AOAISearchDemo/data/surface_device_documentation/Commercial service & repair/Service & repair options/Service & repair features/Surface Australia On-Site service and repair.pdf

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

  1. Fork this repo
  2. In your repo, click Code
  3. Open with GitHub Desktop
  4. During the clone process you will get an error
  5. If you download the zip file you get the same error, but you can skip the file

Any log messages given by the failure

Cloning into 'C:\azure-samples-openai\azure-samples-openai'...
remote: Enumerating objects: 2030, done.
remote: Counting objects: 100% (835/835), done.
remote: Compressing objects: 100% (360/360), done.
remote: Total 2030 (delta 474), reused 654 (delta 378), pack-reused 1195
Receiving objects: 100% (2030/2030), 195.65 MiB | 35.65 MiB/s, done.
Resolving deltas: 100% (851/851), done.
error: unable to create file End_to_end_Solutions/AOAISearchDemo/data/surface_device_documentation/Commercial service & repair/Service & repair options/Service & repair features/Surface Australia On-Site service and repair.pdf: Filename too long
Updating files: 100% (623/623), done.
fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'

Would you like to retry cloning?

Expected/desired behavior

I should be able to clone without a problem.
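A workaround that commonly fixes Git's "Filename too long" error on Windows (assuming Git for Windows; very long paths may also need enabling in Windows itself) is to turn on Git's long-path support before cloning:

```shell
# Enable Git's long-path support (per-user setting), then retry the clone.
git config --global core.longpaths true
```

After setting this, re-run the clone from GitHub Desktop or the command line.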

OS and Version?

Windows 11 Pro

azd version?

run azd version and copy paste here.

Versions

Mention any other details that might be useful



Expose X-Ratelimit-* headers on ChatCompletion API requests

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Hi,

I'm using openai-python library v1.3.5 with the AsyncAzureOpenAI client.

I'd like to get the rate limiting information in the response headers, see this issue for more context: openai/openai-python#416 (comment)

But the response headers do not seem to contain this information when using Azure OpenAI.

Does the Azure OpenAI API return this information? If yes, how can I retrieve it?
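If the headers are present, newer openai-python clients expose them through the raw-response wrapper (e.g. `client.chat.completions.with_raw_response.create(...)`, then read `.headers` on the result). Whether Azure actually sends the x-ratelimit-* headers is exactly this issue's question, so the sketch below only shows the client-side parsing, with an illustrative header dict:

```python
def parse_ratelimit_headers(headers):
    """Extract the x-ratelimit-* values when the service sends them."""
    keys = (
        "x-ratelimit-remaining-requests",
        "x-ratelimit-remaining-tokens",
    )
    return {k: int(headers[k]) for k in keys if k in headers}

# Illustrative response headers; Azure may omit the x-ratelimit-* entries.
sample = {
    "x-ratelimit-remaining-requests": "119",
    "content-type": "application/json",
}
print(parse_ratelimit_headers(sample))  # → {'x-ratelimit-remaining-requests': 119}
```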

Thanks

Missing usage information in npm azure/openai client response

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [X] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

On a Javascript application, install the package @azure/openai with npm install @azure/openai
This installs version v1.0.0-beta.5
Within the script, include:

const { OpenAIClient, AzureKeyCredential } = require("@azure/openai");

const client = new OpenAIClient(CHATGPT_EP, new AzureKeyCredential(CHATGPT_KEY));
const deploymentId = "DEPLOYMENT";
const data = await client.getChatCompletions(deploymentId, CHAT_LIST);

console.log(data);

Any log messages given by the failure

(screenshot: the logged response object contains no usage field)

Expected/desired behavior

I expected the response to include a "usage" field (with all the token-consumption details), as presented in the example response at:
https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#example-response-2

OS and Version?

Windows Chrome

Versions

Package version is v1.0.0-beta.5 (azure/openai on npm)

Mention any other details that might be useful

I tried asking in the Microsoft Learn forum but was asked to raise an issue on the GitHub repo; I'm not even sure if this is the right place.
However, I'd like to know whether it is still possible to get those usage details included in the response without locally adding another tokenization package.
Appreciate your assistance.

【Azure openai deploy】 The specified scale type 'Standard' of account deployment is not supported by the model 'gpt-4'.

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

I am following the example at https://github.com/Azure-Samples/openai/tree/main/End_to_end_Solutions/AOAISearchDemo for deployment. When I run azd up, I get the error below.

Any log messages given by the failure

See the status message under additional details below.

Expected/desired behavior

The azd up deployment should complete successfully.

OS and Version?

Windows 11

Versions

Windows 11

Mention any other details that might be useful

I opened the link to see the details of the error; here are screenshots. Is there anyone who can help me with this?

(screenshots 1-4 of the failed 'openai' deployment in the Azure portal)

status message

{
  "status": "Failed",
  "error": {
    "code": "InvalidTemplateDeployment",
    "message": "The template deployment 'openai' is not valid according to the validation procedure. The tracking id is 'c858cf6f-91ce-47cb-86a3-3e39d171e2c3'. See inner errors for details.",
    "details": [
      {
        "code": "InvalidResourceProperties",
        "message": "The specified scale type 'Standard' of account deployment is not supported by the model 'gpt-4'."
      }
    ]
  }
}

Not able to clone the repository

Hi,
For some reason I'm not able to clone the repo; I tried using the desktop app and the URL with no success. My user is gbissio, can you please help me?

Thank you!

azd template pointing to wrong git repo

Please provide us with the following information:

  • documentation issue or request

Minimal steps to reproduce

run azd init -t AOAISearchDemo

Any log messages given by the failure

ERROR: init from template repository: fetching template: failed to clone repository https://github.com/Azure-Samples/AOAISearchDemo: exit code: 128, stdout: , stderr: Cloning into 'C:\Users\xxxxx\AppData\Local\Temp\az-dev-template2468296xxx'...
remote: Repository not found.
fatal: repository 'https://github.com/Azure-Samples/AOAISearchDemo/' not found

Expected/desired behavior

The template should clone the repo and deploy resources successfully.

OS and Version?

Windows 11

Versions

Mention any other details that might be useful

Azure AI Studio View Code doesn't compile

I don't know if this is the correct place to report this, but I was trying to test Chat Completions since GPT35 doesn't support completions. When you click View Code, you are given this code:

Response<ChatCompletions> responseWithoutStream = await client.GetChatCompletionsAsync(
    "GPT35",
    new ChatCompletionsOptions()
    {
        Messages =
        {
            new ChatMessage(ChatRole.System, @"You are an AI assistant that helps people find information."),
        },
        Temperature = (float)0.7,
        MaxTokens = 800,
        NucleusSamplingFactor = (float)0.95,
        FrequencyPenalty = 0,
        PresencePenalty = 0,
    });

ChatCompletions response = responseWithoutStream.Value;

ChatMessage does not exist. I searched and found ChatRequestMessage; is that what this is supposed to be?

Thanks

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

  1. Go to Azure AI Studio - Chat Playground.
  2. Click View Code - the code sample given doesn't compile (the NuGet package is added and everything else works, meaning all other references resolve except the ChatMessage class).

I just noticed I have beta12 installed, and the code expects beta5. My guess is the code should be updated if the ChatMessage class did not make it to beta12.

Any log messages given by the failure

(screenshot: compiler error - ChatMessage does not exist)

Expected/desired behavior

View Code should compile with the latest betas if you want people to be up to date on your current code base.

OS and Version?

Windows 11. You should probably update the bug-report template to list Microsoft's latest Windows offering.

Versions

As mentioned above, I have Azure.AI.OpenAI beta12 installed; this is probably different in beta5.

(screenshot: installed Azure.AI.OpenAI 1.0.0-beta.12 package)

Mention any other details that might be useful

I will try beta5 and see if that compiles. It is just a little frustrating to someone trying to learn this when the sample code doesn't compile.



"sentiment_aspects": null

Hi, I'm trying to run this demo, but unfortunately I'm getting the error below:

"not enough values to unpack (expected 2, got 0)" - it is due to there being no value in the "sentiment_aspects" variable.

can anyone guide me regarding this?
Please note I'm following the exact guidelines provided in the readme file.

Thanks

[E2ESolutions/AOAISearchDemo] Missing prerequisites in documentation

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [X] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

git clone
cd openai/End_to_end_Solutions/AOAISearchDemo
azd auth login --tenant-id
azd up

Any log messages given by the failure

Running "populate_sql.py"
Connecting to SQL Server
Traceback (most recent call last):
  File ".\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo\scripts\prepopulate\populate_sql.py", line 15, in <module>
    cnxn = pyodbc.connect(args.sql_connection_string)
pyodbc.InterfaceError: ('IM002', '[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0) (SQLDriverConnect)')

Expected/desired behavior

The azd up command should finish successfully. The documentation is missing a prerequisite for the latest ODBC driver: I had ODBC Driver 17, but it couldn't be found because the SQL connection string was expecting ODBC Driver 18.
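For reference, the driver name is embedded in the connection string the scripts build; with placeholder server/database values (my placeholders, not the template's actual names), the expected shape is:

```
Driver={ODBC Driver 18 for SQL Server};Server=tcp:<server>.database.windows.net,1433;Database=<database>;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;
```

Installing the Microsoft ODBC Driver 18 for SQL Server (msodbcsql18), or editing the string to name the driver actually installed, resolves the IM002 error.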

OS and Version?

Windows 11

Versions

Mention any other details that might be useful



ERROR: failed running pre hooks: 'preprovision' hook failed with exit code: '1',

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

When I run azd up following the starting-from-scratch installation instructions at https://github.com/Azure-Samples/openai/blob/main/End_to_end_Solutions/AOAISearchDemo/README.md, it goes through the steps and then generates an error (see the section below). Nothing is created in Azure. How can I review the logs to see whether additional error information is generated?

Any log messages given by the failure

C:\VSCode\AOAISearchDemo\openai\End_to_end_Solutions\AOAISearchDemo>azd up
Executing prepackage hook => C:\Users\xxxxxx\AppData\Local\Temp\azd-prepackage-637729247.ps1

up to date, audited 123 packages in 2s

11 packages are looking for funding
run npm fund for details

2 vulnerabilities (1 moderate, 1 high)

To address all issues, run:
npm audit fix

Run npm audit for details.

[email protected] build
tsc && vite build

vite v4.2.2 building for production...
✓ 1247 modules transformed.
../backend/static/assets/github-fab00c2d.svg 0.96 kB
../backend/static/index.html 1.15 kB
../backend/static/assets/index-d82ced22.css 7.84 kB │ gzip: 2.23 kB
../backend/static/assets/index-be339f02.js 620.18 kB │ gzip: 203.99 kB │ map: 5,818.53 kB

(!) Some chunks are larger than 500 kBs after minification. Consider:

Packaging services (azd package)

(✓) Done: Packaging service backend

  • Package Output: C:\Users\xxxxxx\AppData\Local\Temp\azddeploy960295300.zip

(✓) Done: Packaging service data

  • Package Output: C:\Users\xxxxxx\AppData\Local\Temp\azddeploy1198653294.zip
    Executing preprovision hook => C:\Users\xxxxxx\AppData\Local\Temp\azd-preprovision-76052707.ps1
    WARNING: TenantId '16b3c013-d300-468d-ac64-7eda0820b6d3' contains more than one active subscription. First one will be selected for further use. To select another subscription, use Set-AzContext.

Error: accepts 2 arg(s), received 1

ERROR: failed running pre hooks: 'preprovision' hook failed with exit code: '1', Path: 'C:\Users\xxxxxx\AppData\Local\Temp\azd-preprovision-76052707.ps1'. : exit code: 1

Expected/desired behavior

A successful provision of the environment.

OS and Version?

Windows 11

Versions

Mention any other details that might be useful



Unable to launch code in webapp

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

deploy code to webapp

Any log messages given by the failure

2023-07-08T02:05:04.538815817Z from backend.contracts.error import OutOfScopeException, UnauthorizedDBAccessException
2023-07-08T02:05:04.538820117Z ModuleNotFoundError: No module named 'backend'

Expected/desired behavior

I have it running locally and confirmed it works prior to deployment
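Not a confirmed root cause, but a frequent one for "works locally, ModuleNotFoundError on App Service": the startup command runs with a different working directory, so the folder containing the backend package is not on the import path. A defensive sketch for the top of the entry-point module (the backend package name comes from the traceback; everything else here is an assumption):

```python
import os
import sys

# Make the folder that contains the 'backend' package importable regardless
# of the working directory the host starts the process from.
if "__file__" in globals():
    PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
else:  # interactive / embedded execution fallback
    PROJECT_ROOT = os.getcwd()

if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)
```

Alternatively, setting the App Service startup command to run from the project root (e.g. with gunicorn's --chdir flag) achieves the same thing.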

OS and Version?

Windows 10

Versions

Mention any other details that might be useful



[E2ESolutions/AOAISearchDemo] Missing Cosmos DB Endpoint in KeyVault secrets

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

git clone
cd openai/End_to_end_Solutions/AOAISearchDemo
azd auth login --tenant-id
azd up

Any log messages given by the failure

Running "prepopulate.py"
usage: prepopulate.py [-h] [--entities_path ENTITIES_PATH] [--permissions_path PERMISSIONS_PATH] [--cosmos_db_endpoint COSMOS_DB_ENDPOINT]
[--cosmos_db_key COSMOS_DB_KEY] [--cosmos_db_name COSMOS_DB_NAME]
[--cosmos_db_entities_container_name COSMOS_DB_ENTITIES_CONTAINER_NAME]
[--cosmos_db_permissions_container_name COSMOS_DB_PERMISSIONS_CONTAINER_NAME]
prepopulate.py: error: argument --cosmos_db_endpoint: expected one argument

Expected/desired behavior

The azd up command should finish successfully. The Azure Cosmos DB endpoint secret is not created as a Key Vault secret; see file .\End_to_end_Solutions\AOAISearchDemo\infra\core\database\cosmos-database.bicep

Missing piece:

module azureCosmosKeySecret '../keyvault/keyvault_secret.bicep' = if (addKeysToVault) {
  name: 'AZURE-COSMOS-ENDPOINT'
  params: {
    keyVaultName: keyVaultName
    secretName: 'AZURE-COSMOS-ENDPOINT'
    secretValue: account.properties.documentEndpoint
  }
}

OS and Version?

Windows 11

Versions

Mention any other details that might be useful



[Failed to index data] azd auth token timed out after 10 seconds for /End_to_end_Solutions/AOAISearchDemo/scripts/prepdata.ps1

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

azd auth login --tenant-id 'REDACTED' or azd auth login --client-id 'REDACTED' --client-secret 'REDACTED' --tenant-id 'REDACTED'
azd up

The indexing of docs eventually fails because fetching the token times out. I have tried both UPN and SPN, but neither worked.

Any log messages given by the failure

Indexing sections from 'Surface Deployment Accelerator.pdf' into search index 'gptkbindex'
Indexed 4 sections, 4 succeeded
Processing 'REDACTED\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo/data/surface_device_documentation/Deploy & manage\Automate deployment\Upgrade Surface devices to Windows 10 with MDT.pdf'
Uploading blob for page 0 -> Upgrade Surface devices to Windows 10 with MDT-0.pdf
Extracting text from 'REDACTED\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo/data/surface_device_documentation/Deploy & manage\Automate deployment\Upgrade Surface devices to Windows 10 with MDT.pdf' using Azure Form Recognizer
AzureDeveloperCliCredential.get_token failed: Failed to invoke the Azure Developer CLI
Unable to retrieve continuation token: cannot pickle '_io.BufferedReader' object
Traceback (most recent call last):
  File "REDACTED\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo\scripts\.venv\lib\site-packages\azure\identity\_credentials\azd_cli.py", line 153, in _run_command
    return subprocess.check_output(args, **kwargs)
  File "REDACTED\anaconda3\lib\subprocess.py", line 421, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "REDACTED\anaconda3\lib\subprocess.py", line 505, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "REDACTED\anaconda3\lib\subprocess.py", line 1154, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "REDACTED\anaconda3\lib\subprocess.py", line 1530, in _communicate
    raise TimeoutExpired(self.args, orig_timeout)
subprocess.TimeoutExpired: Command '['cmd', '/c', 'azd auth token --output json --scope https://cognitiveservices.azure.com/.default --tenant-id REDACTED']' timed out after 10 seconds

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "REDACTED\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo\scripts\prepdocs.py", line 285, in <module>
    page_map = get_document_text(filename)
  File "REDACTED\AOAISearchDemo\End_to_end_Solutions\AOAISearchDemo\scripts\prepdocs.py", line 155, in get_document_text
    form_recognizer_results = poller.result()

Expected/desired behavior

azd up finished successfully

OS and Version?

Windows 11

Versions

Mention any other details that might be useful


