AIProxySwift

About

Use this package to add AIProxy support to your iOS and macOS apps. AIProxy lets you depend on AI APIs safely without building your own backend. Five levels of security are applied to keep your API key secure and your AI bill predictable:

  • Certificate pinning
  • DeviceCheck verification
  • Split key encryption
  • Per user rate limits
  • Per IP rate limits

Installation

How to add this package as a dependency to your Xcode project

  1. From within your Xcode project, select File > Add Package Dependencies

  2. Punch github.com/lzell/aiproxyswift into the package URL bar, and select the 'main' branch as the dependency rule. Alternatively, you can choose specific releases if you'd like finer control over when your dependency gets updated.

  3. Add an AIPROXY_DEVICE_CHECK_BYPASS env variable to Xcode. This token is provided to you in the AIProxy developer dashboard, and is necessary for the iOS simulator to communicate with the AIProxy backend.

    • Type cmd shift , to open up the "Edit Schemes" menu (or Product > Scheme > Edit Scheme)

    • Select Run in the sidebar

    • Select Arguments from the top nav

    • Add an env variable to the "Environment Variables" section with the name AIPROXY_DEVICE_CHECK_BYPASS and the value that we provided you in the AIProxy dashboard


The AIPROXY_DEVICE_CHECK_BYPASS token is intended for the simulator only. Do not let it leak into a distribution build of your app (including a TestFlight distribution). If you follow the steps above, the token won't leak, because env variables are not packaged into the app bundle.
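
If you'd like to confirm that Xcode is passing the variable to the simulator process, you can log its presence from a debug build. This is an optional sanity check, not part of the AIProxy integration (logDeviceCheckBypassPresence is a hypothetical helper):

import Foundation

/// Debug-only sanity check: logs whether Xcode passed the bypass
/// variable to the simulator process. The AIProxy package reads the
/// variable itself; this only confirms your scheme is set up.
func logDeviceCheckBypassPresence() {
    #if DEBUG
    let isSet = ProcessInfo.processInfo.environment["AIPROXY_DEVICE_CHECK_BYPASS"] != nil
    print("AIPROXY_DEVICE_CHECK_BYPASS is \(isSet ? "set" : "not set")")
    #endif
}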

See the FAQ for more details on the DeviceCheck bypass constant.

How to update the package

  • If you set the dependency rule to main in step 2 above, then you can ensure the package is up to date by right-clicking on the package and selecting 'Update Package'

  • If you selected a version-based rule, inspect the rule in the 'Package Dependencies' section of your project settings.

    Once the rule is set to include the release version that you'd like to bring in, Xcode should update the package automatically. If it does not, right-click on the package in the project tree and select 'Update Package'.

Example usage

Along with the snippets below, which you can copy and paste into your Xcode project, we also offer full demo apps to jump-start your development. Please see the AIProxyBootstrap repo.

Get a non-streaming chat completion from OpenAI:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [.init(role: "system", content: .text("hello world"))]
    ))
    print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

Get a streaming chat completion from OpenAI:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
let requestBody = OpenAIChatCompletionRequestBody(
    model: "gpt-4o",
    messages: [.init(role: "user", content: .text("hello world"))]
)

do {
    let stream = try await openAIService.streamingChatCompletionRequest(body: requestBody)
    for try await chunk in stream {
        print(chunk.choices.first?.delta.content ?? "")
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

Send a multi-modal chat completion request to OpenAI:

On macOS, use NSImage(named:) in place of UIImage(named:)

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
guard let image = UIImage(named: "myImage") else {
    print("Could not find an image named 'myImage' in your app assets")
    return
}

guard let imageURL = AIProxy.openAIEncodedImage(image: image) else {
    print("Could not convert image to OpenAI's imageURL format")
    return
}

do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [
            .init(
                role: "system",
                content: .text("Tell me what you see")
            ),
            .init(
                role: "user",
                content: .parts(
                    [
                        .text("What do you see?"),
                        .imageURL(imageURL)
                    ]
                )
            )
        ]
    ))
    print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to generate an image with DALL-E

This snippet will print out the URL of an image generated with dall-e-3:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = OpenAICreateImageRequestBody(
        prompt: "a skier",
        model: "dall-e-3"
    )
    let response = try await openAIService.createImageRequest(body: requestBody)
    print(response.data.first?.url ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to ensure OpenAI returns JSON as the chat message content

Use responseFormat and specify in the prompt that OpenAI should return JSON only:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [
            .init(
                role: "system",
                content: .text("Return valid JSON only")
            ),
            .init(
                role: "user",
                content: .text("Return alice and bob in a list of names")
            )
        ],
        responseFormat: .type("json_object")
    ))
    print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to get word-level timestamps in an audio transcription

  1. Record an audio file in QuickTime and save it as "helloworld.m4a"

  2. Add the audio file to your Xcode project. Make sure it's included in your target: select your audio file in the project tree, type cmd-opt-0 to open the inspector panel, and view Target Membership

  3. Run this snippet:

    import AIProxy
    
    let openAIService = AIProxy.openAIService(
        partialKey: "partial-key-from-your-developer-dashboard",
        serviceURL: "service-url-from-your-developer-dashboard"
    )
    do {
        let url = Bundle.main.url(forResource: "helloworld", withExtension: "m4a")!
        let requestBody = OpenAICreateTranscriptionRequestBody(
            file: try Data(contentsOf: url),
            model: "whisper-1",
            responseFormat: "verbose_json",
            timestampGranularities: [.word, .segment]
        )
        let response = try await openAIService.createTranscriptionRequest(body: requestBody)
        if let words = response.words {
            for word in words {
                print("\(word.word) from \(word.start) to \(word.end)")
            }
        }
    } catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
        print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
    } catch {
        print(error.localizedDescription)
    }
    

How to send an Anthropic message request

import AIProxy

let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await anthropicService.messageRequest(body: AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            AnthropicInputMessage(content: [.text("hello world")], role: .user)
        ],
        model: "claude-3-5-sonnet-20240620"
    ))
    for content in response.content {
        switch content {
        case .text(let message):
            print("Claude sent a message: \(message)")
        case .toolUse(id: _, name: let toolName, input: let toolInput):
            print("Claude used a tool \(toolName) with input: \(toolInput)")
        }
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to send an image to Anthropic

Use UIImage in place of NSImage for iOS apps:

import AIProxy

guard let image = NSImage(named: "marina") else {
    print("Could not find an image named 'marina' in your app assets")
    return
}

guard let jpegData = AIProxy.encodeImageAsJpeg(image: image, compressionQuality: 0.8) else {
    print("Could not convert image to jpeg")
    return
}

let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await anthropicService.messageRequest(body: AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            AnthropicInputMessage(content: [
                .text("Provide a very short description of this image"),
                .image(mediaType: .jpeg, data: jpegData.base64EncodedString())
            ], role: .user)
        ],
        model: "claude-3-5-sonnet-20240620"
    ))
    for content in response.content {
        switch content {
        case .text(let message):
            print("Claude sent a message: \(message)")
        case .toolUse(id: _, name: let toolName, input: let toolInput):
            print("Claude used a tool \(toolName) with input: \(toolInput)")
        }
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to use the tools API with Anthropic

import AIProxy

let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            .init(
                content: [.text("What is nvidia's stock price?")],
                role: .user
            )
        ],
        model: "claude-3-5-sonnet-20240620",
        tools: [
            .init(
                description: "Call this function when the user wants a stock symbol",
                inputSchema: [
                    "type": "object",
                    "properties": [
                        "ticker": [
                            "type": "string",
                            "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
                        ]
                    ],
                    "required": ["ticker"]
                ],
                name: "get_stock_symbol"
            )
        ]
    )
    let response = try await anthropicService.messageRequest(body: requestBody)
    for content in response.content {
        switch content {
        case .text(let message):
            print("Claude sent a message: \(message)")
        case .toolUse(id: _, name: let toolName, input: let toolInput):
            print("Claude used a tool \(toolName) with input: \(toolInput)")
        }
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to generate an image with Stability.ai

In the snippet below, replace NSImage with UIImage if you are building on iOS. For a SwiftUI example, see this gist

import AIProxy

let service = AIProxy.stabilityAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let body = StabilityAIUltraRequestBody(prompt: "Lighthouse on a cliff overlooking the ocean")
    let response = try await service.ultraRequest(body: body)
    let image = NSImage(data: response.imageData)
    // Do something with `image`
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to create translations using DeepL

import AIProxy

let service = AIProxy.deepLService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)

do {
    let body = DeepLTranslateRequestBody(targetLang: "ES", text: ["hello world"])
    let response = try await service.translateRequest(body: body)
    // Do something with `response.translations`
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create translation: \(error.localizedDescription)")
}

How to fetch the weather with OpenMeteo

This pattern is slightly different from the others, because OpenMeteo has an official lib that we'd like to rely on. To run the snippet below, you'll need to add both AIProxySwift and OpenMeteoSDK to your Xcode project. To add OpenMeteoSDK:

  • In Xcode, go to File > Add Package Dependencies
  • Enter the package URL https://github.com/open-meteo/sdk
  • Choose your dependency rule (e.g. the main branch for the most up-to-date package)

Next, use AIProxySwift's core functionality to get a URLRequest and URLSession, and pass those into the OpenMeteoSDK:

import AIProxy
import OpenMeteoSdk

do {
    let request = try await AIProxy.request(
        partialKey: "partial-key-from-your-aiproxy-developer-dashboard",
        serviceURL: "service-url-from-your-aiproxy-developer-dashboard",
        proxyPath: "/v1/forecast?latitude=52.52&longitude=13.41&hourly=temperature_2m&format=flatbuffers"
    )
    let session = AIProxy.session()
    let responses = try await WeatherApiResponse.fetch(request: request, session: session)
    // Do something with `responses`. For a usage example, follow these instructions:
    // 1. Navigate to https://open-meteo.com/en/docs
    // 2. Scroll to the 'API response' section
    // 3. Tap on Swift
    // 4. Scroll to 'Usage'
    print(responses)
} catch {
    print("Could not fetch the weather: \(error.localizedDescription)")
}

Specify your own clientID to annotate requests

If your app already has client or user IDs that you want to annotate AIProxy requests with, pass a second argument to the provider's service initializer. For example:

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard",
    clientID: "<your-id>"
)

Requests made using openAIService will be annotated on the AIProxy backend, so that when you view top users or the timeline of requests, your client IDs will be familiar.

If you do not have existing client or user IDs, no problem! Leave the clientID argument out, and we'll generate IDs for you. See AIProxyIdentifier.swift if you would like to see ID generation specifics.

Troubleshooting

No such module 'AIProxy' error

Occasionally, Xcode fails to automatically add the AIProxy library to your target's dependency list. If you receive the No such module 'AIProxy' error, first ensure that you have added the package to Xcode using the Installation steps above. Next, select your project in the Project Navigator (cmd-1), select your target, and scroll to the Frameworks, Libraries, and Embedded Content section. Tap the plus icon and add the AIProxy library.

macOS network sandbox

If you encounter the error

networkd_settings_read_from_file Sandbox is preventing this process from reading networkd settings file at "/Library/Preferences/com.apple.networkd.plist", please add an exception.

Modify your macOS project settings by tapping on your project in the Xcode project tree, then select Signing & Capabilities and enable Outgoing Connections (client).
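
Equivalently, if you manage entitlements by hand, this capability corresponds to the com.apple.security.network.client key. A minimal sketch of the relevant entry in your target's .entitlements file:

<!-- Allows the sandboxed app to make outgoing network connections -->
<key>com.apple.security.network.client</key>
<true/>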

'async' call in a function that does not support concurrency

If you use the snippets above and encounter the error

'async' call in a function that does not support concurrency

it is because we assume you are calling from a structured concurrency context. If you encounter this error, you can use the escape hatch of wrapping your snippet in a Task {}, as shown below.
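
For example, here is a minimal sketch of wrapping one of the snippets above in a Task (fetchGreeting is a hypothetical function name):

import AIProxy

func fetchGreeting() {
    let openAIService = AIProxy.openAIService(
        partialKey: "partial-key-from-your-developer-dashboard",
        serviceURL: "service-url-from-your-developer-dashboard"
    )
    // Task {} provides an async context, so this function can be
    // called from synchronous code such as a button action.
    Task {
        do {
            let response = try await openAIService.chatCompletionRequest(body: .init(
                model: "gpt-4o",
                messages: [.init(role: "user", content: .text("hello world"))]
            ))
            print(response.choices.first?.message.content ?? "")
        } catch {
            print(error.localizedDescription)
        }
    }
}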

Requests to AIProxy fail in iOS XCTest UI test cases

If you'd like to do UI testing and allow the test cases to execute real API requests, you must set the AIPROXY_DEVICE_CHECK_BYPASS env variable in your test plan and forward the env variable from the test case to the host simulator (Apple does not do this by default, which I consider a bug). Here is how to set it up:

  • Set the AIPROXY_DEVICE_CHECK_BYPASS env variable in your test environment:

    • Open the scheme editor at Product > Scheme > Edit Scheme

    • Select Test

    • Tap through to the test plan

    • Select Configurations > Environment Variables

    • Add the AIPROXY_DEVICE_CHECK_BYPASS env variable with your value

  • Important: edit your test cases to forward the env variable to the host simulator:

func testExample() throws {
    let app = XCUIApplication()
    // Forward the bypass token from the test runner's environment
    // into the host app's environment:
    app.launchEnvironment = [
        "AIPROXY_DEVICE_CHECK_BYPASS": ProcessInfo.processInfo.environment["AIPROXY_DEVICE_CHECK_BYPASS"]!
    ]
    app.launch()
}

FAQ

What is the AIPROXY_DEVICE_CHECK_BYPASS constant?

AIProxy uses Apple's DeviceCheck to ensure that requests received by the backend originated from your app on a legitimate Apple device. However, the iOS simulator cannot produce DeviceCheck tokens. Rather than requiring you to constantly build and run on device during development, AIProxy provides a way to skip the DeviceCheck integrity check. The token is intended for use by developers only. If an attacker gets the token, they can make requests to your AIProxy project without including a DeviceCheck token, and thus remove one level of protection.

What is the aiproxyPartialKey constant?

This constant is intended to be included in the distributed version of your app. As the name implies, it is a partial representation of your OpenAI key. Specifically, it is one half of an encrypted version of your key. The other half resides on AIProxy's backend. As your app makes requests to AIProxy, the two encrypted parts are paired, decrypted, and used to fulfill the request to OpenAI.

Community contributions

Contributions are welcome! In order to contribute, we require that you grant AIProxy an irrevocable license to use your contributions as we see fit. Please read CONTRIBUTIONS.md for details.

Contribution style guidelines

In codable representations, fields that are required by the API should appear above fields that are optional. Within each of the two groups (required and optional), fields should be ordered alphabetically. See the sketch below for an example.
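
For example, a hypothetical request body that follows this convention:

// ExampleRequestBody is a hypothetical type, shown only to
// illustrate the field-ordering convention.
public struct ExampleRequestBody: Encodable {
    // Required fields, alphabetized
    let messages: [String]
    let model: String

    // Optional fields, alphabetized
    let maxTokens: Int?
    let temperature: Double?
}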
