About
Use this package to add AIProxy support to your iOS and macOS apps. AIProxy lets you depend on AI APIs safely without building your own backend. Five levels of security are applied to keep your API key secure and your AI bill predictable:
- Certificate pinning
- DeviceCheck verification
- Split key encryption
- Per user rate limits
- Per IP rate limits
1. From within your Xcode project, select File > Add Package Dependencies
2. Punch github.com/lzell/aiproxyswift into the package URL bar, and select the 'main' branch as the dependency rule. Alternatively, you can choose specific releases if you'd like finer control over when your dependency gets updated.
3. Add an AIPROXY_DEVICE_CHECK_BYPASS env variable to Xcode. This token is provided to you in the AIProxy developer dashboard, and is necessary for the iOS simulator to communicate with the AIProxy backend:
   - Type cmd shift , to open the "Edit Schemes" menu (or Product > Scheme > Edit Scheme)
   - Select Run in the sidebar
   - Select Arguments from the top nav
   - Add an env variable to the "Environment Variables" section with the name AIPROXY_DEVICE_CHECK_BYPASS and the value that we provided you in the AIProxy dashboard
-
The AIPROXY_DEVICE_CHECK_BYPASS
is intended for the simulator only. Do not let it leak into
a distribution build of your app (including a TestFlight distribution). If you follow the steps above,
then the constant won't leak because env variables are not packaged into the app bundle.
See the FAQ for more details on the DeviceCheck bypass constant.
- If you set the dependency rule to main in step 2 above, you can ensure the package is up to date by right clicking on the package and selecting 'Update Package'.
- If you selected a version-based rule, inspect the rule in the 'Package Dependencies' section of your project settings. Once the rule is set to include the release version that you'd like to bring in, Xcode should update the package automatically. If it does not, right click on the package in the project tree and select 'Update Package'.
Along with the snippets below, which you can copy and paste into your Xcode project, we also offer full demo apps to jump-start your development. Please see the AIProxyBootstrap repo.
```swift
import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [.init(role: "system", content: .text("hello world"))]
    ))
    print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
```
```swift
import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
let requestBody = OpenAIChatCompletionRequestBody(
    model: "gpt-4o",
    messages: [.init(role: "user", content: .text("hello world"))]
)
do {
    let stream = try await openAIService.streamingChatCompletionRequest(body: requestBody)
    for try await chunk in stream {
        print(chunk.choices.first?.delta.content ?? "")
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
```
On macOS, use NSImage(named:) in place of UIImage(named:):
```swift
import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
guard let image = UIImage(named: "myImage") else {
    print("Could not find an image named 'myImage' in your app assets")
    return
}
guard let imageURL = AIProxy.openAIEncodedImage(image: image) else {
    print("Could not convert image to OpenAI's imageURL format")
    return
}
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [
            .init(
                role: "system",
                content: .text("Tell me what you see")
            ),
            .init(
                role: "user",
                content: .parts(
                    [
                        .text("What do you see?"),
                        .imageURL(imageURL)
                    ]
                )
            )
        ]
    ))
    print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
```
This snippet will print out the URL of an image generated with dall-e-3:
```swift
import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = OpenAICreateImageRequestBody(
        prompt: "a skier",
        model: "dall-e-3"
    )
    let response = try await openAIService.createImageRequest(body: requestBody)
    print(response.data.first?.url ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
```
Use responseFormat and specify in the prompt that OpenAI should return JSON only:
```swift
import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [
            .init(
                role: "system",
                content: .text("Return valid JSON only")
            ),
            .init(
                role: "user",
                content: .text("Return alice and bob in a list of names")
            )
        ],
        responseFormat: .type("json_object")
    ))
    print(response.choices.first?.message.content ?? "")
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
```
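Once the model returns a JSON-only message, you can decode it into a Swift type with Foundation's JSONDecoder. The payload shape below is an assumption based on the example prompt above (a list of names); adjust the struct to match whatever JSON structure your own prompt requests.

```swift
import Foundation

// Hypothetical payload shape matching the example prompt above.
struct NamesPayload: Decodable {
    let names: [String]
}

// Decode the model's message content into the payload, returning an
// empty list if the model produced malformed JSON.
func decodeNames(from messageContent: String) -> [String] {
    guard let data = messageContent.data(using: .utf8),
          let payload = try? JSONDecoder().decode(NamesPayload.self, from: data) else {
        return []
    }
    return payload.names
}
```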
1. Record an audio file in QuickTime and save it as "helloworld.m4a"
2. Add the audio file to your Xcode project. Make sure it's included in your target: select your audio file in the project tree, type cmd-opt-0 to open the inspector panel, and view Target Membership
3. Run this snippet:
```swift
import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let url = Bundle.main.url(forResource: "helloworld", withExtension: "m4a")!
    let requestBody = OpenAICreateTranscriptionRequestBody(
        file: try Data(contentsOf: url),
        model: "whisper-1",
        responseFormat: "verbose_json",
        timestampGranularities: [.word, .segment]
    )
    let response = try await openAIService.createTranscriptionRequest(body: requestBody)
    if let words = response.words {
        for word in words {
            print("\(word.word) from \(word.start) to \(word.end)")
        }
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
```
```swift
import AIProxy

let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await anthropicService.messageRequest(body: AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            AnthropicInputMessage(content: [.text("hello world")], role: .user)
        ],
        model: "claude-3-5-sonnet-20240620"
    ))
    for content in response.content {
        switch content {
        case .text(let message):
            print("Claude sent a message: \(message)")
        case .toolUse(id: _, name: let toolName, input: let toolInput):
            print("Claude used a tool \(toolName) with input: \(toolInput)")
        }
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
```
Use UIImage in place of NSImage for iOS apps:
```swift
import AIProxy

guard let image = NSImage(named: "marina") else {
    print("Could not find an image named 'marina' in your app assets")
    return
}
guard let jpegData = AIProxy.encodeImageAsJpeg(image: image, compressionQuality: 0.8) else {
    print("Could not convert image to jpeg")
    return
}
let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let response = try await anthropicService.messageRequest(body: AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            AnthropicInputMessage(content: [
                .text("Provide a very short description of this image"),
                .image(mediaType: .jpeg, data: jpegData.base64EncodedString())
            ], role: .user)
        ],
        model: "claude-3-5-sonnet-20240620"
    ))
    for content in response.content {
        switch content {
        case .text(let message):
            print("Claude sent a message: \(message)")
        case .toolUse(id: _, name: let toolName, input: let toolInput):
            print("Claude used a tool \(toolName) with input: \(toolInput)")
        }
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
```
```swift
import AIProxy

let anthropicService = AIProxy.anthropicService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let requestBody = AnthropicMessageRequestBody(
        maxTokens: 1024,
        messages: [
            .init(
                content: [.text("What is nvidia's stock price?")],
                role: .user
            )
        ],
        model: "claude-3-5-sonnet-20240620",
        tools: [
            .init(
                description: "Call this function when the user wants a stock symbol",
                inputSchema: [
                    "type": "object",
                    "properties": [
                        "ticker": [
                            "type": "string",
                            "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
                        ]
                    ],
                    "required": ["ticker"]
                ],
                name: "get_stock_symbol"
            )
        ]
    )
    let response = try await anthropicService.messageRequest(body: requestBody)
    for content in response.content {
        switch content {
        case .text(let message):
            print("Claude sent a message: \(message)")
        case .toolUse(id: _, name: let toolName, input: let toolInput):
            print("Claude used a tool \(toolName) with input: \(toolInput)")
        }
    }
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
```
In the snippet below, replace NSImage with UIImage if you are building on iOS. For a SwiftUI example, see this gist.
```swift
import AIProxy

let service = AIProxy.stabilityAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let body = StabilityAIUltraRequestBody(prompt: "Lighthouse on a cliff overlooking the ocean")
    let response = try await service.ultraRequest(body: body)
    let image = NSImage(data: response.imageData)
    // Do something with `image`
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
```
```swift
import AIProxy

let service = AIProxy.deepLService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard"
)
do {
    let body = DeepLTranslateRequestBody(targetLang: "ES", text: ["hello world"])
    let response = try await service.translateRequest(body: body)
    // Do something with `response.translations`
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print("Could not create translation: \(error.localizedDescription)")
}
```
This pattern is slightly different from the others, because OpenMeteo has an official lib that we'd like to rely on. To run the snippet below, you'll need to add both AIProxySwift and OpenMeteoSDK to your Xcode project. To add OpenMeteoSDK:
- In Xcode, go to File > Add Package Dependencies
- Enter the package URL https://github.com/open-meteo/sdk
- Choose your dependency rule (e.g. the main branch for the most up-to-date package)
Next, use AIProxySwift's core functionality to get a URLRequest and URLSession, and pass those into the OpenMeteoSDK:
```swift
import AIProxy
import OpenMeteoSdk

do {
    let request = try await AIProxy.request(
        partialKey: "partial-key-from-your-aiproxy-developer-dashboard",
        serviceURL: "service-url-from-your-aiproxy-developer-dashboard",
        proxyPath: "/v1/forecast?latitude=52.52&longitude=13.41&hourly=temperature_2m&format=flatbuffers"
    )
    let session = AIProxy.session()
    let responses = try await WeatherApiResponse.fetch(request: request, session: session)
    // Do something with `responses`. For a usage example, follow these instructions:
    // 1. Navigate to https://open-meteo.com/en/docs
    // 2. Scroll to the 'API response' section
    // 3. Tap on Swift
    // 4. Scroll to 'Usage'
    print(responses)
} catch {
    print("Could not fetch the weather: \(error.localizedDescription)")
}
```
If your app already has client or user IDs that you want to annotate AIProxy requests with, pass a second argument to the provider's service initializer. For example:
```swift
let openAIService = AIProxy.openAIService(
    partialKey: "partial-key-from-your-developer-dashboard",
    serviceURL: "service-url-from-your-developer-dashboard",
    clientID: "<your-id>"
)
```
Requests that are made using openAIService will be annotated on the AIProxy backend, so that when you view top users, or the timeline of requests, your client IDs will be familiar. If you do not have existing client or user IDs, no problem! Leave the clientID argument out, and we'll generate IDs for you. See AIProxyIdentifier.swift if you would like to see the ID generation specifics.
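For illustration only (AIProxy's actual scheme lives in AIProxyIdentifier.swift, and may differ), a stable per-install client ID is commonly implemented by generating a UUID once and persisting it, so every request from the same install carries the same ID:

```swift
import Foundation

// Hypothetical sketch of the common "generate once, persist forever"
// pattern for client IDs. Not AIProxy's actual implementation.
func stableClientID(defaults: UserDefaults = .standard) -> String {
    let key = "exampleClientID"  // hypothetical storage key
    if let existing = defaults.string(forKey: key) {
        return existing  // reuse the ID from a previous launch
    }
    let fresh = UUID().uuidString
    defaults.set(fresh, forKey: key)
    return fresh
}
```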
Occasionally, Xcode fails to automatically add the AIProxy library to your target's dependency list. If you receive the No such module 'AIProxy' error, first ensure that you have added the package to Xcode using the Installation steps. Next, select your project in the Project Navigator (cmd-1), select your target, and scroll to the Frameworks, Libraries, and Embedded Content section. Tap on the plus icon:
![Add library dependency](https://private-user-images.githubusercontent.com/35940/347908540-438e2bbb-688c-49bc-aa2a-ea85806818d5.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjM3ODU3NzAsIm5iZiI6MTcyMzc4NTQ3MCwicGF0aCI6Ii8zNTk0MC8zNDc5MDg1NDAtNDM4ZTJiYmItNjg4Yy00OWJjLWFhMmEtZWE4NTgwNjgxOGQ1LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA4MTYlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwODE2VDA1MTc1MFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTZmMjI0ZDg3MjMzZjRlNWIxZjAwNmFhMmVmNDExNWY3YTY2YjdhYjA0MDJmYmY5ZDcwY2JmMzYwZDEzYTEzZWQmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.LMpepQpUl3SP1i3nV29eYx6YccrDVT4wvvh0O2M57-I)
And add the AIProxy library:
![Select the AIProxy dependency](https://private-user-images.githubusercontent.com/35940/347907457-f015a181-9591-435c-a37f-6fb0c8c5050c.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjM3ODU3NzAsIm5iZiI6MTcyMzc4NTQ3MCwicGF0aCI6Ii8zNTk0MC8zNDc5MDc0NTctZjAxNWExODEtOTU5MS00MzVjLWEzN2YtNmZiMGM4YzUwNTBjLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA4MTYlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwODE2VDA1MTc1MFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTg5YTlkZWMyZTFlZGZlOTFmYjFkMWM0NmNhZmM3MDRiNTI2YjM3NWEyNjJhNjk5MTI1OGIxOWQ1MWM1NjY5NGMmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.mYWo5ejAOP8KywtJi6im8aCDkh57WqQc_kqjbJMLJcs)
If you encounter the error networkd_settings_read_from_file Sandbox is preventing this process from reading networkd settings file at "/Library/Preferences/com.apple.networkd.plist", please add an exception., modify your macOS project settings: tap on your project in the Xcode project tree, select Signing & Capabilities, and enable Outgoing Connections (client).
If you use the snippets above and encounter the error 'async' call in a function that does not support concurrency, it is because we assume you are in a structured concurrency context. If you encounter this error, you can use the escape hatch of wrapping your snippet in a Task {}.
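For example, calling an async function from a synchronous context looks like this. Here, fetchGreeting is a hypothetical stand-in for any of the await calls in the snippets above:

```swift
import Foundation

// Stand-in for any async AIProxy call from the snippets above.
func fetchGreeting() async -> String {
    return "hello world"
}

// A synchronous function (e.g. a button handler) cannot `await`
// directly, so hop into a Task to enter a concurrency context.
func onButtonTap() {
    Task {
        let greeting = await fetchGreeting()
        print(greeting)
    }
}
```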
If you'd like to do UI testing and allow the test cases to execute real API requests, you must set the AIPROXY_DEVICE_CHECK_BYPASS env variable in your test plan and forward the env variable from the test case to the host simulator (Apple does not do this by default, which I consider a bug). Here is how to set it up:
1. Set the AIPROXY_DEVICE_CHECK_BYPASS env variable in your test environment.
2. Important: Edit your test cases to forward the env variable to the host simulator:
```swift
func testExample() throws {
    let app = XCUIApplication()
    app.launchEnvironment = [
        "AIPROXY_DEVICE_CHECK_BYPASS": ProcessInfo.processInfo.environment["AIPROXY_DEVICE_CHECK_BYPASS"]!
    ]
    app.launch()
}
```
AIProxy uses Apple's DeviceCheck to ensure that requests received by the backend originated from your app on a legitimate Apple device. However, the iOS simulator cannot produce DeviceCheck tokens. Rather than requiring you to constantly build and run on device during development, AIProxy provides a way to skip the DeviceCheck integrity check. The token is intended for use by developers only. If an attacker gets the token, they can make requests to your AIProxy project without including a DeviceCheck token, and thus remove one level of protection.
The partialKey constant is intended to be included in the distributed version of your app. As the name implies, it is a partial representation of your OpenAI key. Specifically, it is one half of an encrypted version of your key. The other half resides on AIProxy's backend. As your app makes requests to AIProxy, the two encrypted parts are paired, decrypted, and used to fulfill the request to OpenAI.
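As a conceptual illustration of the split-key idea (this is not AIProxy's actual scheme, which is not documented here), a secret can be split into two XOR shares so that neither share alone reveals anything about the key, while combining both recovers it exactly:

```swift
import Foundation

// Conceptual sketch only: split a secret into two XOR shares. The
// client share is pure random noise; the server share is the secret
// masked by that noise. Either share alone is indistinguishable from
// random bytes.
func splitSecret(_ secret: [UInt8]) -> (clientShare: [UInt8], serverShare: [UInt8]) {
    let clientShare = secret.map { _ in UInt8.random(in: .min ... .max) }
    let serverShare = zip(secret, clientShare).map { $0 ^ $1 }
    return (clientShare, serverShare)
}

// XOR the two shares back together to recover the original secret.
func recombineSecret(_ a: [UInt8], _ b: [UInt8]) -> [UInt8] {
    return zip(a, b).map { $0 ^ $1 }
}
```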
Contributions are welcome! In order to contribute, we require that you grant AIProxy an irrevocable license to use your contributions as we see fit. Please read CONTRIBUTIONS.md for details.
In codable representations, fields that are required by the API should be above fields that are optional. Within the two groups (required and optional) all fields should be alphabetically ordered.