ServiceStack.Swift client library rewritten for Swift 6
As part of the AI Server release we've upgraded all generic service client libraries to support multiple file uploads with API requests, to take advantage of AI Server APIs that accept file uploads, like Image to Image, Speech to Text, or its FFmpeg Image and Video Transforms.
## ServiceStack.Swift rewritten for Swift 6
ServiceStack.Swift received the biggest upgrade: it was rewritten to take advantage of Swift 6 features, including Swift's native concurrency which replaced the previous PromiseKit dependency - making it now dependency-free!
For example, you can request a Speech to Text transcription by sending an audio file to the `SpeechToText` API using the new `postFilesWithRequest` method:
## Calling AI Server to transcribe an Audio Recording
```swift
let client = JsonServiceClient(baseUrl: "https://openai.servicestack.net")
client.bearerToken = apiKey

let request = SpeechToText()
request.refId = "uniqueUserIdForRequest"

let response = try client.postFilesWithRequest(request: request,
    file: UploadFile(fileName: "audio.mp3", data: mp3Data, fieldName: "audio"))

Inspect.printDump(response)
```
## Async Upload Files with API Example
Alternatively, use the new `postFileWithRequestAsync` method to call the API asynchronously with Swift 6's async/await concurrency:
```swift
let response = try await client.postFileWithRequestAsync(request: request,
    file: UploadFile(fileName: "audio.mp3", data: mp3Data, fieldName: "audio"))

Inspect.printDump(response)
```
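The `mp3Data` in these examples is a Foundation `Data` value holding the audio bytes. A minimal sketch of loading it from disk, assuming a local file path (the helper below is illustrative, not part of ServiceStack.Swift):

```swift
import Foundation

// Illustrative helper: load an audio file into the `Data` value
// the upload examples expect. The path argument is an assumption.
func loadUploadData(path: String) throws -> Data {
    try Data(contentsOf: URL(fileURLWithPath: path))
}
```

Which could then feed the upload, e.g. `let mp3Data = try loadUploadData(path: "audio.mp3")`.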
## Multiple file upload with API Request examples
The `postFilesWithRequest` methods can also be used to upload multiple files with an API Request, e.g:
```swift
let request = WatermarkVideo()
request.position = .BottomRight

let response = try client.postFilesWithRequest(request: request,
    files: [
        UploadFile(fileName: "video.mp4", data: videoData, fieldName: "video"),
        UploadFile(fileName: "mark.jpg", data: imgData, fieldName: "watermark")
    ])
```
Async Example:
```swift
let response = try await client.postFilesWithRequestAsync(request: request,
    files: [
        UploadFile(fileName: "video.mp4", data: videoData, fieldName: "video"),
        UploadFile(fileName: "mark.jpg", data: imgData, fieldName: "watermark")
    ])
```
## Sending typed Open AI Chat Ollama Requests with Swift
Even if you're not running AI Server, you can still use its typed DTOs to call any OpenAI Chat compatible API, like a self-hosted Ollama API.
To call an Ollama endpoint from Swift:
- Include the `ServiceStack` package in your project's `Package.swift`:

```swift
dependencies: [
    .package(url: "https://github.com/ServiceStack/ServiceStack.Swift.git",
        Version(6,0,0)..<Version(7,0,0)),
],
```
- Download AI Server's Swift DTOs:

```sh
npx get-dtos swift https://openai.servicestack.net
```
You'll then be able to call Ollama by sending the OpenAI Chat compatible `OpenAiChatCompletion` Request DTO with the `JsonServiceClient`:
```swift
import Foundation
import ServiceStack

let ollamaBaseUrl = "http://localhost:11434"
let client = JsonServiceClient(baseUrl: ollamaBaseUrl)

let request = OpenAiChatCompletion()
request.model = "mixtral:8x22b"
let msg = OpenAiMessage()
msg.role = "user"
msg.content = "What's the capital of France?"
request.messages = [msg]
request.max_tokens = 50

let result: OpenAiChatResponse = try await client.postAsync(
    "/v1/chat/completions", request: request)
```
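To read the assistant's reply out of the response, walk the OpenAI Chat schema's `choices[0].message.content` path. The sketch below uses minimal stand-in types to show that shape; the real generated DTOs are assumed to expose the same property path, per the OpenAI schema:

```swift
import Foundation

// Minimal stand-in types mirroring the OpenAI Chat response shape;
// the generated ServiceStack DTOs are assumed to follow the same path.
struct ChatMessage { var role: String; var content: String }
struct ChatChoice { var message: ChatMessage }
struct ChatResponse { var choices: [ChatChoice] }

// Extract the first reply's text, if any completion was returned.
func firstReply(_ response: ChatResponse) -> String? {
    response.choices.first?.message.content
}
```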