A production-ready reference app demonstrating the RunAnywhere Swift SDK capabilities for on-device AI. This app showcases how to build privacy-first, offline-capable AI features with LLM chat, speech-to-text, text-to-speech, and a complete voice assistant pipeline, all running locally on your device.
Important: This sample app consumes the RunAnywhere Swift SDK as a local Swift package. Before opening this project, you must first build the SDK's native libraries.
# 1. Navigate to the Swift SDK directory
cd runanywhere-sdks/sdk/runanywhere-swift
# 2. Run the setup script (~5-15 minutes on first run)
# This builds the native C++ frameworks and sets testLocal=true
./scripts/build-swift.sh --setup
# 3. Navigate to this sample app
cd ../../examples/ios/RunAnywhereAI
# 4. Open in Xcode
open RunAnywhereAI.xcodeproj
# 5. If Xcode shows package errors, reset caches:
# File > Packages > Reset Package Caches
# 6. Build and Run (⌘+R)
This sample app uses Package.swift to reference the local Swift SDK:
This Sample App → Local Swift SDK (sdk/runanywhere-swift/)
                        ↓
        Local XCFrameworks (sdk/runanywhere-swift/Binaries/)
                        ↓
        Built by: ./scripts/build-swift.sh --setup
The build-swift.sh --setup script:
- Builds the native C++ runanywhere-commons libraries
- Copies the resulting XCFrameworks into sdk/runanywhere-swift/Binaries/
- Sets testLocal = true in the SDK's Package.swift

To rebuild after changing the native code (runanywhere-commons):
cd sdk/runanywhere-swift
./scripts/build-swift.sh --local --build-commons
Download the app from the App Store to try it out.
This sample app demonstrates the full power of the RunAnywhere SDK:
| Feature | Description | SDK Integration |
|---|---|---|
| AI Chat | Interactive LLM conversations with streaming responses | RunAnywhere.generateStream() |
| Thinking Mode | Support for models with <think>...</think> reasoning | Thinking tag parsing |
| Real-time Analytics | Token speed, generation time, inference metrics | MessageAnalytics |
| Speech-to-Text | Voice transcription with batch & live modes | RunAnywhere.transcribe() |
| Text-to-Speech | Neural voice synthesis with Piper TTS | RunAnywhere.synthesize() |
| Voice Assistant | Full STT → LLM → TTS pipeline with auto-detection | Voice Pipeline API |
| Model Management | Download, load, and manage multiple AI models | RunAnywhere.downloadModel() |
| Storage Management | View storage usage and delete models | RunAnywhere.storageInfo() |
| Offline Support | All features work without internet | On-device inference |
| Cross-Platform | Runs on iOS, iPadOS, and macOS | Universal app |
The app follows modern Apple architecture patterns:
┌──────────────────────────────────────────────────────────────────┐
│                          SwiftUI Views                           │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │   Chat   │ │   STT    │ │   TTS    │ │  Voice   │ │ Settings │ │
│ │   View   │ │   View   │ │   View   │ │   View   │ │   View   │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ │
├──────┼────────────┼────────────┼────────────┼────────────┼───────┤
│      ▼            ▼            ▼            ▼            ▼       │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │   LLM    │ │   STT    │ │   TTS    │ │  Voice   │ │ Settings │ │
│ │ViewModel │ │ViewModel │ │ViewModel │ │ViewModel │ │ViewModel │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ │
├──────┴────────────┴────────────┴────────────┴────────────┴───────┤
│                                                                  │
│                      RunAnywhere Swift SDK                       │
│ ┌──────────────────────────────────────────────────────────────┐ │
│ │   Core APIs (generate, transcribe, synthesize, pipeline)     │ │
│ │   EventBus (LLMEvent, STTEvent, TTSEvent, ModelEvent)        │ │
│ │   Model Management (download, load, unload, delete)          │ │
│ └──────────────────────────────────────────────────────────────┘ │
│                               │                                  │
│             ┌─────────────────┴─────────────────┐                │
│             ▼                                   ▼                │
│    ┌─────────────────┐                ┌─────────────────┐        │
│    │    LlamaCPP     │                │  ONNX Runtime   │        │
│    │   (LLM/GGUF)    │                │   (STT/TTS)     │        │
│    └─────────────────┘                └─────────────────┘        │
└──────────────────────────────────────────────────────────────────┘
Key architectural points:
- ViewModels are marked @Observable, so SwiftUI observes changes automatically
- RunAnywhereAIApp.swift handles SDK initialization
- Design system constants live in AppColors, AppTypography, AppSpacing

RunAnywhereAI/
├── RunAnywhereAI/
│   ├── App/
│   │   ├── RunAnywhereAIApp.swift       # Entry point, SDK initialization
│   │   └── ContentView.swift            # Tab navigation, main UI structure
│   │
│   ├── Core/
│   │   ├── DesignSystem/
│   │   │   ├── AppColors.swift          # Color palette
│   │   │   ├── AppSpacing.swift         # Spacing constants
│   │   │   └── Typography.swift         # Font styles
│   │   ├── Models/
│   │   │   ├── AppTypes.swift           # Shared data models
│   │   │   └── MarkdownDetector.swift   # Markdown parsing utilities
│   │   └── Services/
│   │       └── ModelManager.swift       # Model lifecycle management
│   │
│   ├── Features/
│   │   ├── Chat/
│   │   │   ├── Models/
│   │   │   │   └── Message.swift        # Chat message model
│   │   │   ├── ViewModels/
│   │   │   │   ├── LLMViewModel.swift   # Chat logic, streaming
│   │   │   │   ├── LLMViewModel+Generation.swift
│   │   │   │   └── LLMViewModel+Analytics.swift
│   │   │   └── Views/
│   │   │       ├── ChatInterfaceView.swift  # Main chat UI
│   │   │       ├── MessageBubbleView.swift  # Message rendering
│   │   │       └── ConversationListView.swift
│   │   │
│   │   ├── Voice/
│   │   │   ├── SpeechToTextView.swift    # STT UI with waveform
│   │   │   ├── STTViewModel.swift        # Batch & live transcription
│   │   │   ├── TextToSpeechView.swift    # TTS UI with playback
│   │   │   ├── TTSViewModel.swift        # Synthesis & audio playback
│   │   │   ├── VoiceAssistantView.swift  # Full voice pipeline UI
│   │   │   └── VoiceAgentViewModel.swift # STT→LLM→TTS orchestration
│   │   │
│   │   ├── Models/
│   │   │   ├── ModelSelectionSheet.swift # Model picker UI
│   │   │   └── ModelListViewModel.swift  # Download & load logic
│   │   │
│   │   ├── Storage/
│   │   │   ├── StorageView.swift         # Storage management UI
│   │   │   └── StorageViewModel.swift    # Storage info, cache clearing
│   │   │
│   │   └── Settings/
│   │       └── CombinedSettingsView.swift # Settings & storage UI
│   │
│   ├── Helpers/
│   │   ├── AdaptiveLayout.swift          # Cross-platform layout helpers
│   │   ├── CodeBlockMarkdownRenderer.swift
│   │   ├── InlineMarkdownRenderer.swift
│   │   └── SmartMarkdownRenderer.swift
│   │
│   └── Resources/
│       ├── Assets.xcassets/              # App icons, images
│       ├── RunAnywhereConfig-Debug.plist
│       └── RunAnywhereConfig-Release.plist
│
├── RunAnywhereAITests/                   # Unit tests
├── RunAnywhereAIUITests/                 # UI tests
├── docs/screenshots/                     # App screenshots
├── scripts/
│   └── build_and_run_ios_sample.sh       # Build automation
├── Package.swift                         # SPM dependency manifest
└── README.md                             # This file
# Clone the repository
git clone https://github.com/RunanywhereAI/runanywhere-sdks.git
cd runanywhere-sdks/examples/ios/RunAnywhereAI
# Open in Xcode
open RunAnywhereAI.xcodeproj
Build and run (⌘+R), or use the helper script:
# Build and run on simulator
./scripts/build_and_run_ios_sample.sh simulator "iPhone 16 Pro"
# Build and run on device
./scripts/build_and_run_ios_sample.sh device
The SDK is initialized in RunAnywhereAIApp.swift:
import RunAnywhere
import LlamaCPPRuntime
import ONNXRuntime
@main
struct RunAnywhereAIApp: App {
var body: some Scene {
WindowGroup {
    ContentView()
        .task {
            await initializeSDK()
        }
}
}
private func initializeSDK() async {
do {
    // Initialize SDK (development mode - no API key needed)
    try RunAnywhere.initialize()
} catch {
    print("SDK initialization failed: \(error)")
    return
}
// Register AI backends
LlamaCPP.register(priority: 100) // LLM backend (GGUF models)
ONNX.register(priority: 100) // STT/TTS backend
// Register models
RunAnywhere.registerModel(
id: "smollm2-360m-q8_0",
name: "SmolLM2 360M Q8_0",
url: URL(string: "https://huggingface.co/...")!,
framework: .llamaCpp,
memoryRequirement: 500_000_000
)
}
}
// Download with progress tracking
for try await progress in RunAnywhere.downloadModel("smollm2-360m-q8_0") {
print("Download: \(Int(progress.percentage * 100))%")
}
// Load into memory
try await RunAnywhere.loadModel("smollm2-360m-q8_0")
// Generate with streaming
let result = try await RunAnywhere.generateStream(
prompt,
options: LLMGenerationOptions(maxTokens: 512, temperature: 0.7)
)
for try await token in result.stream {
// Display token in real-time
displayToken(token)
}
// Get final analytics
let metrics = try await result.result.value
print("Speed: \(metrics.performanceMetrics.tokensPerSecond) tok/s")
// Load STT model
try await RunAnywhere.loadSTTModel("sherpa-onnx-whisper-tiny.en")
// Transcribe audio bytes
let transcription = try await RunAnywhere.transcribe(audioData)
print("Transcription: \(transcription.text)")
// Load TTS voice
try await RunAnywhere.loadTTSModel("vits-piper-en_US-lessac-medium")
// Synthesize speech
let result = try await RunAnywhere.synthesize(
text,
options: TTSOptions(rate: 1.0, pitch: 1.0)
)
// result.audioData contains WAV audio bytes
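Since the result holds complete WAV bytes, it can be handed straight to `AVAudioPlayer` for playback. A minimal sketch (the helper function name is illustrative; `audioData` is the `Data` value returned above):

```swift
import AVFoundation

// Play synthesized WAV bytes; AVAudioPlayer decodes WAV Data directly.
func playSynthesizedAudio(_ audioData: Data) throws -> AVAudioPlayer {
    let player = try AVAudioPlayer(data: audioData)
    player.prepareToPlay() // preload buffers to reduce playback latency
    player.play()
    return player // keep a strong reference, or playback stops immediately
}
```

Note that the returned player must be retained (for example, in a view model property) for the duration of playback.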
// Configure voice pipeline
let config = ModularPipelineConfig(
components: [.vad, .stt, .llm, .tts],
stt: VoiceSTTConfig(modelId: "sherpa-onnx-whisper-tiny.en"),
llm: VoiceLLMConfig(modelId: "smollm2-360m-q8_0", maxTokens: 256),
tts: VoiceTTSConfig(modelId: "vits-piper-en_US-lessac-medium")
)
// Process voice through full pipeline
let pipeline = try await RunAnywhere.createVoicePipeline(config: config)
for try await event in pipeline.process(audioStream: audioStream) {
switch event {
case .transcription(let text):
print("User said: \(text)")
case .llmResponse(let response):
print("AI response: \(response)")
case .synthesis(let audio):
playAudio(audio)
default:
    break // ignore any other pipeline events
}
}
AI Chat (ChatInterfaceView.swift)

What it demonstrates:
- Interactive LLM conversations with streaming responses
- Thinking mode (models with <think>...</think> tags)

Key SDK APIs:
- RunAnywhere.generateStream() – Streaming generation
- RunAnywhere.generate() – Non-streaming generation
- RunAnywhere.cancelGeneration() – Stop generation

Speech-to-Text (SpeechToTextView.swift)

What it demonstrates:
- Voice transcription with batch & live modes

Key SDK APIs:
- RunAnywhere.loadSTTModel() – Load Whisper model
- RunAnywhere.transcribe() – Batch transcription

Text-to-Speech (TextToSpeechView.swift)

What it demonstrates:
- Neural voice synthesis with Piper TTS

Key SDK APIs:
- RunAnywhere.loadTTSModel() – Load TTS model
- RunAnywhere.synthesize() – Generate speech audio

Voice Assistant (VoiceAssistantView.swift)

What it demonstrates:
- Full STT → LLM → TTS pipeline with auto-detection

Key SDK APIs:
- Voice Pipeline API (RunAnywhere.createVoicePipeline())

Settings & Storage (CombinedSettingsView.swift)

What it demonstrates:
- Storage usage and model management

Key SDK APIs:
- RunAnywhere.storageInfo() – Get storage details
- RunAnywhere.deleteModel() – Remove downloaded model

Run unit tests:
xcodebuild test -project RunAnywhereAI.xcodeproj -scheme RunAnywhereAI -destination 'platform=iOS Simulator,name=iPhone 16 Pro'

Run UI tests:
xcodebuild test -project RunAnywhereAI.xcodeproj -scheme RunAnywhereAIUITests -destination 'platform=iOS Simulator,name=iPhone 16 Pro'
The app uses os.log for structured logging. Filter by subsystem in Console.app:
subsystem:com.runanywhere.RunAnywhereAI
| Category | Description |
|---|---|
| RunAnywhereAIApp | SDK initialization, model registration |
| LLMViewModel | LLM generation, streaming |
| STTViewModel | Speech transcription |
| TTSViewModel | Speech synthesis |
| VoiceAgentViewModel | Voice pipeline |
| ModelListViewModel | Model downloads, loading |
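Each category in the table maps naturally to an `os.Logger` instance. A representative declaration (the subsystem string matches the Console.app filter above; placing the logger inside the view model is an assumption about the app's layout):

```swift
import os

// One logger per category; messages appear in Console.app under
// subsystem "com.runanywhere.RunAnywhereAI".
private let logger = Logger(
    subsystem: "com.runanywhere.RunAnywhereAI",
    category: "LLMViewModel"
)

func logGenerationStart(prompt: String) {
    // .private redacts the prompt in release logs unless explicitly revealed
    logger.info("Starting generation for prompt: \(prompt, privacy: .private)")
}
```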
| Configuration | Description |
|---|---|
| Debug | Development build with verbose logging |
| Release | Optimized build for distribution |
#if DEBUG
// Development mode - uses local backend, no API key needed
try RunAnywhere.initialize()
#else
// Production mode - requires API key and backend URL
try RunAnywhere.initialize(
apiKey: "your_api_key",
baseURL: "https://api.runanywhere.ai",
environment: .production
)
#endif
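The bundled `RunAnywhereConfig-Debug.plist` and `RunAnywhereConfig-Release.plist` files suggest that these values are loaded from the app bundle rather than hard-coded. A hedged sketch of reading such a plist (the key names and struct are assumptions; inspect the actual plists for the real schema):

```swift
import Foundation

// Hypothetical config shape; the real plist keys may differ.
struct RunAnywhereConfig: Decodable {
    let apiKey: String?
    let baseURL: String?
}

func loadConfig() -> RunAnywhereConfig? {
    #if DEBUG
    let name = "RunAnywhereConfig-Debug"
    #else
    let name = "RunAnywhereConfig-Release"
    #endif
    guard let url = Bundle.main.url(forResource: name, withExtension: "plist"),
          let data = try? Data(contentsOf: url) else { return nil }
    // PropertyListDecoder maps plist entries onto the Decodable struct
    return try? PropertyListDecoder().decode(RunAnywhereConfig.self, from: data)
}
```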
| Model | Size | Memory | Description |
|---|---|---|---|
| SmolLM2 360M Q8_0 | ~400MB | 500MB | Fast, lightweight chat |
| Qwen 2.5 0.5B Q6_K | ~500MB | 600MB | Multilingual, efficient |
| LFM2 350M Q4_K_M | ~200MB | 250MB | LiquidAI, ultra-compact |
| LFM2 350M Q8_0 | ~400MB | 400MB | LiquidAI, higher quality |
| Llama 2 7B Chat Q4_K_M | ~4GB | 4GB | Powerful, larger model |
| Mistral 7B Instruct Q4_K_M | ~4GB | 4GB | High quality responses |
| Model | Size | Description |
|---|---|---|
| Sherpa Whisper Tiny (EN) | ~75MB | English transcription |
| Model | Size | Description |
|---|---|---|
| Piper US English (Medium) | ~65MB | Natural American voice |
| Piper British English (Medium) | ~65MB | British accent |
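Models from the tables above become available to the app once registered at startup, as in `RunAnywhereAIApp.swift`. A hedged sketch for one of the STT entries, mirroring the earlier LLM registration (the `framework` case and memory figure are assumptions; the real registrations live in the app source):

```swift
// Hypothetical registration mirroring the LLM example in RunAnywhereAIApp.swift.
RunAnywhere.registerModel(
    id: "sherpa-onnx-whisper-tiny.en",
    name: "Sherpa Whisper Tiny (EN)",
    url: URL(string: "https://huggingface.co/...")!, // placeholder URL, as above
    framework: .onnx,             // assumption: ONNX backend enum case
    memoryRequirement: 100_000_000 // ~100 MB, rough estimate for a ~75 MB model
)
```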
If you encounter sandbox errors during build:
./scripts/fix_pods_sandbox.sh
For Swift macro issues:
defaults write com.apple.dt.Xcode IDESkipMacroFingerprintValidation -bool YES
See CONTRIBUTING.md for guidelines.
# Fork and clone
git clone https://github.com/YOUR_USERNAME/runanywhere-sdks.git
cd runanywhere-sdks/examples/ios/RunAnywhereAI
# Open in Xcode
open RunAnywhereAI.xcodeproj
# Make changes and test
# Run tests in Xcode (β+U)
# Commit and push
git commit -m "feat: your feature description"
git push origin feature/your-feature
# Open Pull Request
This project is licensed under the Apache License 2.0 - see LICENSE for details.