A production-ready reference app demonstrating the RunAnywhere React Native SDK capabilities for on-device AI. This cross-platform app showcases how to build privacy-first, offline-capable AI features with LLM chat, speech-to-text, text-to-speech, and a complete voice assistant pipeline, all running locally on your device.
Important: This sample app consumes the RunAnywhere React Native SDK as local workspace dependencies. Before opening this project, you must first build the SDK's native libraries.
# 1. Navigate to the React Native SDK directory
cd runanywhere-sdks/sdk/runanywhere-react-native
# 2. Run the setup script (~15-20 minutes on first run)
# This builds the native C++ frameworks/libraries and enables local mode
./scripts/build-react-native.sh --setup
# 3. Navigate to this sample app
cd ../../examples/react-native/RunAnywhereAI
# 4. Install dependencies
npm install
# 5. For iOS: Install pods
cd ios && pod install && cd ..
# 6a. Run on iOS
npx react-native run-ios
# 6b. Or run on Android
npx react-native run-android
# Or open in VS Code / Cursor and run from there
This sample app's package.json uses workspace dependencies to reference the local React Native SDK packages:
This Sample App → Local RN SDK packages (sdk/runanywhere-react-native/packages/)
                        ↓
Local XCFrameworks/JNI libs (in each package's ios/ and android/ directories)
                        ↓
Built by: ./scripts/build-react-native.sh --setup
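In practice, the workspace wiring looks roughly like the following package.json excerpt. The package names match the imports used in App.tsx; the relative paths and the use of `file:` references are illustrative assumptions, not verified against the actual manifest:

```json
{
  "dependencies": {
    "@runanywhere/core": "file:../../../sdk/runanywhere-react-native/packages/core",
    "@runanywhere/llamacpp": "file:../../../sdk/runanywhere-react-native/packages/llamacpp",
    "@runanywhere/onnx": "file:../../../sdk/runanywhere-react-native/packages/onnx"
  }
}
```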
The build-react-native.sh --setup script:

- Builds the runanywhere-commons C++ core
- Copies the XCFrameworks into packages/*/ios/Binaries/ and packages/*/ios/Frameworks/
- Copies the .so files to packages/*/android/src/main/jniLibs/
- Creates .testlocal marker files (enables local library consumption)

To rebuild after changing the native code (e.g., runanywhere-commons):
cd sdk/runanywhere-react-native
./scripts/build-react-native.sh --local --rebuild-commons
Download the app from the App Store or Google Play Store to try it out.
This sample app demonstrates the full power of the RunAnywhere React Native SDK:
| Feature | Description | SDK Integration |
|---|---|---|
| AI Chat | Interactive LLM conversations with streaming responses | RunAnywhere.generateStream() |
| Conversation Management | Create, switch, and delete chat conversations | ConversationStore |
| Real-time Analytics | Token speed, generation time, inference metrics | Message analytics display |
| Speech-to-Text | Voice transcription with batch & live modes | RunAnywhere.transcribeFile() |
| Text-to-Speech | Neural voice synthesis with Piper TTS | RunAnywhere.synthesize() |
| Voice Assistant | Full STT → LLM → TTS pipeline | Voice pipeline orchestration |
| Model Management | Download, load, and manage multiple AI models | RunAnywhere.downloadModel() |
| Storage Management | View storage usage and delete models | RunAnywhere.getStorageInfo() |
| Offline Support | All features work without internet | On-device inference |
| Cross-Platform | Single codebase for iOS and Android | React Native + Nitrogen/Nitro |
The app follows modern React Native architecture patterns with a multi-package SDK structure:
┌───────────────────────────────────────────────────────────────────┐
│                       React Native UI Layer                       │
│      Chat · STT · TTS · Voice Assistant · Settings screens        │
├───────────────────────────────────────────────────────────────────┤
│                @runanywhere/core (TypeScript API)                 │
│      RunAnywhere.initialize(), loadModel(), generate(), etc.      │
├─────────────────────┬─────────────────────┬───────────────────────┤
│   @runanywhere/     │   @runanywhere/     │    Native Bridges     │
│   llamacpp          │   onnx              │    (JSI/Nitro)        │
│   (LLM/GGUF)        │   (STT/TTS)         │                       │
├─────────────────────┴─────────────────────┴───────────────────────┤
│                    runanywhere-commons (C++)                      │
│             Core inference engine, model management               │
└───────────────────────────────────────────────────────────────────┘
RunAnywhereAI/
├── App.tsx                  # App entry, SDK initialization, model registration
├── index.js                 # React Native entry point
├── package.json             # Dependencies and scripts
├── tsconfig.json            # TypeScript configuration
│
├── src/
│   ├── screens/
│   │   ├── ChatScreen.tsx              # LLM chat with streaming & conversation management
│   │   ├── ChatAnalyticsScreen.tsx     # Message analytics and performance metrics
│   │   ├── ConversationListScreen.tsx  # Conversation history management
│   │   ├── STTScreen.tsx               # Speech-to-text with batch/live modes
│   │   ├── TTSScreen.tsx               # Text-to-speech synthesis & playback
│   │   ├── VoiceAssistantScreen.tsx    # Full STT → LLM → TTS pipeline
│   │   └── SettingsScreen.tsx          # Model & storage management
│   │
│   ├── components/
│   │   ├── chat/
│   │   │   ├── ChatInput.tsx           # Message input with send button
│   │   │   ├── MessageBubble.tsx       # Message display with analytics
│   │   │   ├── TypingIndicator.tsx     # AI thinking animation
│   │   │   └── index.ts                # Component exports
│   │   ├── common/
│   │   │   ├── ModelStatusBanner.tsx    # Shows loaded model and framework
│   │   │   ├── ModelRequiredOverlay.tsx # Prompts model selection
│   │   │   └── index.ts
│   │   └── model/
│   │       ├── ModelSelectionSheet.tsx  # Model picker with download progress
│   │       └── index.ts
│   │
│   ├── navigation/
│   │   └── TabNavigator.tsx            # Bottom tab navigation (5 tabs)
│   │
│   ├── stores/
│   │   └── conversationStore.ts        # Zustand store for chat persistence
│   │
│   ├── theme/
│   │   ├── colors.ts                   # Color palette matching iOS design
│   │   ├── typography.ts               # Font styles and text variants
│   │   └── spacing.ts                  # Layout constants and dimensions
│   │
│   ├── types/
│   │   ├── chat.ts                     # Message and conversation types
│   │   ├── model.ts                    # Model info and framework types
│   │   ├── settings.ts                 # Settings and storage types
│   │   ├── voice.ts                    # Voice pipeline types
│   │   └── index.ts                    # Root navigation types
│   │
│   └── utils/
│       └── AudioService.ts             # Native audio recording abstraction
│
├── ios/
│   ├── RunAnywhereAI/
│   │   ├── AppDelegate.swift           # iOS app delegate
│   │   ├── NativeAudioModule.swift     # Native audio recording/playback
│   │   └── Images.xcassets/            # iOS app icons and images
│   ├── Podfile                         # CocoaPods dependencies
│   └── RunAnywhereAI.xcworkspace/      # Xcode workspace
│
└── android/
    ├── app/
    │   ├── src/main/
    │   │   ├── java/.../MainActivity.kt
    │   │   ├── res/                    # Android resources
    │   │   └── AndroidManifest.xml
    │   └── build.gradle
    └── settings.gradle
# Clone the repository
git clone https://github.com/RunanywhereAI/runanywhere-sdks.git
cd runanywhere-sdks/examples/react-native/RunAnywhereAI
# Install JavaScript dependencies
npm install
# Install iOS dependencies
cd ios && pod install && cd ..
# Start Metro bundler
npm start
# In another terminal, run on iOS
npx react-native run-ios
# Or run on a specific simulator
npx react-native run-ios --simulator="iPhone 15 Pro"
# Start Metro bundler
npm start
# In another terminal, run on Android
npx react-native run-android
# iOS - Build and run
npx react-native run-ios --mode Release
# Android - Build and run
npx react-native run-android --mode release
The SDK is initialized in App.tsx with a two-phase initialization pattern:
import { RunAnywhere, SDKEnvironment, ModelCategory } from '@runanywhere/core';
import { LlamaCPP } from '@runanywhere/llamacpp';
import { ONNX, ModelArtifactType } from '@runanywhere/onnx';
// Phase 1: Initialize SDK
await RunAnywhere.initialize({
apiKey: '', // Empty in development mode
baseURL: 'https://api.runanywhere.ai',
environment: SDKEnvironment.Development,
});
// Phase 2: Register backends and models
LlamaCPP.register();
await LlamaCPP.addModel({
id: 'smollm2-360m-q8_0',
name: 'SmolLM2 360M Q8_0',
url: 'https://huggingface.co/prithivMLmods/SmolLM2-360M-GGUF/...',
memoryRequirement: 500_000_000,
});
ONNX.register();
await ONNX.addModel({
id: 'sherpa-onnx-whisper-tiny.en',
name: 'Sherpa Whisper Tiny (ONNX)',
url: 'https://github.com/RunanywhereAI/sherpa-onnx/releases/...',
modality: ModelCategory.SpeechRecognition,
artifactType: ModelArtifactType.TarGzArchive,
memoryRequirement: 75_000_000,
});
// Download with progress tracking
await RunAnywhere.downloadModel(modelId, (progress) => {
console.log(`Download: ${(progress.progress * 100).toFixed(1)}%`);
});
// Load LLM model into memory
const success = await RunAnywhere.loadModel(modelPath);
// Check if model is loaded
const isLoaded = await RunAnywhere.isModelLoaded();
// Generate with streaming
const streamResult = await RunAnywhere.generateStream(prompt, {
maxTokens: 1000,
temperature: 0.7,
});
let fullResponse = '';
for await (const token of streamResult.stream) {
fullResponse += token;
// Update UI in real-time
updateMessage(fullResponse);
}
// Get final metrics
const result = await streamResult.result;
console.log(`Speed: ${result.tokensPerSecond} tok/s`);
console.log(`Latency: ${result.latencyMs}ms`);
const result = await RunAnywhere.generate(prompt, {
maxTokens: 256,
temperature: 0.7,
});
console.log('Response:', result.text);
console.log('Tokens:', result.tokensUsed);
console.log('Model:', result.modelUsed);
// Load STT model
await RunAnywhere.loadSTTModel(modelPath, 'whisper');
// Check if loaded
const isLoaded = await RunAnywhere.isSTTModelLoaded();
// Transcribe audio file
const result = await RunAnywhere.transcribeFile(audioPath, {
language: 'en',
});
console.log('Transcription:', result.text);
console.log('Confidence:', result.confidence);
// Load TTS voice model
await RunAnywhere.loadTTSModel(modelPath, 'piper');
// Synthesize speech
const result = await RunAnywhere.synthesize(text, {
voice: 'default',
rate: 1.0,
pitch: 1.0,
volume: 1.0,
});
// result.audio contains base64-encoded float32 PCM
// result.sampleRate, result.numSamples, result.duration
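Since result.audio is base64-encoded float32 PCM, a small decoding helper is useful before playback or waveform display. This is a sketch, not an SDK API; it uses Node's Buffer for the base64 step (in React Native you would substitute a base64 polyfill):

```typescript
// Hypothetical helper, not part of the SDK: decode base64 float32 PCM
// (as returned by RunAnywhere.synthesize()) into a Float32Array.
function decodePcmBase64(audio: string): Float32Array {
  const bytes = Buffer.from(audio, 'base64');
  // Copy into a fresh, aligned buffer before reinterpreting as float32.
  const aligned = new Uint8Array(bytes);
  return new Float32Array(aligned.buffer);
}

// Duration follows directly from sample count and sample rate.
function pcmDurationSeconds(samples: Float32Array, sampleRate: number): number {
  return samples.length / sampleRate;
}
```

Given result.sampleRate from synthesize(), pcmDurationSeconds(decodePcmBase64(result.audio), result.sampleRate) should agree with result.duration.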
// 1. Record audio using AudioService
const audioPath = await AudioService.startRecording();
// 2. Stop and get audio
const { uri } = await AudioService.stopRecording();
// 3. Transcribe
const sttResult = await RunAnywhere.transcribeFile(uri, { language: 'en' });
// 4. Generate LLM response
const llmResult = await RunAnywhere.generate(sttResult.text, {
maxTokens: 500,
temperature: 0.7,
});
// 5. Synthesize speech
const ttsResult = await RunAnywhere.synthesize(llmResult.text);
// 6. Play audio (using native audio module)
// Get available models
const models = await RunAnywhere.getAvailableModels();
const downloaded = await RunAnywhere.getDownloadedModels();
// Get storage info
const storage = await RunAnywhere.getStorageInfo();
console.log('Used:', storage.usedSpace);
console.log('Free:', storage.freeSpace);
console.log('Models:', storage.modelsSize);
// Delete a model
await RunAnywhere.deleteModel(modelId);
// Clear cache
await RunAnywhere.clearCache();
await RunAnywhere.cleanTempFiles();
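The storage fields above are raw byte counts (an assumption based on the field names), so the Settings screen needs a display formatter. A minimal sketch (formatBytes is hypothetical, not an SDK helper):

```typescript
// Hypothetical helper: format a raw byte count (e.g. storage.usedSpace)
// as a human-readable string for display in the Settings screen.
function formatBytes(bytes: number): string {
  if (bytes < 1024) return `${bytes} B`;
  const units = ['KB', 'MB', 'GB', 'TB'];
  let value = bytes;
  let unit = 'B';
  for (const u of units) {
    value /= 1024;
    unit = u;
    if (value < 1024) break; // stop at the first unit that fits
  }
  return `${value.toFixed(1)} ${unit}`;
}
```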
AI Chat (ChatScreen.tsx)

Key SDK APIs:

- RunAnywhere.generateStream() – Streaming generation
- RunAnywhere.loadModel() – Load LLM model
- RunAnywhere.isModelLoaded() – Check model status
- RunAnywhere.getAvailableModels() – List models

Speech-to-Text (STTScreen.tsx)

Key SDK APIs:

- RunAnywhere.loadSTTModel() – Load Whisper model
- RunAnywhere.isSTTModelLoaded() – Check STT model status
- RunAnywhere.transcribeFile() – Transcribe audio file
- AudioService

Text-to-Speech (TTSScreen.tsx)

Key SDK APIs:

- RunAnywhere.loadTTSModel() – Load TTS model
- RunAnywhere.isTTSModelLoaded() – Check TTS model status
- RunAnywhere.synthesize() – Generate speech audio
- NativeAudioModule (iOS)

Voice Assistant (VoiceAssistantScreen.tsx)

Key SDK APIs:

- AudioService.startRecording() / stopRecording()

Settings (SettingsScreen.tsx)

Key SDK APIs:

- RunAnywhere.getAvailableModels() – List all models
- RunAnywhere.getDownloadedModels() – List downloaded models
- RunAnywhere.downloadModel() – Download with progress
- RunAnywhere.deleteModel() – Remove model
- RunAnywhere.getStorageInfo() – Storage metrics
- RunAnywhere.clearCache() – Clear temporary files

# ESLint check
npm run lint
# ESLint with auto-fix
npm run lint:fix
npm run typecheck
# Check formatting
npm run format
# Auto-fix formatting
npm run format:fix
npm run unused
# Full clean (removes node_modules and Pods)
npm run clean
# Just reinstall pods
npm run pod-install
The app uses console.warn with tags for debugging:
# iOS: View logs in Xcode console or use:
npx react-native log-ios
# Android: View logs with:
npx react-native log-android
# Or filter with adb:
adb logcat -s ReactNative:D
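The tag prefixes in the table below typically come from a small wrapper around console.warn; a minimal sketch of that pattern (createLogger is hypothetical, not an SDK export):

```typescript
// Hypothetical sketch of the tagged console.warn logging pattern.
function createLogger(tag: string) {
  return (message: string, ...args: unknown[]) =>
    console.warn(`[${tag}] ${message}`, ...args);
}

const log = createLogger('ChatScreen');
log('Model loaded'); // warns with the "[ChatScreen]" prefix
```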
| Tag | Description |
|---|---|
| [App] | SDK initialization, model registration |
| [ChatScreen] | LLM generation, model loading |
| [STTScreen] | Speech transcription, audio recording |
| [TTSScreen] | Speech synthesis, audio playback |
| [VoiceAssistant] | Voice pipeline orchestration |
| [Settings] | Storage info, model management |
# Reset Metro cache
npx react-native start --reset-cache
# Clear watchman
watchman watch-del-all
For production builds, configure via environment variables:
# Create .env file (git-ignored)
RUNANYWHERE_API_KEY=your-api-key
RUNANYWHERE_BASE_URL=https://api.runanywhere.ai
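A sketch of wiring those variables into initialization, with development fallbacks. resolveSDKConfig is hypothetical, and how the env values reach JavaScript (e.g. via react-native-config) depends on your setup:

```typescript
interface SDKConfig {
  apiKey: string;
  baseURL: string;
}

// Hypothetical helper: resolve the SDK config from environment-style values,
// falling back to the development defaults shown earlier for App.tsx.
function resolveSDKConfig(env: Record<string, string | undefined>): SDKConfig {
  return {
    apiKey: env.RUNANYWHERE_API_KEY ?? '', // empty string = development mode
    baseURL: env.RUNANYWHERE_BASE_URL ?? 'https://api.runanywhere.ai',
  };
}
```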
| Model | Size | Memory | Description |
|---|---|---|---|
| SmolLM2 360M Q8_0 | ~400MB | 500MB | Fast, lightweight chat |
| Qwen 2.5 0.5B Q6_K | ~500MB | 600MB | Multilingual, efficient |
| LFM2 350M Q4_K_M | ~200MB | 250MB | LiquidAI, ultra-compact |
| LFM2 350M Q8_0 | ~350MB | 400MB | LiquidAI, higher quality |
| Llama 2 7B Chat Q4_K_M | ~4GB | 4GB | Powerful, larger model |
| Mistral 7B Instruct Q4_K_M | ~4GB | 4GB | High quality responses |
| Model | Size | Description |
|---|---|---|
| Sherpa Whisper Tiny (EN) | ~75MB | English transcription |
| Model | Size | Description |
|---|---|---|
| Piper US English (Medium) | ~65MB | Natural American voice |
| Piper British English (Medium) | ~65MB | British accent |
See CONTRIBUTING.md for guidelines.
# Fork and clone
git clone https://github.com/YOUR_USERNAME/runanywhere-sdks.git
cd runanywhere-sdks/examples/react-native/RunAnywhereAI
# Install dependencies
npm install
cd ios && pod install && cd ..
# Create feature branch
git checkout -b feature/your-feature
# Make changes and test
npm run lint
npm run typecheck
npm run ios # or npm run android
# Commit and push
git commit -m "feat: your feature description"
git push origin feature/your-feature
# Open Pull Request
This project is licensed under the Apache License 2.0 - see LICENSE for details.