
RunAnywhere AI - React Native Example



iOS 15.1+ · Android 7.0+ · React Native 0.81 · TypeScript 5.5 · Apache 2.0 License

A production-ready reference app demonstrating the RunAnywhere React Native SDK capabilities for on-device AI. This cross-platform app showcases how to build privacy-first, offline-capable AI features with LLM chat, speech-to-text, text-to-speech, and a complete voice assistant pipeline, all running locally on your device.


🚀 Running This App (Local Development)

Important: This sample app consumes the RunAnywhere React Native SDK as local workspace dependencies. Before opening this project, you must first build the SDK's native libraries.

First-Time Setup

# 1. Navigate to the React Native SDK directory
cd runanywhere-sdks/sdk/runanywhere-react-native

# 2. Run the setup script (~15-20 minutes on first run)
#    This builds the native C++ frameworks/libraries and enables local mode
./scripts/build-react-native.sh --setup

# 3. Navigate to this sample app
cd ../../examples/react-native/RunAnywhereAI

# 4. Install dependencies
npm install

# 5. For iOS: Install pods
cd ios && pod install && cd ..

# 6a. Run on iOS
npx react-native run-ios

# 6b. Or run on Android
npx react-native run-android

# Or open in VS Code / Cursor and run from there

How It Works

This sample app's package.json uses workspace dependencies to reference the local React Native SDK packages:

This Sample App → Local RN SDK packages (sdk/runanywhere-react-native/packages/)
                          ↓
              Local XCFrameworks/JNI libs (in each package's ios/ and android/ directories)
                          ↑
           Built by: ./scripts/build-react-native.sh --setup
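The wiring above typically shows up as local-path dependency entries; the fragment below is a hypothetical sketch (relative paths and the file: form are assumptions — check this app's actual package.json):

```json
{
  "dependencies": {
    "@runanywhere/core": "file:../../../sdk/runanywhere-react-native/packages/core",
    "@runanywhere/llamacpp": "file:../../../sdk/runanywhere-react-native/packages/llamacpp",
    "@runanywhere/onnx": "file:../../../sdk/runanywhere-react-native/packages/onnx"
  }
}
```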

The build-react-native.sh --setup script:

  1. Downloads dependencies (ONNX Runtime, Sherpa-ONNX)
  2. Builds the native C++ libraries from runanywhere-commons
  3. Copies XCFrameworks to packages/*/ios/Binaries/ and packages/*/ios/Frameworks/
  4. Copies JNI .so files to packages/*/android/src/main/jniLibs/
  5. Creates .testlocal marker files (enables local library consumption)

After Modifying the SDK

If you change the SDK's native or TypeScript code, re-run ./scripts/build-react-native.sh to rebuild the native libraries, then reinstall pods (cd ios && pod install) and restart Metro so this app picks up the updated binaries.

Try It Now


Download the app from the App Store or Google Play Store to try it out.


Screenshots

RunAnywhere AI Chat Interface


Features

This sample app demonstrates the full power of the RunAnywhere React Native SDK:

| Feature | Description | SDK Integration |
|---|---|---|
| AI Chat | Interactive LLM conversations with streaming responses | RunAnywhere.generateStream() |
| Conversation Management | Create, switch, and delete chat conversations | ConversationStore |
| Real-time Analytics | Token speed, generation time, inference metrics | Message analytics display |
| Speech-to-Text | Voice transcription with batch & live modes | RunAnywhere.transcribeFile() |
| Text-to-Speech | Neural voice synthesis with Piper TTS | RunAnywhere.synthesize() |
| Voice Assistant | Full STT → LLM → TTS pipeline | Voice pipeline orchestration |
| Model Management | Download, load, and manage multiple AI models | RunAnywhere.downloadModel() |
| Storage Management | View storage usage and delete models | RunAnywhere.getStorageInfo() |
| Offline Support | All features work without internet | On-device inference |
| Cross-Platform | Single codebase for iOS and Android | React Native + Nitrogen/Nitro |

Architecture

The app follows modern React Native architecture patterns with a multi-package SDK structure:

┌──────────────────────────────────────────────────────────────┐
│                    React Native UI Layer                     │
│     Chat / STT / TTS / Voice Assistant / Settings screens    │
├──────────────────────────────────────────────────────────────┤
│              @runanywhere/core (TypeScript API)              │
│    RunAnywhere.initialize(), loadModel(), generate(), ...    │
├──────────────────────────────────────────────────────────────┤
│  @runanywhere/llamacpp | @runanywhere/onnx | Native Bridges  │
│       (LLM/GGUF)       |     (STT/TTS)     |  (JSI/Nitro)    │
├──────────────────────────────────────────────────────────────┤
│                  runanywhere-commons (C++)                   │
│           Core inference engine, model management            │
└──────────────────────────────────────────────────────────────┘

Key Architecture Decisions

- Multi-package SDK: @runanywhere/core exposes the TypeScript API, while backend packages (@runanywhere/llamacpp, @runanywhere/onnx) register independently.
- Native bridging via JSI/Nitro for low-overhead calls into the shared runanywhere-commons C++ engine.
- Zustand (conversationStore) for lightweight chat persistence.


Project Structure

RunAnywhereAI/
├── App.tsx                           # App entry, SDK initialization, model registration
├── index.js                          # React Native entry point
├── package.json                      # Dependencies and scripts
├── tsconfig.json                     # TypeScript configuration
│
├── src/
│   ├── screens/
│   │   ├── ChatScreen.tsx            # LLM chat with streaming & conversation management
│   │   ├── ChatAnalyticsScreen.tsx   # Message analytics and performance metrics
│   │   ├── ConversationListScreen.tsx # Conversation history management
│   │   ├── STTScreen.tsx             # Speech-to-text with batch/live modes
│   │   ├── TTSScreen.tsx             # Text-to-speech synthesis & playback
│   │   ├── VoiceAssistantScreen.tsx  # Full STT → LLM → TTS pipeline
│   │   └── SettingsScreen.tsx        # Model & storage management
│   │
│   ├── components/
│   │   ├── chat/
│   │   │   ├── ChatInput.tsx         # Message input with send button
│   │   │   ├── MessageBubble.tsx     # Message display with analytics
│   │   │   ├── TypingIndicator.tsx   # AI thinking animation
│   │   │   └── index.ts              # Component exports
│   │   ├── common/
│   │   │   ├── ModelStatusBanner.tsx # Shows loaded model and framework
│   │   │   ├── ModelRequiredOverlay.tsx # Prompts model selection
│   │   │   └── index.ts
│   │   └── model/
│   │       ├── ModelSelectionSheet.tsx # Model picker with download progress
│   │       └── index.ts
│   │
│   ├── navigation/
│   │   └── TabNavigator.tsx          # Bottom tab navigation (5 tabs)
│   │
│   ├── stores/
│   │   └── conversationStore.ts      # Zustand store for chat persistence
│   │
│   ├── theme/
│   │   ├── colors.ts                 # Color palette matching iOS design
│   │   ├── typography.ts             # Font styles and text variants
│   │   └── spacing.ts                # Layout constants and dimensions
│   │
│   ├── types/
│   │   ├── chat.ts                   # Message and conversation types
│   │   ├── model.ts                  # Model info and framework types
│   │   ├── settings.ts               # Settings and storage types
│   │   ├── voice.ts                  # Voice pipeline types
│   │   └── index.ts                  # Root navigation types
│   │
│   └── utils/
│       └── AudioService.ts           # Native audio recording abstraction
│
├── ios/
│   ├── RunAnywhereAI/
│   │   ├── AppDelegate.swift         # iOS app delegate
│   │   ├── NativeAudioModule.swift   # Native audio recording/playback
│   │   └── Images.xcassets/          # iOS app icons and images
│   ├── Podfile                       # CocoaPods dependencies
│   └── RunAnywhereAI.xcworkspace/    # Xcode workspace
│
└── android/
    ├── app/
    │   ├── src/main/
    │   │   ├── java/.../MainActivity.kt
    │   │   ├── res/                   # Android resources
    │   │   └── AndroidManifest.xml
    │   └── build.gradle
    └── settings.gradle

Quick Start

Prerequisites

- Node.js and npm
- Xcode with CocoaPods (the app targets iOS 15.1+)
- Android Studio with an Android 7.0+ (API 24) device or emulator

Clone & Install

# Clone the repository
git clone https://github.com/RunanywhereAI/runanywhere-sdks.git
cd runanywhere-sdks/examples/react-native/RunAnywhereAI

# Install JavaScript dependencies
npm install

# Install iOS dependencies
cd ios && pod install && cd ..

Run on iOS

# Start Metro bundler
npm start

# In another terminal, run on iOS
npx react-native run-ios

# Or run on a specific simulator
npx react-native run-ios --simulator="iPhone 15 Pro"

Run on Android

# Start Metro bundler
npm start

# In another terminal, run on Android
npx react-native run-android

Run via Command Line

# iOS - Build and run
npx react-native run-ios --mode Release

# Android - Build and run
npx react-native run-android --mode release

SDK Integration Examples

Initialize the SDK

The SDK is initialized in App.tsx with a two-phase initialization pattern:

import { RunAnywhere, SDKEnvironment, ModelCategory } from '@runanywhere/core';
import { LlamaCPP } from '@runanywhere/llamacpp';
import { ONNX, ModelArtifactType } from '@runanywhere/onnx';

// Phase 1: Initialize SDK
await RunAnywhere.initialize({
  apiKey: '',  // Empty in development mode
  baseURL: 'https://api.runanywhere.ai',
  environment: SDKEnvironment.Development,
});

// Phase 2: Register backends and models
LlamaCPP.register();
await LlamaCPP.addModel({
  id: 'smollm2-360m-q8_0',
  name: 'SmolLM2 360M Q8_0',
  url: 'https://huggingface.co/prithivMLmods/SmolLM2-360M-GGUF/...',
  memoryRequirement: 500_000_000,
});

ONNX.register();
await ONNX.addModel({
  id: 'sherpa-onnx-whisper-tiny.en',
  name: 'Sherpa Whisper Tiny (ONNX)',
  url: 'https://github.com/RunanywhereAI/sherpa-onnx/releases/...',
  modality: ModelCategory.SpeechRecognition,
  artifactType: ModelArtifactType.TarGzArchive,
  memoryRequirement: 75_000_000,
});

Download & Load a Model

// Download with progress tracking
await RunAnywhere.downloadModel(modelId, (progress) => {
  console.log(`Download: ${(progress.progress * 100).toFixed(1)}%`);
});

// Load LLM model into memory
const success = await RunAnywhere.loadModel(modelPath);

// Check if model is loaded
const isLoaded = await RunAnywhere.isModelLoaded();

Stream Text Generation

// Generate with streaming
const streamResult = await RunAnywhere.generateStream(prompt, {
  maxTokens: 1000,
  temperature: 0.7,
});

let fullResponse = '';
for await (const token of streamResult.stream) {
  fullResponse += token;
  // Update UI in real-time
  updateMessage(fullResponse);
}

// Get final metrics
const result = await streamResult.result;
console.log(`Speed: ${result.tokensPerSecond} tok/s`);
console.log(`Latency: ${result.latencyMs}ms`);

Non-Streaming Generation

const result = await RunAnywhere.generate(prompt, {
  maxTokens: 256,
  temperature: 0.7,
});

console.log('Response:', result.text);
console.log('Tokens:', result.tokensUsed);
console.log('Model:', result.modelUsed);

Speech-to-Text

// Load STT model
await RunAnywhere.loadSTTModel(modelPath, 'whisper');

// Check if loaded
const isLoaded = await RunAnywhere.isSTTModelLoaded();

// Transcribe audio file
const result = await RunAnywhere.transcribeFile(audioPath, {
  language: 'en',
});

console.log('Transcription:', result.text);
console.log('Confidence:', result.confidence);

Text-to-Speech

// Load TTS voice model
await RunAnywhere.loadTTSModel(modelPath, 'piper');

// Synthesize speech
const result = await RunAnywhere.synthesize(text, {
  voice: 'default',
  rate: 1.0,
  pitch: 1.0,
  volume: 1.0,
});

// result.audio contains base64-encoded float32 PCM
// result.sampleRate, result.numSamples, result.duration
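Since the synthesized audio arrives base64-encoded, a small decoder turns it into samples for playback or waveform display. This is a sketch (the helper name is ours), assuming little-endian float32 PCM as noted above and a Buffer implementation (Node, or the "buffer" polyfill in React Native):

```typescript
// Decode the base64-encoded float32 PCM returned by synthesize()
// into a Float32Array of audio samples.
export function decodePcm(audioBase64: string): Float32Array {
  const buf = Buffer.from(audioBase64, 'base64');
  // Each sample is 4 bytes; view the buffer's backing memory directly.
  return new Float32Array(buf.buffer, buf.byteOffset, buf.byteLength / 4);
}
```

From here the Float32Array (together with result.sampleRate) can be handed to the native audio module for playback.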

Voice Pipeline (STT → LLM → TTS)

// 1. Record audio using AudioService
const audioPath = await AudioService.startRecording();

// 2. Stop and get audio
const { uri } = await AudioService.stopRecording();

// 3. Transcribe
const sttResult = await RunAnywhere.transcribeFile(uri, { language: 'en' });

// 4. Generate LLM response
const llmResult = await RunAnywhere.generate(sttResult.text, {
  maxTokens: 500,
  temperature: 0.7,
});

// 5. Synthesize speech
const ttsResult = await RunAnywhere.synthesize(llmResult.text);

// 6. Play audio (using native audio module)

Model Management

// Get available models
const models = await RunAnywhere.getAvailableModels();
const downloaded = await RunAnywhere.getDownloadedModels();

// Get storage info
const storage = await RunAnywhere.getStorageInfo();
console.log('Used:', storage.usedSpace);
console.log('Free:', storage.freeSpace);
console.log('Models:', storage.modelsSize);

// Delete a model
await RunAnywhere.deleteModel(modelId);

// Clear cache
await RunAnywhere.clearCache();
await RunAnywhere.cleanTempFiles();

Key Screens Explained

1. Chat Screen (ChatScreen.tsx)

What it demonstrates: interactive LLM chat with streaming responses, conversation creation/switching, and per-message analytics.

Key SDK APIs: RunAnywhere.loadModel(), RunAnywhere.generateStream()

2. Speech-to-Text Screen (STTScreen.tsx)

What it demonstrates: on-device voice transcription in batch and live modes.

Key SDK APIs: RunAnywhere.loadSTTModel(), RunAnywhere.transcribeFile()

3. Text-to-Speech Screen (TTSScreen.tsx)

What it demonstrates: neural voice synthesis with Piper and local audio playback.

Key SDK APIs: RunAnywhere.loadTTSModel(), RunAnywhere.synthesize()

4. Voice Assistant Screen (VoiceAssistantScreen.tsx)

What it demonstrates: the full STT → LLM → TTS pipeline, from AudioService recording to spoken response.

Key SDK APIs: RunAnywhere.transcribeFile(), RunAnywhere.generate(), RunAnywhere.synthesize()

5. Settings Screen (SettingsScreen.tsx)

What it demonstrates: model download, loading, and deletion, plus storage usage display.

Key SDK APIs: RunAnywhere.downloadModel(), RunAnywhere.getStorageInfo(), RunAnywhere.deleteModel()


Development

Run Linting

# ESLint check
npm run lint

# ESLint with auto-fix
npm run lint:fix

Run Type Checking

npm run typecheck

Run Formatting

# Check formatting
npm run format

# Auto-fix formatting
npm run format:fix

Check for Unused Code

npm run unused

Clean Build

# Full clean (removes node_modules and Pods)
npm run clean

# Just reinstall pods
npm run pod-install

Debugging

Enable Verbose Logging

The app uses console.warn with tags for debugging:

# iOS: View logs in Xcode console or use:
npx react-native log-ios

# Android: View logs with:
npx react-native log-android

# Or filter with adb:
adb logcat -s ReactNative:D

Common Log Tags

| Tag | Description |
|---|---|
| [App] | SDK initialization, model registration |
| [ChatScreen] | LLM generation, model loading |
| [STTScreen] | Speech transcription, audio recording |
| [TTSScreen] | Speech synthesis, audio playback |
| [VoiceAssistant] | Voice pipeline orchestration |
| [Settings] | Storage info, model management |
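These tags can be produced with a tiny helper; this is a sketch of the convention (the app may simply inline its console.warn calls):

```typescript
// Emit a tagged log line matching the [Tag] convention used in the app.
export function logTag(tag: string, ...parts: unknown[]): void {
  console.warn(`[${tag}]`, ...parts);
}
```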

Metro Bundler Issues

# Reset Metro cache
npx react-native start --reset-cache

# Clear watchman
watchman watch-del-all

Configuration

Environment Variables

For production builds, configure via environment variables:

# Create .env file (git-ignored)
RUNANYWHERE_API_KEY=your-api-key
RUNANYWHERE_BASE_URL=https://api.runanywhere.ai
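Wiring those variables into SDK initialization can be done with a small helper. This is a hypothetical sketch (resolveConfig is ours, not an SDK API), assuming the env values are injected at build time, e.g. via a library such as react-native-config:

```typescript
// Hypothetical helper: map injected env vars onto the SDK init options,
// falling back to development defaults when a variable is absent.
interface RunAnywhereConfig {
  apiKey: string;
  baseURL: string;
}

export function resolveConfig(
  env: Record<string, string | undefined>,
): RunAnywhereConfig {
  return {
    apiKey: env.RUNANYWHERE_API_KEY ?? '', // empty string = development mode
    baseURL: env.RUNANYWHERE_BASE_URL ?? 'https://api.runanywhere.ai',
  };
}
```

The result can then be spread into RunAnywhere.initialize() alongside the desired SDKEnvironment.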

iOS Specific

Android Specific


Supported Models

LLM Models (LlamaCpp/GGUF)

Model Size Memory Description
SmolLM2 360M Q8_0 ~400MB 500MB Fast, lightweight chat
Qwen 2.5 0.5B Q6_K ~500MB 600MB Multilingual, efficient
LFM2 350M Q4_K_M ~200MB 250MB LiquidAI, ultra-compact
LFM2 350M Q8_0 ~350MB 400MB LiquidAI, higher quality
Llama 2 7B Chat Q4_K_M ~4GB 4GB Powerful, larger model
Mistral 7B Instruct Q4_K_M ~4GB 4GB High quality responses

STT Models (ONNX/Whisper)

Model Size Description
Sherpa Whisper Tiny (EN) ~75MB English transcription

TTS Models (ONNX/Piper)

Model Size Description
Piper US English (Medium) ~65MB Natural American voice
Piper British English (Medium) ~65MB British accent

Known Limitations


Contributing

See CONTRIBUTING.md for guidelines.

Development Setup

# Fork and clone
git clone https://github.com/YOUR_USERNAME/runanywhere-sdks.git
cd runanywhere-sdks/examples/react-native/RunAnywhereAI

# Install dependencies
npm install
cd ios && pod install && cd ..

# Create feature branch
git checkout -b feature/your-feature

# Make changes and test
npm run lint
npm run typecheck
npm run ios  # or npm run android

# Commit and push
git commit -m "feat: your feature description"
git push origin feature/your-feature

# Open Pull Request

License

This project is licensed under the Apache License 2.0 - see LICENSE for details.


Support