Drop our SDK into your iOS, Android, Web, React Native, or Flutter app. A 30-second selfie video becomes heart rate, HRV, SpO₂, respiratory rate, stress, and 25+ other markers — all computed on-device. Nothing leaves the phone except the final JSON.
Every SDK exposes the same scan.start() / scan.onResults() surface: port once, ship everywhere.
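On the Web, for example, that shared surface looks roughly like this. A minimal sketch: only scan.start() and scan.onResults() come from the line above; the package name "smartscan-pro", the options object, and the result field names are assumptions, not documented API.

// Web · TypeScript (sketch; import path and fields are assumptions)
import { SSPScan } from "smartscan-pro"; // hypothetical package name

const scan = new SSPScan({ apiKey: "ssp_live_..." });

// Register the results callback before starting the scan.
scan.onResults((result) => {
  console.log(result.heartRate); // e.g. 72 bpm
  console.log(result.hrv);       // e.g. 48 ms
  console.log(result.spo2);      // e.g. 97 %
});

// start() attaches the camera preview (getUserMedia) to a container element.
await scan.start(document.getElementById("preview")!);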
Swift Package & CocoaPods. iOS 14+. ARM64. Uses AVFoundation, CoreML, Accelerate. 14 MB binary.
Swift · SPM · PodGradle / Maven Central. API 24+. ARM64 & x86_64. CameraX + TFLite + NNAPI. 16 MB binary.
Kotlin · AARWASM + WebGL compute. Chrome, Safari, Edge, Firefox. getUserMedia camera. 2 MB gzipped.
TypeScript · npmTurbo Module bridge. iOS + Android shared API. TypeScript types included. Hermes compatible.
RN 0.71+Dart plugin, pub.dev published. Method-channel bridge. Null-safe, AndroidX, iOS simulator supported.
Flutter 3.xPrefer the cloud path? POST a video / image and we return the same JSON. Useful for kiosks & batch.
REST · OpenAPINo ML expertise required. No backend to deploy. Your users open the camera, hold still, and 30 seconds later you get clean vitals.
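For the cloud path, the request is an authenticated upload of the recorded clip. A hedged sketch in TypeScript: the endpoint URL, form field name, and auth header format are assumptions, so check the OpenAPI spec for the real contract.

// Cloud · TypeScript (sketch; endpoint and field names are illustrative)
declare const videoBlob: Blob; // a recorded 30-second selfie clip

const form = new FormData();
form.append("video", videoBlob, "selfie.mp4");

const res = await fetch("https://api.smartscanpro.example/v1/scan", {
  method: "POST",
  headers: { Authorization: "Bearer ssp_live_..." },
  body: form,
});

const vitals = await res.json(); // same JSON shape as the on-device SDKs
console.log(vitals.heartRate, vitals.spo2);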
// iOS · Swift
import SmartScanPro

let scan = SSPScan(apiKey: "ssp_live_...")
scan.start(in: view) { result in
    print(result.heartRate)    // 72 bpm
    print(result.hrv)          // 48 ms
    print(result.spo2)         // 97 %
    print(result.respiration)  // 16 rpm
    print(result.stress)       // "low"
}
Small binaries, low CPU, no network dependency during the scan. Inference is fully local.
The free tier gives you unlimited on-device scans for up to 1,000 unique users. Upgrade once you cross that threshold.