The SmartScanPro face-scan API reads subtle colour changes in your skin from any front-camera video to estimate heart rate, HRV, respiratory rate, SpO₂, stress, wellness-band BP, heart age, and more — without a cuff, strap, or wearable.
Every marker comes with a per-field confidence score. Markers labelled wellness are indicative only — not a substitute for a certified medical device.
The entire pipeline runs on-device. Only the final structured output is transmitted — never raw video or face imagery.
1. The user opens your app, grants camera permission, and stays still for 30 seconds. The SDK handles lighting detection and face alignment.
2. The on-device pipeline detects the face, tracks regions of interest (cheeks, forehead), and isolates the rPPG waveform from skin pixels at 30 fps.
3. Heart rate, HRV, respiratory rate, SpO₂, stress, and BP are derived via signal processing and a compact on-device neural net, each with a per-field confidence score.
4. Only the final JSON lands at /api/v1/face-scan/results: your webhook fires, a FHIR Bundle is emitted, and the raw video is discarded.
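On the receiving end, a backend will typically gate markers on their confidence scores before surfacing them. A minimal sketch, assuming the per-field `{ value, unit, confidence }` shape shown in the SDK example; `filterByConfidence` is a hypothetical helper, not part of the SDK:

```javascript
// Keep only markers whose confidence clears a threshold; markers that
// carry no confidence score are passed through unchanged.
// (Field names follow the SDK example; the exact schema is illustrative.)
function filterByConfidence(result, min = 0.8) {
  const kept = {};
  for (const [name, marker] of Object.entries(result.data)) {
    if (marker.confidence === undefined || marker.confidence >= min) {
      kept[name] = marker;
    }
  }
  return kept;
}
```

The threshold is your call per use case: a wellness dashboard might accept 0.6, while a clinical triage flow would demand more.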
With every heartbeat, blood volume in the capillaries of your face fluctuates. That fluctuation causes a sub-perceptible colour shift in skin pixels — especially in the green channel.
Our pipeline tracks this colour signal across dozens of face regions, filters out motion and lighting noise, and reconstructs the pulsatile waveform. From that waveform, standard signal-processing methods yield heart rate, HRV, and respiratory rate; a downstream neural net extends to stress, blood-pressure proxies, and composite wellness scores.
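The last step of that reconstruction can be sketched in a few lines: given a mean green-channel time series, the dominant frequency in the plausible heart-rate band is the pulse. This toy version scans the band with a direct DFT and skips the ROI tracking, motion/lighting filtering, and neural-net stages the real pipeline applies; all names here are illustrative:

```javascript
// Estimate heart rate in bpm from a mean green-channel time series
// sampled at `fps`, by finding the dominant frequency in the plausible
// heart-rate band (0.7–3 Hz, i.e. 42–180 bpm).
function estimateHeartRateBpm(green, fps = 30) {
  const n = green.length;
  const mean = green.reduce((a, b) => a + b, 0) / n;
  const x = green.map((v) => v - mean); // remove the DC component

  let bestFreq = 0;
  let bestPower = -Infinity;
  // Direct DFT scan over candidate frequencies in the HR band.
  for (let f = 0.7; f <= 3.0; f += 0.01) {
    let re = 0;
    let im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (2 * Math.PI * f * t) / fps;
      re += x[t] * Math.cos(angle);
      im += x[t] * Math.sin(angle);
    }
    const power = re * re + im * im;
    if (power > bestPower) {
      bestPower = power;
      bestFreq = f;
    }
  }
  return bestFreq * 60; // Hz to beats per minute
}
```

Feeding it ten seconds of a clean synthetic 1.2 Hz signal recovers roughly 72 bpm; real skin signals need the filtering stages above before this step is reliable.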
Drop the SDK in, bind it to a video element, call start(). The SDK handles camera permissions, quality feedback, and posts the final vitals to your backend. Same adapter contract across Web, iOS, Android, React Native, and Flutter.
```javascript
import { FaceScan } from "@smartscanpro/face-scan";

const scan = new FaceScan({
  apiKey: process.env.SMARTSCAN_KEY,
  videoElement: document.querySelector("#camera"),
  onProgress: (p) => console.log(p.elapsedSeconds),
  onQuality: (q) => console.log(q.signalQuality),
});

// Runs entirely on-device. Posts only the final structured result.
const result = await scan.start({ durationSeconds: 30 });

console.log(result.data.heart_rate);       // { value: 72, unit: "bpm", confidence: 0.94 }
console.log(result.data.hrv_sdnn);         // { value: 48, unit: "ms", confidence: 0.87 }
console.log(result.data.respiratory_rate); // { value: 16, unit: "/min" }
console.log(result.fhir.entry);            // FHIR R4 Observations, LOINC-coded
```
This isn't a policy promise. It's how the system is built — the SDK runs rPPG locally and only transmits numeric results.
Face detection, ROI tracking, and rPPG signal extraction all run in-browser or in-app. Our servers never see a face frame.
The POST body is a ~2 KB JSON payload of vitals + confidence scores. No images. No waveforms (unless you explicitly enable hybrid mode).
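For a sense of scale, a payload along these lines fits comfortably under 2 KB. The marker fields match the SDK example; `scan_id` and `duration_seconds` are assumed for illustration, not documented schema:

```javascript
// Illustrative vitals payload: numbers and confidence scores only.
const examplePayload = {
  scan_id: "scan_demo_123",       // hypothetical identifier
  duration_seconds: 30,
  data: {
    heart_rate: { value: 72, unit: "bpm", confidence: 0.94 },
    hrv_sdnn: { value: 48, unit: "ms", confidence: 0.87 },
    respiratory_rate: { value: 16, unit: "/min", confidence: 0.9 },
  },
};

// Everything serializes to a short JSON string: no images, no waveforms.
const body = JSON.stringify(examplePayload);
```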
PHI isolation per project. BAA on Enterprise. Region-pinning for US / EU / India. Audit logs exportable to Splunk or Datadog.
Customer scans never feed our model training pipelines. Improvements ship via consented, de-identified research datasets only.
Enterprise plans can opt into "process-and-forget" — even the structured result is deleted once your webhook confirms receipt.
Bring-your-own KMS key on Enterprise. We never hold plaintext PHI outside of transient processing.
Clinicians can see a patient's HR, HRV, stress, and wellness BP during a video consult — no shipped hardware, no app clunk.
A 30-second face scan during the digital application surfaces cardiovascular signals that support underwriting decisions.
Employees take a 30-second scan every Monday. Dashboards track population stress and recovery trends without wearables.
Coaches see HRV, recovery readiness, and ANS balance before a session — and fatigue indicators after — in ten lines of app code.
Honest matrix of SmartScanPro vs. the dedicated rPPG vendors. Public data where available; estimated where not. Updated as competitors publish.
| Vendor | Per-scan price | Markers | On-device | FHIR output | Device OCR too | Free tier |
|---|---|---|---|---|---|---|
| SmartScanPro | $0.15 – $0.30 | 30+ | ✓ | ✓ | ✓ | 10 / mo |
| Shen AI | $0.25 – $1.00 | 30+ | | | | Demo only |
| Binah.ai | Custom / $20k+/yr | 20+ | | | | |
| Nuralogix Anura | Custom | 100+ | | | | |
| FaceHeart | Custom | 15+ | | | | |
Nuralogix leads on raw marker count (100+ including lipid proxies); we cover the clinically meaningful ~30 markers and add device OCR + FHIR output for roughly half the per-scan cost. Customers who need the long-tail research markers still go to Nuralogix; everyone else ships faster with SmartScanPro.
Every SmartScanPro plan includes face-scan credits. Stack bolt-on packs when you exceed your tier, or upgrade to Enterprise for unlimited scans.
10 free face scans a month, forever. Paste in 20 lines of SDK code. Ship a scan flow to production this week. Upgrade only when you're ready.