On-device SDK · GA

30 seconds. No wearable. 30+ vitals.

The SmartScanPro face-scan API reads subtle colour changes in your skin from any front-camera video to estimate heart rate, HRV, respiratory rate, SpO₂, stress, wellness-band BP, heart age, and more — without a cuff, strap, or wearable.

On-device processing · HIPAA-ready · FHIR-native output
Sample in-app scan readout: heart rate 72 bpm · HRV 48 ms · SpO₂ 98% · resp. rate 16/min · stress low · wellness BP 118/76
Markers extracted

30+ physiological markers, four domains.

Every marker comes with a per-field confidence score. Markers labelled wellness are indicative only — not a substitute for a certified medical device.

Cardiovascular
Heart & circulation
  • Heart rate · bpm
  • HRV (SDNN) · ms
  • HRV (RMSSD) · ms
  • HRV (pNN50) · %
  • LF/HF ratio
  • Pulse regularity · score
  • Wellness BP (systolic) · mmHg
  • Wellness BP (diastolic) · mmHg
  • Heart age (est.) · yrs
  • CV risk score · 0–100
Respiratory
Breath & oxygenation
  • Respiratory rate · /min
  • Breathing regularity · score
  • Breath-hold capability · derived
  • SpO₂ (est.) · %
  • Inspiration/expiration ratio · I:E
  • Respiratory depth index
Autonomic & stress
ANS balance
  • Stress index · low / med / high
  • Sympathetic activity · index
  • Parasympathetic activity · index
  • ANS balance · ratio
  • Baseline recovery score · 0–100
  • Mental stress score · 0–100
  • Arousal level · calm / alert
General wellness
Composite scores
  • Biological age (est.) · yrs
  • Fitness proxy · 0–100
  • Recovery readiness · 0–100
  • Sleep-quality proxy · 0–100
  • Signal quality · 0–1
  • Scan duration · sec
  • Overall confidence · 0–1
How it works

Four steps from camera to canonical JSON.

The entire pipeline runs on-device. Only the final structured output is transmitted — never raw video or face imagery.

01

Capture

User opens your app, grants camera permission, and stays still for 30 seconds. The SDK handles lighting detection and face alignment.

02

Extract signal

On-device pipeline detects the face, tracks regions of interest (cheeks, forehead), and isolates the rPPG waveform from skin pixels at 30 fps.

03

Compute vitals

Heart rate, HRV, respiratory rate, SpO₂, stress, and BP are derived via signal processing and a compact on-device neural net. Per-field confidence scores are emitted alongside each value.

04

Post result

Only the final JSON lands at /api/v1/face-scan/results. Your webhook fires, a FHIR Bundle is emitted, and the raw video is discarded.
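
To make that concrete, here is a minimal TypeScript sketch of the payload's shape, inferred from the SDK example further down the page. Fields beyond those shown in that example are assumptions; the full schema lives in the SDK reference.

// Sketch of the posted result's shape, inferred from the SDK example
// below. Fields beyond heart_rate, hrv_sdnn, respiratory_rate, and
// fhir.entry are assumptions, not the documented schema.
interface VitalsField {
  value: number;
  unit: string;         // "bpm", "ms", "/min", ...
  confidence?: number;  // per-field confidence, 0–1
}

interface FaceScanResult {
  data: {
    heart_rate: VitalsField;
    hrv_sdnn: VitalsField;
    respiratory_rate: VitalsField;
    [marker: string]: VitalsField; // remaining 30+ markers, same shape
  };
  fhir: { entry: unknown[] };      // FHIR R4 Observations, LOINC-coded
}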

The science

What remote photoplethysmography actually does.

With every heartbeat, blood volume in the capillaries of your face fluctuates. That fluctuation causes a sub-perceptible colour shift in skin pixels — especially in the green channel.

Our pipeline tracks this colour signal across dozens of face regions, filters out motion and lighting noise, and reconstructs the pulsatile waveform. From that waveform, standard signal-processing methods yield heart rate, HRV, and respiratory rate; a downstream neural net extends to stress, blood-pressure proxies, and composite wellness scores.
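
To illustrate just the final step, here is a toy TypeScript sketch of dominant-frequency heart-rate estimation from a recovered waveform. It shows the principle only, not our production pipeline; the band limits and scan step are arbitrary choices.

// Toy sketch: estimate heart rate by scanning the cardiac frequency band
// (0.7–3.0 Hz ≈ 42–180 bpm) for the dominant component of the waveform.
// Naive single-bin DFT per candidate frequency; illustration only.
function estimateHeartRateBpm(signal: number[], fps: number): number {
  const n = signal.length;
  const mean = signal.reduce((a, b) => a + b, 0) / n;
  const centered = signal.map((v) => v - mean); // strip the DC offset

  let bestFreq = 0;
  let bestPower = -Infinity;
  for (let f = 0.7; f <= 3.0; f += 0.01) {
    let re = 0;
    let im = 0;
    for (let t = 0; t < n; t++) {
      const phase = (2 * Math.PI * f * t) / fps;
      re += centered[t] * Math.cos(phase);
      im += centered[t] * Math.sin(phase);
    }
    const power = re * re + im * im;
    if (power > bestPower) {
      bestPower = power;
      bestFreq = f;
    }
  }
  return bestFreq * 60; // Hz → beats per minute
}

// A 30 s scan at 30 fps yields 900 samples; a 1.2 Hz pulse → ~72 bpm.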

Technical details · Common questions

Recovered rPPG waveform: 30 fps sampling · 30 s window · 72 bpm detected · HRV (SDNN) 48 ms

  • Sample rate: 30 fps
  • Frames per scan: 900
  • Peak RAM: < 50 MB
  • Video uploaded: 0 bytes
Integration

Four lines of code. On every platform.

Drop the SDK in, bind it to a video element, and call start(). The SDK handles camera permissions and quality feedback, and posts the final vitals to your backend. Same adapter contract across Web, iOS, Android, React Native, and Flutter.

Full SDK reference · Get an API key
face-scan.ts
import { FaceScan } from "@smartscanpro/face-scan";

const scan = new FaceScan({
  apiKey:       process.env.SMARTSCAN_KEY,
  videoElement: document.querySelector("#camera"),
  onProgress:   (p) => console.log(p.elapsedSeconds),
  onQuality:    (q) => console.log(q.signalQuality)
});

// Runs entirely on-device. Posts only the final structured result.
const result = await scan.start({ durationSeconds: 30 });

console.log(result.data.heart_rate);        // { value: 72, unit: "bpm", confidence: 0.94 }
console.log(result.data.hrv_sdnn);          // { value: 48, unit: "ms",  confidence: 0.87 }
console.log(result.data.respiratory_rate);  // { value: 16, unit: "/min" }
console.log(result.fhir.entry);             // FHIR R4 Observations, LOINC-coded
Privacy by architecture

Raw face video literally cannot leave the device.

This isn't a policy promise. It's how the system is built — the SDK runs rPPG locally and only transmits numeric results.

On-device by default

Face detection, ROI tracking, and rPPG signal extraction all run in-browser or in-app. Our servers never see a face frame.

Numeric-only uplink

The POST body is a ~2 KB JSON payload of vitals + confidence scores. No images. No waveforms (unless you explicitly enable hybrid mode).

HIPAA aligned

PHI isolation per project. BAA on Enterprise. Region-pinning for US / EU / India. Audit logs exportable to Splunk or Datadog.

No training on customer data

Customer scans never feed our model training pipelines. Improvements ship via consented, de-identified research datasets only.

Zero-retention mode

Enterprise plans can opt into "process-and-forget" — even the structured result is deleted once your webhook confirms receipt.
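
For illustration, a minimal receiver that acknowledges the webhook only after the payload is handled. The endpoint path and body shape below are assumptions; the documented webhook contract is in the SDK reference.

// Minimal webhook receiver (Node built-in http). Path and payload shape
// are illustrative assumptions, not the documented contract.
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.method === "POST" && req.url === "/webhooks/face-scan") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const result = JSON.parse(body); // the ~2 KB vitals payload
      // Persist before responding: in zero-retention mode, your 2xx is
      // the signal that lets SmartScanPro delete its copy.
      console.log("heart rate:", result.data?.heart_rate?.value);
      res.writeHead(200).end();
    });
  } else {
    res.writeHead(404).end();
  }
}).listen(8080);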

Your keys, your data

Bring-your-own KMS key on Enterprise. We never hold plaintext PHI outside of transient processing.

Who uses it

Built for the moments a cuff would be too much friction.

Telehealth

Virtual-visit vitals capture

Clinicians can see a patient's HR, HRV, stress, and wellness BP during a video consult — no shipped hardware, no clunky extra app.

2–4× more vitals captured per visit
Insurance · underwriting

Contactless risk screening

A 30-second face scan during the digital application surfaces cardiovascular signals that support underwriting decisions.

Cut underwriting time by ~40%
Corporate wellness

Frictionless weekly check-ins

Employees take a 30-second scan every Monday. Dashboards track population stress and recovery trends without wearables.

3× participation vs. wearables
Fitness & coaching

Pre- and post-workout checks

Coaches see HRV, recovery readiness, and ANS balance before a session — and fatigue indicators after — in ten lines of app code.

Better programming from objective data
Vendor comparison

How we compare to the face-scan specialists.

Honest matrix of SmartScanPro vs. the dedicated rPPG vendors. Public data where available; estimated where not. Updated as competitors publish.

Vendor            Per-scan price      Markers   On-device   FHIR output   Device OCR   Free tier
SmartScanPro      $0.15 – $0.30       30+       ✓           ✓             ✓            10 / mo
Shen AI           $0.25 – $1.00       30+                                              Demo only
Binah.ai          Custom / $20k+/yr   20+
Nuralogix Anura   Custom              100+
FaceHeart         Custom              15+

Nuralogix leads on raw marker count (100+ including lipid proxies); we cover the clinically meaningful ~30 markers and add device OCR + FHIR output for roughly half the per-scan cost. Customers who need the long-tail research markers still go to Nuralogix; everyone else ships faster with SmartScanPro.

Face-scan pricing that grows with you.

Every SmartScanPro plan includes face-scan credits. Stack bolt-on packs when you exceed your tier, or upgrade to Enterprise for unlimited scans.

  • Starter: 10 face scans / month · free forever
  • Professional: 500 face scans / month · $490/mo
  • Bolt-on pack: 1,000 scans for $149 (applies to Pro and Enterprise)
  • Enterprise: unlimited scans · on-prem / BAA / white-label SDK
$0.15 per face scan at scale · volume pricing down to $0.075 / scan at Enterprise tiers. See full pricing →
Face-scan FAQ

Everything you're wondering.

Missing one? Ask us directly →

What is rPPG, exactly?
Remote photoplethysmography — measuring subtle colour changes in skin from an ordinary video camera to infer the pulsatile blood-volume signal. From that signal we derive heart rate, HRV, respiratory rate, and 25+ other markers.
How accurate is the face scan?
Heart rate and HRV are typically within 3 bpm / 10 ms of a chest strap under good lighting. Wellness-band blood pressure is indicative, not clinical — a screening tool, not a replacement for a certified cuff. Every value carries a confidence score so you can set thresholds per field.
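
A minimal sketch of that per-field gating in TypeScript (threshold values are illustrative, not recommendations):

// Gate each marker on its own confidence score. Thresholds are
// illustrative; tune them per field for your use case.
type Marker = { value: number; unit?: string; confidence?: number };

const THRESHOLDS: Record<string, number> = {
  heart_rate: 0.9, // strict for headline vitals
  hrv_sdnn: 0.8,
};

function acceptedMarkers(data: Record<string, Marker>): Record<string, Marker> {
  return Object.fromEntries(
    Object.entries(data).filter(
      ([name, m]) => (m.confidence ?? 0) >= (THRESHOLDS[name] ?? 0.75)
    )
  );
}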
Does raw video leave the device?
No. The on-device SDK runs the entire rPPG pipeline locally. Only the final structured vitals (a ~2 KB JSON payload) are posted to SmartScanPro. Face video, raw frames, and PHI images never leave the user's browser or phone.
Is this a medical device?
No. Face-scan outputs are wellness / indicative — not clinical measurements. For clinical diagnosis, pair with certified devices via our device-extraction endpoints. The face scan is explicitly not cleared under the EU MDR or by the FDA.
Which platforms do you support?
Web SDK (TypeScript, React / Vue / vanilla) is GA. Native iOS (Swift) and Android (Kotlin) are in beta. React Native and Flutter SDKs ship Q3. All SDKs share the same adapter contract, so swapping platforms doesn't require rewriting UI.
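
As a rough, hypothetical rendering of that contract, inferred from the Web SDK example above (the authoritative interface is in the SDK reference):

// Hypothetical rendering of the shared contract, inferred from the Web
// SDK example on this page; not the authoritative interface.
interface FaceScanOptions {
  apiKey: string;
  videoElement: unknown; // platform-specific camera surface
  onProgress?: (p: { elapsedSeconds: number }) => void;
  onQuality?: (q: { signalQuality: number }) => void;
}

interface FaceScanAdapter {
  start(opts: { durationSeconds: number }): Promise<unknown>; // the vitals result
}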
What lighting / positioning conditions are needed?
Indoor diffuse lighting (avoid direct sunlight or strong backlight), face 40–60 cm from camera, eyes open, head still. The SDK shows real-time quality feedback and refuses to start a scan if conditions aren't met — so users don't get a bad reading silently.
Can I use it with a webcam on a desktop?
Yes. The Web SDK works on any device with a front-facing camera and WebRTC — laptops, desktops, tablets, phones. We recommend at least 720p at 30 fps; the SDK gracefully falls back if the camera can't hit that.
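
The SDK negotiates this for you; if you manage the camera yourself, a getUserMedia constraint set like the sketch below requests 720p at 30 fps while letting the browser fall back to what the hardware supports.

// "ideal" constraints ask for 720p at 30 fps but allow the browser to
// fall back, mirroring the SDK's graceful degradation. The "#camera"
// element matches the integration example above.
const stream = await navigator.mediaDevices.getUserMedia({
  video: {
    facingMode: "user",
    width: { ideal: 1280 },
    height: { ideal: 720 },
    frameRate: { ideal: 30 },
  },
});
document.querySelector<HTMLVideoElement>("#camera")!.srcObject = stream;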
What about users with darker skin tones?
rPPG is sensitive to skin-tone variation — this is the #1 fairness issue in the field. Our calibration dataset is intentionally diverse (India, South-East Asia, Africa) and we publish per-Fitzpatrick-type accuracy deltas in the docs. Where accuracy drops, so do confidence scores, so your code can react accordingly.

Ship contactless vitals this sprint.

10 free face scans a month, forever. Paste in 20 lines of SDK code. Ship a scan flow to production this week. Upgrade only when you're ready.