Measured,
Not Assumed.
A.R.I.A.'s value is grounded in real data. We discovered that in real-world conditions, the choice of AI model doesn't matter; labels do. That finding reframes the entire field.
Research Results
Lab Validation — WESAD, N=15
89.4% balanced accuracy
Protocol-induced stress detection with 5-minute guided calibration. Matches or exceeds published SOTA for wearable affect detection under controlled conditions. Calibration adds +10 percentage points over zero-shot baseline.
Field Study — DAPPER, N=84
~56% balanced accuracy
Self-report labels under daily-life conditions. All three model architectures converge at ~56%, with no statistically significant differences between conditions. Consistent with published field SOTA (Smets 2018: F1=0.43, N=568; Google LSM-2: F1=0.683, 40M hours pretraining, N=1,250).
Field accuracy across all published methods converges at ~56%. Google needed 40M hours of pretraining data and 1,250 subjects to reach F1=0.683. The gap between lab and field is universal — and we're running the experiment to close it (field validation Jun 2026 – Feb 2027).
47% of subjects achieve above-chance accuracy; 53% cluster at chance — a bimodal pattern consistent with literature. Understanding what distinguishes learnable subjects is a Phase 1 research question.
Consumer Hardware — Samsung Galaxy Watch
Samsung sensor path validated
71.8% balanced accuracy on Samsung Galaxy Watch 5 (heart rate + motion only). Dropping ECG, the extra sensor found only on research devices, costs less than 1 percentage point, confirming that the Samsung Galaxy Watch 8 can run the full A.R.I.A. pipeline with no meaningful accuracy loss.
Novel Contribution — Paper 1
“Label Type, Not Architecture, Determines the Cost of Personalizing Wearable Arousal Detection”
No prior paper compares calibration speed across model families under identical protocols. Our data shows that in real-world conditions, which AI model you use doesn't matter; the quality and type of the labels users provide are what limit accuracy. This reframes the entire approach from “better algorithms” to “better calibration protocols.” Targeting ACM IMWUT, submission May 2026.
Engineering Validation
416 automated tests with full ML data leakage audit
Production-grade codebase with comprehensive test coverage across the full inference pipeline. Includes automated checks for ML data leakage — ensuring no information from test subjects contaminates training, a common source of inflated accuracy in published wearable studies.
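A subject-level leakage check of the kind described above can be sketched in a few lines. The function name and data layout here are illustrative, not A.R.I.A.'s actual test suite:

```python
def assert_no_subject_leakage(train_subjects, test_subjects):
    """Fail if any subject ID appears in both splits.

    Window-level random splits let windows from the same subject land in
    both train and test, which inflates reported accuracy; splitting by
    subject and asserting disjointness rules that out.
    """
    overlap = set(train_subjects) & set(test_subjects)
    if overlap:
        raise AssertionError(f"Subject leakage detected: {sorted(overlap)}")


# Windows tagged by subject, split by subject rather than by window.
train = ["S01", "S01", "S02", "S03"]
test = ["S04", "S04"]
assert_no_subject_leakage(train, test)  # disjoint subjects: passes
```

Run as part of the test suite, a check like this turns a silent methodology error into a hard failure.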
Real-Time Inference
Real-time streaming API with WebSocket support for live inference
Demo API (FastAPI) with 6 endpoints, including POST /v1/stream and a WebSocket endpoint /v1/ws that carry live sensor data into the prediction pipeline. Supports real-time wearable data ingestion with streaming predictions: the foundation for consumer device integration.
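The endpoint names above come from the demo API; the JSON message shapes below are a purely hypothetical illustration of what a streamed sensor frame and a returned prediction might look like, not the documented wire format:

```python
import json

# Hypothetical sensor frame a client might send over /v1/ws
# (all field names are illustrative assumptions).
frame = {
    "subject_id": "demo-001",
    "timestamp_ms": 1718000000000,
    "heart_rate_bpm": 72.5,
    "accel": {"x": 0.01, "y": -0.02, "z": 0.98},
}
payload = json.dumps(frame)  # what actually goes over the socket

# ...and a prediction message the server might push back.
prediction = json.loads('{"arousal": "low", "confidence": 0.81}')
```

The point is the round trip: raw sensor windows stream up, per-window predictions stream back on the same connection.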
Public Demonstrations
A.R.I.A. has been demonstrated live at major international events under real-world, high-pressure conditions — not controlled lab environments.
Sónar+D 2025
Barcelona
Live installation
Science Week 2025
Berlin
Live installation
Science of Consciousness
2025
Featured presentation
Quantum Basel
Switzerland
Quantum-inspired emotional mapping
Research Infrastructure
Validation Lab
GPU-equipped research environment for pipeline validation across public and proprietary datasets.
Field Data Collection (Phase 1)
GDPR-compliant data collection planned for Phase 1 field validation (Jun 2026), building a proprietary dataset from consumer wearables under ecological conditions.
Research Pipeline
Automated experiment pipeline for reproducible validation across datasets, model architectures, and calibration conditions.
For Researchers
A.R.I.A.'s calibration methodology achieves 89.4% balanced accuracy with 5 minutes of guided data in controlled conditions (N=15, leave-one-subject-out cross-validation). We're measuring whether this transfers to daily life.
If you run wearable affect studies, we'd like to analyze your data with our calibration pipeline — at no cost, under NDA if needed.
What you get: per-subject accuracy with and without personalization, calibration gain analysis, architecture comparison across your dataset.
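The evaluation protocol named above, leave-one-subject-out cross-validation scored with balanced accuracy, can be sketched in plain Python. A minimal illustration, not the production pipeline:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    recalls = []
    for c in sorted(set(y_true)):
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)


def loso_splits(subject_ids):
    """Leave-one-subject-out: each subject is the test fold exactly once,
    so reported accuracy always reflects unseen subjects."""
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test
```

Because chance level for balanced accuracy is fixed at 1/number_of_classes regardless of class skew, it is the right yardstick for the per-subject "above chance vs. at chance" comparison described above.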
Get in Touch
Read the Full Technical Paper
The A.R.I.A. white paper details the explainable ML architecture, edge-native processing pipeline, adaptive tiered model, market thesis, and strategic positioning as the foundational API for human-state intelligence.
Academic Foundations
A.R.I.A.'s approach is grounded in peer-reviewed research across neuroscience, affective computing, and signal processing.
Explainable ML with Feature Attribution
Classification models paired with SHAP-based feature attribution. Every inference traces back to specific physiological features — transparent, auditable, and designed for clinical trust.
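For the special case of a linear model with independent features, SHAP attributions have a closed form: phi_i = w_i * (x_i - mean_i). The sketch below illustrates only that special case, with made-up feature names and weights, not A.R.I.A.'s actual models:

```python
def linear_shap(weights, x, background_mean):
    """Exact Shapley attributions for a linear model with independent
    features: phi_i = w_i * (x_i - E[x_i]). The attributions sum to
    f(x) - f(E[x]), so each prediction decomposes into per-feature terms.
    """
    return [w * (xi - mu) for w, xi, mu in zip(weights, x, background_mean)]


# Toy arousal score from two hypothetical features: mean HR and RMSSD.
weights = [0.04, -0.02]   # per-unit effect of each feature
x = [85.0, 20.0]          # current window: HR = 85 bpm, RMSSD = 20 ms
baseline = [70.0, 45.0]   # population means
phi = linear_shap(weights, x, baseline)
# phi[0] = 0.04 * 15 = 0.6   (elevated HR pushes toward arousal)
# phi[1] = -0.02 * -25 = 0.5 (suppressed HRV also pushes toward arousal)
```

This is what "every inference traces back to specific physiological features" means in practice: the prediction is literally a sum of named, signed contributions.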
Heart Rate Variability Analysis
Advanced HRV-spectral analysis techniques that go beyond standard time-domain metrics. Multi-modal fusion with accelerometry and electrodermal activity for robust physiological state estimation.
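For orientation, the standard time-domain baselines that the spectral analysis above goes beyond fit in a few lines; the RR intervals here are toy values:

```python
import math


def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms):
    the standard time-domain proxy for vagal tone."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))


def sdnn(rr_ms):
    """Standard deviation of RR intervals (ms): overall HRV."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / len(rr_ms))


rr = [812, 790, 835, 801, 818]  # toy RR intervals in milliseconds
```

Spectral methods (e.g. LF/HF band power) and multi-modal fusion build on top of interval series like this one.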
Contextual Data Fusion
Designed to integrate physiological signals with environmental context (calendar, location, activity type) for contextual attribution — distinguishing exercise-induced arousal from cognitive stress as the platform matures.