This is the official code repository for the paper. We implement memory-efficient, real-time, step-by-step inference with ME-rPPG.
The ME model is now available in open-rppg.
Step-by-step inference can be performed with the following code:
```python
state = load_state('state.json')
model = load_model('model.onnx')
while True:
    ...
    facial_img = crop_face(frame)             # crop to a 36×36×3 RGB facial image
    output, state = model(facial_img, state)  # compute the BVP and update the state
    ...
```
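To make the state-passing pattern concrete, here is a minimal, self-contained sketch of the same streaming loop. The `toy_step` function below is a hypothetical stand-in for the ONNX model (the real model's internals are not shown in this snippet); it only illustrates how each frame produces one BVP sample while the recurrent state is threaded through the loop with constant memory.

```python
import numpy as np

def toy_step(facial_img, state):
    # Hypothetical stand-in for one ME-rPPG inference step: maps a single
    # 36x36x3 frame plus the running state to a BVP sample and a new state.
    feature = float(facial_img.mean())      # placeholder per-frame feature
    new_state = 0.9 * state + 0.1 * feature  # EMA as a toy state update
    bvp_sample = feature - new_state         # high-pass residual as a toy BVP
    return bvp_sample, new_state

state = 0.0  # in the real pipeline, this would come from load_state(...)
bvp = []
for _ in range(30):  # pretend we receive 30 cropped frames from a stream
    facial_img = np.random.rand(36, 36, 3).astype(np.float32)
    sample, state = toy_step(facial_img, state)
    bvp.append(sample)

print(len(bvp))  # one BVP sample per processed frame
```

Because only a fixed-size state is carried between frames, memory usage stays constant regardless of video length, which is what enables low-latency browser inference.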
The following is our web browser inference demo, which runs entirely in the browser and requires neither video uploads nor GPU acceleration.
Demo URL: https://rppgdemo.kegang.wang/
Source code: https://github.com/Health-HCI-Group/ME-rPPG-demo
ME-rPPG was trained on RLAP using PhysBench; the full training code is coming soon.
@article{wang2025memory,
title={Memory-efficient Low-latency Remote Photoplethysmography through Temporal-Spatial State Space Duality},
author={Wang, Kegang and Tang, Jiankai and Fan, Yuxuan and Ji, Jiatong and Shi, Yuanchun and Wang, Yuntao},
journal={arXiv preprint arXiv:2504.01774},
year={2025}
}