Affiliation:
1. Columbia University, New York City, NY, USA
2. Snap Inc., New York City, NY, USA
3. Snap Inc., USA
Abstract
Jitter and lag severely impact the smoothness and responsiveness of the user experience on vision-based human-display interactive systems such as phones, TVs, and VR/AR. Current manually tuned filters for smoothing and predicting motion trajectories struggle to address both issues effectively, especially for applications with a large range of movement speeds. To overcome this, we introduce N-euro, a residual-learning-based neural network predictor that can simultaneously reduce jitter and lag while maintaining low computational overhead. Compared to fine-tuned existing filters, N-euro improves prediction performance by 36% and smoothing performance by 42%. We fabricated a Fish Tank VR system and an AR mirror system and conducted a user experience study (n=34) with a real-time implementation of N-euro. Our results indicate that the N-euro predictor brings a statistically significant improvement in user experience. With its validated effectiveness and usability, we expect this approach to bring a better user experience to various vision-based interactive systems.
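The abstract's central technical idea is residual learning on top of a simple baseline trajectory estimate. The minimal Python sketch below is illustrative only and is not the published N-euro architecture: the window size, hidden width, last-sample baseline, and untrained random weights are all assumptions. It shows the general shape of such a predictor, where a small network consumes recent samples and outputs a correction that is added to a baseline, giving a low-overhead, look-ahead estimate.

import numpy as np

# Minimal sketch (assumed, not the authors' implementation): a residual-learning
# trajectory predictor. A tiny two-layer MLP looks at the last WINDOW observed
# positions and predicts an offset (residual) added to the most recent sample,
# yielding a smoothed, look-ahead estimate. Weights here are random placeholders
# so the example runs; a real system would train them on recorded trajectories.

WINDOW = 8   # number of past samples fed to the network (assumed)
HIDDEN = 16  # hidden width (assumed)
DIM = 2      # 2-D screen-space trajectory (assumed)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(WINDOW * DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, DIM))

def predict_next(history: np.ndarray) -> np.ndarray:
    """history: (WINDOW, DIM) array of recent positions -> predicted next position."""
    last = history[-1]
    # Feed position deltas rather than raw coordinates so the network only has
    # to model local motion; the output is a correction to the baseline (last sample).
    deltas = (history - last).reshape(-1)
    residual = np.tanh(deltas @ W1) @ W2
    return last + residual

# Example: a noisy straight-line trajectory
t = np.arange(WINDOW, dtype=float)
traj = np.stack([t, 0.5 * t], axis=1) + rng.normal(scale=0.05, size=(WINDOW, DIM))
print(predict_next(traj))

In a trained version, the learned residual would both smooth frame-to-frame jitter and extrapolate slightly ahead of the current sample to compensate for pipeline lag, which is the dual goal the abstract describes.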
Publisher
Association for Computing Machinery (ACM)
Cited by
1 article.
1. Normalization is All You Need: Robust Full-Range Contactless SpO2 Estimation Across Users. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024-04-14.