Machine learning-assisted wearable sensing systems for speech recognition and interaction

Dataset posted on 2024-12-06, 03:47, authored by Dongxiao Li

We propose a wearable, wireless, flexible skin-attached acoustic sensor (SAAS) that captures vocal-organ vibrations and skin movements for voice recognition and human-machine interaction (HMI) in noisy environments. Built on piezoelectric micromachined ultrasonic transducers (PMUTs) with high sensitivity (-198 dB), wide bandwidth (10 Hz–20 kHz), and excellent response flatness (±0.5 dB), the system delivers reliable acoustic sensing, while flexible packaging improves wearing comfort. Combined with a Residual Network (ResNet), the SAAS classifies laryngeal speech features with over 96% accuracy, and a deep learning model achieved 99.8% sentence recognition accuracy across various HMI scenarios. The SAAS offers a low-cost, easy-to-fabricate, and high-performance solution for voice control, HMI, and wearable electronics.
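
As an illustration of the classification step described above, the following is a minimal sketch, assuming the SAAS recordings have been preprocessed into single-channel spectrograms. The class count, tensor shapes, and the PyTorch/ResNet-18 setup are illustrative assumptions, not part of the published dataset description.

    # Hypothetical sketch: classifying SAAS spectrogram features with a ResNet.
    # Data layout, class count, and preprocessing are assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    NUM_CLASSES = 10  # assumed number of laryngeal speech classes

    def build_model(num_classes: int = NUM_CLASSES) -> nn.Module:
        """ResNet-18 adapted to single-channel (spectrogram) input."""
        model = resnet18(weights=None)
        # Spectrograms have one channel, unlike 3-channel RGB images.
        model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        model.fc = nn.Linear(model.fc.in_features, num_classes)
        return model

    def train_step(model, batch, labels, optimizer, criterion):
        """One optimization step on a batch of (B, 1, H, W) spectrograms."""
        optimizer.zero_grad()
        logits = model(batch)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        model = build_model()
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        # Dummy batch standing in for preprocessed SAAS spectrograms.
        x = torch.randn(8, 1, 128, 128)
        y = torch.randint(0, NUM_CLASSES, (8,))
        print(f"loss = {train_step(model, x, y, optimizer, criterion):.4f}")

In practice, the dummy tensors would be replaced with spectrograms computed from the sensor recordings, and the class list would match the laryngeal speech features annotated in the dataset.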
