Learning to Listen: An Active Acoustic Approach to Sensing Spaces

2018-10-22T20:07:02Z (GMT) by Oliver Shih
Recently, technologies that exploit ubiquitous sensory data have begun to reshape how we perceive and interact with the physical world, as the Internet of Things (IoT) continues to bring billions of sensors online. Most systems today are architected with sensor data collected at the edge and processed in the cloud. However, as embedded processing becomes cheaper, faster, and more efficient, there is a growing opportunity to apply learning to raw data samples closer to the sensor devices. Combining edge computing with in situ learning not only improves a system’s sensing and analysis capabilities, but also keeps data-transport costs and latency low while scaling well. In this dissertation, we explore this new class of agile sensing applied to active wide-band acoustic sensors. Unlike conventional approaches that rely on signal processing and well-engineered acoustic features, we propose generic, adaptive learning algorithms that operate closer to raw waveforms. We demonstrate this approach in the field of modern architectural acoustics, where modeling and manipulating the acoustics of a space remains a major challenge, and show its potential in applications such as occupancy estimation, room-geometry sensing, acoustic-model reconstruction, and microphone localization. We address multiple challenges in designing a lightweight, adaptive learning algorithm, and evaluate trade-offs among estimation accuracy, memory consumption, and energy efficiency on an embedded platform in a variety of real-world environments.
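
To give a flavor of what active wide-band acoustic sensing involves, the sketch below emits a linear chirp, simulates a toy room response (a direct path plus one echo), and recovers the echo delay by matched filtering. This is an illustrative assumption only: the sample rate, sweep range, and matched-filter recovery here are textbook choices, not the dissertation's actual probing waveforms or learning pipeline.

```python
import numpy as np

# Illustrative parameters (assumptions, not the dissertation's settings).
fs = 16000                        # sample rate in Hz
t = np.arange(0, 0.5, 1 / fs)     # 0.5 s probe signal
f0, f1 = 100.0, 8000.0            # wide-band linear sweep, 100 Hz -> 8 kHz

# Linear chirp: instantaneous frequency sweeps from f0 to f1 over t.
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t ** 2))

# Toy room impulse response: direct path plus one echo 10 ms later.
h = np.zeros(fs // 10)
h[0] = 1.0
h[160] = 0.4                      # 160 samples at 16 kHz = 10 ms

recorded = np.convolve(chirp, h)  # what a microphone would capture

# Matched filtering (cross-correlation with the emitted chirp) compresses
# the sweep back into impulse-like peaks at each path delay.
rir_est = np.correlate(recorded, chirp, mode="full")
rir_est = rir_est[len(chirp) - 1:]          # keep non-negative lags only

peak = int(np.argmax(np.abs(rir_est)))      # direct path, expected near lag 0
echo = 80 + int(np.argmax(np.abs(rir_est[80:400])))  # search past direct path
```

Here the echo delay recovered at `echo` encodes a path-length difference (about 3.4 m of extra travel at 10 ms), which is the kind of geometric cue that room-geometry sensing and acoustic-model reconstruction build on; the dissertation's learning-based approach works closer to such raw waveforms rather than hand-engineering features from them.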