Bridge is a smart home platform that can hear. Think computer vision, but for sound: it understands both speech and non-speech audio, letting it comprehend and respond to the patterns of its users' daily lives.
Bridge gives connected devices a sense of acoustic awareness. We use machine learning to understand what's going on in a home through sound, and give developers of smart products the ability to act on the human-scale events those sounds represent: Marie arriving home, or Martin cooking dinner.
The result? You get to focus on responding to users more intuitively, building devices that feel indispensable and experiences that feel thoughtful.
Speech recognition gets lots of attention, but listening between the lines reveals a lot. Non-speech sounds are a rich source of information, and a single microphone in your device — paired with Bridge's machine learning platform — can detect a growing variety of events: from voices to the sounds of a dog barking or a baby crying, to a doorbell ring or an opening garage door. Smarts, without specialized sensors.
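To make the idea concrete, here is a minimal sketch of sound-event detection by nearest-prototype matching. The feature vectors, event names, and `classify` function are all illustrative assumptions, not Bridge's actual models or API; a real system would compute learned embeddings from microphone frames.

```python
import math

# Hypothetical prototypes: each event label maps to a feature vector a
# real model might produce for that sound. Values here are invented.
PROTOTYPES = {
    "dog_bark": [0.9, 0.1, 0.2],
    "baby_cry": [0.2, 0.9, 0.3],
    "doorbell": [0.1, 0.2, 0.9],
}

def classify(features):
    """Return the event label whose prototype is closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PROTOTYPES, key=lambda label: dist(features, PROTOTYPES[label]))

print(classify([0.85, 0.15, 0.25]))  # close to the dog_bark prototype
```

The point of the sketch: one feature pipeline plus a set of labeled prototypes turns a single microphone into a detector for many event types at once.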
Your devices can learn new kinds of events long after they've shipped — via a software update to deliver new models, or directly from their users. Your microphone is now a multipurpose sensor.
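One way a shipped device could pick up a new event type is a few-shot update: average a handful of user-labeled examples into a new prototype alongside the ones it already knows. This sketch is an illustrative assumption about how such learning might work, not Bridge's actual update mechanism.

```python
def learn_event(prototypes, label, examples):
    """Add a new event by averaging user-labeled example feature vectors
    (a simple few-shot update). All names here are hypothetical."""
    n = len(examples)
    prototypes[label] = [sum(col) / n for col in zip(*examples)]
    return prototypes

# A device that shipped knowing only "doorbell" learns "garage_door"
# from two user-provided examples.
protos = {"doorbell": [0.1, 0.2, 0.9]}
learn_event(protos, "garage_door", [[0.5, 0.6, 0.1], [0.7, 0.4, 0.3]])
print(sorted(protos))
```

The same classifier that handled the original events can now match against the new prototype, with no new hardware involved.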
All audio data is processed locally, on-device. This means lower latencies, better user experiences, and a giant leap forward in privacy and security: no raw audio data ever goes over the internet.
Shape your response to a voice command by knowing what else is going on. Bridge enables a new generation of digital assistants by putting speech and non-speech audio processing in the same pipeline.
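As a hypothetical sketch of that idea, a response handler can take both the recognized command and the set of non-speech events detected in the same pipeline. The event names and `respond` logic below are invented for illustration.

```python
def respond(command, context_events):
    """Shape the reply to a voice command using non-speech audio context.
    Both inputs would come from the same on-device audio pipeline."""
    if command == "play music":
        if "baby_crying" in context_events:
            return "playing a lullaby, volume low"
        return "playing your usual playlist"
    return "sorry, I didn't catch that"

print(respond("play music", {"baby_crying"}))
print(respond("play music", set()))
```

The same spoken words produce a different, more considerate response when the device also hears what else is happening in the room.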
No more relying on poor proxies for user behavior – like waiting for the user's cellphone to enter Bluetooth range to infer they've come home. The result is a product that is more secure and less error-prone.