Say hello to Aura —
a new kind of AI assistant
that hears the way humans do.

Aura uses machine learning to understand what’s happening around it through sound.

Aura has a sense of acoustic awareness. It can understand both speech and non-speech audio — so it can figure out when Marie comes home, or when Martin is making dinner, without you having to tell it.

When you do have a request for it, Aura knows enough of what’s going on to actually be helpful to you — like knowing how detailed an answer to give you, or that your hands might be full, or that the doorbell just rang.

The result? A smarter smart home that's more thoughtful and intuitive. One that knows how to assist – and when to get out of the way.

Take a sneak peek at
Bridge Kitchen.

Our smart assistant for the kitchen is the first product to showcase Aura’s computer hearing abilities.

For Developers

Listen —
that's the sound of context.

Speech recognition gets lots of attention, but listening between the lines reveals a lot. Aura uses machine learning to understand what’s going on in a home environment through sound, giving developers of smart products the ability to understand the human-scale events those sounds represent.

Request developer access

A single microphone in your device — paired with Bridge's machine learning platform — can detect a growing variety of events: from voices to the sounds of a dog barking or a baby crying, to a doorbell ring or an opening garage door. Smarts, without specialized sensors.

That means you get to focus on responding to users more intuitively, building more indispensable devices and more thoughtful experiences.
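Bridge's actual SDK is not public, so as an illustration only, here is a minimal pure-Python sketch of the pattern described above: named acoustic events ("doorbell", "dog_bark", and so on) detected on-device and routed to handlers your product registers. Every name here (`AcousticEventHub`, `on`, `emit`) is hypothetical, not Bridge's API.

```python
# Hypothetical sketch only: models the subscribe/dispatch pattern for
# acoustic events. None of these names come from Bridge's real SDK.
from collections import defaultdict
from typing import Callable

class AcousticEventHub:
    """Routes detected acoustic events (e.g. 'doorbell', 'dog_bark') to handlers."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def on(self, event: str, handler: Callable[[dict], None]) -> None:
        """Register a handler for a named event."""
        self._handlers[event].append(handler)

    def emit(self, event: str, confidence: float) -> None:
        # In a real deployment, the on-device model would call this each
        # time it classifies a sound above some confidence threshold.
        for handler in self._handlers[event]:
            handler({"event": event, "confidence": confidence})

# Usage: a device reacts to the doorbell without a dedicated sensor.
hub = AcousticEventHub()
log: list[dict] = []
hub.on("doorbell", lambda e: log.append(e))
hub.emit("doorbell", 0.93)
```

The point of the pattern is the last two lines: the device code only declares which human-scale events it cares about; detection itself stays inside the platform.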

More capabilities over time.

Your devices can learn new kinds of events long after they've shipped — via a software update to deliver new models, or directly from their users. Your microphone is now a multipurpose sensor.

Local, secure processing.

All audio data is processed locally, on-device. This means lower latencies, better user experiences, and a giant leap forward in privacy and security: no raw audio data ever goes over the internet.

Speech and non-speech.

Shape your response to a voice command by knowing what else is going on. Bridge enables a new generation of digital assistants by putting speech and non-speech audio processing in the same pipeline.
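To make the idea of a shared pipeline concrete, here is a small hypothetical sketch of shaping a voice-command response using concurrent non-speech context. The event names and decision rules are invented for illustration; they are not Bridge's actual vocabulary or API.

```python
# Hypothetical sketch: a voice command arrives together with the set of
# non-speech events currently detected, and the response adapts to both.
# Event names and rules are illustrative, not Bridge's real vocabulary.
def shape_response(command: str, context_events: set[str]) -> str:
    if "doorbell" in context_events:
        # Something more urgent is happening; defer the request.
        return "Someone's at the door. I'll remind you about this later."
    if "running_water" in context_events or "chopping" in context_events:
        # The user's hands are likely full: keep the answer short and spoken.
        return f"Short spoken answer to '{command}'."
    # No competing context: a detailed answer is fine.
    return f"Detailed answer to '{command}'."

# Usage: the same command yields different responses in different contexts.
busy = shape_response("how long do I roast the chicken?", {"chopping"})
quiet = shape_response("how long do I roast the chicken?", set())
```

Because speech and non-speech audio flow through one pipeline, the context set and the transcribed command arrive together, with no second sensor system to integrate.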

Directly sense user activity.

No more relying on poor proxies for user behavior – like waiting for the user's cellphone to enter Bluetooth range to infer they've come home. The result is a product that is more secure and less error-prone.

Join Bridge

A world-class team solving hard problems with leading-edge tech.

Bridge is growing, and we’re ramping up a number of roles. Get in touch at join@bridge.ai.