Magic mirrors and AI

by Rodolfo Rosini

If you haven't yet, read My Bathroom Mirror Is Smarter Than Yours before continuing with this post.


Smart mirrors show that consumers want a multi-sensory experience. Touch and vision are the primary drivers of human-computer interaction. Voice is a novelty that can be layered on top, but asking people to change their HCI behaviour without a clear benefit is madness.

So far the only clear benefits of voice are while driving, for people who can't type, and for people with disabilities. These are clearly not growing markets.

With voice we can't easily receive complex data structures like lists or charts, and bandwidth is an issue (a quick back-of-the-envelope comparison follows the list):

- Reading: 250 words per minute
- Listening: 150 wpm
- Speaking: 105 wpm (and this assumes the AI will understand you 100% of the time. Sci-fi.)
- Typing: 33 wpm
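
To make the point concrete, here is a tiny illustrative calculation using only the rates quoted above (the rates themselves are rough averages, and the 50-word answer is just an example I picked):

```python
# Rough input/output bandwidth comparison using the rates quoted above.
# Illustrative only: real-world rates vary a lot by person and content.
RATES_WPM = {
    "reading": 250,
    "listening": 150,
    "speaking": 105,
    "typing": 33,
}

def seconds_to_consume(words: int, channel: str) -> float:
    """How long it takes to get `words` words across a given channel."""
    return words / RATES_WPM[channel] * 60

# A modest 50-word answer (say, a short list of commute options):
for channel in ("reading", "listening"):
    print(f"{channel}: {seconds_to_consume(50, channel):.0f} s")
# reading: 12 s, listening: 20 s -- and a chart can't be spoken at all.
```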

Compare the reception of all the Nest, Echo and similar products with that of the smart mirror. That guy accidentally discovered a potential product-market fit with a Medium post (a world first?).

 
Soon we'll see more smart-mirror equipment coming up for sale: DIY kits, APIs and so on. Citymapper just released an update that would make sense on a mirror (it tells you if there is going to be a delay on your route before you ask for it). In fact, while the Apple Watch is a "personal" device, a lot of its apps might make more sense on a mirror.
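
As a sketch of that "tells you before you ask" pattern, here is what a minimal proactive mirror widget might look like. Everything here is hypothetical: the data source, function names and threshold are placeholders, and no real Citymapper API is assumed.

```python
import time

# Hypothetical data source: in practice this would be whatever transit
# or calendar API the mirror integrates with (not a real Citymapper call).
def fetch_route_delay_minutes(route_id: str) -> int:
    """Return the current delay on the user's usual route, in minutes (stubbed)."""
    return 7  # stub value for illustration

def render_on_mirror(message: str) -> None:
    """Stand-in for whatever display layer the mirror actually uses."""
    print(message)

DELAY_THRESHOLD_MIN = 5  # only interrupt the user for meaningful delays

def proactive_commute_widget(route_id: str, poll_seconds: int = 300) -> None:
    """Push delay info to the mirror before the user asks for it."""
    while True:
        delay = fetch_route_delay_minutes(route_id)
        if delay >= DELAY_THRESHOLD_MIN:
            render_on_mirror(f"Your usual route is running {delay} min late.")
        time.sleep(poll_seconds)
```

The design point is push rather than pull: the mirror surfaces the information while you are already standing in front of it, instead of waiting for a query.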

I think there is an opportunity to exploit multi-sensory AI, and nobody has cracked it yet. I also believe that glass as a material will have an ever-increasing role in our lives (Corning has mentioned using Gorilla Glass for windscreens, for example, and transparent solar cells are coming to market).

If you are an entrepreneur looking to build magic mirrors, or a VC looking to fund one (and you should), I'd love to discuss the AI behind it.
