The London-area content creation start-up Sodaclick has expanded from providing SaaS-based HTML5 tools for on-screen creative into AI-driven voice interaction for screens.
The particularly interesting wrinkle is that the two technologies are intertwined - voice interaction at screens can be supported by custom creative generated within Sodaclick's platform.
Sodaclick Voice uses ASR (Automatic Speech Recognition), billed as offering friendly, near-human interaction. It is positioned as an alternative to touch and gesture controls, and as useful for people with visual or physical impairments, or people on the autism spectrum, who may struggle with those input methods.
Sodaclick Voice combines support for more than 85 languages and variants with the company's own audio beam-forming software and patented noise-cancellation technology from a partner.
The technology is positioned as helpful in use cases like retail, where self-checkout runs into problems such as a product with no bar code. Along with being inclusive, Sodaclick suggests voice may let customers check out 3X faster, and without concerns about whether the screen is sanitized.
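Under the hood, voice-driven screens generally follow a simple loop: an ASR engine turns speech into a text transcript, and the signage application maps that transcript to an on-screen action. Here is a minimal, illustrative sketch of that pattern - the intent table and function names are hypothetical, not anything from Sodaclick's actual platform:

```javascript
// Illustrative sketch only - not Sodaclick's API. An ASR engine produces
// a text transcript; the app maps the transcript to a kiosk action.

// Hypothetical intent table for a self-checkout kiosk.
const intents = [
  { action: "lookupItem", phrases: ["no barcode", "can't scan", "item not scanning"] },
  { action: "callStaff",  phrases: ["help", "assistance"] },
  { action: "payNow",     phrases: ["pay", "checkout", "check out"] },
];

// Match a recognized transcript against the intent table.
function matchIntent(transcript) {
  const text = transcript.toLowerCase();
  for (const intent of intents) {
    if (intent.phrases.some((p) => text.includes(p))) {
      return intent.action;
    }
  }
  return "unknown";
}

// In a browser, the transcript would come from a speech-recognition
// engine. The Web Speech API is one widely available example:
//
//   const rec = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
//   rec.onresult = (e) => handle(matchIntent(e.results[0][0].transcript));
//   rec.start();
```

Commercial products layer far more on top - beam-forming, noise cancellation, multi-language models - but the transcript-to-action mapping is the part that ties voice back to the on-screen creative.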
It is also touted for scenarios like check-ins at medical clinics. There is a quick demo of that on this page ...
Voice has grown quite common for consumers who use Alexa and Siri, or call out "Hey Google." But there has not been much take-up - at least that I am aware of - in public environments. Along with worries about accuracy and ambient noise, there's probably some hesitancy about the cost and complications of putting a solution together and then supporting it.
The pitch when Sodaclick started up a few years ago was making it easy and affordable to produce on-screen messaging, and automate the updates. Like Intuiface with its no-code interactive platform, the proposition here is that users can "drive the content creation process on any device, by building digital signage, AI and IoT solutions, all without a single line of code."