Guest post from Kiosk Association member Geoff Bessin of Intuiface. Thanks to Dave Haynes & Sixteen-Nine for originally noting it.
You've been reading about ChatGPT, DALL-E, Stable Diffusion, and more. All are examples of the latest technical craze: Generative AI.
This post will explore generative AI, how to use it in digital signage, and what the future holds. So if you're looking to stay ahead of the curve in your digital signage strategy, read on!
The world of generative AI is on fire. Super-powered algorithms are writing code, crafting stories, and creating images that would challenge a Turing test. Under the covers, deeply complex machine-learning processes are burrowing through billions of human-created words, graphics, and code, getting more intelligent and more creative by the minute.
And since these algorithms are fully accessible via Web API, they are easily incorporated into your Intuiface experiences.
Let's spend some time understanding the world of generative AI, its value for digital signage, and how you can use it in Intuiface.
What is Generative AI?
Generative Artificial Intelligence (AI) is a subset of machine learning that enables computers to create new content - such as text, audio, video, images, or code - using the knowledge of previously created content. The output is authentic-looking and completely original.
What are the most popular Generative AI options?
The most well-known example of generative AI is GPT - currently represented by GPT-3.5, the latest release of the third-generation language prediction model in the GPT series. Created by OpenAI, it's a model that can be adapted to create images and anything with a language structure: it answers questions, writes essays, summarizes longer texts, writes software code, and even translates languages. OpenAI trained the GPT model on around 570GB of text from the internet to achieve this natural language ability. Want to try it out? Head to ChatGPT, create a free account, and start a conversation.
For image generation, the best-known options are DALL·E (also based on GPT), Midjourney, and Stable Diffusion. Like ChatGPT, these services take natural language as input, but their output is images. The output can be in any requested style - from art-inspired themes like cubism or impressionism to completely realistic images that look like photographs but were created by an algorithm.
How Generative AI works
Ha! If you're looking for a treatise on the science of deep learning, this is not the place. However, what we can talk about is how these models are exposed to users.
Requests for both text and images are submitted as a 'prompt'. Prompts are natural language sentences that express the desired outcome. Prompt creation is an evolving art: the more specific and descriptive a prompt is, the more likely you are to get exactly what you want.
This article is just one example of how prompt crafting is as much science as it is art.
Now you can indulge your desire to see "Yoda seated on the Iron Throne from 'Game of Thrones' at home plate in Fenway Park." (We used Stable Diffusion to generate the image below with that exact text.)
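To make prompt crafting concrete, here is a minimal sketch of assembling a specific, descriptive prompt from parts. The function and its parameters are illustrative only - real services simply accept the final string as free-form natural language:

```python
def build_prompt(subject: str, style: str = "", details: str = "") -> str:
    """Compose a descriptive prompt from parts.

    All names here are illustrative; generative AI services just take
    the final string as free-form natural language input.
    """
    parts = [subject]
    if details:
        parts.append(details)
    if style:
        parts.append(f"in the style of {style}")
    return ", ".join(parts)

# The more specific the parts, the closer the output matches intent.
prompt = build_prompt(
    "Yoda seated on the Iron Throne from 'Game of Thrones'",
    style="a photorealistic render",
    details="at home plate in Fenway Park",
)
```

Swapping `style` from "a photorealistic render" to "impressionism" is all it takes to steer the output toward a completely different aesthetic.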
As you'll learn below, most Generative AI services are accessible through a set of APIs. Through these APIs, business services – and, in our selfish interest, digital signage – can incorporate the technology.
How the B2B market is using Generative AI
The list of businesses leveraging Generative AI is long and growing.
In the graphic below, the column to the left identifies the most common generative models on the market. Various solution domains and companies using generative models to provide services for those domains are to the right.
How traditional digital signage can take advantage of Generative AI
Generative AI can be an excellent companion technology for creating unique and engaging digital signage experiences. With it, digital signage can dynamically create and display real-time content that perfectly fits the context. This content can be influenced by user behavior or external data sources, from weather forecasts to real-time prices.
- Create context-sensitive images that reflect the current information, environment, or audience.
- Generate summaries and/or translations of unpredictable text like news reports or sports events.
- Rewrite messages with different tones and lengths based on the audience or urgency.
The most significant hurdle is performance, particularly for image generation, as today's Generative AI solutions are not (yet) instantaneous. Depending on the complexity of the request and of the desired result, image generation can take several seconds. As a result, signage must request content proactively to ensure there is no visible latency.
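One way to stay ahead of that latency is to generate content before it is scheduled on screen. Here is a minimal sketch of the idea; the class and the `generate` callback are assumptions for illustration, not an Intuiface or vendor API:

```python
from typing import Callable, Dict

class ImagePrefetcher:
    """Caches generated images ahead of display time so the screen never
    waits on a slow generation call. `generate` stands in for any function
    that turns a prompt into image bytes, e.g. a generative AI Web API call."""

    def __init__(self, generate: Callable[[str], bytes]):
        self._generate = generate
        self._cache: Dict[str, bytes] = {}

    def prefetch(self, prompt: str) -> None:
        # Call this well before the content is due on screen.
        if prompt not in self._cache:
            self._cache[prompt] = self._generate(prompt)

    def show(self, prompt: str, fallback: bytes = b"") -> bytes:
        # Call this at display time; it never triggers generation,
        # so the screen is never blocked by a slow API.
        return self._cache.get(prompt, fallback)
```

In practice, `prefetch` would run on a background thread or a scheduler tick, while `show` runs in the display loop with a safe fallback asset.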
How interactive digital signage increases Generative AI value
By adopting interactive digital signage, which provides insight into the user's preferences, you can go further with Generative AI. Now you are not just limited to an external context; you have intimate knowledge of your audience and can communicate accordingly.
By "interactive" we mean any type of human-machine conversation, both active and passive. Active options include touch, gesture, and voice, while passive options include sensors and computer vision. For all modalities, in combination with context and on-screen content, digital signage can clearly identify a user's interests.
- Use user data to craft personalized "avatars" for the length of a session.
- Add quirky personality to an interaction, creating jokes and witty asides for the user in what could otherwise be a boring digital engagement.
- Convert a review of shopping cart orders into conversational text to humanize kiosk usage.
- Use anonymous facial recognition technology to identify age/gender and customize communication accordingly.
- Translate ever-changing data sources, like a product catalog or tourist information.
For any natural language scenario, the designer - or the user - could choose to dynamically transform the text to speech (TTS) using either OS-specific services or generative AI voice services like VALL-E.
In all cases, the creative team is freed from having to anticipate the broad range of potential users/scenarios/requirements. They can just rely on a Generative AI resource to do the heavy lifting in real time.
Using Generative AI in Intuiface
As many generative AI services are exposed through Web APIs, a text-based query (the "prompt") can be programmatically submitted, with the text/image response retrieved in real-time for display. Thanks to Intuiface API Explorer, Intuiface users can easily create integrations with these Web APIs despite having little to no understanding of how these APIs actually work.
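To show what "programmatically submitting a prompt" looks like under the hood, here is a sketch that builds (but does not send) an HTTP request for OpenAI's text completion endpoint, using only the Python standard library. The endpoint URL and field names follow OpenAI's public API documentation at the time of writing; verify them against the current reference before use:

```python
import json
import urllib.request

def build_completion_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a request to OpenAI's completions endpoint.

    Endpoint and field names follow OpenAI's public API docs at the time
    of writing; always check the current reference."""
    body = json.dumps({
        "model": "text-davinci-003",  # a GPT-3.5-era model name
        "prompt": prompt,
        "max_tokens": 100,
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending it is one call away (requires a valid key and network access):
# with urllib.request.urlopen(build_completion_request("Write a welcome message", key)) as resp:
#     text = json.load(resp)["choices"][0]["text"]
```

Tools like Intuiface API Explorer handle this request/response plumbing for you; the sketch simply shows how little is involved at the wire level.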
Most Web APIs for generative AI permit limited use for free and adopt a token- or image-based payment system for use at scale. Here are some API examples, all of which are supported by Intuiface API Explorer:
- OpenAI for text creation, completion, and translation
- DALL·E (OpenAI-based) for image generation; a good example can be found in our User Community
- Stable Diffusion for image generation; an explanation of how to use Stable Diffusion in Intuiface is here
- The official ChatGPT API is not yet available but is expected to be released soon
Paolo Tosolini, of Intuiface partner Tosolini Productions, posted a great example to our User Community showing how he used API Explorer to create a real-time integration with DALL·E.
While the example presented above uses an Intuiface-based UI that depends on human input to generate a prompt (and thus an image), Intuiface can also extract information from an experience's environment and use it to create a prompt. For example, the prompt could include words related to the current temperature, the number or presence of people passing by, the time of day, etc. – all collected and meaningfully combined for a generative AI algorithm in real time.
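The idea of combining environmental signals into a prompt can be sketched in a few lines. The signals below (temperature, a people counter, the clock) are examples of data a signage experience could collect, and the wording rules are illustrative only, not part of any product API:

```python
from datetime import datetime

def environment_prompt(temp_c: float, people_nearby: int, now: datetime) -> str:
    """Turn live context signals into an image-generation prompt.

    The thresholds and phrasings are illustrative assumptions; a real
    experience would tune them to its venue and audience."""
    period = "morning" if now.hour < 12 else "afternoon" if now.hour < 18 else "evening"
    weather = "warm" if temp_c >= 20 else "chilly"
    crowd = "a bustling crowd" if people_nearby > 5 else "a quiet street"
    return (f"A storefront scene on a {weather} {period}, "
            f"with {crowd} passing by, in a bright photorealistic style")
```

The resulting string is then submitted to an image-generation API just like a hand-written prompt, so the signage refreshes its visuals as the context changes.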
As noted above, with this approach the experience designer is freed from having to proactively identify all possible scenarios and create appropriate responses. Images and copy can be automatically created by a generative AI service based on endless environmental contexts and delivered in near real time. Such automatic content generation is a revolution for digital signage!
Generative AI and Digital Signage - Looking Ahead
Generative AI is continually evolving and becoming more accessible. It will become increasingly commonplace in digital signage networks as it gets cheaper, faster, and easier to use. This will empower businesses to create unique experiences tailored to the individual viewer or the surrounding environment.
One can imagine a fully automated help desk, recommendation engine (for clothing, meals, destinations), or tour guide. The possibilities of generative AI for digital signage (and our lives!) are virtually endless. As this technology continues to evolve, its potential applications will increase exponentially.
You can start your exploration today. Use Intuiface to dip your toe in the water, experiment with the technology, and use it to enhance your interactive experiences.
Want to try Intuiface?
Our free, 28-day trial gives you access to 100% of product capability. If Intuiface does it, you can do it - no credit card required. Start a Free Trial
Geoff Bessin @geoffbessin
I'm Chief Evangelist at Intuiface, which means I think about the intersection of digital interactivity with signage and presentations. Pearls of wisdom? Well...
Follow Up Links
- Ars Technica: Stable Diffusion "memorizes" some images, sparking privacy concerns. (Of 300,000 high-probability images tested, researchers found a 0.03% memorization rate.)
Wonderful article about Generative AI! I've used ChatGPT and also Stable Diffusion before, and it's so interesting to be able to put as descriptive or abstract of a prompt into Stable Diffusion to generate an image. I do remember that with the version I used, some of the more complex images would take hours but some would be done within minutes. So, I can see how the performance of image generation could be a hurdle since the technology isn't instantaneous just yet. It is important that signage be proactive in the content request to ensure no visual latency and I'm glad this was noted.
I can certainly visualize how this companion technology can begin to benefit so many people. I'd love to see a more in-depth look at how others are using Intuiface! Thanks for sharing this!
Very interesting, thanks!