Computer Vision Delivers Contextual And Emotionally Relevant Brand Messages
The power of object recognition, and of deep learning to analyze scenes and parse content, can transform advertising. At the 2016 Annual LDV Vision Summit, Ken Weiner, CTO at GumGum, told us about the impact of image recognition and computer vision on online advertising.
I’m going to talk a little bit about advertising and computer vision and how they go together for us at GumGum. Digital images are basically showing up everywhere you look. You see them when you're reading editorial content. You see them when you're looking at your social feeds. They just can't be avoided these days. GumGum has built a platform with computer vision engineers that tries to identify a lot of information about the images we come across online. We do object detection. We look for logos. We handle brand safety, sentiment analysis, all those types of things. We basically want to learn as much as we can about digital photos and images for the benefit of advertisers and marketers.
LDV Capital invests in deep technical people building visual technologies. Our LDV Vision Summit explores how visual technologies leveraging computer vision, machine learning, and artificial intelligence are revolutionizing business and society.
The question is: what value do marketers get from having this information? Well, for one thing, if you're a brand, you really want to know how users out there are engaging with your brand. We look at the fire hose of social feeds, searching, for example, for brand logos. In this example, Monster Energy drink wants to find all the images out there where their drink appears in the photo. You have to remember that about 80% of the photos out there have no textual information identifying that Monster is involved in the photo, even though it is. You really need computer vision in order to understand that.
Why do they do that? They want to look at how people engage with them. They want to look at how people are engaging with their competitors. They may want to just understand what is changing over time. What associations with their brand might come up that they didn't know about? For example, what if they start finding out that Monster Energy drinks are appearing in all these mountain biking photos? That might give them a clue that they should go out and sponsor a cycling competition. The other thing they can find out with this is who their main brand ambassadors and influencers are out there. Tools like this give them a chance to connect with those people.
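To give a feel for what spotting a logo in an untagged photo involves, here is a minimal Python sketch using classical feature matching with OpenCV's ORB descriptors. A production system like GumGum's would rely on trained detectors running at much larger scale; the `logo_appears` helper, paths, and thresholds here are all hypothetical.

```python
import cv2

def logo_appears(logo_path: str, photo_path: str, min_matches: int = 25) -> bool:
    logo = cv2.imread(logo_path, cv2.IMREAD_GRAYSCALE)
    photo = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    if logo is None or photo is None:
        return False

    # ORB produces binary descriptors for distinctive points in each image.
    orb = cv2.ORB_create(nfeatures=1000)
    _, logo_desc = orb.detectAndCompute(logo, None)
    _, photo_desc = orb.detectAndCompute(photo, None)
    if logo_desc is None or photo_desc is None:
        return False

    # Hamming distance is the appropriate metric for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(logo_desc, photo_desc)

    # Enough close matches suggests the logo appears somewhere in the photo.
    good = [m for m in matches if m.distance < 40]
    return len(good) >= min_matches
```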
What makes [in-image] even more powerful is if you can connect the brand message with that image in a very contextual way and tap into the emotion that somebody’s experiencing when they’re looking at a photo.
-Ken Weiner
Another product that’s been very successful for us is something we call in-image advertising. We came up with this kind of unit about eight years ago. It was really invented to combat what people call banner blindness, which is the notion that, on a web page, you learn to ignore the ads showing at the top and the side of the page. If you place brand messages right in line with content that people are actively engaged with, you have a much better chance of reaching the consumer. What makes it even more powerful is if you can connect the brand message with that image in a very contextual way and tap into the emotion that somebody’s experiencing when they’re looking at a photo. The placement alone gives an ad like this 10x the performance of traditional advertising, because it’s something a user pays attention to.
Obviously, we can build a big database of information about images and contextually place ads like this, but sometimes requests will come from advertisers that we can't serve with our existing knowledge. We’ll have to go out and develop custom technology for them. For example, L’Oréal wanted to advertise a product for hair coloring. They asked us if we could look at images across different websites and identify the hair color of the people in them, so they could strategically target the products that go along with those hair colors. We ran this campaign for them. They were really, really happy with it.
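As a rough illustration of how hair-color targeting like the L’Oréal campaign's could work, here is a simplified sketch: detect a face with an off-the-shelf OpenCV cascade, sample the region just above it, and bucket the average hue. The buckets and thresholds are crude assumptions for illustration, not the pipeline GumGum actually built.

```python
from typing import Optional

import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def hair_color(photo_path: str) -> Optional[str]:
    img = cv2.imread(photo_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None

    # Sample a strip directly above the first detected face, where hair
    # usually sits in a frontal portrait.
    x, y, w, h = faces[0]
    strip = img[max(0, y - h // 3):y, x:x + w]
    if strip.size == 0:
        return None

    hue, sat, val = cv2.cvtColor(strip, cv2.COLOR_BGR2HSV).reshape(-1, 3).mean(axis=0)

    # Crude buckets, purely illustrative (OpenCV hue runs 0-179).
    if val < 60:
        return "black"
    if sat < 40:
        return "gray"
    if 15 <= hue <= 35 and val > 130:
        return "blonde"
    return "brown"
```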
They liked it so much that they came back to us and said, “We had such a good experience with that. Now we want you to go out and find people that have bold lips,” which was a rather strange notion for us. Our computer vision engineers came up with a way to segment the lips and figure out what boldness means. L’Oréal was very happy. They ran a lipstick campaign on these types of images.
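One plausible reading of “bold lips,” purely my assumption, is lips whose color saturation stands well apart from the surrounding skin. The sketch below takes a lip mask from whatever face-segmentation model is on hand; the scoring heuristic is illustrative, not GumGum's actual definition of boldness.

```python
import cv2
import numpy as np

def lip_boldness(img_bgr: np.ndarray, lip_mask: np.ndarray) -> float:
    """Score lip saturation relative to the rest of the image."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1].astype(np.float32)

    lip_sat = saturation[lip_mask > 0].mean()
    rest_sat = saturation[lip_mask == 0].mean()

    # A score well above 1 means the lips stand out from their surroundings;
    # a campaign might only target images scoring above some threshold.
    return float(lip_sat / max(rest_sat, 1.0))
```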
A couple of years ago, we had a very interesting in-image campaign that I think might be the first time the actual content you're viewing became part of the advertising creative. Lifetime TV wanted to advertise the series Witches of East End. We looked for photos where people were facing forward. When we encountered those photos, we dynamically overlaid green witch eyes onto the people, giving them the notion of becoming a little witchy for a few seconds. Then the overlay collapses into a traditional in-image ad, where somebody whose interest was piqued by the eyes can click through to a video lightbox and watch the preview for the show.
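For a feel of the mechanics, here is a toy version: find frontal faces and eyes with stock OpenCV Haar cascades, then alpha-blend a green ellipse over each detected eye. The real campaign ran inside a designed ad unit; everything in this sketch is a stand-in.

```python
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
EYE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def add_witch_eyes(photo_path: str, out_path: str) -> None:
    img = cv2.imread(photo_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # The frontal-face cascade matches the "facing forward" criterion.
    for (x, y, w, h) in FACE_CASCADE.detectMultiScale(gray, 1.1, 5):
        roi = img[y:y + h, x:x + w]
        roi_gray = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in EYE_CASCADE.detectMultiScale(roi_gray, 1.1, 5):
            # Alpha-blend a translucent green ellipse over each detected eye.
            overlay = roi.copy()
            center = (ex + ew // 2, ey + eh // 2)
            cv2.ellipse(overlay, center, (ew // 2, eh // 3), 0, 0, 360,
                        (0, 255, 0), thickness=-1)
            cv2.addWeighted(overlay, 0.6, roi, 0.4, 0, dst=roi)

    cv2.imwrite(out_path, img)
```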
I just thought this was one of the most interesting ad campaigns I’ve ever seen because it mixes the notion of content and creative into one. What’s coming after this? Naturally, this will extend into video. TV networks are already training you to look at information in the lower third of the screen. It’s only natural that this will get replaced by contextual advertising the same way we’ve done it for images online.
Another thing that I think is coming soon is the ability to really annotate specific products and items inside images at scale. People have tried to do this using crowdsourcing in the past, but it’s just too expensive. When you're looking at millions of images a day like we do, you really need information to come in a more automated way. There’s been a lot of talk about AR. Obviously, advertising’s going to have to fit into this in some way or another. It may be a local direct response advertiser. You're walking down the street. Someone gives you a coupon for McDonald’s. Maybe it’ll be a brand advertiser. You see a car accident, and they’re going to remind you that you need to get car insurance.
Lastly, I wanted to pose the idea of in-hologram ads, which I think could come in the future with things like Siri and Alexa. Right now they’re voice, but in the future, who knows? They might be 3D images living in your living room, and advertisers are going to want a way to basically put their name on those holograms. Thank you very much.