Hearst Is Launching a 10-Person Team Tasked With Building Voice-Activated Experiences


Hearst may be a 129-year-old media company, but even it’s planning for a Jetsons-like future when news will be consumed through voice-controlled technology.

The New York-based company has quietly launched a 10-person group called the Native and Emerging Technologies (NET) group that’s responsible for keeping the mega-publisher up to speed with the newest technologies, starting with voice-activated devices including Amazon Echo, Google Home and voice-based smartphone experiences. For instance, this week, the team launched an Amazon Echo skill for Good Housekeeping. The group was born out of Hearst’s acquisition of startup BranchOut a couple of years ago.

“We’re looking at this new wave of natural language interfaces as being a great source of content discovery and content interaction,” said Phil Wiser, Hearst’s chief technology officer. “We find all of that to be increasingly important as a way to engage consumers.”

The Good Housekeeping Echo skill gives users step-by-step instructions and recommended tools for removing stains when they talk to the cylinder-shaped gadget. As consumers work through removing the stain, music plays in the background.
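The article doesn't include the skill's code, but a custom Alexa skill at its simplest is an endpoint that returns spoken text in Amazon's JSON response format. The sketch below is a hypothetical illustration of that shape, with an invented stain guide standing in for Good Housekeeping's actual content:

```python
def build_stain_response(stain_type, step_index):
    """Return one step of a stain-removal guide in the Alexa
    custom-skill response format. The steps dict is invented
    for illustration, not Good Housekeeping's real content."""
    steps = {
        "coffee": [
            "Blot the stain with a clean, dry cloth.",
            "Apply a mix of dish soap and cool water, then rinse.",
        ],
    }
    guide = steps.get(stain_type, ["Sorry, I don't know that stain yet."])
    text = guide[min(step_index, len(guide) - 1)]
    # shouldEndSession stays False so the user can ask for the next step.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": False,
        },
    }
```

Each user utterance would advance `step_index`, walking the listener through the guide one spoken step at a time.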

“It’s a really good branding opportunity—as we’re providing that advice, we can also give the consumer guidance on which brands they should look for,” Wiser said. “That’s a theme that we’re going to build on as we take our expert editorial content and weave it in with branded content.”

While NET doesn’t have any advertisers on board yet, Wiser said he envisions selling brands on voice experiences in the near future. For instance, the Good Housekeeping Echo skill could recommend a stain-removal brand that’s been vetted by the magazine. Or a food-themed skill could push a particular brand’s ingredient when reading a recipe out loud.

Hearst also has Amazon Echo skills for Elle, which answers horoscope questions, and for its newspaper brands—including the San Francisco Chronicle and Houston Chronicle—that read daily news out loud through Echo’s Flash Briefings feature.
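A Flash Briefing skill is driven by a JSON feed the publisher hosts; Alexa reads each item's `mainText` aloud. A minimal sketch of one feed item, assuming the field names from Amazon's Flash Briefing Skill API (the sample story and URL are invented):

```python
import json
from datetime import datetime, timezone

def flash_briefing_item(uid, title, summary, url):
    """Build one item of an Alexa Flash Briefing JSON feed.
    Field names follow Amazon's Flash Briefing Skill API;
    the content passed in is the publisher's to supply."""
    return {
        "uid": uid,
        "updateDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.0Z"),
        "titleText": title,
        "mainText": summary,  # the text Alexa reads aloud
        "redirectionUrl": url,
    }

# Hypothetical example entry for a newspaper brand's daily briefing.
feed = [flash_briefing_item(
    "urn:uid:sfchron-2016-12-02",
    "San Francisco Chronicle daily briefing",
    "Here are today's top stories from the Bay Area.",
    "https://www.sfchronicle.com",
)]
print(json.dumps(feed, indent=2))
```

The publisher only has to keep this feed fresh; Alexa polls it and voices the latest items when a user asks for their flash briefing.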

“We’re going to pick up the pace on these voice-activated devices,” Wiser explained. “The nice thing is that it extends directly to smartphones as well, so as more and more consumers use the voice interface to access their device or ask questions, we think we’ll be well-positioned to be an answer and show up by giving out more content in this format.”

In addition to Internet of Things technology like Amazon Echo and Google Home, Wiser named bots, artificial intelligence, augmented reality and over-the-top apps for smart TV devices like Roku as other big priorities for the group.

“The underlying theme in a lot of these areas links back to artificial intelligence, which from a corporate standpoint is an area that we’re doing quite a bit of work right now on machine learning,” the exec added. “We’re bringing that to life through some of these applications on these new devices.”

With augmented reality, Hearst is particularly interested in how the technology works within Snapchat since a number of its publishers are Snapchat Discover partners and chief content officer Joanna Coles sits on the mobile app’s board of directors. A few years ago, Hearst was also one of the first companies to build augmented reality apps for now-defunct Google Glass.

To help gear up for its push into artificial intelligence, NET is leaning on Hearst’s data-science team to analyze and format content for new devices. As Wiser explained it, the team first aggregates audience data that can then be picked apart to create bits of content as well as personalized ads for new devices. In a lot of cases, that means cutting Hearst’s trove of service-based content down to the bare minimum needed to answer a simple audio question.

“[We] have an artificial intelligent agent look at every image and read every piece of content that we’ve produced for the last 10 years and extract those things that we think would be relevant for voice-based search,” Wiser said. “You can’t have Alexa read back three paragraphs to you, but you can summarize it into a couple of sentences that at least tease the user and the user can ask to get more information—that’s what we’re looking for now.”
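Hearst says it uses machine learning for this; as a much simpler stand-in, the "couple of sentences that tease the user" idea can be sketched with a naive extractive approach that keeps an article's opening sentences and invites a follow-up (the trailing prompt wording is hypothetical):

```python
import re

def voice_teaser(article_text, max_sentences=2):
    """Reduce a long article to a short spoken teaser.
    Naive extractive sketch: keep the first couple of
    sentences. Hearst's actual pipeline, per the article,
    relies on machine learning, which this does not model."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", article_text.strip())
    teaser = " ".join(sentences[:max_sentences])
    return teaser + " Ask for more to hear the full story."

text = ("Set-in stains need fast action. Blot, don't rub, with a damp cloth. "
        "For older stains, pretreat with detergent before washing.")
print(voice_teaser(text))
```

The result is short enough for Alexa to speak comfortably while still leaving the listener a reason to ask for the rest.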

Source: AdWeek December 2, 2016