Amazon announced this week that it's adding new features to Alexa that will allow Skill developers to make it sound more lively and human.

Alexa now supports Speech Synthesis Markup Language (SSML), which allows developers to control the tone, timing, and emotion of its voice. With this update, Alexa will whisper, emphasize, and even bleep inappropriate words if the developer chooses. As Amazon says, "These SSML features provide a more natural voice experience."
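For a sense of how this looks in practice, here's a minimal sketch of a Skill response using those tags. The handler shape is a generic Lambda-style example I've assumed for illustration; the SSML tags themselves (whispering, emphasis, and expletive bleeping) are the ones Amazon documents.

```python
# Hypothetical sketch of a Skill handler returning SSML. The function names
# are illustrative; the response envelope and SSML tags follow Amazon's docs.
def build_response(ssml: str) -> dict:
    """Wrap an SSML string in the standard Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": True,
        },
    }

def handler(event, context):
    # Whisper, emphasize, and bleep a word in a single reply.
    ssml = (
        "<speak>"
        "<amazon:effect name='whispered'>I can whisper now.</amazon:effect> "
        "<emphasis level='strong'>And really stress a point.</emphasis> "
        "I can even bleep out <say-as interpret-as='expletive'>rude words</say-as>."
        "</speak>"
    )
    return build_response(ssml)
```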

I’m struggling to see why Alexa whispering to me would be useful, especially if it takes more time to develop. Still, a more human sound could lead to a more pleasant user experience. It reminds me of how Microsoft sold Cortana on her smooth tones rather than Siri’s staccato cadence.

Human-sounding voices would probably go a long way toward making Alexa one of the personalized, JARVIS-like personal assistants we heard discussed at SXSW.

But this isn’t something that’s going to be integrated into Alexa itself. It’s something Skill developers can use, but not something I can see being part of Alexa’s daily use.
