Designing a natural language interface can be difficult. Should the interface interpret every nuance of speech, including slang? Or should we aim for a constrained language and require our users to learn a set of simple commands?
All the big companies are investing heavily in AI personal assistants: Amazon has Alexa, Google has Google Assistant, Apple has Siri, and Microsoft has Cortana, to name a few. For most people, talking to their devices still feels a bit strange. According to comScore, 50% of all searches will be made by voice by the year 2020, and today 40% of adults use voice commands at least once per day. Get ready now, and your bots and apps will be a delight to talk to!
Being a geek shows in all parts of her life: organizing hackathons, running a user group and a podcast with her husband, hosting game nights (retro or VR/MR) with friends, catching the latest superhero movie, or speaking internationally at conferences. Her favorite topics are UX/UI, mixed reality, and other futuristic tech. She is a Windows Development MVP. Together with her husband she runs a company called "AZM dev", which is focused on HoloLens, Windows development, UX, and teaching the same.