BOSTON--(BUSINESS WIRE)--Applause, the world leader in testing and digital quality, recently conducted a survey with 6,680 respondents about their experiences using artificial intelligence (AI) in the form of voice applications such as chatbots, interactive voice response (IVR), and other conversational assistants.
Conducted in February 2022, the survey showed alignment between the expectations and experiences of participants in the U.S. and across Europe. According to the survey, consumers expect apps and websites to provide AI-driven customer service solutions, but they are not always satisfied with the user experience.
For example, 93% of respondents expect chat functionality on a website, but only 63% said they were somewhat satisfied or extremely satisfied with the experience. More than half of respondents in the U.S. (51%) and Europe (57%) said they preferred to wait for a human agent when calling a company for customer support.
“The fact that more than half of respondents preferred to wait for a human agent instead of using a chatbot, IVR, or voice assistant speaks to a potential lack of confidence which perhaps is based on previous experiences. When a user has a bad digital experience, it is difficult to change that perception. This is a moment when quality can be a real differentiator, separating a brand from its competition. If customers expect these solutions to disappoint, they are predisposed to anticipate failure and quickly lose patience with any alternative that isn't a human interaction. Therefore, there is tremendous advantage to those who are able to deliver better experiences that can exceed the service level they have been conditioned to expect,” said Luke Damian, Chief Growth Officer at Applause.
User Experience Trails Expectations
- 93% expect chat functionality on a company’s website or app, but only 63% said they were somewhat satisfied or extremely satisfied with the experience.
- 89% expect call centers to have IVR systems that greet them, but only 25% prefer immediate access to automated touchtone response systems, and 22% prefer an automated virtual service representative that responds to voice commands.
- 44% always expect mobile apps to have voice assistants or voice search features, while 41% said it depends on the app category.
A single AI application can require tens of thousands of accurate and relevant data artifacts, or more, all of which need to be collected with the application’s specific purpose and needs in mind. Applause leverages a community of more than one million qualified testers worldwide to collect the volume and quality of real-world data needed to train and validate AI algorithms, like those used for IVR or chatbots, and then tests the trained systems to ensure they are working as intended.
Bias is a well-known challenge in AI. Algorithms that are not given enough data, or that learn from data collected from a group of people that is too homogeneous, can produce overly generalized, biased outcomes and unintended behaviors. The size and breadth of the Applause community enables a diversity of feedback and input representing a wide variety of devices, plus broad diversity of demographic and psychographic characteristics, including countries of origin or residence, ages, genders, cultures, abilities, languages, socioeconomic variables, and more.
Additional resources on Artificial Intelligence
How to Build an AI Data Collection Program - ebook
3 Unexpected AI Use Cases and Their Hidden Benefits - webinar
4 Best Practices for Better Natural Language Assistants - ebook
About Applause
Applause is the world leader in testing and digital quality. Brands today win or lose customers through digital interactions, and Applause alone can deliver authentic feedback on the quality of digital assets and experiences, provided by real users in real-world settings. Our disruptive approach harnesses the power of the Applause platform and leverages a vetted community of more than one million digital experts worldwide. Unlike traditional testing methods (including lab-based and offshoring), we respond with the speed, scale and flexibility that digital-focused brands require and expect. Applause provides insightful, actionable testing results that can directly inform go/no-go release decisions, helping development teams build better and faster, and release with confidence. Thousands of digital-first brands – including Ford, Google, Western Union and Dow Jones – rely on Applause as a best practice to deliver the digital experiences their customers love.
Learn more at www.applause.com.