We are very excited to announce the integration of an application programming interface (API) inside questions. questfox is partnering with audEERING and their emotion analysis engine based on speech analytics. questfox now offers a permanent link to the incredibly innovative audio analytics tool from audEERING (Munich).
Some questfox users already have access to a new question type called Speech2Text Voice Analysis. It is available under Multimedia Insights.
The question type is basically a speaking answer question type (with all the restrictions still in practice*), where a transcript is generated from the audio file. On top of this functionality, the audio file is sent via API to audEERING to check for emotional patterns in the voice answer. Within seconds, questfox receives the answer in the form of numbers assigned to different emotional states of a person. These values are saved inside the question type and kept in the background for export or other purposes. For example, they can be used right away for different actions during the live interview. The values are exported alongside other data in the export file.
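As a rough illustration of how such values could drive actions during a live interview, here is a minimal sketch. The field names and score format are purely hypothetical, not the actual audEERING response:

```javascript
// Hypothetical example: the emotion labels and the 0..1 score format below
// are illustrative assumptions, not the actual audEERING response schema.
function dominantEmotion(scores) {
  // scores: e.g. { happy: 0.62, neutral: 0.25, angry: 0.13 }
  // Pick the entry with the highest value and return its label.
  return Object.entries(scores).reduce(
    (best, current) => (current[1] > best[1] ? current : best)
  )[0];
}

// A value like this could then act as a trigger in the live interview,
// e.g. showing a follow-up question only when the dominant emotion is "angry".
```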
At the moment we are still testing the performance of the integrated API inside a survey tool. After additional test runs, this emotional tracking opportunity will allow entirely different kinds of research.
This new function is not included in the standard questfox software package, as it causes additional cost every time it is used. After a defined testing period we will define the pricing for this fabulous new feature set. Little by little we will create templates and ideas for how to use emotional tracking in the interview process. We are also looking forward to seeing our users' fantastic ideas come to life with this new set of opportunities.
We are looking forward to seeing new forms of research with the integration of the cognitive API inside the survey tool questfox. More cognitive services may be connected to questfox in the future. We strongly believe in the power of expert knowledge from different fields of expertise. We will continue developing new standards in the world of interviewing as our own field of expertise. The new technology allows us to connect several best-in-class approaches in one tool. The innovative tool enabling this open world of connected intelligence is still questfox.
*Restrictions when using voice type questions inside of questfox.
Years ago we made the decision not to develop an app for questfox, as this would restrict our users around the world. We still believe that the effort needed for users to download and install an app for market research hardly pays off. This is why we opted for an open approach based on internet browser technologies. The downside is that even in 2019 it is difficult to get voice answers from a number of browsers that do not allow the use of a microphone. The biggest obstacle here is still Apple, which does not allow microphone access in a browser on its devices. We strongly recommend a technology funnel in your survey, filtering out users of iOS and some outdated browsers like Internet Explorer. Good news: even Microsoft got the message, and the new version of Microsoft Edge works with the microphone. We know that it will still take years before 100% of a population can be researched with this approach. For the time being we have to live with the 70% of users who can actually be interviewed with this technology.
An indicator of whether this is possible can be found here: https://caniuse.com/#search=mediastream
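A technology funnel like the one described above typically starts with a feature check for the MediaStream API. Here is a minimal sketch (the function name is ours; it takes the navigator object as a parameter so it can also be exercised outside a browser):

```javascript
// Capability check for microphone access via the MediaStream API,
// the feature that caniuse.com tracks under "mediastream".
// Passing navigator in as an argument keeps the function testable.
function supportsVoiceQuestions(nav) {
  return !!(nav &&
            nav.mediaDevices &&
            typeof nav.mediaDevices.getUserMedia === 'function');
}
```

On a survey's entry page you might call `supportsVoiceQuestions(navigator)` and route unsupported visitors (e.g. iOS or legacy Internet Explorer) past any voice question types.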
questfox is fully into Speech.
In order to learn more about the quality of the transcriptions possible, we integrated a new speech question type whose only job is to check the quality of the audio transcript. The new question type is available under “Multimedia Insights” with the label “Speech2Text Quality Score”.
Integrating the Speech2Text Quality Score allows you to run a simple test of the quality of the audio transcription. The resulting score lies between 0 (no quality) and 1 (perfect quality). The higher the score, the better the quality of the transcript. Before using live transcription in an interview situation, we recommend a quality score of at least 0.7.
If the quality score falls below 0.5, we recommend not using audio transcription features in your project. Reasons for a bad score can include the overall sound environment of the recording situation or poor articulation by the individual speaker.
By using questlogix, you can steer a participant through the interview by disabling voice functionalities in the interview situation.
At the moment we recommend saving the quality score along with the data to learn more about your respondents and their technical setting. Looking at the potential base of users able to use speech on their device, more than seventy percent of the worldwide internet population should technically be able to participate in such a voice study. But in reality the feasibility of speech technology falls well behind those wishful numbers. Integrating the quality score will help you better understand your opportunities for speech in research.
By the way: we do not record whatever people say in this quality score question. You can change the sample sentence that people should read in this question type to whatever you like.
You can also use the outcome as a variable in the ongoing interview by showing the score or using it as a trigger for a questlogix.
Explanation to better understand the Transcription Confidence Score
| Score | Rating | Recommendation |
|---|---|---|
| above 0.9 | very good | just go on with your project |
| 0.8 – 0.9 | good | no need to change anything |
| 0.7 – 0.8 | acceptable | reduce background noise |
| 0.6 – 0.7 | usable with caution | check microphone |
| 0.5 – 0.6 | not acceptable | re-position microphone/person |
| 0 – 0.5 | very bad | do not use AUDIO functionality |
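The bands above can be sketched as a small helper, for example to show the recommendation to a respondent or to feed a questlogix trigger (the function name is ours):

```javascript
// Map a transcription confidence score (0..1) to the recommendation
// bands from the table above. Boundary values fall into the upper band.
function confidenceAdvice(score) {
  if (score > 0.9)   return 'very good: just go on with your project';
  if (score >= 0.8)  return 'good: no need to change anything';
  if (score >= 0.7)  return 'acceptable: reduce background noise';
  if (score >= 0.6)  return 'usable with caution: check microphone';
  if (score >= 0.5)  return 'not acceptable: re-position microphone/person';
  return 'very bad: do not use AUDIO functionality';
}
```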