Subjective Evaluation of Text-to-Speech Models: Comparing Absolute Category Rating and Ranking by Elimination Tests

Kishor Kayyar Lakshminarayana, Christian Dittmar, Nicola Pia, Emanuël A.P. Habets

Presented at the ISCA 12th Speech Synthesis Workshop, Grenoble, France, 26-28 August 2023

Click here for the paper.

Abstract

Modern text-to-speech (TTS) models are typically subjectively evaluated using an Absolute Category Rating (ACR) method. This method uses the mean opinion score to rate each model under test. However, if the models are perceptually too similar, assigning absolute ratings to stimuli might be difficult and prone to subjective preference errors. Pairwise comparison tests offer relative comparison and capture some of the subtle differences between the stimuli better. However, pairwise comparisons take more time as the number of tests increases exponentially with the number of models. Alternatively, a ranking-by-elimination (RBE) test can assess multiple models with similar benefits as pairwise comparisons for subtle differences across models without the time penalty. We compared the ACR and RBE tests for TTS evaluation in a controlled experiment. We found that the obtained results were statistically similar even in the presence of perceptually close TTS models.
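To illustrate how the two listening-test types are aggregated, here is a minimal sketch in Python. It is not the paper's implementation (see the Code link below for that); the model names, the 1-5 ACR scale, and the listener responses are purely illustrative assumptions. ACR responses are averaged into a mean opinion score per model, while RBE responses (the order in which each listener eliminates stimuli) are converted into ranks and averaged.

    # Illustrative sketch only: hypothetical models and synthetic listener data.
    from statistics import mean
    from collections import defaultdict

    # --- Absolute Category Rating (ACR) ---
    # Each listener assigns an absolute score (assumed 1-5) to every stimulus;
    # the per-model mean opinion score (MOS) summarizes the ratings.
    acr_ratings = {
        "model_A": [4, 5, 4, 4, 3],
        "model_B": [4, 4, 5, 4, 4],
    }
    mos = {m: mean(r) for m, r in acr_ratings.items()}
    print("MOS:", mos)

    # --- Ranking by elimination (RBE) ---
    # Each listener repeatedly removes the worst-sounding stimulus until one
    # remains; the elimination order yields a rank per model (1 = best, i.e.
    # eliminated last). Mean rank across listeners gives a relative ordering.
    rbe_elimination_orders = [            # first-eliminated ... last-remaining
        ["model_A", "model_B"],
        ["model_B", "model_A"],
        ["model_A", "model_B"],
    ]
    ranks = defaultdict(list)
    for order in rbe_elimination_orders:
        n = len(order)
        for position, model in enumerate(order):
            ranks[model].append(n - position)   # later elimination -> better rank
    mean_rank = {m: mean(r) for m, r in ranks.items()}
    print("Mean RBE rank (lower is better):", mean_rank)

With these toy responses, both aggregates point to the same ordering of the two models, which is the kind of agreement between ACR and RBE results that the paper examines statistically.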

Code

Click here for the code on GitHub.

Additional Material

  • Poster presented at the 12th Speech Synthesis Workshop, Grenoble, France, 26-28 August 2023 (PDF)