Evaluation

From SynSIG
== Towards freely usable software for assessment purposes ==
Assessing speech synthesis is not as easy as assessing speech recognition, for several reasons:
* Various criteria can be used (do we assess speech intelligibility, speech naturalness, or the efficiency of the speech component in a given application, etc.?).
* It systematically requires subjective tests by human listeners, which makes assessment a heavy task (see the sketch after this list for what aggregating such listening-test scores involves).
* Assessing the overall quality of a TTS system rarely indicates how to improve it, since the output is the result of several complex and intermixed processes.
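
As an illustration of the listening-test side of assessment, here is a minimal sketch (in Python) of how ratings collected from human listeners can be turned into a Mean Opinion Score (MOS) per system. The input file format and the function name are assumptions made for this example only; they are not prescribed by any particular evaluation campaign.

<pre>
# Minimal sketch: aggregating 1-to-5 listening-test ratings into a Mean
# Opinion Score (MOS) per system. The input format (one
# "system,listener,rating" row per judgement) is an assumption made for
# this example, not a standard imposed by any evaluation campaign.
import csv
import math
from collections import defaultdict

def mos_scores(csv_path):
    """Return {system: (mos, approximate 95% confidence half-width, n)}."""
    ratings = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            ratings[row["system"]].append(float(row["rating"]))
    results = {}
    for system, scores in ratings.items():
        n = len(scores)
        mean = sum(scores) / n
        var = sum((s - mean) ** 2 for s in scores) / (n - 1) if n > 1 else 0.0
        # 1.96 * standard error: a rough normal-approximation interval.
        half_width = 1.96 * math.sqrt(var / n) if n > 1 else float("nan")
        results[system] = (mean, half_width, n)
    return results

if __name__ == "__main__":
    for system, (mos, ci, n) in sorted(mos_scores("ratings.csv").items()):
        print("%s: MOS = %.2f +/- %.2f (n = %d)" % (system, mos, ci, n))
</pre>

Even such a simple script makes the cost visible: every rating behind the averages has to come from a human listener, which is what makes subjective assessment heavy.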


It is generally agreed that the development of free software can boost the assessment and improvement of these technologies.


As far as speech synthesis is concerned, the community has made an excellent start with [[Festival]] and [[MBROLA]], but in order to push the technology forward faster we need more people involved to build further on these efforts. Other software tools are available for tests, such as the recently released [http://mary.dfki.de OpenMary], a multilingual (German, English, Tibetan) and multi-platform (Windows, Linux, Mac OS X and Solaris) speech synthesis system.
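
These systems can also be driven from scripts when preparing test material. The sketch below shows one possible way to batch-synthesise a list of test sentences with Festival's text2wave command-line script; it assumes Festival is installed with text2wave on the PATH, and the file and directory names are placeholders.

<pre>
# Minimal sketch: batch-synthesising test sentences with Festival's
# text2wave script so that the resulting wave files can be used as
# stimuli in a listening test. Assumes Festival is installed and
# "text2wave" is on the PATH; file and directory names are placeholders.
import subprocess
from pathlib import Path

def synthesise(sentences_file="test_sentences.txt", out_dir="stimuli"):
    Path(out_dir).mkdir(exist_ok=True)
    with open(sentences_file, encoding="utf-8") as f:
        sentences = [line.strip() for line in f if line.strip()]
    for i, sentence in enumerate(sentences, start=1):
        wav_path = Path(out_dir) / ("sentence_%03d.wav" % i)
        # text2wave reads the text to speak from standard input when no
        # file argument is given, and writes the waveform to the -o file.
        subprocess.run(
            ["text2wave", "-o", str(wav_path)],
            input=sentence.encode("utf-8"),
            check=True,
        )
        print("wrote", wav_path)

if __name__ == "__main__":
    synthesise()
</pre>

The same loop could call MBROLA or OpenMary instead, provided the text is first converted to whatever input each system expects.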


== HLT-evaluation.org ==
Another source of information on speech synthesis evaluation is the [http://www.hlt-evaluation.org/article.php3?id_article=16 TTS page] on the [http://www.hlt-evaluation.org Human Language Technologies Evaluation] website.
