Transform DITA to speech
This DITA-OT Plug-in transforms DITA to speech in the form of an audiobook.
<task id="replacecover" xml:lang="en-us">
<title>Replace the cover of your system.</title>
<shortdesc>The cover needs to be put back on to reduce problems from dust.</shortdesc>
<taskbody>
<steps>
<step>
<cmd>Retrieve the computer's cover from its safe place. Put it back on.</cmd>
</step>
<step>
<cmd>Retrieve the screws from the safe place. Put them back in.</cmd>
</step>
<step>
<cmd>Put away your screwdriver before you lose it.</cmd>
</step>
</steps>
</taskbody>
</task>
The audiobook plug-in has been tested against DITA-OT 3.x. It is recommended that you
upgrade to the latest version.
The DITA-OT Audiobook transform is a plug-in for the DITA Open Toolkit.
Full installation instructions for downloading DITA-OT can be found here.
- Download the dita-ot-4.2.zip package from the project website at dita-ot.org/download
- Extract the contents of the package to the directory where you want to install DITA-OT.
- Optional: Add the absolute path for the bin directory to the PATH system variable. This defines the necessary environment variable to run the dita command from the command line.
curl -LO https://github.com/dita-ot/dita-ot/releases/download/4.2/dita-ot-4.2.zip
unzip -q dita-ot-4.2.zip
rm dita-ot-4.2.zip
dita install https://github.com/jason-fox/fox.jason.audiobook/archive/master.zip
The dita
command line tool requires no additional configuration.
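To confirm that the plug-in registered correctly, listing the installed transformation types should now include the ssml, mp3 and audiobook formats (a quick check, assuming a DITA-OT version that provides the transtypes subcommand):
PATH_TO_DITA_OT/bin/dita transtypes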
FFmpeg is a free software project consisting of a software suite of libraries and programs for handling video, audio,
and other multimedia files and streams. FFmpeg is published under the GNU Lesser General Public License 2.1+ or GNU
General Public License 2+ (depending on which options are enabled).
To download a copy, follow the instructions on the FFmpeg Download page.
Several publicly available text-to-speech cloud services can be used. They typically offer a try-before-you-buy option
and generally provide sample access to the service without cost. Upgrading to a paid version will be necessary when
transforming larger documents.
The IBM Text to Speech service processes text and natural language to generate synthesized audio output complete with
appropriate cadence and intonation. It is available in several voices:
Introduction: Getting Started
- Create an instance of the service.
- Copy the credentials to authenticate to your service instance: the API Key and URL values.
- Update cfg/configuration.properties to hold your API Key and URL.
The Speech Services allow you to convert text into synthesized speech and get a list of supported voices for a region
using a set of REST APIs. Each available endpoint is associated with a region. A subscription key for the
endpoint/region you plan to use is required.
Introduction: Getting Started
- Create an instance of the service:
You can sign up for a free Microsoft account at the Microsoft account portal. To get started, click Sign in with
Microsoft and then, when asked to sign in, click Create one. Follow the steps to create and verify your new Microsoft
account.
After you sign in to Try Cognitive Services, your free trial begins. The displayed webpage lists all the Azure Cognitive
Services services for which you currently have trial subscriptions. Two subscription keys are listed beside Speech
Services. You can use either key in your applications.
- Copy the credentials to authenticate to your service instance: the API Key and Endpoint values.
- Update cfg/configuration.properties to hold your API Key and URL.
To run, use the ssml transform.
PATH_TO_DITA_OT/bin/dita -f ssml -o out -i PATH_TO_DITAMAP
Once the command has run, a list.txt
and a series of *.ssml
files will be available in the output directory.
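For orientation, each generated *.ssml file holds standard Speech Synthesis Markup Language. A purely illustrative fragment based on the sample task above might look like the following; the exact structure and attributes produced depend on the source topics and the selected service:
<speak>
  <p>Replace the cover of your system.</p>
  <p>The cover needs to be put back on to reduce problems from dust.</p>
  <p>Retrieve the computer's cover from its safe place. Put it back on.</p>
</speak>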
To run, use the mp3
transform.
PATH_TO_DITA_OT/bin/dita -f mp3 -o out -i PATH_TO_DITAMAP --ssml.service=[bing|watson]
Once the command has run, a list.txt
and a series of *.mp3
files will be available in the output directory.
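If you want to check the pipeline end-to-end before signing up for a paid service, the dummy service listed in the parameter reference below avoids calling any text-to-speech provider, for example:
PATH_TO_DITA_OT/bin/dita -f mp3 -o out -i PATH_TO_DITAMAP --ssml.service=dummy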
To run, use the audiobook
transform.
PATH_TO_DITA_OT/bin/dita -f audiobook -o out -i PATH_TO_DITAMAP --ssml.service=[bing|watson]
Once the command has run, an *.m4a
file will be created in the output directory.
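Assuming FFmpeg's ffplay tool is available on your PATH, you can listen to the result directly (the file name below is just a placeholder):
ffplay out/my-audiobook.m4a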
- ssml.service - Decides which translation service to use:
  - dummy - Avoids accessing a Speech-to-Text service, uses a dummy MP3 file for all outputs
  - custom - Sends the SSML to an arbitrary URL using POST - use this to connect to proxies for Amazon
  - watson - Connects to the IBM Cloud Speech-to-Text service
  - bing - Connects to the Microsoft Speech-to-Text service
- ssml.gender - Preferred Voice Gender:
  - male - Use a male voice for text-to-speech where available.
  - female - Use a female voice for text-to-speech where available.
- ssml.authentication.url - URL for creating an OAuth token if needed for a service. Defaults to the value in configuration.properties
- ssml.output.format - Output format override for a Speech-to-Text service. Defaults to the value in configuration.properties
- ssml.apikey - API Key for the Speech-to-Text service. Defaults to the value in configuration.properties
- ssml.url - URL for a Speech-to-Text service. Defaults to the value in configuration.properties
- mp3.cachefile - Specifies the location of a cache file to be used. If the SSML file matches a previously cached entry, the existing MP3 is reused rather than requesting the audio from the service again.
- mp3.cover.art.add - Specifies whether or not cover art is to be added to an album (default no)
- mp3.cover.art.image - Specifies the cover art to be used for an album; the default will use the image cfg/cover-art.png within the plug-in
- audiobook.format - mp4 Output Format (with or without DRM):
  - m4a - audio file created in the MPEG-4 format (default)
  - m4b - audio file created in the MPEG-4 format with DRM
When running the mp3 or audiobook transforms, the male voice corresponding to the xml:lang attribute of the root
topic will be chosen to render the speech. Use the --ssml.gender=female parameter to switch to the female voice. If
no voice of the preferred gender can be found, the default will be used.
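As a usage illustration, several of these parameters can be combined on a single command line; the values below are placeholders rather than a recommended configuration:
PATH_TO_DITA_OT/bin/dita -f audiobook -o out -i PATH_TO_DITAMAP --ssml.service=watson --ssml.gender=female --mp3.cover.art.add=yes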
A list of available voices can be found within the following files:
cfg/attrs/bing.voice-attr.xsl
cfg/attrs/watson.voice-attr.xsl
Each listing shows the default male and female voices for a language, plus any regional variants which are available:
<!-- Voices speaking in English -->
<xsl:attribute-set name="__voice__en__male">
<xsl:attribute name="voice">en-US_MichaelVoice</xsl:attribute>
</xsl:attribute-set>
<xsl:attribute-set name="__voice__en__female">
<xsl:attribute name="voice">en-US_AllisonVoice</xsl:attribute>
</xsl:attribute-set>
<!-- Voices speaking in Regional English -->
<xsl:attribute-set name="__voice__en-us__female">
<xsl:attribute name="voice">en-US_AllisonVoice</xsl:attribute>
</xsl:attribute-set>
<!--xsl:attribute-set name="__voice__en-us__female">
<xsl:attribute name="voice">en-US_LisaVoice</xsl:attribute>
</xsl:attribute-set-->
<xsl:attribute-set name="__voice__en-gb__female">
<xsl:attribute name="voice">en-GB_KateVoice</xsl:attribute>
</xsl:attribute-set>
As you can see, en-US_AllisonVoice is currently the preferred female voice for all documents marked up as
xml:lang="en" and xml:lang="en-US".
To amend the en preferences, replace the text within the <xsl:attribute name="voice"> element with the preferred
voice. To amend the en-us preferences, comment out the existing selection and uncomment the new preferred voice.
Some DITA tags such as <p>
and <b>
translate directly to SSML; however, there is a rich vocabulary of audio effects
which are missing from the vanilla DITA specification. These can be accommodated using the props
attribute added to the <ph>
tag. Examples are given below. The listing is mainly based on the
IBM Text to Speech Programming Guide,
however the DITA plug-in is not service-specific, so some additional tags can be used. Obviously, common substitutions
should be replaced with <keyword>
elements for consistency of reuse.
Note: Not all tags and attributes will be supported by every provider.
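For orientation before the individual tags are described, a DITA paragraph can mix ordinary prose with an SSML-flavoured <ph>. The sentence below is an invented example reusing the vxml:currency value covered later in this listing:
<p>Your order total is <ph props="say-as interpret-as(vxml:currency)">USD45.30</ph>.</p>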
<say-as> Element
The say-as
tag allows the author to indicate information on the type of text contained within the tag and to help
specify the level of detail for rendering the text. The required attribute for this tag is interpret-as
. There are
two optional attributes, format
and detail
, which are only used with particular values within the interpret-as
attribute. These optional attributes are illustrated within the entries for their associated values.
letters
: This value spells out the characters in a given word within the enclosed tag.
<ph props="say-as interpret-as(letters)">Hello</ph>
digits
: This value spells out the digits in a given number within the enclosed tag.
<ph props="say-as interpret-as(digits)">123456</ph>
vxml:digits
: This value performs the same function as the digits value.
<ph props="say-as interpret-as(vxml:digits)">123456</ph>
date - This value will speak the date within the enclosed tag, using the format given in the associated format
attribute. The format attribute is required for use with the date value of interpret-as, but if format is not
provided, a default rendering is used.
<ph props="say-as interpret-as(date) format(mdy)">12/17/2005</ph>
<ph props="say-as interpret-as(date) format(ymd)">2005/12/17</ph>
<ph props="say-as interpret-as(date) format(dmy)">17/12/2005</ph>
<ph props="say-as interpret-as(date) format(ydm)">2005/17/12</ph>
<ph props="say-as interpret-as(date) format(my)">12/2005</ph>
<ph props="say-as interpret-as(date) format(md)">12/17</ph>
<ph props="say-as interpret-as(date) format(ym)">2005/12</ph>
ordinal
- This value will speak the ordinal value for the given digit within the enclosed tag.
<ph props="say-as interpret-as(ordinal)">2</ph>
<ph props="say-as interpret-as(ordinal)">1</ph>
cardinal
- This value will speak the cardinal number corresponding to the Roman numeral within the enclosed tag.
Super Bowl <ph props="say-as interpret-as(cardinal)">XXXIX</ph>
number - This value is an alternative to using the values given above, using the format attribute to determine how
the enclosed number is spoken (for example ordinal, cardinal or telephone) and the detail attribute to control
whether punctuation is spoken.
<ph props="say-as interpret-as(number)">123456</ph>
<ph props="say-as interpret-as(number) format(ordinal)">123456</ph>
<ph props="say-as interpret-as(number) format(cardinal)">123456</ph>
<ph props="say-as interpret-as(number) format(telephone)">555-555-5555</ph>
<ph props="say-as interpret-as(number) format(telephone) detail(punctuation)">555-555-5555</ph>
vxml:boolean
- This value will speak yes
or no
depending on the value given within the enclosed tag.
<ph props="say-as interpret-as(vxml:boolean)">true</ph>
<ph props="say-as interpret-as(vxml:boolean)">false</ph>
vxml:date - This value works like the date value, except that the format is predefined as YYYYMMDD. When a value
is not known, it can be replaced with question marks, as in the examples below.
<ph props="say-as interpret-as(vxml:date)">20050720</ph>
<ph props="say-as interpret-as(vxml:date)">????0720</ph>
<ph props="say-as interpret-as(vxml:date)">200507??</ph>
vxml:currency - This value is used to control the synthesis of monetary quantities. The string must be written in
UUUmm.nn format, where UUU is the three character currency indicator specified by ISO standard 4217, and mm.nn
is the amount.
<ph props="say-as interpret-as(vxml:currency)">USD45.30</ph>
If there are more than two decimal places in the number within the enclosed tag, the amount will be synthesized as a
decimal number followed by the currency indicator. If the three character currency indicator is not present, the number
will be synthesized as a decimal only, with no pronunciation of currency type.
<ph props="say-as interpret-as(vxml:currency)">USD45.329</ph>
vxml:phone - This value will speak a phone number with both digits and punctuation, similar to the number value
used with format(telephone).
<ph props="say-as interpret-as(vxml:phone)">555-555-5555</ph>
<phoneme> Element
The SSML phoneme tag enables users to provide a phonetic pronunciation for the enclosed text. This tag has two
attributes:
alphabet
- This attribute specifies the phonology used. The supported alphabets to designate are ipa
for the
International Phonetic Alphabet, and ibm
for the SPR representation.
ph
- This attribute specifies the pronunciation. It is a required attribute. This example shows how a
pronunciation for “tomato” is specified using the IPA phonology, where the symbols are given using Unicode:
<ph props="phoneme alphabet(ipa) ph(təmeiɾou̥)">tomato</ph>
This example shows how a pronunciation for “tomato” is specified using the SPR phonology:
<ph props="phoneme alphabet(ibm) ph(.0tx.1me.0fo)">tomato</ph>
<sub> Element
This tag is used to indicate that the text included in the alias attribute is to replace the text enclosed within the
tag when speech is synthesized. The only attribute for this tag is the alias
attribute, and it is required.
<ph props="sub alias(International Business Machines)">IBM</ph>
<voice> Element
This tag is used when a change in voice is required. Although all attributes listed are optional, without any attributes
defined an error will result. The optional attributes are:
- age - Accepted values are positive integers between the ages of 14 and 60 for both male and female.
- gender - Accepted values are male and female.
- name - Accepted values are the installed voices' names.
- variant - Accepted values are positive integers.
<ph props="voice age(60)">Sixty year-old's voice.</ph>
<ph props="voice gender(female)">This is a female voice.</ph>
<ph props="voice name(Allison)">Use the IBM TTS voice named Allison.</ph>
<ph props="voice name(Allison, Andrew, Tyler)">Use the first available IBM TTS voice named in the given list.</ph>
<emphasis> Element
The <emphasis>
element requests that the contained text be spoken with emphasis (also referred to as prominence or
stress).
level: the optional level attribute indicates the strength of emphasis to be applied. Defined values are strong,
moderate, none and reduced. The default level is moderate. The meaning of strong and moderate emphasis is
interpreted according to the language being spoken. The reduced level is effectively the opposite of emphasizing a
word, and the none
level is used to prevent the synthesis processor from emphasizing words that it might typically emphasize.
That is a <ph props="emphasis"> big </ph> car!
That is a <ph props="emphasis level(strong)"> huge </ph>bank account!
Emphasis can also be achieved using the <b>
tag:
That is a <b> big </b> car!
That is a <b props="level(strong)"> huge </b>bank account!
<break> Element
This tag inserts pauses into the spoken text. It has the following optional attributes:
- strength - This attribute specifies the length of a pause in terms of varying strength values: none, x-weak, weak,
medium, strong, or x-strong.
- time - This attribute specifies the length of the pause in terms of seconds or milliseconds. The value formats are
NNNs for seconds or NNNms for milliseconds.
Different sized <ph props="break strength(none)"></ph> pauses.
Different sized <ph props="break strength(x-weak)"></ph> pauses.
Different sized <ph props="break strength(weak)"></ph> pauses.
Different sized <ph props="break strength(medium)"></ph> pauses.
Different sized <ph props="break strength(strong)"></ph> pauses.
Different sized <ph props="break strength(x-strong)"></ph> pauses.
Different sized <ph props="break time(1s)"></ph> pauses.
Different sized <ph props="break time(1000ms)"></ph> pauses.
<prosody> Element
This tag controls the pitch, range, speaking rate, and volume of the text. All attributes are optional, but if no
attribute is given an error results.
Here is a description of the optional attributes:
- pitch - This attribute modifies the baseline pitch for the text enclosed within the tag. Accepted values are
either: a number followed by the Hz designation, a relative change, or one of x-low, low, medium, high, x-high,
default.
- range - This attribute modifies the pitch range for the text enclosed within the tag. Accepted values for this
attribute are the same as the accepted values for pitch.
- rate - This attribute indicates a change in the speaking rate for contained text. Accepted values are: a number, a
relative change, or one of x-slow, slow, medium, fast, x-fast, default. The rate is specified in terms of
words-per-minute. If the speaking rate is 50 words per minute, then rate=50. If the setting is rate=+10, the
speaking rate will be 10 words per minute faster than your current rate setting.
- volume - This attribute modifies the volume of the contained text. Accepted values are either: a number in the
range 0.0 to 100.0, or one of silent, x-soft, soft, medium, loud, x-loud, default.
<ph props="prosody pitch(150Hz)"> Modified pitch </ph>
<ph props="prosody pitch(-20Hz)"> Modified pitch </ph>
<ph props="prosody pitch(+20Hz)"> Modified pitch </ph>
<ph props="prosody pitch(-12st)"> Modified pitch </ph>
<ph props="prosody pitch(+12st)"> Modified pitch </ph>
<ph props="prosody pitch(x-low)"> Modified pitch </ph>
<ph props="prosody range(150Hz)"> Modified pitch range</ph>
<ph props="prosody range(-20Hz)"> Modified pitch range</ph>
<ph props="prosody range(+20Hz)"> Modified pitch range</ph>
<ph props="prosody range(-12st)"> Modified pitch range</ph>
<ph props="prosody range(+12st)"> Modified pitch range</ph>
<ph props="prosody range(x-high)"> Modified pitch range</ph>
<ph props="prosody rate(slow)"> Modified speaking rate</ph>
<ph props="prosody rate(+25)"> Modified speaking rate</ph>
<ph props="prosody rate(-25)"> Modified speaking rate</ph>
<ph props="prosody volume(88.9)">Modified volume</ph>
<ph props="prosody volume(loud)">Modified volume</ph>
<audio> Element
This tag inserts recorded elements into the generated audio. The only attribute is src,
and it is required. This attribute
specifies the location of the file to be inserted.
<ph props="audio src(http://www.myfiles.com/files/beep.wav)"></ph>
PRs accepted.
Apache 2.0 © 2019 - 2024 Jason Fox