    Pronunciation Use Cases

    W3C First Public Working Draft

    This version:
    https://www.w3.org/TR/2019/WD-pronunciation-use-cases-20190905/
    Latest published version:
    https://www.w3.org/TR/pronunciation-use-cases/
    Latest editor's draft:
    https://w3c.github.io/pronunciation/use-cases
    Editors:
    (Educational Testing Service)
    (Deque System)
    (Educational Testing Service)
    (W3C)

    Abstract

    The objective of the Pronunciation Task Force is to develop normative specifications and best practices guidance, collaborating with other W3C groups as appropriate, to provide for proper pronunciation in HTML content when using text-to-speech (TTS) synthesis. This document provides various use cases highlighting the need for standardization of pronunciation markup, to ensure consistent and accurate representation of the content. The requirements from the user scenarios provide the basis for these technical requirements/specifications.

    Status of This Document

    This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

    This is a First Public Working Draft of Pronunciation User Scenarios by the Accessible Platform Architectures Working Group. It was initially developed by the Pronunciation Task Force to provide various use cases highlighting the need for standardization of pronunciation markup, to ensure consistent and accurate representation of the content. The requirements from the user scenarios provide the basis for these technical requirements/specifications.

    To comment, file an issue in the W3C pronunciation GitHub repository. If this is not feasible, send email to [email protected] (subscribe, archives). Comments are requested by 14 October 2019. In-progress updates to the document may be viewed in the publicly visible editors' draft.

    Publication as a First Public Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

    This document was produced by a group operating under the W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

    This document is governed by the 1 March 2019 W3C Process Document.

    1. Introduction

    This section is non-normative.

    This document provides use cases which describe specific implementation approaches for introducing pronunciation and spoken presentation authoring markup into HTML5. They are based on the two primary approaches that have evolved from the Pronunciation Task Force members. Other approaches may appear in subsequent working drafts.

    Successful use cases will be those that provide ease of authoring and consumption by assistive technologies and user agents that utilize synthetic speech for spoken presentation of web content. The most challenging aspect of consumption may be alignment of the markup approach with the standard mechanisms by which assistive technologies, specifically screen readers, obtain content via platform accessibility APIs.

    2. Use Case aria-ssml

    2.1 Background and Current Practice

    A new aria attribute could be used to include pronunciation content.

    2.2 Goal

    Embed SSML in an HTML document.

    2.3 Target Audience

    2.4 Implementation Options

    aria-ssml as embedded JSON

    When AT encounters an element with aria-ssml, the AT should enhance the UI by processing the pronunciation content and passing it to the Web Speech API or an external API (e.g., ).

    I say <span aria-ssml='{"phoneme":{"ph":"pɪˈkɑːn","alphabet":"ipa"}}'>pecan</span>.
    You say <span aria-ssml='{"phoneme":{"ph":"ˈpi.kæn","alphabet":"ipa"}}'>pecan</span>.

    Client will convert JSON to SSML and pass the XML string to a speech API.

    var msg = new SpeechSynthesisUtterance();
    msg.text = convertJSONtoSSML(element.getAttribute('aria-ssml'));
    speechSynthesis.speak(msg);
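    The convertJSONtoSSML helper above is left undefined by this draft. A minimal sketch of one possible shape, assuming the annotated element's text content is passed alongside the JSON attribute value, and that the JSON maps SSML element names to attribute objects:

```javascript
// Hypothetical convertJSONtoSSML: not defined in this draft. Assumes the
// aria-ssml JSON maps SSML tag names (e.g. "phoneme") to attribute objects,
// and that the annotated element's text content is supplied separately.
function convertJSONtoSSML(json, text) {
  const data = JSON.parse(json);
  let body = text;
  // Wrap the text in each SSML element described by the JSON payload.
  for (const [tag, attrs] of Object.entries(data)) {
    const attrString = Object.entries(attrs)
      .map(([name, value]) => ` ${name}="${value}"`)
      .join('');
    body = `<${tag}${attrString}>${body}</${tag}>`;
  }
  return `<speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis">${body}</speak>`;
}
```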

    aria-ssml referencing XML by template ID

    <!-- ssml must appear inside a template to be valid -->
    <template id="pecan">
    <?xml version="1.0"?>
    <speak version="1.1"
           xmlns="http://www.w3.org/2001/10/synthesis"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
                       http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
           xml:lang="en-US">
        You say, <phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</phoneme>.
        I say, <phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</phoneme>.
    </speak>
    </template>
    
    <p aria-ssml="#pecan">You say, pecan. I say, pecan.</p>

    Client will parse XML and serialize it before passing to a speech API:

    var msg = new SpeechSynthesisUtterance();
    var xml = document.getElementById('pecan').content.firstElementChild;
    msg.text = serialize(xml);
    speechSynthesis.speak(msg);
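    The serialize helper is likewise unspecified. In a browser it could simply wrap XMLSerializer; the illustrative sketch below (not part of the draft) also includes a minimal recursive fallback so the behavior is concrete:

```javascript
// Hypothetical serialize() helper: turns the template's SSML node back into
// an XML string. Uses the browser's XMLSerializer when available; otherwise
// falls back to a minimal recursive walk (illustration only).
function serialize(node) {
  if (typeof XMLSerializer !== 'undefined') {
    return new XMLSerializer().serializeToString(node);
  }
  if (node.nodeType === 3) return node.textContent; // text node
  const attrs = Array.from(node.attributes || [])
    .map(a => ` ${a.name}="${a.value}"`)
    .join('');
  const children = Array.from(node.childNodes || [])
    .map(serialize)
    .join('');
  return `<${node.nodeName}${attrs}>${children}</${node.nodeName}>`;
}
```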

    aria-ssml referencing an XML string as script tag

    <script id="pecan" type="application/ssml+xml">
    <speak version="1.1"
           xmlns="http://www.w3.org/2001/10/synthesis"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
                       http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
           xml:lang="en-US">
        You say, <phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</phoneme>.
        I say, <phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</phoneme>.
    </speak>
    </script>
    
    <p aria-ssml="#pecan">You say, pecan. I say, pecan.</p>

    Client will pass the raw XML string to a speech API.

    var msg = new SpeechSynthesisUtterance();
    msg.text = document.getElementById('pecan').textContent;
    speechSynthesis.speak(msg);

    aria-ssml referencing an external XML document by URL

    <p aria-ssml="//example.com/pronounce.ssml#pecan">You say, pecan. I say, pecan.</p>

    Client will pass the string payload to a speech API.

    var msg = new SpeechSynthesisUtterance();
    var response = await fetch(el.getAttribute('aria-ssml'));
    msg.text = await response.text();
    speechSynthesis.speak(msg);
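    Note that fetch() retrieves the whole external document; the #pecan fragment must be resolved client-side. A small helper for splitting the reference (hypothetical, not defined by the draft) might look like:

```javascript
// Hypothetical helper: split an aria-ssml URL reference into the document
// URL to fetch and the fragment id to resolve within the fetched SSML.
function splitSSMLRef(ref) {
  const i = ref.indexOf('#');
  return i === -1
    ? { url: ref, fragment: null }
    : { url: ref.slice(0, i), fragment: ref.slice(i + 1) };
}
```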

    2.5 Existing Work

    2.6 Problems and Limitations

    3. Use Case data-ssml

    3.1 Background and Current Practice

    As an existing attribute, data-* could be used, with some conventions, to include pronunciation content.

    3.2 Goal

    3.3 Target Audience

    Hearing users

    3.4 Implementation Options

    data-ssml as embedded JSON

    When an element with data-ssml is encountered by an SSML-aware AT, the AT should enhance the user interface by processing the referenced SSML content and passing it to the Web Speech API or an external API (e.g., ).

    <h2>The Pronunciation of Pecan</h2>
    <p>
    I say <span data-ssml='{"phoneme":{"ph":"pɪˈkɑːn","alphabet":"ipa"}}'>pecan</span>.
    You say <span data-ssml='{"phoneme":{"ph":"ˈpi.kæn","alphabet":"ipa"}}'>pecan</span>.
    </p>

    Client will convert JSON to SSML and pass the XML string to a speech API.

    var msg = new SpeechSynthesisUtterance();
    msg.text = convertJSONtoSSML(element.dataset.ssml);
    speechSynthesis.speak(msg);

    data-ssml referencing XML by template ID

    <!-- ssml must appear inside a template to be valid -->
    <template id="pecan">
    <?xml version="1.0"?>
    <speak version="1.1"
           xmlns="http://www.w3.org/2001/10/synthesis"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
                       http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
           xml:lang="en-US">
        You say, <phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</phoneme>.
        I say, <phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</phoneme>.
    </speak>
    </template>
    
    <p data-ssml="#pecan">You say, pecan. I say, pecan.</p>

    Client will parse XML and serialize it before passing to a speech API:

    var msg = new SpeechSynthesisUtterance();
    var xml = document.getElementById('pecan').content.firstElementChild;
    msg.text = serialize(xml);
    speechSynthesis.speak(msg);

    data-ssml referencing an XML string as script tag

    <script id="pecan" type="application/ssml+xml">
    <speak version="1.1"
           xmlns="http://www.w3.org/2001/10/synthesis"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
                       http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
           xml:lang="en-US">
        You say, <phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</phoneme>.
        I say, <phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</phoneme>.
    </speak>
    </script>
    
    <p data-ssml="#pecan">You say, pecan. I say, pecan.</p>

    Client will pass the raw XML string to a speech API.

    var msg = new SpeechSynthesisUtterance();
    msg.text = document.getElementById('pecan').textContent;
    speechSynthesis.speak(msg);

    data-ssml referencing an external XML document by URL

    <p data-ssml="//example.com/pronounce.ssml#pecan">You say, pecan. I say, pecan.</p>

    Client will pass the string payload to a speech API.

    var msg = new SpeechSynthesisUtterance();
    var response = await fetch(el.dataset.ssml);
    msg.text = await response.text();
    speechSynthesis.speak(msg);

    3.5 Existing Work

    3.6 Problems and Limitations

    4. Use Case HTML5

    4.1 Background and Current Practice

    HTML5 includes the XML namespaces for MathML and SVG, so using either vocabulary's elements in an HTML5 document is valid. Because SSML's implementation is non-visual in nature, browser implementation could be slow or non-existent without affecting how authors use SSML in HTML. Expanding HTML5 to include the SSML namespace would allow valid use of SSML in HTML5 documents. Browsers would treat the elements like any other unknown element, as HTMLUnknownElement.

    4.2 Goal

    4.3 Target Audience

    4.4 Implementation Options

    SSML

    When inline SSML is encountered by an SSML-aware AT, the AT should enhance the user interface by processing the SSML content and passing it to the Web Speech API or an external API (e.g., ).

    <h2>The Pronunciation of Pecan</h2>
    <p><speak>
      You say, <phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</phoneme>.
      I say, <phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</phoneme>.
    </speak></p>

    4.5 Existing Work

    4.6 Problems and Limitations

    SSML is not valid HTML5

    5. Use Case Custom Element

    5.1 Background and Current Practice

    Embed valid SSML in HTML using custom elements registered as ssml-* where * is the actual SSML tag name (except for p which expects the same treatment as an HTML p in HTML layout).

    5.2 Goal

    Support use of SSML in HTML documents.

    5.3 Target Audience

    5.4 Implementation Options

    ssml-speak: see demo

    Only the <ssml-speak> component requires registration. The component code lifts the SSML by getting the innerHTML, removing the ssml- prefix from the interior tags, and passing the result to the Web Speech API. The <p> tag from SSML is not given the prefix because we still want to start a semantic paragraph within the content. The other tags used in the example have no semantic meaning in HTML. Tags like <em> in HTML could be converted to <emphasis> in SSML. In that case, CSS styles will come from the browser's default styles or the page author.

    <ssml-speak>
      Here are <ssml-say-as interpret-as="characters">SSML</ssml-say-as> samples.
      I can pause<ssml-break time="3s"></ssml-break>.
      I can speak in cardinals.
      Your number is <ssml-say-as interpret-as="cardinal">10</ssml-say-as>.
      Or I can speak in ordinals.
      You are <ssml-say-as interpret-as="ordinal">10</ssml-say-as> in line.
      Or I can even speak in digits.
      The digits for ten are <ssml-say-as interpret-as="characters">10</ssml-say-as>.
      I can also substitute phrases, like the <ssml-sub alias="World Wide Web Consortium">W3C</ssml-sub>.
      Finally, I can speak a paragraph with two sentences.
      <p>
        <ssml-s>You say, <ssml-phoneme alphabet="ipa" ph="pɪˈkɑːn">pecan</ssml-phoneme>.</ssml-s>
        <ssml-s>I say, <ssml-phoneme alphabet="ipa" ph="ˈpi.kæn">pecan</ssml-phoneme>.</ssml-s>
      </p>
    </ssml-speak>
    <template id="ssml-controls">
      <style>
        [role="switch"][aria-checked="true"] :first-child,
        [role="switch"][aria-checked="false"] :last-child {
          background: #000;
          color: #fff;
        }
      </style>
      <slot></slot>
      <p>
        <span id="play">Speak</span>
        <button role="switch" aria-checked="false" aria-labelledby="play">
          <span>on</span>
          <span>off</span>
        </button>
      </p>
    </template>
    class SSMLSpeak extends HTMLElement {
      constructor() {
        super();
        const template = document.getElementById('ssml-controls');
        const templateContent = template.content;
        this.attachShadow({mode: 'open'})
          .appendChild(templateContent.cloneNode(true));
      }
      connectedCallback() {
        const button = this.shadowRoot.querySelector('[role="switch"][aria-labelledby="play"]')
        const ssml = this.innerHTML.replace(/ssml-/gm, '')
        const msg = new SpeechSynthesisUtterance();
        msg.lang = document.documentElement.lang;
        msg.text = `<speak version="1.1"
          xmlns="http://www.w3.org/2001/10/synthesis"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.w3.org/2001/10/synthesis
            http://www.w3.org/TR/speech-synthesis11/synthesis.xsd"
          xml:lang="${msg.lang}">
        ${ssml}
        </speak>`;
        msg.voice = speechSynthesis.getVoices().find(voice => voice.lang.startsWith(msg.lang));
        msg.onstart = () => button.setAttribute('aria-checked', 'true');
        msg.onend = () => button.setAttribute('aria-checked', 'false');
        button.addEventListener('click', () => speechSynthesis[speechSynthesis.speaking ? 'cancel' : 'speak'](msg))
      }
    }
    
    customElements.define('ssml-speak', SSMLSpeak);
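    The key transform in connectedCallback() is the prefix strip. Pulled out as a standalone function (same regex as above), it removes every "ssml-" substring, so any literal "ssml-" in text content would be stripped too:

```javascript
// The prefix-stripping step from connectedCallback(), shown standalone:
// every "ssml-" substring is removed, converting the custom elements back
// into plain SSML tags before the string is handed to the speech API.
function stripSSMLPrefix(html) {
  return html.replace(/ssml-/gm, '');
}
```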

    5.5 Existing Work

    5.6 Problems and Limitations

    6. Use Case JSON-LD

    6.1 Background and Current Practice

    JSON-LD provides an established standard for embedding data in HTML. Unlike other microdata approaches, JSON-LD helps to reuse standardized annotations through external references.

    6.2 Goal

    Support use of SSML in HTML documents.

    6.3 Target Audience

    6.4 Implementation Options

    JSON-LD

    <script type="application/ld+json">
    {
      "@context": "//schema.org/",
      "@id": "/Pronunciation#WKRP",
      "@type": "TextPronunciation",
      "@language": "en",
      "text": "WKRP",
      "speechToTextMarkup": "SSML",
      "phoneticText": "<say-as interpret-as=\"characters\">WKRP</say-as>"
    }
    </script>
    <p>
      Do you listen to <span itemscope
        itemtype="//schema.org/TextPronunciation"
        itemid="/Pronunciation#WKRP">WKRP</span>?
    </p>
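    A consuming client would need to pair each annotated span's itemid with the matching JSON-LD record. A sketch of that lookup (hypothetical, assuming the records have already been parsed out of the script tag):

```javascript
// Hypothetical lookup: given JSON-LD records parsed from the page's
// application/ld+json script tags, find the pronunciation record whose
// @id matches an annotated element's itemid attribute.
function findPronunciation(records, itemid) {
  return records.find(
    r => r['@type'] === 'TextPronunciation' && r['@id'] === itemid
  ) || null;
}
```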

    6.5 Existing Work

    6.6 Problems and Limitations

    not an established "type"/published schema

    7. Use Case Ruby

    7.1 Background and Current Practice

    Ruby annotations are short runs of text presented alongside base text, primarily used in East Asian typography as a guide for pronunciation or to include other annotations.

    Ruby guides pronunciation visually, which makes it a natural fit for text-to-speech.

    7.2 Goal

    7.3 Target Audience

    7.4 Implementation Options

    ruby with microdata

    Microdata can augment the ruby element and its descendants.

    <p>
      You say,
      <span itemscope="" itemtype="//example.org/Pronunciation">
        <ruby itemprop="phoneme" content="pecan">
          pecan
          <rt itemprop="ph">pɪˈkɑːn</rt>
          <meta itemprop="alphabet" content="ipa">
        </ruby>.
      </span>
      I say,
      <span itemscope="" itemtype="//example.org/Pronunciation">
        <ruby itemprop="phoneme" content="pecan">
          pe
          <rt itemprop="ph">ˈpi</rt>
          can
          <rt itemprop="ph">kæn</rt>
          <meta itemprop="alphabet" content="ipa">
        </ruby>.
      </span>
    </p>

    7.5 Existing Work

    7.6 Problems and Limitations

    A. Acknowledgments

    This section is non-normative.

    The following people contributed to the development of this document.

    A.1 Participants active in the Pronunciation TF at the time of publication
