https://w.atwiki.jp/cadencii_en/pages/57.html
Release Note

Release Date: 20 Apr 2009

Notes
Cadencii requires the .NET Framework runtime (version 2.0 or later) and the Visual C++ library DLLs. Installers for these runtimes are available from the links below.
.NET Framework runtime: Download .NET Framework 3.5 SP1
Visual C++ library DLL: Microsoft Visual C++ 2008 Redistributable Package (x86)
Cadencii can also be launched with the latest version of Mono, which enables you to use Cadencii on the many platforms Mono supports. (Note: several functions that use the VOCALOID2 VSTi are not available in this case.) Mono is available from the link: mono download

Download
Cadencii version 1.4.2 (417KB)
CadenciiSDK version 1.3 (387KB)

How to get the source code
The source code is available on SourceForge.JP. Follow the instructions below to check out from the SourceForge.JP SVN repository.

svn checkout -r 84 http://svn.sourceforge.jp/svnroot/cadencii/branches/1.4 ./

This svn command checks out THIS version of Cadencii. To get the latest source code, remove the "-r" option.
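As the note says, dropping the "-r" option fetches the newest revision of the 1.4 branch instead of pinning revision 84:

svn checkout http://svn.sourceforge.jp/svnroot/cadencii/branches/1.4 ./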
https://w.atwiki.jp/hypnosis-eng/pages/44.html
Induction by an unknown author: Association Method

Spoken to the subject:

You can close your eyes now ... And begin breathing deeply and slowly ... Before you let go completely, and go into a deep hypnotic state, just let yourself listen carefully to everything I say to you ...

It's going to happen automatically ... So you don't need to think about that now ... And you will have no conscious control over what happens ...

The muscles in and around your eyes will relax all by themselves as you continue breathing ... Easily and freely ...

Without thinking about it, you will soon enter a deep, peaceful, hypnotic trance, without any effort ... There is nothing important for your conscious mind to do ...

There is nothing really important except the activities of your subconscious mind ... And that can be just as automatic as dreaming ... And you know how easily you can forget your dreams when you awaken ...

You are responding very well. Without noticing it, you have already altered your rate of breathing ... You are breathing much more easily and freely ... And you are revealing signs that indicate you are beginning to drift into a hypnotic trance ...

You can really enjoy relaxing more and more, and your subconscious mind will listen to each word I say ... And it keeps becoming less important for you to consciously listen to my voice ...

Your subconscious mind can hear even if I whisper ...

You are continuing to drift into a more detached state as you examine privately in your own mind ... Secrets, feelings, sensations, and behavior you didn't know you had ... At the same time, letting go completely ... Your own mind is solving that problem ... At your own pace ... Just as rapidly as it feels you are ready ...

You continue becoming more relaxed and comfortable as you sit there with your eyes closed ...

As you experience that deepening comfort you don't have to move, or talk, or let anything bother you ...

Your own inner mind can respond automatically to everything I tell you ... And you will be pleasantly surprised with your continuous progress ...

You are getting much closer to a deep hypnotic trance ... And you are beginning to realize that you don't care whether or not you are going into a deep trance ...

Being in this peaceful state enables you to experience the comfort of the hypnotic trance ...

Being hypnotized is always a very enjoyable, very pleasant, calm, peaceful, completely relaxing experience ...

It seems natural ... to include hypnosis in your future ...

Every time I hypnotize you it keeps becoming more enjoyable, and you continue experiencing more benefits ... So you will really enjoy having me hypnotize you ...

You will always enjoy the sensations ... Of comfort ... Of peacefulness ... Of calmness ... And all the other sensations that come automatically from this wonderful experience ...

You will be really happy that you decided to have me hypnotize you ... As you continue experiencing progressive understanding on your part ...

You are learning something about yourself ... You are developing your own techniques of therapy ... Without knowing you are developing them ... You can have it as a surprise sooner or later ... A very pleasant surprise ...

Imagine yourself in a place you like very much ... By a lake, or by the ocean ... Perhaps you are floating gently on a sailboat on a peaceful lake ... On a warm, summer day ... You are continuing to relax even more now ... And you continue becoming more comfortable ...

This is your own world that you like very much ...

You are going to find that any time you want to spend a few minutes by yourself, relaxing, and feeling very comfortable and serene, you can automatically go back to this feeling you're experiencing now ...

You can put yourself into this world anytime you like ... There are times when you will want this serene feeling ... And it is yours whenever you want it ...

Continue enjoying this pleasant experience as your subconscious mind receives everything I tell you ... And you will be pleased with the way you automatically respond to everything I say.
https://w.atwiki.jp/cadencii_en/pages/51.html
Release Note

Release Date: 8 Jun 2009

Notes
Cadencii requires the .NET Framework runtime (version 2.0 or later) and the Visual C++ library DLLs. Installers for these runtimes are available from the links below.
.NET Framework runtime: Download .NET Framework 3.5 SP1
Visual C++ library DLL: Microsoft Visual C++ 2008 Redistributable Package (x86)
Cadencii can also be launched with the latest version of Mono, which enables you to use Cadencii on the many platforms Mono supports. (Note: several functions that use the VOCALOID2 VSTi are not available in this case.) Mono is available from the link: mono download

Download
Cadencii version 2.0.2 (566KB)
CadenciiSDK version 2.0 (455KB)

How to get the source code
The source code is available on SourceForge.JP. Follow the instructions below to check out from the SourceForge.JP SVN repository.

svn checkout -r 233 http://svn.sourceforge.jp/svnroot/cadencii/branches/2.0 ./

This svn command checks out THIS version of Cadencii. To get the latest source code, remove the "-r" option.
https://w.atwiki.jp/cadencii_en/pages/55.html
Release Note

Release Date: 3 May 2009

Notes
Cadencii requires the .NET Framework runtime (version 2.0 or later) and the Visual C++ library DLLs. Installers for these runtimes are available from the links below.
.NET Framework runtime: Download .NET Framework 3.5 SP1
Visual C++ library DLL: Microsoft Visual C++ 2008 Redistributable Package (x86)
Cadencii can also be launched with the latest version of Mono, which enables you to use Cadencii on the many platforms Mono supports. (Note: several functions that use the VOCALOID2 VSTi are not available in this case.) Mono is available from the link: mono download

Download
Cadencii version 1.4.4 (417KB)
CadenciiSDK version 1.3 (387KB)

How to get the source code
The source code is available on SourceForge.JP. Follow the instructions below to check out from the SourceForge.JP SVN repository.

svn checkout -r 114 http://svn.sourceforge.jp/svnroot/cadencii/branches/1.4 ./

This svn command checks out THIS version of Cadencii. To get the latest source code, remove the "-r" option.
https://w.atwiki.jp/cadencii_en/pages/56.html
Release Note

Release Date: 22 Apr 2009

Notes
Cadencii requires the .NET Framework runtime (version 2.0 or later) and the Visual C++ library DLLs. Installers for these runtimes are available from the links below.
.NET Framework runtime: Download .NET Framework 3.5 SP1
Visual C++ library DLL: Microsoft Visual C++ 2008 Redistributable Package (x86)
Cadencii can also be launched with the latest version of Mono, which enables you to use Cadencii on the many platforms Mono supports. (Note: several functions that use the VOCALOID2 VSTi are not available in this case.) Mono is available from the link: mono download

Download
Cadencii version 1.4.3 (417KB)
CadenciiSDK version 1.3 (387KB)

How to get the source code
The source code is available on SourceForge.JP. Follow the instructions below to check out from the SourceForge.JP SVN repository.

svn checkout -r 90 http://svn.sourceforge.jp/svnroot/cadencii/branches/1.4 ./

This svn command checks out THIS version of Cadencii. To get the latest source code, remove the "-r" option.
https://w.atwiki.jp/cadencii_en/pages/66.html
Release Note

Release Date: 8 Jul 2010

Notes
Cadencii requires the .NET Framework runtime library (version 2.0 or later) and the Visual C++ 2010 runtime library. Installers for these runtimes are available from the links below.
.NET Framework runtime library: Download .NET Framework 3.5 SP1
Visual C++ 2010 runtime library: Microsoft Visual C++ 2010 Redistributable Package (x86)
Cadencii can also be launched with the latest version of Mono, which enables you to use Cadencii on the many platforms Mono supports. (Note: several functions that use the VOCALOID2 VSTi are not available in this case.) Mono is available from the link: mono download

Download
Cadencii version 3.2.0 (2.3MB)

Get the source code
The source code is available on SourceForge.JP. Follow the instructions below to check out from the SourceForge.JP SVN repository.

svn checkout -r 1103 http://svn.sourceforge.jp/svnroot/cadencii/Cadencii/branches/3.2 ./

This svn command checks out THIS version of Cadencii. To get the latest source code, remove the "-r" option.
https://w.atwiki.jp/cadencii_en/pages/70.html
Release Note

Release Date: 5 Oct 2010

Notes
Cadencii requires the .NET Framework runtime library (version 2.0 or later) and the Visual C++ 2010 runtime library. Installers for these runtimes are available from the links below.
.NET Framework runtime library: Download .NET Framework 3.5 SP1
Visual C++ 2010 runtime library: Microsoft Visual C++ 2010 Redistributable Package (x86)
Cadencii can also be launched with the latest version of Mono, which enables you to use Cadencii on the many platforms Mono supports. (Note: several functions that use the VOCALOID2 VSTi are not available in this case.) Mono is available from the link: mono download

Download
Cadencii version 3.2.3 (2.3MB)

Get the source code
The source code is available on SourceForge.JP. Follow the instructions below to check out from the SourceForge.JP SVN repository.

svn checkout -r 1303 http://svn.sourceforge.jp/svnroot/cadencii/Cadencii/branches/3.2 ./

This svn command checks out THIS version of Cadencii. To get the latest source code, remove the "-r" option.
https://w.atwiki.jp/kumicit/pages/1088.html
Appearance of Age: the president of Southern Baptist Theological Seminary says the universe was created to look old

R. Albert Mohler, Jr. (b. 1959), president of Southern Baptist Theological Seminary, a Christian school in Louisville, Kentucky, explained at the Ligonier Ministries 2010 National Conference why the universe and the earth look far older than 6,000 years:

I want to suggest to you that the most natural understanding from the scripture of how to answer that question comes to this: The universe looks old because the creator made it whole. When he made Adam, Adam was not a fetus; Adam was a man; he had the appearance of a man. By our understanding that would've required time for Adam to get old but not by the sovereign creative power of God. He put Adam in the garden. The garden was not merely seeds; it was a fertile, fecund, mature garden. The Genesis account clearly claims that God creates and makes things whole. Secondly, and very quickly, if I'm asked why does the universe look so old, I have to say it looks old because it bears testimony to the affects of sin. And testimony of the judgment of God. It bears the effects of the catastrophe of the flood and catastrophes innumerable thereafter. I would suggest to you that the world looks old because as Paul says in Romans chapter 8 it is groaning. And in its groaning it does look old. It gives us empirical evidence of the reality of sin.

[Albert Mohler: "Why Does the Universe Look So Old?" (A Transcript from the Ligonier Ministries 2010 National Conference Live Webcast), via Panda's Thumb]

The first answer is the position once taken by "young-earth creationists" such as Dr. Henry M. Morris, the father of creation science. Because it amounts to "God counterfeiting the universe," it has never been much liked, and today's young-earth creationists tend to argue instead that "during the creation week, time flowed differently in the universe than on earth," or to adopt Mohler's second answer. Mohler, it seems, has no objection to "God counterfeiting the universe."

He then gives his reason for taking the young-earth creationist position:

I would suggest to you that in our effort to be most faithful to the scriptures and most accountable to the grand narrative of the gospel an understanding of creation in terms of 24-hour calendar days and a young earth entails far fewer complications, far fewer theological problems and actually is the most straightforward and uncomplicated reading of the text as we come to understand God telling us how the universe came to be and what it means and why it matters.

[Albert Mohler: "Why Does the Universe Look So Old?" (A Transcript from the Ligonier Ministries 2010 National Conference Live Webcast), via Panda's Thumb]

Unless one takes the position that "any biblical description that conflicts with our knowledge of the natural world is metaphor," this is probably the correct way to think.
https://w.atwiki.jp/w3cwiki/pages/4.html
W3C

Multimodal Application Developer Feedback

W3C Working Group Note 14 April 2006

This version: http://www.w3.org/TR/2006/NOTE-mmi-dev-feedback-20060414/
Latest version: http://www.w3.org/TR/mmi-dev-feedback/
Previous version: This is the first publication.
Editors:
Andrew Wahbe, VoiceGenie Technologies
Gerald McCobb, IBM
Klaus Reifenrath, Nuance
Raj Tumuluri, Openstream
Sunil Kumar, V-Enable

Copyright © 2006 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.

Abstract

Several years of multimodal application development in various business areas and on various device platforms have given developers enough experience to provide detailed feedback about what they like, dislike, and want to see improved and continued. That experience is provided here as input to the specifications under development in the W3C Multimodal Interaction and Voice Browser Activities.

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This document is a W3C Working Group Note. It represents the views of the W3C Multimodal Interaction Working Group at the time of publication. The document may be updated as new technologies emerge or mature. Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document is one of a series produced by the Multimodal Interaction Working Group (Member Only Link), part of the W3C Multimodal Interaction Activity. The MMI Activity statement can be seen at http://www.w3.org/2002/mmi/Activity.

Comments on this document can be sent to www-multimodal@w3.org, the public forum for discussion of the W3C's work on Multimodal Interaction. To subscribe, send an email to www-multimodal-request@w3.org with the word subscribe in the subject line (include the word unsubscribe if you want to unsubscribe). The archive for the list is accessible online.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. This document is informative only. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
Table of Contents

* 1 Introduction
* 2 What developers liked
  o 2.1 Reusable and pluggable modality components
  o 2.2 Modular modality components
  o 2.3 Declarative synchronization between modalities
  o 2.4 Scripting and semantic interpretation
  o 2.5 Styling
* 3 What developers would like to see
  o 3.1 Global grammars
  o 3.2 Speech grammars for HTML links and controls
  o 3.3 Speech prompts for voice-enabled HTML links and controls
  o 3.4 Speech-enabled widgets
  o 3.5 Use speech to activate links and change focus
  o 3.6 Back functionality
* 4 What developers would like to see continue and improve
  o 4.1 Support for both off-line and on-line multimodal interaction
  o 4.2 Support for events distributed over the network
  o 4.3 Support for implicit events
  o 4.4 VoiceXML tag and feature support
  o 4.5 Support for both directed and user-initiated dialogs
  o 4.6 Mixed-initiative interaction
  o 4.7 Access to speech confidence scores and n-best list by the application
  o 4.8 Access to device details
  o 4.9 Choice of ASR
  o 4.10 Controlling N-best choice of ASR

1 Introduction

IBM, VoiceGenie Technologies, Nuance, V-Enable, and Openstream customers have been developing multimodal applications in a broad range of business areas, including Field-Force Productivity, Health Care and Life Sciences, Warehouse and Distribution, Industrial Plant Floor, Financial and Information Services, Directory Assistance, and the Mobile Web. Customer device platforms have included PCs (desktops, laptops, and tablets), PDAs, kiosks, appliances, equipment consoles, and web browser-based smart phones. The multimodal applications primarily extended the traditional GUI mode of interaction with speech, with the speech services located either locally on the device or distributed on a remote server. Several XML markup languages were used to develop these applications, including XHTML+Voice (X+V) and xHMI.

While developing these applications, developers found features they liked about their development environment and features they thought were lacking. Their experiences were collected and are summarized here as feedback for the W3C Multimodal Interaction and Voice Browser Working Groups to consider when specifying future multimodal and voice authoring capabilities. We also solicit comments from the wider multimodal development community on the extent to which these observations are consistent with their own development experiences.

The developers surveyed were expert in various programming languages and application environments. Developers expert in C/C++ and Java generally speech-enabled native applications on small devices. Device platforms included Windows Mobile, BREW, embedded Linux, Symbian, and J2ME. Developers expert in the Web generally speech-enabled browser-based applications. Web browser platforms included Opera, Access NetFront, Windows Mobile Internet Explorer, and the Nokia Series 60. Web developers understood the web programming model very well but were generally new to speech. They liked XHTML, XML namespaces, XML Events, CSS, JavaScript, and VoiceXML with its ability to hide platform details. Developers expert in VoiceXML and dictation had backgrounds in speech and telephony and generally worked on adding GUI to voice and dictation applications.

2 What developers liked

2.1 Reusable and pluggable modality components

Developers preferred to develop modality components that are reusable and pluggable.
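Since many of the use cases below involve X+V, it may help to recall the general shape of an X+V document: an XHTML page carrying a VoiceXML form as a separable voice component, wired to the GUI through XML Events. A minimal sketch, based on common X+V conventions (exact namespaces and handler wiring vary across X+V profile versions):

<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:vxml="http://www.w3.org/2001/vxml"
      xmlns:ev="http://www.w3.org/2001/xml-events">
  <head>
    <title>Pluggable voice component</title>
    <!-- The embedded VoiceXML form is the reusable voice modality component. -->
    <vxml:form id="greet">
      <vxml:block>Welcome to the multimodal page.</vxml:block>
    </vxml:form>
  </head>
  <!-- XML Events: run the voice dialog when the page loads. -->
  <body ev:event="load" ev:handler="#greet">
    <p>Welcome to the multimodal page.</p>
  </body>
</html>

Because the vxml:form carries its own prompts and dialog logic, the same form can be dropped into other pages unchanged, which is the kind of reuse the following use case describes.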
Use Case: VoiceXML modality component
A VoiceXML modality component is reused without modification in different multimodal applications.

2.2 Modular modality components

Modular modality components are preferred because they can be authored separately by the modality experts.

Use Case: XHTML and VoiceXML modality components
A VoiceXML expert authors the voice modality component and an XHTML expert authors the GUI component. Modality component coordination is handled independently, for example by the X+V sync and cancel elements.

2.3 Declarative synchronization between modalities

Developers liked declarative synchronization between modalities.

Use Case: X+V sync element
The X+V sync element provides declarative synchronization of XHTML form control elements with the VoiceXML field element. The sync element allows input from one modality, speech or visual, to set the field in the other modality. Also, setting the focus of an input element that is synchronized with a VoiceXML field updates the FIA to visit that VoiceXML field.

2.4 Scripting and semantic interpretation

Developers liked support for modality component integration via scripting and semantic interpretation.

Use Case: Timed notifications of an operating-room medical procedure
A timed notification changes dynamically as time progresses. The notification depends on the current state of the application as well as the notification state. For a GUI+speech multimodal application, a notification may be a TTS output and a new GUI page corresponding to the next step of an operating-room medical procedure.

Use Case: Integrated pen and speech interaction with a map
The user says "zoom in here" while drawing an area on a map. The application responds by enlarging the detail of the area within the boundary drawn by the user.

2.5 Styling

Developers liked CSS for styling each modality. For example, the CSS3 module for styling speech based on SSML was useful for styling the voice modality.

Use Case: TTS rendering of a news article on the web
The news article is read by the computer in a realistic voice that uses different-sounding voices for headlines, section headings, and body text. There are also pauses between paragraphs and before article headlines.

3 What developers would like to see

3.1 Global grammars

Developers would like support for top-level ("global") grammars that are active across multiple windows (e.g., HTML frames or portlets) of the application.

Use Case: Top-level menus
An application has top-level menus "buy", "sell", and "trade". At any time while involved in the "buy" dialog, a user can say "trade" and be switched to the "trade" multimodal dialog.

3.2 Speech grammars for HTML links and controls

Developers would like support for explicitly adding speech grammars to activate HTML links and controls. An automatically created speech grammar may not capture everything the user may say.

Use Case: Hotel booking application - get list of hotels
Before booking a hotel reservation the user looks up a list of available hotels. On the page along with the reservation is a link labeled "Available Hotels." The developer anticipates that besides "available hotels", the user may say "show me the available hotels" or ask "what hotels are available", and adds these two phrases to the grammar for activating the link.
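In W3C terms, those extra phrases could be listed as alternatives in an SRGS grammar attached to the link. A minimal sketch (structure per SRGS 1.0; the rule name is illustrative):

<grammar xmlns="http://www.w3.org/2001/06/grammar"
         version="1.0" mode="voice" root="hotel_link" xml:lang="en-US">
  <!-- Any of these utterances activates the "Available Hotels" link. -->
  <rule id="hotel_link" scope="public">
    <one-of>
      <item>available hotels</item>
      <item>show me the available hotels</item>
      <item>what hotels are available</item>
    </one-of>
  </rule>
</grammar>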
Use Case: Hotel booking application - submit reservation
The reservation form's submit button says "submit reservation", but the developer anticipates that a user might say "submit booking" instead, and adds "submit booking" to the grammar for activating the button.

3.3 Speech prompts for voice-enabled HTML links and controls

Developers would like support for explicitly adding speech prompts to voice-enabled HTML hyperlinks and controls. The prompts can provide more information than the visual labels attached to the HTML hyperlinks and input fields.

Use Case: Hotel booking application - enter hotel name
The user is prompted to enter a hotel name with the following TTS: "Please enter a hotel name. You can get a list of available hotels by saying 'show me available hotels.'"

3.4 Speech-enabled widgets

Developers would like to see speech-enabled UI widgets which contain a simple dialog flow (e.g., widgets which contain confirmation or disambiguation steps). This allows an author to configure the dialog properties (prompts, grammars, confirmation mode, confidence thresholds, etc.) of an HTML control or hyperlink.

Use Case: Hotel booking application - confirm hotel
The user says the name of one of the available hotels. The application repeats the name of the hotel back to the user and asks if it is correct. If the user says yes, then the application fills in the HTML field with the user's input.

3.5 Use speech to activate links and change focus

It should be easy to use speech to do more than fill in HTML form controls. For example, there should be declarative support for activating an HTML link or changing focus within an HTML page.

Use Case: Speech-enabled bookmark page
A page that displays the user's bookmarks is speech-enabled such that each bookmark has an associated grammar for moving the browser to the bookmarked page.

3.6 Back functionality

Developers would like to see support for consistent and intuitive "back" handling across modalities. The browser "back" multimodal functionality should be built in and not require custom code.

Use Case: Browser "back" button
The user can either press the browser back button or say "browser go back" to return to the previous multimodal page. All spoken commands which control the browser are preceded by "browser" so there is no collision with an application grammar.

4 What developers would like to see continue and improve

4.1 Support for both off-line and on-line multimodal interaction

Multimodal interaction should be supported both for applications that are on-line, that is, connected to the network, and for off-line applications. If the multimodal application goes from an on-line to an off-line state, multimodal interaction should still be supported by the modality components that run locally on the device.

Use Case: Access to medical information while walking down a hallway
A doctor carrying a wireless tablet accesses patient medical information while walking down a hallway. Loss of wireless connectivity does not prevent the multimodal application from interacting with the doctor or presenting information it has stored on the doctor's tablet.

Use Case: Multimodal application in a hospital operating room
An off-line multimodal application in an operating room delivers timely instructions to the doctor.

4.2 Support for events distributed over the network

Because a modality may be distributed on a remote server, there must be support for distributed events between a modality and the interaction manager.
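The note does not prescribe a wire format for such events. Purely as an illustration, a server-to-client event telling the GUI modality to display a turn arrow might be serialized along these lines (every element and attribute name here is hypothetical):

<!-- Hypothetical serialization; this note defines no event format. -->
<event name="display.arrow" source="directions-server" target="map-view">
  <param name="direction" value="right"/>
</event>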
Use Case: Driving directions
A user accesses a multimodal driving-directions application using a cell phone. The application tells the user to turn right at the next intersection. An arrow pointing right pops up over a map. The application had received an event from the server to display the arrow.

4.3 Support for implicit events

Implicit event support includes both implicit event generation and implicit event handling. At different stages in the operation of the modality component, there will be either event generation or event handling by the component itself. For example, the VoiceXML modality component could implicitly generate a focus event when the FIA selects a new form input item.

Use Case: Hotel booking application - name, address, phone number
A hotel booking application has a form with separate HTML input fields for entering name, street address, city, state, and phone number. When the user selects one of the fields, the user hears a prompt for entering the correct information into the field. The visual input focus is coordinated with the speech input focus.

4.4 VoiceXML tag and feature support

VoiceXML support should include, for example, the object and mark tags and the "record while recognition is in progress" feature.

Use Case: Windows program for calculating stock purchase totals
The object element can be used to load a reusable platform-specific plug-in. For example, the application would use the object element to load a Windows program which calculates stock purchase totals.

Use Case: Read part of an e-mail message
The mark tag can be used to mark how much of the text was actually read before the user left the page. When the user returns to the page, the rest of the text can be read beginning where the user left off.

Use Case: Unrecognized user input
The recording of an unrecognized user input can be logged by the speech recognizer.

4.5 Support for both directed and user-initiated dialogs

There must be arbitrary as well as procedural speech access to the visual application. For a dialog mechanism used in conjunction with a visual form, there should be support for user-initiated dialogs. For example, the user should be able to jump to arbitrary points in the dialog by changing the visual focus (e.g., by clicking on a text box).

Use Case: Form filling for an air travel reservation
The air travel reservation application takes the user step by step through making a reservation, beginning with the origin and destination of the flight. After the user has been given a selection of flights, the user clicks on the visual departure date field to change the departure date.

Use Case: Application with two HTML forms
The user is taken step by step through filling out a set of HTML fields in a form. Before all the fields have been filled, the user clicks on a field belonging to the other form.

4.6 Mixed-initiative interaction

Dialog mechanisms that combine speech and text input must support mixed-initiative interaction.

Use Case: Flight reservation application
A flight reservation application has separate HTML input fields for entering destination airport, date of travel, and seating class. With a single utterance, "I'd like to go to San Francisco on April 20th, business class", the user fills in all the fields at one time.

4.7 Access to speech confidence scores and n-best list by the application

Confidence scores and n-best lists are useful, for example, to allow the user to pick from a set of results supplied by an input recognizer.
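One natural representation for recognizer output with confidence scores is EMMA, the Extensible MultiModal Annotation markup being specified by the same Working Group. A minimal sketch of an n-best list, based on the EMMA drafts (the player element and its values are illustrative):

<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <!-- Two competing hypotheses with nearly identical confidence. -->
  <emma:one-of id="nbest">
    <emma:interpretation id="int1" emma:confidence="0.45">
      <player>Beckham</player>
    </emma:interpretation>
    <emma:interpretation id="int2" emma:confidence="0.44">
      <player>Bergkamp</player>
    </emma:interpretation>
  </emma:one-of>
</emma:emma>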
Use Case: Select a football player
A user says the name of a favorite football player. A number of players matched the user's input with the same low confidence score. Instead of asking the user to repeat the name, the application displays a visual list of the player names that were matched. The user selects a name from the list.

4.8 Access to device details

The developer would like access to device information such as, for example, the cell phone number, phone model, and display screen size. Typically in any mobile application the content is very specific to the device and at times personalized for the user. Access to device-specific details such as the device model (e.g., Nokia 6680) helps the application reduce the grammar size and render device-specific content. Access to user information such as the phone number allows the application to personalize the content for the user.

Use Case: Mobile appointment application
When user George accesses the appointment application, the application says "Welcome George" and presents a list of appointments for the day. The user can select any of his appointments by saying an appointment label shown on his phone. Each label is short enough to fit entirely on George's display.

4.9 Choice of ASR

The developer would like to have more control over ASR. An example is the capability of a multimodal application to choose between local ASR and network-based ASR depending on the location of the grammar. The developer should be allowed to pick the ASR depending on the application logic.

Use Case: Music search mobile application
In a music search mobile application, the application uses network-based ASR to perform a search for a particular artist or album, such as "Green Day" or "50 Cent". In the case of network-based recognition, the grammar changes dynamically and is large in size. The same music application may use local ASR for navigating through the application with commands such as "Home" and "Next Page".

4.10 Controlling N-best choice of ASR

The application should be able to control the number of results it wants from the ASR, based either on a number N (say, return the top 5 matches) or on a confidence score (say, return matches scoring at least 0.8). The developer should be able to author this N-best list control.

Use Case: Select a football player (mobile application)
As with the previous football player selection use case, the list of matched players is visually displayed for the user to select from. The ASR may return more than 10 results as part of its N-best response mechanism. However, depending on the screen size, the application may choose to display only the top 5 entries. The application requests only the top 5 players in the N-best result instead of receiving 10 results and then ignoring the last 5.
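VoiceXML 2.0 already offers a coarse version of this control: the maxnbest property caps how many hypotheses the recognizer returns. A sketch (the form, field, and grammar file name are illustrative):

<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <!-- Return at most five recognition hypotheses to the application. -->
  <property name="maxnbest" value="5"/>
  <form id="pick_player">
    <field name="player">
      <prompt>Which player?</prompt>
      <grammar src="players.grxml" type="application/srgs+xml"/>
    </field>
  </form>
</vxml>

The application can then inspect the application.lastresult$ variable for the returned hypotheses and their confidence scores.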
https://w.atwiki.jp/cadencii_en/pages/67.html
Release Note

Release Date: 13 Jul 2010

Notes
Cadencii requires the .NET Framework runtime library (version 2.0 or later) and the Visual C++ 2010 runtime library. Installers for these runtimes are available from the links below.
.NET Framework runtime library: Download .NET Framework 3.5 SP1
Visual C++ 2010 runtime library: Microsoft Visual C++ 2010 Redistributable Package (x86)
Cadencii can also be launched with the latest version of Mono, which enables you to use Cadencii on the many platforms Mono supports. (Note: several functions that use the VOCALOID2 VSTi are not available in this case.) Mono is available from the link: mono download

Download
Cadencii version 3.2.1 (2.3MB)

Get the source code
The source code is available on SourceForge.JP. Follow the instructions below to check out from the SourceForge.JP SVN repository.

svn checkout -r 1106 http://svn.sourceforge.jp/svnroot/cadencii/Cadencii/branches/3.2 ./

This svn command checks out THIS version of Cadencii. To get the latest source code, remove the "-r" option.