https://w.atwiki.jp/usb_audio/pages/58.html
Original text: Audio Devices Rev. 2.0 Spec and Adopters Agreement (ZIP)

USB Device Class Definition for Audio Devices, Release 2.0, May 31, 2006

2 Management Overview

The USB is very well suited for transport of audio ranging from low fidelity voice connections to high quality, multi-channel audio streams. The USB has become a ubiquitous connector on modern PCs and is well understood by most consumers today. As such, it has become the connector of choice for many peripherals and is indeed the simplest and most pervasive digital audio connector available today. With the advent of High Speed USB, consumers can count on this medium to meet all of their audio needs today and into the future. Many applications, from communications to entertainment to music recording and playback, can take advantage of the audio features of the USB.

In principle, a versatile bus specification like the USB provides many ways to propagate and/or control digital audio. For the industry, however, it is very important that audio transport mechanisms be well defined and standardized on the USB. Only in this way can interoperability be guaranteed among the many possible audio devices on the USB. Standardized audio transport mechanisms also help to keep software drivers as generic as possible. The Audio Device Class described in this document satisfies those requirements. It is written and revised by experts in the audio field. Other device classes that address audio in some way should refer to this document for their audio interface specification.

An essential issue in audio is synchronization of the data streams. Indeed, the smallest artifacts are easily detected by the human ear. Therefore, a robust synchronization scheme on isochronous transfers has been developed and incorporated in the USB Specification. The Audio Device Class definition adheres to this synchronization scheme to transport audio data reliably over the bus.

This document contains all necessary information for a designer to build a USB-compliant device that incorporates audio functionality. It specifies the standard and class-specific descriptors that must be present in each USB audio function. It further explains the use of class-specific requests that allow for full audio function control. A number of predefined data formats are listed and fully documented. Each format defines a standard way of transporting audio over the USB. Provisions have been made so that vendor-specific audio formats and compression schemes can be handled.

Many of the changes introduced in Version 2.0 of the USB Specification for Audio Devices take advantage of the new features provided in the USB 2.0 Specification. With the additional bandwidth made available, high speed USB operation allows the transport of multiple channels of high bit rate audio. This expands the range of solutions provided by USB audio devices but also challenges the way in which they operate. In addition to supporting the additional bandwidth, the specification supports new codec types for consumer audio applications, provides numerous clarifications of the original specification, and adds extensions to support various changes in the core specification. The changes are generally not backwards compatible with version 1.0 because that would too severely limit this new class of devices.

2.1 Overview of Key Differences between ADC v1.0 and v2.0

The following list is not an exhaustive list of all changes that have been introduced. For complete information, refer to the full specification.
Pay special attention to Sections 1 through 6!

• Complete support for high speed operation: audio class devices are no longer limited to full speed operation.
• The notion of physical and logical Audio channel clusters.
• The number of predefined spatial locations has increased. In addition, a virtual spatial location called Raw Data was introduced.
• Use of the Interface Association Descriptor: the standard Interface Association mechanism is used to describe an Audio Interface Collection. The former class-specific mechanism was deprecated.
• Descriptor updates fixed offsets associated with many descriptors and enlarged three-byte fields into four bytes.
• Extensive support for interrupts to inform the host about dynamic changes that occur on the different addressable Entities (Clock Entities, Terminals, Units, interfaces and endpoints) inside the audio function.
• More clarification text on the audio function.
• Audio Control changes:
  – Control attribute changes.
  – Mixer Unit control request (set/get pairs changed).
  – Many updates in the control descriptions.
• Added support for clock domains, clock description and clock control.
• Added additional Audio Controls inside a Feature Unit (Input Gain, Input Gain Pad, …).
• Added bit pairs in descriptors to indicate presence and programmability of every Control.
• Prohibited the use of Alternate Setting switching to change sampling frequencies. Instead, Clock Entities are introduced that can be manipulated (through the AudioControl interface) to select operating sampling frequencies.
• Split off the examples into a separate document.
• Allowed binding between physical buttons on the audio function and the corresponding Audio Control, and prescribed how this is done.
• Added an Effect Unit to group algorithms that work on logical channels separately but require multiple parameters to manipulate the effect (as opposed to basic, single-parameter manipulation, performed in a Feature Unit).
• Introduced the Parametric Equalizer Section Effect Unit.
• Rearranged the Reverb, Modulation Delay and Dynamic Range Compressor Processing Units under the new Effect Unit.
• Added the concept of an audio function Category. The Category indicates the primary use of the audio function as envisioned by the manufacturer.
• Added the Sampling Rate Converter Unit.
• Added a means to express the latency of individual building blocks within the audio function.
• Added Encoder support.

3 Functional Characteristics

3.1 Introduction

In many cases, audio functionality does not exist as a standalone device. It is one capability that, together with other functions, constitutes a "composite" device. A perfect example of this is a DVD-ROM player, which can incorporate video, audio, data storage, and transport control. The audio function is thus located at the interface level in the device class hierarchy. It consists of a number of interfaces grouping related pipes that together implement the interface to the audio function. An audio function is considered to be a 'closed box' that has very distinct and well-defined interfaces to the outside world. Audio functions are addressed through their audio interfaces. Each audio function must have a single AudioControl interface and can have zero or more AudioStreaming and zero or more MIDIStreaming interfaces.
The AudioControl (AC) interface is used to access the Audio Controls of the function, whereas the AudioStreaming (AS) interfaces are used to transport audio streams into and out of the function. The MIDIStreaming (MS) interfaces can be used to transport MIDI data streams into and out of the audio function. The collection of the single AudioControl interface and the AudioStreaming and MIDIStreaming interfaces that belong to the same audio function is called the Audio Interface Collection (AIC). A device can have multiple Audio Interface Collections active at the same time. These Collections are used to control multiple independent audio functions located in the same composite device. An Audio Interface Collection is described through the standard USB Interface Association mechanism that expresses interface binding via the Interface Association Descriptor (IAD).

Note: All MIDI-related information is grouped in a separate document, Universal Serial Bus Device Class Definition for MIDI Devices, that is considered part of this specification. The remainder of this document will therefore not mention MIDIStreaming interfaces and their specifics anymore.

The following figure illustrates the concept.

[Figure 3-1 Audio Function Global View: the Audio Interface Collection, consisting of the AudioControl interface and several AudioStreaming interfaces (IN and OUT, each with Alternate Settings), connecting the USB to the Audio Function.]

All functionality pertaining to controlling parameters that directly influence audio perception (like volume) is located inside the central rectangle and is exclusively controlled through the AudioControl interface. Streaming aspects of the communication to or from the audio function are handled through separate AudioStreaming interfaces. The AudioStreaming interface is primarily used for transporting audio data between the audio function and the outside world. However, all control data that is related specifically to the streaming behavior is also conveyed through the AudioStreaming interface. In particular, all control data that is used to influence the decoder or encoder process that potentially resides between the actual streaming endpoint and the audio function (e.g. conversion from an AC-3 encoded stream to 5.1 physical audio channels) is conveyed through the AudioStreaming interface.

Note that in some cases an AudioStreaming interface is only used to perform controlling functions while no actual data is transported over the interface. A physical S/PDIF connection to the audio function is a typical example. Although the actual audio data is coming in from the outside world (not through the USB), it might be necessary to control some aspects of the S/PDIF connection. In that case, the S/PDIF connection is represented by an AudioStreaming interface so that it becomes addressable through USB.

Also note that the connection between the AudioStreaming interfaces and the audio function is not 'solid'. The reason for this is that, when seen from the inside of the audio function, each audio stream entering or leaving the audio function is represented by a special object, called a Terminal (see further). The Terminal concept abstracts the actual AudioStreaming interface inside the audio function and provides a logical view on the connection rather than a physical view.
This abstraction allows audio channels within the audio function to be treated as 'logical' audio channels that do not have physical characteristics associated with them anymore (analog vs. digital, format, sampling rate, bit resolution, etc.).

3.2 Audio Interface Collection (AIC)

On USB, an audio function is completely defined by its interfaces. An audio function has one AudioControl interface and zero or more AudioStreaming interfaces, grouped into an Audio Interface Collection. The standard USB Interface Association mechanism is used to describe the Audio Interface Collection, i.e. to bind those interfaces together. Interface Association is expressed via the standard USB Interface Association Descriptor (IAD). Every Interface Association Descriptor has a FunctionClass, FunctionSubClass and FunctionProtocol field that together identify the function that is represented by the Association. The following paragraphs define these fields for the Audio Device Class.

3.3 Audio Function Class

An Interface Association has a Function Class code assigned to it. This specification requires that the Function Class code be the same as the Audio Interface Class code. The Audio Function class code is assigned by this specification. For details, see Appendix A.1, "Audio Function Class Code".

3.4 Audio Function Subclass

The Audio Function class is divided into Function Subclasses. At this moment, the Function Subclass code is not used and must be set to FUNCTION_SUBCLASS_UNDEFINED. The assigned codes can be found in Appendix A.2, "Audio Function Subclass Codes" of this specification. All other Subclass codes are unused and reserved by this specification for future use.

3.5 Audio Function Protocol

The Audio Function class and Subclasses can be further qualified by the Function Protocol code. The Function Protocol code is used to reflect the current version of this specification so that enumeration software can decide which driver versions need to be instantiated. The assigned Protocol codes can be found in Appendix A.3, "Audio Function Protocol Codes" of this specification. All other Protocol codes are unused and reserved by this specification for future use.
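As a concrete illustration of Sections 3.2 through 3.5, the following is a minimal C sketch of the standard Interface Association Descriptor that binds an Audio Interface Collection. The struct layout is the standard USB IAD; the symbolic names in the comments (FUNCTION_SUBCLASS_UNDEFINED from the text above, AF_VERSION_02_00 and the 0x0B descriptor type from common USB usage) are assumptions of this sketch, and the authoritative codes are those assigned in Appendices A.1 through A.3.

#include <stdint.h>

/* Hedged sketch of the standard Interface Association Descriptor (IAD)
 * describing an Audio Interface Collection (Sections 3.2-3.5). */
struct usb_interface_assoc_descriptor {
    uint8_t bLength;            /* 8 */
    uint8_t bDescriptorType;    /* INTERFACE ASSOCIATION (0x0B) */
    uint8_t bFirstInterface;    /* first interface of the Collection (the AC interface) */
    uint8_t bInterfaceCount;    /* AC interface plus all AS interfaces of this audio function */
    uint8_t bFunctionClass;     /* Audio Function class (same as the Audio Interface class code, Appendix A.1) */
    uint8_t bFunctionSubClass;  /* FUNCTION_SUBCLASS_UNDEFINED (Appendix A.2) */
    uint8_t bFunctionProtocol;  /* protocol code reflecting the spec version, e.g. AF_VERSION_02_00 (Appendix A.3) */
    uint8_t iFunction;          /* string descriptor index, or 0 */
};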
3.6 Audio Interface Class

The Audio Interface class groups all functions that can interact with USB-compliant audio data streams. All functions that convert between analog and digital audio domains can be part of this class. In addition, those functions that transform USB-compliant audio data streams into other USB-compliant audio data streams can be part of this class. Even analog audio functions that are controlled through USB belong to this class. In fact, for an audio function to be part of this class, the only requirement is that it exposes one AudioControl interface. No further interaction with the function is mandatory, although most functions in the audio interface class will support one or more optional AudioStreaming interfaces for consuming or producing one or more isochronous audio data streams. The Audio Interface class code is assigned by the USB. For details, see Appendix A.4, "Audio Interface Class Code".

3.7 Audio Interface Subclass

The Audio Interface class is divided into Subclasses. All audio functions are part of a certain Interface Subclass. The following three Interface Subclasses are currently defined in this specification:

• AudioControl Interface Subclass
• AudioStreaming Interface Subclass
• MIDIStreaming Interface Subclass

The assigned codes can be found in Appendix A.5, "Audio Interface Subclass Codes" of this specification. All other Subclass codes are unused and reserved by this specification for future use.

3.8 Audio Interface Protocol

The Audio Interface class and Subclasses can be further qualified by the Interface Protocol code. The Interface Protocol code is used to reflect the current version of this specification. The assigned codes can be found in Appendix A.6, "Audio Interface Protocol Codes" of this specification. All other Protocol codes are unused and reserved by this specification for future use.

3.9 Audio Function Category

The Audio Function Category indicates the primary intended use for the audio function. The following Function Categories are currently defined in this specification:

• Desktop Speaker: one or more speakers set up in a small environment to provide audio intended primarily for one person.
• Home Theater: several speakers set up in a moderately sized environment to provide audio levels significantly louder than a Desktop Speaker setup and intended to be clearly heard by multiple people.
• Microphone: a device set up to record audio from audible sources.
• Headset: a device with at least one speaker and at least one microphone designed to be worn or held by a user to provide personal audio playback and voice input capabilities.
• Telephone: a Headset or handset type device that also connects to a telephone system (e.g. POTS, PBX, VoIP), capable of making and receiving telephone calls.
• Converter: a device that allows conversion of audio from one electrical or optical format to another electrical or optical format, and/or converting audio data from one encoding format to another (e.g. AC-3 to PCM, etc.).
• Voice/Sound Recorder: a device set up with at least one microphone and at least one speaker that is designed to operate, at least some of the time, independently of the Host to record and store audible sources and play back its recorded content.
• I/O Box: a device designed to deliver one or more, possibly different, electrical and optical inputs and outputs for connection to other devices.
https://w.atwiki.jp/mmmtarcade/pages/204.html
Additional colors (R8 / RS4)
no.1 Yellow / Light Yellow
no.2 Silver × Black / Dark Blue
no.3 Green × Black / Lime Green Metallic
no.4 Red × Black / Orange Metallic
no.5 Light Blue × Gray / Gunmetal
no.6 Pale Blue Metallic / Purple Metallic
no.7 Pink × Black / Pale Blue
no.8 Yellow × Gray / Orange Yellow
no.9 Pale Blue × Dark Gray / Dark Green
no.10 Silver × Dark Gray / Pearl White
no.11 Lime Green / Gold Metallic
no.12 Wine Red × Black
no.13 Light Beige × Silver
no.14 Gray × Dark Gray
no.15 White × Black
no.16 Dark Pink Metallic
no.17 Blue Green × Black Metallic
no.18 Purple Silver Metallic × Black Metallic
no.19 Blue Green Metallic
no.20 Red × Black 2
no.21 White × Silver / White
no.22 Gray Metallic × Silver / Gray Metallic
no.23 Black × Gray / Black
no.24 Blue Metallic × Silver
no.25 Red
no.26 Orange Metallic × Black
no.27 Dark Blue × Silver
no.28 Light Silver × Silver
no.29 Light Purple × Black
no.30 Black
no.31 Orange
no.32 Light Yellow × Black
no.33 Dark Blue × Dark Gray
no.34 Green 4 Metallic
no.35 Blue Metallic
no.36 Dark Red × Light Beige
no.37 Dark Green × Orange
no.38 Dark Purple 2 Metallic
no.39 Pink × White
no.40 Mint Green
https://w.atwiki.jp/so905i/pages/13.html
AUDIO: Create a PRIVATE\DOCOMO\MMFILE\MUSIC folder on the microSD card, then put individual music files in a supported format, or entire folders of them, into that folder.
https://w.atwiki.jp/usb_audio/pages/59.html
Original text: Audio Devices Rev. 2.0 Spec and Adopters Agreement (ZIP)

USB Device Class Definition for Audio Devices, Release 2.0, May 31, 2006

• Musical Instrument: a musical instrument, e.g. piano, guitar, synthesizer, drum machine, etc.
• Pro-Audio: a device not typically used by consumers of audio, e.g. editing equipment, multitrack recording equipment, etc.
• Audio/Video: the audio from a device that also supplies simultaneous video, where the expectation is that the audio is tightly coupled to the video, e.g. a camcorder, a DVD player, a television, etc.
• Control Panel: a device that is used to control the flow of audio through a system of audio devices, such as a mixer panel.
• Other: any device whose primary purpose is sufficiently different from the above descriptions as to be considered a completely different form of device.

The assigned codes can be found in Appendix A.7, "Audio Function Category Codes" of this specification. All other Category codes are unused and reserved by this specification for future use.

3.10 Clock Domains

A Clock Domain is defined as a zone within which all sampling clocks are derived from the same master clock. Therefore, within the same Clock Domain, all sampling clocks are synchronous and their timing relationship is constant. However, the sampling clocks can be at different sampling frequencies. The master clock can be generated in many different ways. An internal crystal could be the master clock, the USB start of frame (SOF) could be used, or even an externally supplied clock could serve as a master clock. In general, multiple different Clock Domains can exist within the same audio function.

3.11 Audio Synchronization Types

Each isochronous audio endpoint used in an AudioStreaming interface belongs to a synchronization type as defined in Section 5 of the USB Specification. The following sections briefly describe the possible synchronization types.

3.11.1 Asynchronous

Asynchronous isochronous audio endpoints produce or consume data at a rate that is locked either to a clock external to the USB or to a free-running internal clock. These endpoints cannot be synchronized to the start of frame (SOF) or to any other clock in the USB domain.

3.11.2 Synchronous

The clock system of synchronous isochronous audio endpoints can be controlled externally through SOF synchronization. Such an endpoint must lock its sample clock to the 1 ms SOF tick. Optionally, a high-speed endpoint could lock its clock to the 125 μs SOF tick that occurs at the beginning of every microframe to improve accuracy.

3.11.3 Adaptive

Adaptive isochronous audio endpoints are able to source or sink data at any rate within their operating range. This implies that these endpoints must run an internal process that allows them to match their natural data rate to the data rate that is imposed at their interface.
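For reference, the synchronization type of an isochronous endpoint is signalled in the standard endpoint descriptor rather than in a class-specific descriptor. The C sketch below shows the usual decoding of bits D3..2 of bmAttributes as defined in the USB 2.0 core specification; the enum and helper names are illustrative and not taken from this document.

#include <stdint.h>

/* Sketch: decode the synchronization type (Sections 3.11.1-3.11.3) from the
 * bmAttributes field of a standard isochronous endpoint descriptor. */
enum usb_iso_sync_type {
    USB_SYNC_NONE         = 0, /* no synchronization (not used for audio streaming endpoints) */
    USB_SYNC_ASYNCHRONOUS = 1, /* locked to a free-running or external clock (3.11.1) */
    USB_SYNC_ADAPTIVE     = 2, /* matches whatever rate is imposed at its interface (3.11.3) */
    USB_SYNC_SYNCHRONOUS  = 3  /* locked to the 1 ms / 125 us SOF tick (3.11.2) */
};

static enum usb_iso_sync_type endpoint_sync_type(uint8_t bmAttributes)
{
    return (enum usb_iso_sync_type)((bmAttributes >> 2) & 0x03); /* bits D3..2 */
}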
3.12 Inter Channel Synchronization

An important issue when dealing with audio, and 3-D audio in particular, is the phase relationship between different physical audio channels. Indeed, the virtual spatial position of an audio source is directly related to and influenced by the phase differences that are applied to the different physical audio channels used to reproduce the audio source. Therefore, it is imperative that USB audio functions respect the phase relationship among all related audio channels. However, the responsibility for maintaining the phase relationship is shared among the USB host software, hardware, and all of the audio peripheral devices or functions.

To provide a manageable phase model to the host, an audio function is required to report its internal delay for every AudioStreaming interface. This delay is expressed in number of (micro)frames and is due to the fact that the audio function must buffer at least one (micro)frame worth of samples to effectively remove packet jitter within a (micro)frame. Furthermore, some audio functions will introduce extra delay because they need time to correctly interpret and process the audio data streams (for example, compression and decompression). However, it is required that an audio function introduces only an integer number of (micro)frames of delay. In the case of an audio source function, this implies that the audio function must guarantee that the first sample it fully acquires after SOFn (start of (micro)frame n) is the first sample of the packet it sends over USB during (micro)frame (n+δ), where δ is the audio function's internal delay expressed in (micro)frames. The same rule applies for an audio sink function: the first sample in the packet, received over USB during (micro)frame n, must be the first sample that is fully reproduced during (micro)frame (n+δ). By following these rules, phase jitter is limited to ±1 audio sample. It is up to the host software to synchronize the different audio streams by scheduling the correct packets at the correct moment, taking into account the internal delays of all audio functions involved.

3.13 Audio Function Topology

To be able to manipulate the physical properties of an audio function, its functionality must be divided into addressable Entities. Two types of such generic Entities are identified and are called Units and Terminals. In addition, a special type of Entity is defined. These Entities are called Clock Entities and they are used to describe and manipulate the clock signals inside the audio function. Units provide the basic building blocks to fully describe most audio functions. Audio functions are built by connecting together several of these Units. A Unit has one or more Input Pins and a single Output Pin, where each Pin represents a cluster of logical audio channels inside the audio function (see Section 3.13.1, "Audio Channel Cluster"). Units are wired together by connecting their I/O Pins according to the required topology. Note that it is perfectly legal to connect the Output Pin of an Entity to multiple Input Pins residing on different other Entities, effectively creating a one-to-many connection.

In addition, the concept of a Terminal is introduced. There are two types of Terminals. An Input Terminal (IT) is an Entity that represents a starting point for audio channels inside the audio function. An Output Terminal (OT) represents an ending point for audio channels. From the audio function's perspective, a USB endpoint is a typical example of an Input or Output Terminal.
It either provides data streams to the audio function (IT) or consumes data streams coming from the audio function (OT). Likewise, a Digital-to-Analog converter built into the audio function is represented as an Output Terminal in the audio function's model. Connection to the Terminal is made through its single Input or Output Pin. Input Pins of a Unit are numbered starting from one up to the total number of Input Pins on the Unit. The Output Pin number is always one. Input Terminals have only one Output Pin and its number is always one. Output Terminals have only one Input Pin and it is always numbered one.

The information traveling over I/O Pins is not necessarily of a digital nature. It is perfectly possible to use the Unit model to describe fully analog or even hybrid audio functions. The mere fact that I/O Pins are connected together is a guarantee (by construction) that the protocol and format, used over these connections (analog or digital), is compatible on both ends.

Every Unit in the audio function is fully described by its associated Unit descriptor (UD). The Unit descriptor contains all necessary fields to identify and describe the Unit. Likewise, there is a Terminal descriptor (TD) for every Terminal in the audio function. In addition, these descriptors provide all necessary information about the topology of the audio function. They fully describe how Terminals and Units are interconnected.

This specification describes the following types of standard Units and Terminals that are considered adequate to represent most audio functions available today and in the near future:

• Input Terminal (IT)
• Output Terminal (OT)
• Mixer Unit (MU)
• Selector Unit (SU)
• Feature Unit (FU)
• Sampling Rate Converter Unit
• Effect Unit (EU)
• Processing Unit (PU)
• Extension Unit (XU)

Besides Units and Terminals, the concept of a Clock Entity is introduced. Three types of Clock Entities are defined by this specification:

• Clock Source (CS)
• Clock Selector (CX)
• Clock Multiplier (CM)

A Clock Source provides a certain sampling clock frequency to all or part of the audio function. A Clock Source can represent an internal sampling frequency generator, but it can also represent an external sampling clock signal input to the audio function. A Clock Source has a single Clock Output Pin that carries the sampling clock signal represented by the Clock Source. The Clock Output Pin number is always one.

A Clock Selector is used to select between multiple sampling clock signals that might be available in an audio function. It has multiple Clock Input Pins and a single Clock Output Pin. Clock Input Pins are numbered starting from one up to the total number of Clock Input Pins on the Clock Selector. The Clock Output Pin number is always one.
A Clock Multiplier is used to derive a new clock signal with a different frequency from the clock signal at its single Clock Input Pin. It does this by multiplying that clock signal frequency by a numerator P and then dividing it by a denominator Q. The values P and Q are fixed for a given Clock Multiplier. The new clock signal is guaranteed to be synchronous with the input clock signal. A Clock Multiplier has one Input Pin and one Output Pin and their numbers are always one. By using a combination of Clock Source, Clock Selector, and Clock Multiplier Entities, the most complex clock systems can be represented and exposed to Host software.

Clock Input and Output Pins are fundamentally different from the Input and Output Pins defined for Units and Terminals. Clock Pins carry only clock signals and therefore cannot be connected to Unit or Terminal Input and Output Pins. They are only used to express clock circuitry topology.

Each Input and Output Terminal has a single Clock Input Pin that is connected to a Clock Output Pin of a Clock Entity. The clock signal carried by that Clock Output Pin determines at which sampling frequency the hardware represented by the Terminal is operating. Each Sampling Rate Converter Unit has two Clock Input Pins that are typically connected to the Clock Output Pins of two different Clock Entities. The clock signals carried by those Clock Output Pins determine the sampling frequencies between which the Sampling Rate Converter Unit is converting.

Each Clock Entity is described by a Clock Entity descriptor (CED). The Clock Entity descriptor contains all necessary fields to identify and describe the Clock Entity. The descriptors are further detailed in Section 4, "Descriptors" of this document.

The ensemble of Unit descriptors, Terminal descriptors and Clock Entity descriptors provides a full description of the audio function to the Host. This information is typically retrieved from the device at enumeration time. By parsing the descriptors, a generic audio driver should be able to fully control the audio function, except for the functionality represented by Extension Units. Those require vendor-specific extensions to the audio class driver.

Important Note: The complete set of audio function descriptors provides only a static initial description of the audio function. During operation, a number of events can happen that force the audio function to change its state. Host software must be notified of these changes to remain 'in sync' with the audio function at all times. An extensive interrupt mechanism is in place to report any and all state changes to Host software.

Figure 3-2, "Inside the Audio Function", illustrates the concepts defined above. Using the iconic symbols defined further, it describes a hypothetical audio function that incorporates 16 Entities: three Input Terminals, five Units, three Output Terminals, two Clock Sources, a Clock Selector, and two Clock Multipliers. Each Entity has its unique ID (from 1 to 16) and a descriptor that fully describes the functionality of the Entity and also how that particular Entity is connected into the overall topology of the audio function.

Input Terminal 1 (IT 1) could be the representation of a USB OUT endpoint used to stream audio from the Host to the audio device. IT 2 could be the representation of an analog Line-In connector on the audio device, whereas IT 3 could be an analog Microphone-In connector on the audio device. Selector Unit 4 (SU 4) selects between the audio coming from the Host and the audio present at the Line-In connector. Feature Unit 5 (FU 5) is then used to manipulate the audio (Volume, Bass, Treble, …) before it is presented to Output Terminal 9 (OT 9). OT 9 could be the representation of a Headphone Out jack on the audio device. At the same time, all three input sources (USB OUT, Line In, and Mic In) are connected to a Mixer Unit (MU 6) that effectively mixes the three sources together.
The output of the Mixer is then fed into a Processing Unit 7 (PU 7) that could perform some audio processing algorithm(s) on the mix. The result is in turn sent to FU 8 where some final adjustments to the audio (Volume, …) are made. FU 8 is connected to OT 10 and OT 11. OT 10 could represent speakers incorporated into the audio device and OT 11 could represent a USB IN endpoint used to send the processed audio to the Host for recording purposes.

Clock Source 12 (CS 12) could represent an internal sampling frequency generator, running at 96 kHz for instance. Clock Source 15 (CS 15) could be the representation of an external master sampling clock input that can be used to synchronize the device to an external source. Clock Selector 13 (CX 13) enables selection between the two available Clock Sources. The output of CX 13 provides a sampling frequency of 96 kHz to IT 1, IT 2, IT 3, OT 10, and OT 11. Clock Multiplier CM 14 further multiplies that clock signal by 0.5, providing a sampling frequency of 48 kHz to OT 9 for driving the headphone. Since all sampling frequencies used inside the audio function are at all times derived from a single master clock (internal or external), all audio streams in the audio function are synchronous.

The descriptors associated with each Entity clearly indicate to the Host what the exact nature of each Entity is. For instance, the IT 2 descriptor contains a field that indicates to the Host that it represents an external connector on the device, used as an analog Line In. Likewise, the MU 6 descriptor has a field that indicates that its Input Pin 1 is connected to the Output Pin of IT 1, Input Pin 2 is connected to the Output Pin of IT 2, and Input Pin 3 is connected to the Output Pin of IT 3. For further details on descriptor contents, refer to Section 4, "Descriptors" of this document.

[Figure 3-2 Inside the Audio Function: the topology described above - IT 1 (USB OUT), IT 2 (Analog Line In), IT 3 (Analog Mic In), SU 4, FU 5, MU 6, PU 7, FU 8, OT 9 (Headphone), OT 10 (Speakers), OT 11 (USB IN), Clock Sources 12 and 15, Clock Selector 13 and Clock Multiplier 14 (P/Q), each with its descriptor.]

Inside an Entity, functionality is further described through Audio Controls. A Control typically provides access to a specific audio or clock property. Each Control has a set of attributes that can be manipulated or that present additional information on the behavior of the Control. A Control can have the following attributes:

• Current setting attribute
• Range attribute triplet, consisting of:
  • Minimum setting attribute
  • Maximum setting attribute
  • Resolution attribute

As an example, consider a Volume Control inside a Feature Unit. By issuing the appropriate Get requests, the Host software can obtain values for the Volume Control's attributes and, for instance, use them to correctly display the Control on the screen. Setting the Volume Control's current attribute allows the Host software to change the volume setting of the Volume Control.

Additionally, each Entity in an audio function can have a memory space attribute. This attribute optionally provides generic access to the internal memory space of the Entity. This could be used to implement vendor-specific control of an Entity through generically provided access.

3.13.1 Audio Channel Cluster

An audio channel cluster is a grouping of audio channels that carry tightly related synchronous audio information. Inside the audio function, complete abstraction is made of the actual physical representation…
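To tie the Figure 3-2 walkthrough together, the sketch below restates the audio-path connections described above as a simple C table (clock connections omitted). The table form and all names are purely illustrative; a real device conveys exactly this information through the bSourceID/baSourceID fields of its Terminal and Unit descriptors, as detailed in Section 4 of the specification.

#include <stdint.h>

/* Hedged sketch: audio-path topology of the hypothetical function in Figure 3-2.
 * Each entry lists an Entity ID and the IDs feeding its Input Pins 1..n (0 = unused). */
struct entity_connection {
    uint8_t id;                 /* unique Entity ID (1..16 in the example) */
    const char *type;           /* Entity type as drawn in Figure 3-2 */
    uint8_t sources[3];         /* Entities connected to Input Pins 1..3 */
};

static const struct entity_connection figure_3_2_topology[] = {
    { 1,  "Input Terminal (USB OUT endpoint)", { 0, 0, 0 } },
    { 2,  "Input Terminal (analog Line In)",   { 0, 0, 0 } },
    { 3,  "Input Terminal (analog Mic In)",    { 0, 0, 0 } },
    { 4,  "Selector Unit",                     { 1, 2, 0 } }, /* selects Host audio or Line In */
    { 5,  "Feature Unit",                      { 4, 0, 0 } },
    { 6,  "Mixer Unit",                        { 1, 2, 3 } }, /* mixes all three sources */
    { 7,  "Processing Unit",                   { 6, 0, 0 } },
    { 8,  "Feature Unit",                      { 7, 0, 0 } },
    { 9,  "Output Terminal (Headphone Out)",   { 5, 0, 0 } },
    { 10, "Output Terminal (Speakers)",        { 8, 0, 0 } },
    { 11, "Output Terminal (USB IN endpoint)", { 8, 0, 0 } },
};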
https://w.atwiki.jp/usb_audio/pages/34.html
Original text: Audio Device Document 1.0 (PDF)

USB Device Class Definition for Audio Devices, Release 1.0, March 18, 1998

Table 3-1 Status Word Format
  Offset 0: bStatusType, 1 byte, Bitmap
    D7: Interrupt Pending
    D6: Memory Contents Changed
    D5..4: Reserved
    D3..0: Originator (0 = AudioControl interface, 1 = AudioStreaming interface, 2 = AudioStreaming endpoint, 3..15 = Reserved)
  Offset 1: bOriginator, 1 byte, Number — ID of the Terminal, Unit, interface, or endpoint that reports the interrupt.

3.7.2 AudioStreaming Interface

AudioStreaming interfaces are used to interchange digital audio data streams between the Host and the audio function. They are optional. An audio function can have zero or more AudioStreaming interfaces associated with it, each possibly carrying data of a different nature and format. Each AudioStreaming interface can have at most one isochronous data endpoint. This construction guarantees a one-to-one relationship between the AudioStreaming interface and the single audio data stream related to the endpoint. In some cases, the isochronous data endpoint is accompanied by an associated isochronous synch endpoint for synchronization purposes. The isochronous data endpoint is required to be the first endpoint in the AudioStreaming interface. The synch endpoint always follows its associated data endpoint.

An AudioStreaming interface can have alternate settings that can be used to change certain characteristics of the interface and underlying endpoint. A typical use of alternate settings is to provide a way to change the bandwidth requirements an active AudioStreaming interface imposes on the USB. By incorporating a low-bandwidth or even zero-bandwidth alternate setting for each AudioStreaming interface, a device offers the Host software the option to temporarily relinquish USB bandwidth by switching to this low-bandwidth alternate setting. If such an alternate setting is implemented, it must be the default alternate setting (alternate setting zero). A zero-bandwidth alternate setting can be implemented by specifying zero endpoints in the standard AudioStreaming interface descriptor. All other interface and endpoint descriptors (both standard and class-specific) need not be specified in this case.
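As an illustration of the zero-bandwidth alternate setting just described, here is a hedged C sketch of the standard interface descriptor for alternate setting zero of an AudioStreaming interface: bNumEndpoints is zero, so no endpoint descriptors follow. The descriptor layout is the standard USB interface descriptor; the symbolic class and subclass names in the comments refer to codes assigned in the appendices of this specification and are not defined in this excerpt.

#include <stdint.h>

/* Hedged sketch: zero-bandwidth default alternate setting of an AS interface. */
struct usb_interface_descriptor {
    uint8_t bLength;             /* 9 */
    uint8_t bDescriptorType;     /* INTERFACE (0x04) */
    uint8_t bInterfaceNumber;    /* e.g. 1 */
    uint8_t bAlternateSetting;   /* 0: the zero-bandwidth default setting */
    uint8_t bNumEndpoints;       /* 0: no isochronous data endpoint in this setting */
    uint8_t bInterfaceClass;     /* AUDIO interface class code */
    uint8_t bInterfaceSubClass;  /* AUDIOSTREAMING interface subclass code */
    uint8_t bInterfaceProtocol;  /* 0 */
    uint8_t iInterface;          /* 0 or string descriptor index */
};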
The AudioStreaming interface is essentially used to provide an access point for the Host software (drivers) to manipulate the behavior of the physical interface it represents. Therefore, even external connections to the audio function (S/PDIF interface, analog input, etc.) can be represented by an AudioStreaming interface so that the Host software can control certain aspects of those connections. This type of AudioStreaming interface has no associated USB endpoints; the related audio data stream is not using USB as a transport medium. In addition, the concept of dynamic interfaces as described in the Universal Serial Bus Class Specification can be used to notify the Host software that changes have occurred on the external connection. This is analogous to switching alternate settings on an AudioStreaming interface with USB endpoints, except that the switch is now device-initiated instead of Host-initiated.

As an example, consider an S/PDIF connection to an audio function. If nothing is connected to this external S/PDIF interface, the AudioStreaming interface is idle and reports itself as being dynamic and non-configured (bInterfaceClass = 0x00). If the user connects a standard IEC958 signal to the audio function, the S/PDIF receiver inside the audio function detects this and notifies the Host that the AudioStreaming interface has switched to its IEC958 mode (alternate setting x). If, on the other hand, an IEC1937 signal carrying MPEG-encoded audio is connected, the AudioStreaming interface switches to the appropriate setting (alternate setting y) to handle the MPEG decoding process.

For every isochronous OUT or IN endpoint defined in any of the AudioStreaming interfaces, there must be a corresponding Input or Output Terminal defined in the audio function. For the Host to fully understand the nature and behavior of the connection, it must take into account the interface- and endpoint-related descriptors as well as the Terminal-related descriptor.

3.7.2.1 Isochronous Audio Data Stream Endpoint

In general, the data streams that are handled by an isochronous audio data endpoint do not necessarily map directly to the logical channels that exist within the audio function. As an example, consider a "stereo" audio data stream that contains audio data encoded in Dolby Prologic format. Although there is only one data stream, carrying interleaved samples for Left and Right (or more precisely LT and RT), these two channels carry information for four logical channels (Left, Right, Center, and Surround). Other examples include cases in which multiple logical audio channels are compressed into a single data stream. The format of such a data stream can be entirely different from the native format of the logical channels (for example, 256 kbit/s MPEG-1 stereo audio as opposed to 176.4 kbyte/s 16-bit stereo 44.1 kHz audio).

Therefore, to describe the data transfer at the endpoint level correctly, the notion of logical channel is replaced by the notion of audio data stream. It is the responsibility of the AudioStreaming interface which contains the OUT endpoint to convert between the audio data stream and the embedded logical channels before handing the data over to the Input Terminal. In many cases, this conversion process involves some form of decoding. Likewise, the AudioStreaming interface which contains the IN endpoint must convert logical channels from the Output Terminal into an audio data stream, often using some form of encoding. Consequently, requests to control properties that exist within an audio function, such as volume or mute, cannot be sent to the endpoint in an AudioStreaming interface. An AudioStreaming interface operates on audio data streams and is unaware of the number of logical channels it eventually serves. Instead, these requests must be directed to the proper audio function's Units or Terminals via the AudioControl interface.

As already mentioned, an AudioStreaming interface can have zero or one isochronous audio data endpoint. If multiple synchronous audio channels must be communicated between Host and audio function, they must be clustered into one audio channel cluster by interleaving the individual audio data, and the result can be directed to the single endpoint. Furthermore, a single synch endpoint, if needed, can service the entire cluster. In this way, a minimum number of endpoints are consumed to transport related data streams.
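A minimal sketch of the interleaving step mentioned above, assuming 16-bit samples and the conventional sample-by-sample channel ordering; the function name and signature are illustrative only.

#include <stdint.h>

/* Sketch: interleave the individual channels of an audio channel cluster into
 * the single data stream of the isochronous endpoint (ch0, ch1, ... chN-1 per frame). */
static void interleave_cluster(const int16_t *const *channels, /* [n_channels][n_samples] */
                               int16_t *stream,
                               unsigned n_channels,
                               unsigned n_samples)
{
    for (unsigned s = 0; s < n_samples; s++)
        for (unsigned c = 0; c < n_channels; c++)
            *stream++ = channels[c][s];
}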
If an audio function needs more than one cluster to operate, each cluster is directed to the endpoint of a separate AudioStreaming interface belonging to the same Audio Interface Collection (all servicing the same audio function). If there is a need to manipulate a number of AudioStreaming interfaces as a whole, these interfaces can be tied together. The techniques for associating interfaces, described in the Universal Serial Bus Class Specification, should be used to create the binding.

3.7.2.2 Isochronous Synch Endpoint

For adaptive audio source endpoints and asynchronous audio sink endpoints, an explicit synch mechanism is needed to maintain synchronization during transfers. For details about synchronization, see Section 5, "USB Data Flow Model," in the USB Specification and the relevant parts of the Universal Serial Bus Class Specification.

The information carried over the synch path consists of a 3-byte data packet. These three bytes contain the Ff value in a 10.14 format as described in Section 5.10.4.2, "Feedback" of the USB Specification. Ff represents the average number of samples the endpoint must produce or consume per frame to match the desired sampling frequency Fs exactly. A new Ff value is available every 2^(10 - P) ms (frames), where P can range from 1 to 9, inclusive. The sample clock Fs is always derived from a master clock Fm in the device. P is related to the ratio between those clocks through the following relationship:

[Equation]

In worst case conditions, only Fs is available and Fm = Fs, giving P = 1, because one can always use phase information to resolve the estimation of Fs within half a clock cycle.

An adaptive audio source IN endpoint is accompanied by an associated isochronous synch OUT endpoint that carries Ff. An asynchronous audio sink OUT endpoint is accompanied by an associated isochronous synch IN endpoint. For adaptive IN endpoints and asynchronous OUT endpoints, the standard endpoint descriptor provides the bSynchAddress field to establish a link to the associated synch endpoint. It contains the address of the synch endpoint. The bSynchAddress field of the synch standard endpoint descriptor must be set to zero.

As indicated earlier, a new Ff value is available every 2^(10 - P) frames, with P ranging from 1 to 9. The bRefresh field of the synch standard endpoint descriptor is used to report the exponent (10 - P) to the Host. It can range from 9 down to 1 (512 ms down to 2 ms).
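As an illustration of the 10.14 feedback format, the following C sketch derives Ff from a target sampling frequency and packs it into the 3-byte synch packet. The computation (average samples per 1 ms frame, scaled by 2^14) follows directly from the description above; the little-endian byte order is the usual USB convention and is an assumption of this sketch, not a statement of this specification.

#include <stdint.h>

/* Hedged sketch: build the 3-byte Ff feedback value (10.14 fixed point). */
static void encode_feedback_10_14(uint32_t fs_hz, uint8_t out[3])
{
    /* average samples per 1 ms frame, in 10.14 fixed point: (Fs / 1000) * 2^14 */
    uint32_t ff = (uint32_t)(((uint64_t)fs_hz << 14) / 1000u);

    out[0] = (uint8_t)(ff & 0xFF);         /* assumed little-endian packing */
    out[1] = (uint8_t)((ff >> 8) & 0xFF);
    out[2] = (uint8_t)((ff >> 16) & 0xFF);
}

/* Example: Fs = 44100 Hz gives Ff = 44.1 * 2^14 = 722534 (0x0B0666),
 * i.e. 44 whole samples plus a fractional part per frame on average. */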
3.7.2.3 Audio Channel Cluster Format

An audio channel cluster is a grouping of logical audio channels that share the same characteristics, like sampling frequency, bit resolution, etc. Channel numbering in the cluster starts with channel one, up to the number of channels in the cluster. The virtual channel zero is used to address a master Control in a Unit, effectively influencing all the channels at once. The maximum number of independent channels in an audio channel cluster is limited to 254. Indeed, channel zero is used to reference the master channel and code 0xFF (255) is used in requests to indicate that the request parameter block holds values for all available addressed Controls. For further details, refer to Section 5.2.2, "AudioControl Requests" and the sections that follow, describing the second form of requests.

In many cases, each channel in the audio cluster is also tied to a certain location in the listening space. A trivial example of this is a cluster that contains Left and Right logical audio channels. To be able to describe more complex cases in a manageable fashion, this specification imposes some limitations and restrictions on the ordering of logical channels in an audio channel cluster. There are twelve predefined spatial locations:

· Left Front (L)
· Right Front (R)
· Center Front (C)
· Low Frequency Enhancement (LFE) [super woofer]
· Left Surround (LS)
· Right Surround (RS)
· Left of Center (LC) [in front]
· Right of Center (RC) [in front]
· Surround (S) [rear]
· Side Left (SL) [left wall]
· Side Right (SR) [right wall]
· Top (T) [overhead]

If there are logical channels present in the audio channel cluster that correspond to some of the previously defined spatial positions, then they must appear in the order specified in the above list. For instance, if a cluster contains logical channels Left, Right and LFE, then channel 1 is Left, channel 2 is Right, and channel 3 is LFE.

To characterize an audio channel cluster, a cluster descriptor is introduced. This descriptor is embedded within one of the following descriptors:

· Input Terminal descriptor
· Mixer Unit descriptor
· Processing Unit descriptor
· Extension Unit descriptor

The cluster descriptor contains the following fields:

· bNrChannels: a number that specifies how many logical audio channels are present in the cluster.

· wChannelConfig: a bit field that indicates which spatial locations are present in the cluster. The bit allocations are as follows:

  § D0: Left Front (L)
  § D1: Right Front (R)
  § D2: Center Front (C)
  § D3: Low Frequency Enhancement (LFE)
  § D4: Left Surround (LS)
  § D5: Right Surround (RS)
  § D6: Left of Center (LC)
  § D7: Right of Center (RC)
  § D8: Surround (S)
  § D9: Side Left (SL)
  § D10: Side Right (SR)
  § D11: Top (T)
  § D15..12: Reserved

  Each bit set in this bitmap indicates there is a logical channel in the cluster that carries audio information destined for the indicated spatial location. The channel ordering in the cluster must correspond to the ordering imposed by the above list of predefined spatial locations. If there are more channels in the cluster than there are bits set in the wChannelConfig field (i.e. bNrChannels > [Number_Of_Bits_Set]), then the first [Number_Of_Bits_Set] channels take the spatial positions indicated in wChannelConfig. The remaining channels have 'non-predefined' spatial positions (positions that do not appear in the predefined list). If none of the bits in wChannelConfig are set, then all channels have non-predefined spatial positions. If one or more channels have non-predefined spatial positions, their spatial location description can optionally be derived from the iChannelNames field.

· iChannelNames: index of a string descriptor that describes the spatial location of the first non-predefined logical channel in the cluster. The spatial locations of all remaining logical channels must be described by string descriptors with indices that immediately follow the index of the descriptor of the first non-predefined channel.
Therefore, iChannelNames inherently describes an array of string descriptor indices, ranging from iChannelNames to (iChannelNames + (bNrChannels - [Number_Of_Bits_Set]) - 1).

Example 1: An audio channel cluster that carries Dolby Prologic logical channels has the following cluster descriptor:

Table 3-2 Dolby Prologic Cluster Descriptor
  Offset 0: bNrChannels, 1 byte, value 4 — there are 4 logical channels in the cluster.
  Offset 1: wChannelConfig, 2 bytes, value 0x0107 — Left, Right, Center and Surround are present.
  Offset 3: iChannelNames, 1 byte, value Index — because there are no non-predefined logical channels, this index must be set to 0.

Example 2: A hypothetical audio channel cluster inside an audio function could carry Left, Left Surround, Left of Center, and two auxiliary channels that each contain a different weighted mix of the Left, Left Surround and Left of Center channels. The corresponding cluster descriptor would be:

Table 3-3 Left Group Cluster Descriptor
  Offset 0: bNrChannels, 1 byte, value 5 — there are 5 logical channels in the cluster.
  Offset 1: wChannelConfig, 2 bytes, value 0x0051 — Left, Left Surround, Left of Center and two undefined channels are present (bNrChannels > [Number_Of_Bits_Set]).
  Offset 3: iChannelNames, 1 byte, value Index — optional index of the first non-predefined string descriptor.

Optional string descriptors:
  String (Index) = 'Left Down Mix 1'
  String (Index + 1) = 'Left Down Mix 2'

3.7.2.4 Audio Data Format

The format used to transport audio data over the USB is entirely determined by the code located in the wFormatTag field of the class-specific interface descriptor. Therefore, each defined Format Tag must document in detail the audio data format it uses. Consequently, format-specific descriptors are needed to fully describe the format. For details about the predefined Format Tags and associated data formats and descriptors, see the separate document, USB Audio Data Formats, that is considered part of this specification. Vendor-specific protocols must be fully documented by the manufacturer.
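Restating Example 1 in code form: a small C sketch of the cluster descriptor fields from Section 3.7.2.3, filled in with the Dolby Prologic values of Table 3-2. The standalone struct is illustrative only; in a real descriptor these fields are embedded, unpadded, inside one of the Terminal or Unit descriptors listed above.

#include <stdint.h>

/* Sketch of the cluster descriptor fields (3.7.2.3). */
struct audio_channel_cluster {
    uint8_t  bNrChannels;    /* number of logical channels in the cluster */
    uint16_t wChannelConfig; /* bitmap of predefined spatial locations (D0 = L, D1 = R, ...) */
    uint8_t  iChannelNames;  /* string index for the first non-predefined channel, or 0 */
};

static const struct audio_channel_cluster dolby_prologic_cluster = {
    .bNrChannels    = 4,
    .wChannelConfig = 0x0107, /* D0 Left, D1 Right, D2 Center, D8 Surround */
    .iChannelNames  = 0       /* no non-predefined channels */
};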
https://w.atwiki.jp/usb_audio/pages/33.html
Original text: Audio Device Document 1.0 (PDF)

USB Device Class Definition for Audio Devices, Release 1.0, March 18, 1998

· Reverb Level: sets the amount of reverberant sound.
· Reverb Time: sets the time over which the reverberation will continue.
· Reverb Delay Feedback: used with Reverb Types Delay and Delay Panning. Sets the way in which the delay repeats.

The effects of the Reverberation Processing Unit can be bypassed at all times through manipulation of the Enable Processing Control. In principle, the algorithm to produce the desired reverberation effect influences all channels as a whole. It is entirely left to the designer how a certain reverberation effect is obtained. It is not the intention of this specification to precisely define all the parameters that influence the reverberation experience (for instance, in a multi-channel system it is possible to create very similar reverberation impressions using different algorithms and parameter settings on all channels). The symbol for the Reverberation Processing Unit can be found in the following figure.

[Image] Figure 3-9 Reverberation Processing Unit Icon

3.5.6.5 Chorus Processing Unit

The Chorus Processing Unit is used to add chorus effects to the original audio information. A number of parameters can be manipulated to obtain the desired chorus effects:

· Chorus Level: controls the amount of the chorus effect sound.
· Chorus Modulation Rate: sets the speed (frequency) of the modulator of the chorus.
· Chorus Modulation Depth: sets the depth at which the chorus sound is modulated.

The effects of the Chorus Processing Unit can be bypassed at all times through manipulation of the Enable Processing Control. In principle, the algorithm to produce the desired chorus effect influences all channels as a whole. It is entirely left to the designer how a certain chorus effect is obtained. It is not the intention of this specification to precisely define all the parameters that influence the chorus experience. The symbol for the Chorus Processing Unit can be found in the following figure.

[Image] Figure 3-10 Chorus Processing Unit Icon

3.5.6.6 Dynamic Range Compressor Processing Unit

The Dynamic Range Compressor Processing Unit is used to intelligently limit the dynamic range of the original audio information. A number of parameters can be manipulated to influence the desired compression.

[Image] Figure 3-11 Dynamic Range Compressor Transfer Characteristic

· Compression Ratio (R): determines the slope of the static input-to-output transfer characteristic in the compressor's active input range. The compression is defined in terms of the compression ratio R, which is the inverse of the derivative of the output power PO as a function of the input power PI when PO and PI are expressed in dB:

  R = 1 / (dPO / dPI)

  PR is the reference level and it is made equal to the so-called line level. All levels are expressed relative to the line level (0 dB), which is usually 15-20 dB below the maximum level. Compression is obtained when R > 1, R = 1 does not affect the signal, and R < 1 gives rise to expansion.

· Maximum Amplitude: the upper boundary of the active input range, relative to the line level (0 dB). Expressed in dB.
· Threshold Level: the lower boundary of the active input range, relative to the line level (0 dB).
· Attack Time: determines the response of the compressor as a function of time to a step in the input level. Expressed in ms.
· Release Time: relates to the recovery time of the gain of the compressor after a loud passage. Expressed in ms.

The effects of the Dynamic Range Compressor Processing Unit can be bypassed at all times through manipulation of the Enable Processing Control. In principle, the algorithm to produce the desired dynamic range compression influences all channels as a whole. It is entirely left to the designer how a certain dynamic range compression is obtained. The symbol for the Dynamic Range Compressor Processing Unit can be found in the following figure.

[Image] Figure 3-12 Dynamic Range Compressor Processing Unit Icon
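To make the Dynamic Range Compressor parameters concrete, here is a small C sketch of a plausible static input-to-output curve built from them. The piecewise shape (unity slope below the Threshold, slope 1/R inside the active range) is an illustrative assumption consistent with the definitions above, not a normative curve taken from the specification.

/* Hedged sketch: static transfer curve of the Dynamic Range Compressor
 * (Section 3.5.6.6). All levels are in dB relative to the line level (0 dB). */
double drc_static_output_db(double in_db, double threshold_db,
                            double max_amplitude_db, double ratio_r)
{
    if (in_db <= threshold_db)
        return in_db;                  /* below the active range: signal unchanged */
    if (in_db > max_amplitude_db)
        in_db = max_amplitude_db;      /* clamp to the upper boundary of the active range */
    /* inside the active range the slope dPO/dPI is 1/R */
    return threshold_db + (in_db - threshold_db) / ratio_r;
}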
Copy protection issues come into play whenever digital audio streams enter or leave the audio function. Therefore, the copy protection mechanism is implemented at the Terminal level in the audio function. Streams entering the audio function can be accompanied by specific information describing the copy protection level of that audio stream. Likewise, streams leaving the audio function should be accompanied by the appropriate copy protection information, if the hardware permits it. This specification provides for two dedicated requests that can be used to manage the copy protection mechanism. The Get Copy Protect request can be used to retrieve copy protection information from an Input Terminal, whereas the Set Copy Protect request is used to preset the copy protection level of an Output Terminal. This specification provides for three levels of copy permission, similar to CGMS (Copy Generation Management System) and SCMS (Serial Copy Management System).
· Level 0: Copying is permitted without restriction. The material is either not copyrighted, or the copyright is not asserted.
· Level 1: One generation of copies may be made. The material is copyright protected and is the original.
· Level 2: The material is copyright protected and no digital copying is permitted.
3.7 Operational Model
A device can support multiple configurations. Within each configuration there can be multiple interfaces, each possibly having alternate settings. These interfaces can pertain to different functions that co-reside in the same composite device. Even several independent audio functions can exist in the same device. Interfaces belonging to the same audio function are grouped into an Audio Interface Collection. If the device contains multiple independent audio functions, there must be multiple Audio Interface Collections, each providing full access to its associated audio function. As an example of a composite device, consider a PC monitor equipped with a built-in stereo speaker system. Such a device could be configured to have one interface dealing with configuration and control of the monitor part of the device (HID Class), while a Collection of two other interfaces deals with its audio aspects. One of those, the AudioControl interface, is used to control the inner workings of the function (Volume Control, etc.), whereas the other, the AudioStreaming interface, handles the data traffic sent to the monitor's audio subsystem. The AudioStreaming interface could be configured to operate in mono mode (alternate setting x), in which only a single-channel data stream is sent to the audio function. The receiving Input Terminal could duplicate this audio stream into two logical channels, and those could then be reproduced on both speakers. From an interface point of view, such a setup requires one isochronous endpoint in the AudioStreaming interface to receive the mono audio data stream, in addition to the mandatory control endpoint and optional interrupt endpoint in the AudioControl interface. The same system could be used to play back stereo audio. In this case, the stereo AudioStreaming interface must be selected (alternate setting y). This interface also consists of a single isochronous endpoint, now receiving a data stream that interleaves left and right channel samples. The receiving Input Terminal now splits the stream into a Left and a Right logical channel. The AudioControl interface remains unchanged.
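On the Host side, switching the AudioStreaming interface of the example above between its mono and stereo alternate settings is a standard Set Interface operation. The sketch below uses libusb; the interface and alternate-setting numbers are hypothetical and would in practice be read from the device's descriptors.

```c
#include <libusb-1.0/libusb.h>

/* Hypothetical numbering for the monitor/speaker example above. */
#define AS_INTERFACE        1   /* the AudioStreaming interface of the Collection */
#define ALT_SETTING_MONO    1   /* 'alternate setting x' in the text              */
#define ALT_SETTING_STEREO  2   /* 'alternate setting y' in the text              */

/* Select stereo operation on an interface that has already been claimed. */
static int select_stereo(libusb_device_handle *h)
{
    return libusb_set_interface_alt_setting(h, AS_INTERFACE, ALT_SETTING_STEREO);
}
```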
If the above AudioStreaming interface were an asynchronous sink, one extra isochronous synch endpoint would also be necessary. Audio Interface Collections can be dynamic. Because the AudioControl interface, together with its associated AudioStreaming interface(s), constitutes the 'logical interface' to the audio function, they must all come into existence at the same moment in time. As stated earlier, audio functionality is located at the interface level in the device class hierarchy. The following sections describe the Audio Interface Collection, containing a single AudioControl interface and optional AudioStreaming interfaces, together with their associated endpoints that are used for audio function control and for audio data stream transfer.
3.7.1 AudioControl Interface
To control the functional behavior of a particular audio function, the Host can manipulate the Units and Terminals inside the audio function. To make these objects accessible, the audio function must expose a single AudioControl interface. This interface can contain the following endpoints:
· A control endpoint for manipulating Unit and Terminal settings and retrieving the state of the audio function. This endpoint is mandatory, and the default endpoint 0 is used for this purpose.
· An interrupt endpoint for status returns. This endpoint is optional.
The AudioControl interface is the single entry point to access the internals of the audio function. All requests that are concerned with the manipulation of certain audio Controls within the audio function's Units or Terminals must be directed to the AudioControl interface of the audio function. Likewise, all descriptors related to the internals of the audio function are part of the class-specific AudioControl interface descriptor. The AudioControl interface of an audio function may support multiple alternate settings. Alternate settings of the AudioControl interface could, for instance, be used to implement audio functions that support multiple topologies by presenting different class-specific AudioControl interface descriptors for each alternate setting.
3.7.1.1 Control Endpoint
The audio interface class uses endpoint 0 (the default pipe) as the standard way to control the audio function using class-specific requests. These requests are always directed to one of the Units or Terminals that make up the audio function. The format and contents of these requests are detailed further in this document.
3.7.1.2 Status Interrupt Endpoint
A USB AudioControl interface can support an optional interrupt endpoint to inform the Host about the status of the different addressable Entities (Terminals, Units, interfaces, and endpoints) inside the audio function. In fact, the interrupt endpoint is used by the entire Audio Interface Collection to convey status information to the Host. It is considered part of the AudioControl interface because this is the anchor interface for the Collection. The interrupt data is a 2-byte entity. The bStatusType field contains information in D7 indicating whether there is still an interrupt pending or not. This bit remains set until all pending interrupts are properly serviced. The other bits are used to report the cause of the interrupt in more detail. Bit D6 of the bStatusType field indicates a change in memory contents on one of the addressable Entities inside the audio function. This bit is cleared by a Get Memory request on the appropriate Entity.
Bits D3..0 indicate the originator of the current interrupt. All addressable Entities inside an audio function can be the originator. The contents of the bOriginator field must be interpreted according to the code in D3..0 of the bStatusType field. If the originator is the AudioControl interface, the bOriginator field contains the TerminalID or UnitID of the Entity that caused the interrupt to occur. If the bOriginator field is set to zero, the 'virtual' Entity interface is the originator. This can be used to report global AudioControl interface changes to the Host. If the originator is an AudioStreaming interface, the bOriginator field contains the interface number of the AudioStreaming interface. Likewise, it contains the endpoint number if the originator were an AudioStreaming endpoint. The proper response to an interrupt is either a Get Status request (D6=0) or a Get Memory request (D6=1). Issuing these requests to the appropriate originator must clear the Interrupt Pending bit and the Memory Contents Changed bit, if applicable. The following table specifies the format of the status word.
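The status word table itself is not reproduced on this wiki page, so the following is only a sketch of how a Host driver might unpack the 2-byte interrupt data based on the prose above; the struct and field names are illustrative, and the numeric originator codes must be taken from the spec's table.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative view of the 2-byte interrupt data from the status interrupt endpoint. */
struct ac_status_word {
    uint8_t bStatusType;  /* D7: interrupt pending, D6: memory contents changed,
                             D3..0: originator code (see the spec's status word table) */
    uint8_t bOriginator;  /* Terminal/Unit ID, interface number, or endpoint number,
                             depending on the originator code                          */
};

static void handle_status(const struct ac_status_word *w)
{
    int pending        = (w->bStatusType >> 7) & 0x01;
    int memory_changed = (w->bStatusType >> 6) & 0x01;
    int origin_code    = w->bStatusType & 0x0F;

    printf("pending=%d memory_changed=%d origin_code=%d originator=0x%02X\n",
           pending, memory_changed, origin_code, w->bOriginator);

    /* Per the text: service with Get Status when D6 = 0 or Get Memory when D6 = 1,
     * directed at the originator; that clears the pending/changed bits. */
}
```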
https://w.atwiki.jp/micspeed/pages/61.html
It looks like a body kit that gives the Q7 this All Road Quattro look is going on sale. It ends up looking rather Gundam-like, doesn't it (^^; I have a feeling plenty of people would be happy if a kit like this existed for the TOUAREG. The aluminum front under-cover is a bit tempting, though (laughs). Category [New Model] - trackback - 2005-12-11 16:33:48
It looks like an off-road package, but I would have it painted body color and skip the bumper molding on the front (I don't know the official name ^^;). I wish there were one for the TREG too... Even now, two years after its debut, there are far too few options, aren't there... But how do these over-fenders on the Q7 actually work?? Since the fenders already have quite a bit of bulge, do they come out almost to the mirrors? (The AR is about that wide too, isn't it.) If so, it should look impressive ^^ Between the sport package and this, the variations are really increasing. I'm looking forward to seeing P☆'s choice ^^ -- MIC (2005-12-11 19:33:09)
I actually rather like urethane trim, so something like this might be good ^^ -- バッカス (2005-12-12 12:43:54)
So the side steps are the flat board type after all. Was the log-shaped type unpopular? -- かもめ (2005-12-14 09:24:40)
Yesterday, 12/16, around 2 p.m., six Q7s were being carried on a trailer along the inbound lane of the Rainbow Bridge at Odaiba. Unfortunately I couldn't get a photo, but even wrapped in white Audi covers they were recognizable at a glance. I wonder if the launch is being moved up... or maybe they are for testing and press use. -- nagi (2005-12-17 11:00:08)
If they are already transporting them around Tokyo, does that mean something is happening? But deliveries in the home market probably won't start until around next March, and in Japan there is also the matter of type approval by the authorities, so I think it will be around this time next year at the earliest. The famous 'horn' issue is still unresolved too, so they will probably keep testing carefully from here on. Even if it's just a test car, I'd love to see the real thing driving around soon (laughs). -- P☆ (2005-12-17 23:46:46)
https://w.atwiki.jp/usb_audio/pages/32.html
原文:Audio Device Document 1.0(PDF) USB Device Class Definition for Audio Devices Release 1.0 March 18, 1998 21 · Resolution attribute As an example, consider a Volume Control inside a Feature Unit. By issuing the appropriate Get requests, the Host software can obtain values for the Volume Control’s attributes and, for instance, use them to correctly display the Control on the screen. Setting the Volume Control’s current attribute allows the Host software to change the volume setting of the Volume Control. Additionally, each Entity (Unit or Terminal) in an audio function can have a memory space attribute. This attribute optionally provides generic access to the internal memory space of the Entity. This could be used to implement vendor-specific control of an Entity through generically provided access. 3.5.1 Input Terminal The Input Terminal (IT) is used to interface between the audio function’s ‘outside world’ and other Units in the audio function. It serves as a receptacle for audio information flowing into the audio function. Its function is to represent a source of incoming audio data after this data has been properly extracted from the original audio stream into the separate logical channels that are embedded in this stream (the decoding process). The logical channels are grouped into an audio channel cluster and leave the Input Terminal through a single Output Pin. An Input Terminal can represent inputs to the audio function other than USB OUT endpoints. A Line-In connector on an audio device is an example of such a non-USB input. However, if the audio stream is entering the audio function by means of a USB OUT endpoint, there is a one-to-one relationship between that endpoint and its associated Input Terminal. The class-specific endpoint descriptor contains a field that holds a direct reference to this Input Terminal. The Host needs to use both the endpoint descriptors and the Input Terminal descriptor to get a full understanding of the characteristics and capabilities of the Input Terminal. Stream-related parameters are stored in the endpoint descriptors. Control-related parameters are stored in the Terminal descriptor. The conversion process from incoming, possibly encoded audio streams to logical audio channels always involves some kind of decoding engine. This specification defines several types of decoding. These decoding types range from rather trivial decoding schemes like converting interleaved stereo 16 bit PCM data into a Left and Right logical channel to very sophisticated schemes like converting an MPEG-2 7.1 encoded audio stream into Left, Left Center, Center, Right Center, Right, Right Surround, Left Surround and Low Frequency Enhancement logical channels. The decoding engine is considered part of the Entity that actually receives the encoded audio data streams (like a USB AudioStreaming interface). The type of decoding is therefore implied in the wFormatTag value, located in the AudioStreaming interface descriptor. Requests specific to the decoding engine must be directed to the AudioStreaming interface. The associated Input Terminal deals with the logical channels after they have been decoded. The symbol for the Input Terminal is depicted in the following figure ここに画像 Figure 3-1 Input Terminal Icon 3.5.2 Output Terminal The Output Terminal (OT) is used to interface between Units inside the audio function and the ‘outside world’. It serves as an outlet for audio information, flowing out of the audio function. 
Its function is to represent a sink of outgoing audio data before this data is properly packed from the original separate logical channels into the outgoing audio stream (the encoding process). The audio channel cluster enters the Output Terminal through a single Input Pin. An Output Terminal can represent outputs from the audio function other than USB IN endpoints. A speaker built into an audio device or a Line Out connector is an example of such a non-USB output. However, if the audio stream is leaving the audio function by means of a USB IN endpoint, there is a one-to-one relationship between that endpoint and its associated Output Terminal. The class-specific endpoint descriptor contains a field that holds a direct reference to this Output Terminal. The Host needs to use both the endpoint descriptors and the Output Terminal descriptor to fully understand the characteristics and capabilities of the Output Terminal. Stream-related parameters are stored in the endpoint descriptors. Control-related parameters are stored in the Terminal descriptor. The conversion process from incoming logical audio channels to possibly encoded audio streams always involves some kind of encoding engine. This specification defines several types of encoding, ranging from rather trivial to very sophisticated schemes. The encoding engine is considered part of the Entity that actually transmits the encoded audio data streams (like a USB AudioStreaming interface). The type of encoding is therefore implied in the wFormatTag value, located in the AudioStreaming interface descriptor. Requests specific to the encoding engine must be directed to the AudioStreaming interface. The associated Output Terminal deals with the logical channels before encoding. The symbol for the Output Terminal is depicted in the following figure.
[image] Figure 3-2 Output Terminal Icon
3.5.3 Mixer Unit
The Mixer Unit (MU) transforms a number of logical input channels into a number of logical output channels. The input channels are grouped into one or more audio channel clusters. Each cluster enters the Mixer Unit through an Input Pin. The logical output channels are grouped into one audio channel cluster and leave the Mixer Unit through a single Output Pin. Every input channel can virtually be mixed into all of the output channels. If n is the total number of input channels and m is the number of output channels, then there are n x m mixing Controls in the Mixer Unit. Not all of these Controls have to be physically implemented. Some Controls can have a fixed setting and be non-programmable. The Mixer Unit Descriptor reports which Controls are programmable in the bmControls bitmap field. Using this model, a permanent connection can be implemented by reporting the Control as non-programmable and by returning a Control setting of 0 dB when requested. Likewise, a missing connection can be implemented by reporting the Control as non-programmable and by returning a Control setting of -∞ dB. The symbol for the Mixer Unit can be found in the following figure.
[image] Figure 3-3 Mixer Unit Icon
3.5.4 Selector Unit
The Selector Unit (SU) selects from n audio channel clusters, each containing m logical input channels, and routes them unaltered to the single output audio channel cluster, containing m output channels.
It USB Device Class Definition for Audio Devices Release 1.0 March 18, 1998 23 represents a multi-channel source selector, capable of selecting between n m-channel sources. It has n Input Pins and a single Output Pin. The symbol for the Selector Unit can be found in the following figure ここに画像 Figure 3-4 Selector Unit Icon 3.5.5 Feature Unit The Feature Unit (FU) is essentially a multi-channel processing unit that provides basic manipulation of the incoming logical channels. For each logical channel, the Feature Unit optionally provides audio Controls for the following features · Volume · Mute · Tone Control (Bass, Mid, Treble) · Graphic Equalizer · Automatic Gain Control · Delay · Bass Boost · Loudness In addition, the Feature Unit optionally provides the above audio Controls but now influencing all channels of the cluster at once. In this way, ‘master’ Controls can be implemented. The master Controls are cascaded after the individual channel Controls. This setup is especially useful in multi-channel systems where the individual channel Controls can be used for channel balancing and the master Controls can be used for overall settings. The logical channels in the cluster are numbered from one to the total number of channels in the cluster. The ‘master’ channel has channel number zero and is always virtually present. The Feature Unit Descriptor reports which Controls are present for every channel in the Feature Unit and for the ‘master’ channel. All logical channels in a Feature Unit are fully independent. There exist no cross couplings among channels within the Feature Unit. There are as many logical output channels, as there are input channels. These are grouped into one audio channel cluster that enters the Feature Unit through a single Input Pin and leaves the Unit through a single Output Pin. The symbol for the Feature Unit is depicted in the following figure ここに画像 Figure 3-5 Feature Unit Icon 3.5.6 Processing Unit The Processing Unit (PU) represents a functional block inside the audio function that transforms a number of logical input channels, grouped into one or more audio channel clusters into a number of logical output channels, grouped into one audio channel cluster. Therefore, the Processing Unit can have multiple Input USB Device Class Definition for Audio Devices Release 1.0 March 18, 1998 24 Pins and has a single Output Pin. This specification defines several standard transforms (algorithms) that are considered necessary to support additional audio functionality; these transforms are not covered by the other Unit types but are commonplace enough to be included in this specification so that a generic driver can provide control for it. Processing Units are encouraged to support at least the Enable Processing Control, allowing the Host software to bypass whatever functionality is incorporated in the Processing Unit. 3.5.6.1 Up/Down-mix Processing Unit The Up/Down-mix Processing Unit provides facilities to derive m output audio channels from n input audio channels. The algorithms and transforms applied to accomplish this are not defined by this specification and can be proprietary. The input channels are grouped into one input channel cluster that enters the Processing Unit over a single Input Pin. Likewise, all output channels are grouped into one output channel cluster, leaving the Processing Unit over a single Output Pin. The Up/Down-mix Processing Unit can support multiple modes of operation (besides the bypass mode, controlled by the Enable Processing Control). 
The available input audio channels are dictated by the Unit or Terminal to which the Up/Down-mix Processing Unit is connected. The Up/Down-mix Processing Unit descriptor reports which up/down-mixing modes the Unit supports through its waModes() array. Each element of the waModes() array indicates which output channels in the output cluster are effectively used in a particular mode. The unused output channels in the output cluster must produce muted output. Mode selection is implemented using the Get/Set Control request. As an example, consider the case where an Up/Down-mix Processing Unit is connected to an Input Terminal producing Dolby™ AC-3 5.1 decoded audio. The input audio channel cluster to the Up/Down-mix Processing Unit therefore contains Left, Right, Center, Left Surround, Right Surround, and LFE logical channels. Suppose the audio function's hardware is limited to reproducing only dual-channel audio. Then the Up/Down-mix Processing Unit could use some (sophisticated) algorithms to down-mix the available spatial audio information into two ('enriched') channels so that the maximum spatial effects can be experienced using only two channels. It is left to the audio function's discretion to use the appropriate down-mix algorithm, depending on the physical nature of the Output Terminal to which the Up/Down-mix Processing Unit is routed. For instance, a different down-mix algorithm is needed depending on whether the 'enriched' stereo stream is sent to a pair of speakers or to a headphone set. However, this knowledge already resides within the audio function, and deciding which down-mix algorithm to use does not need Host intervention. As a second interesting example, suppose the hardware is capable of servicing eight discrete audio channels, for instance a full-fledged MPEG-2 7.1 system. Now the Up/Down-mix Processing Unit could use certain techniques to derive meaningful content for the extra audio channels (Left of Center, Right of Center) that are present in the output cluster and are missing in the input channel cluster (AC-3 5.1). This is a typical example of an up-mix situation. The symbol for the Up/Down-mix Processing Unit is depicted in the following figure.
[image] Figure 3-6 Up/Down-mix Processing Unit Icon
3.5.6.2 Dolby Prologic Processing Unit
The Dolby Prologic™ decoding process can be seen as an operator on the Left and Right logical channels of the input cluster of the Unit. It is capable of extracting additional audio data (Center and/or Surround channels) from information that is transparently 'superimposed' on the Left and Right audio channels. It therefore differs from a true decoding process as defined for an Input Terminal. It can be applied on a logical audio stream anywhere in the audio function. The Dolby Prologic Processing Unit is a specialized derivative of the Up/Down-mix Processing Unit. The Dolby Prologic Processing Unit can have the following modes of operation (besides the bypass mode, controlled by the Enable Processing Control):
· Left, Right, Center channel decoding
· Left, Right, Surround channel decoding
· Left, Right, Center, Surround decoding
The Dolby Prologic Processing Unit descriptor reports which modes the Unit supports. Mode selection is then implemented using the Get/Set Control request. Dolby Prologic Surround Delay Control is considered not to be part of the Dolby Prologic™ Processing Unit and must be handled by a separate Feature Unit.
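Mode selection for the Up/Down-mix and Dolby Prologic Processing Units is performed with the class-specific Get/Set Control request mentioned above, sent to the AudioControl interface. The sketch below shows what the Host-side call might look like with libusb; the control selector, Unit ID, interface number, and the wValue/wIndex packing are placeholders to be checked against the request definitions later in the specification.

```c
#include <libusb-1.0/libusb.h>
#include <stdint.h>

/* Placeholder values; the real codes come from the spec's request and selector tables. */
#define SET_CUR              0x01
#define MODE_SELECT_CONTROL  0x02  /* hypothetical control selector   */
#define AC_INTERFACE         0     /* AudioControl interface number   */
#define PROLOGIC_UNIT_ID     5     /* UnitID of the Dolby Prologic PU */

/* Ask the Processing Unit to switch to the mode with index 'mode'. */
static int set_processing_mode(libusb_device_handle *h, uint8_t mode)
{
    return libusb_control_transfer(h,
        0x21,                                                /* class request to interface    */
        SET_CUR,
        (uint16_t)(MODE_SELECT_CONTROL << 8),                /* wValue: selector in high byte */
        (uint16_t)((PROLOGIC_UNIT_ID << 8) | AC_INTERFACE),  /* wIndex: Unit ID / interface   */
        (unsigned char *)&mode, sizeof(mode), 1000);
}
```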
Dolby Prologic Bass Management is the local responsibility of the audio function and should not be controllable from the Host. The symbol for the Dolby Prologic Processing Unit can be found in the following picture.
[image] Figure 3-7 Dolby Prologic Processing Unit Icon
3.5.6.3 3D-Stereo Extender Processing Unit
The 3D-Stereo Extender Processing Unit operates on Left and Right channels only. It processes an existing stereo (two-channel) soundtrack to add spaciousness and to make it appear to originate from outside the Left/Right speaker locations. Extended stereo effects can be achieved via various, straightforward methods. The algorithms and transforms applied to accomplish this are not defined by this specification and can be proprietary. The effects of the 3D-Stereo Extender Processing Unit can be bypassed at all times through manipulation of the Enable Processing Control. The size of the listening area (the area in which the listener has to be placed with respect to the speakers to hear the effect, also called the sweet spot) can be controlled using the proper Get/Set Control request. The symbol for the 3D-Stereo Extender Unit is depicted in the following figure.
[image] Figure 3-8 3D-Stereo Extender Processing Unit Icon
3.5.6.4 Reverberation Processing Unit
The Reverberation Processing Unit is used to add room acoustics effects to the original audio information. These effects can range from small-room reverberation effects to simulation of a large concert hall reverberation. A number of parameters can be manipulated to obtain the desired reverberation effects.
· Reverb Type: Room1, Room2, Room3, Hall1, Hall2, Plate, Delay, and Panning Delay.
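The sections above (3.5.1 onward) describe the Terminals and Units as building blocks. As one concrete reference point for how such an Entity is surfaced to the Host, here is a sketch of the class-specific Input Terminal descriptor from 3.5.1 expressed as a C struct; the field list follows the descriptor definitions given later in the specification, and both the struct layout and the comments should be treated as assumptions to verify against the descriptor tables.

```c
#include <stdint.h>

/* Sketch of a class-specific Input Terminal descriptor (see 3.5.1). */
#pragma pack(push, 1)
struct audio_input_terminal_descriptor {
    uint8_t  bLength;            /* size of this descriptor in bytes             */
    uint8_t  bDescriptorType;    /* class-specific interface descriptor type     */
    uint8_t  bDescriptorSubtype; /* INPUT_TERMINAL subtype                       */
    uint8_t  bTerminalID;        /* unique ID, referenced by other Units         */
    uint16_t wTerminalType;      /* e.g. USB streaming, microphone, line-in      */
    uint8_t  bAssocTerminal;     /* ID of an associated Output Terminal, if any  */
    uint8_t  bNrChannels;        /* logical channels in the output cluster       */
    uint16_t wChannelConfig;     /* spatial locations of those logical channels  */
    uint8_t  iChannelNames;      /* string index for the first channel name      */
    uint8_t  iTerminal;          /* string index describing this Terminal        */
};
#pragma pack(pop)
```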
https://w.atwiki.jp/usb_audio/pages/30.html
原文:Audio Device Document 1.0(PDF) USB Device Class Definition for Audio Devices Release 1.0 March 18, 1998 xi Table A-18 Extension Unit Control Selectors ...............................................................104 Table A-19 Endpoint Control Selectors ........................................................................104 Table B-1 USB Microphone Device Descriptor.............................................................106 Table B-2 USB Microphone Configuration Descriptor .................................................107 Table B-3 USB Microphone Standard AC Interface Descriptor....................................107 Table B-4 USB Microphone Class-specific AC Interface Descriptor ...........................108 Table B-5 USB Microphone Input Terminal Descriptor................................................109 Table B-6 USB Microphone Output Terminal Descriptor.............................................109 Table B-7 USB Microphone Standard AS Interface Descriptor (Alt. Set. 0) ................110 Table B-8 USB Microphone Standard AS Interface Descriptor....................................110 Table B-9 USB Microphone Class-specific AS General Interface Descriptor .............111 Table B-10 USB Microphone Type I Format Type Descriptor......................................111 Table B-11 USB Microphone Standard Endpoint Descriptor.......................................112 Table B-12 USB Microphone Class-specific Isoc. Audio Data Endpoint Descriptor ..112 Table B-13 USB Microphone Manufacturer String Descriptor.....................................112 Table B-14 USB Microphone Product String Descriptor..............................................113 Table C-1 USB Telephone Device Descriptor ...............................................................115 Table C-2 USB Telephone Configuration Descriptor ...................................................116 Table C-3 USB Telephone Standard AC Interface Descriptor......................................117 Table C-4 USB Telephone Class-specific Interface Descriptor ...................................117 Table C-5 USB Telephone Input Terminal Descriptor (ID1) .........................................118 Table C-6 USB Telephone Input Terminal Descriptor (ID2) .........................................118 Table C-7 USB Telephone Input Terminal Descriptor (ID3) .........................................119 Table C-8 USB Telephone Output Terminal Descriptor (ID4) ......................................119 Table C-9 USB Telephone Output Terminal Descriptor (ID5) ......................................120 Table C-10 USB Telephone Output Terminal Descriptor (ID6) ....................................120 Table C-11 USB Telephone Selector Unit Descriptor (ID7) ..........................................121 Table C-12 USB Telephone Selector Unit Descriptor (ID8) ..........................................121 Table C-13 USB Telephone Selector Unit Descriptor (ID9) ..........................................122 Table C-14 USB Telephone Standard Interface Descriptor (Alt. Set. 0).......................123 Table C-15 USB Telephone Standard AS Interface Descriptor ....................................123 Table C-16 USB Telephone Class-specific AS Interface Descriptor............................123 Table C-17 USB Telephone Type I Format Type Descriptor ........................................124 Table C-18 USB Telephone Standard Endpoint Descriptor.........................................124 Table C-19 USB Telephone Class-specific Isoc. 
Audio Data Endpoint Descriptor ....125 USB Device Class Definition for Audio Devices Release 1.0 March 18, 1998 xii Table C-20 USB Telephone Standard Interface Descriptor (Alt. Set. 0).......................125 Table C-21 USB Telephone Standard AS Interface Descriptor ....................................126 Table C-22 USB Telephone Class-specific AS Interface Descriptor............................126 Table C-23 USB Telephone Type I format type descriptor...........................................127 Table C-24 USB Telephone Standard Endpoint descriptor .........................................127 Table C-25 USB Telephone Class-specific Isoc. Audio Data Endpoint Descriptor ....127 Table C-26 USB Telephone Manufacturer String Descriptor .......................................128 Table C-27 USB Telephone Product String Descriptor ................................................128 Table 5-28 Set Interface Request Values.......................................................................129 Table C-29 Set Selector Unit Control Request Values .................................................129 Table C-30 Get Selector Unit Control Request Values.................................................130 USB Device Class Definition for Audio Devices Release 1.0 March 18, 1998 xiii List of Figures Figure 3-1 Input Terminal Icon ........................................................................................21 Figure 3-2 Output Terminal Icon .....................................................................................22 Figure 3-3 Mixer Unit Icon................................................................................................22 Figure 3-4 Selector Unit Icon ...........................................................................................23 Figure 3-5 Feature Unit Icon ............................................................................................23 Figure 3-6 Up/Down-mix Processing Unit Icon...............................................................24 Figure 3-7 Dolby Prologic Processing Unit Icon ............................................................25 Figure 3-8 3D-Stereo Extender Processing Unit Icon.....................................................25 Figure 3-9 Reverberation Processing Unit Icon..............................................................26 Figure 3-10 Chorus Processing Unit Icon.......................................................................26 Figure 3-11 Dynamic Range Compressor Transfer Characteristic ................................27 Figure 3-12 Dynamic Range Compressor Processing Unit Icon ...................................27 Figure 3-13 Extension Unit Icon ......................................................................................28 Figure B-1 USB Microphone Topology .........................................................................105 Figure B-2 USB Microphone Descriptor Hierarchy.......................................................106 Figure C-1 USB Telephone Topology ...........................................................................114 Figure C-2 USB Telephone Descriptor Hierarchy.........................................................115 USB Device Class Definition for Audio Devices Release 1.0 March 18, 1998 14 1 Introduction 1.1 Scope The Audio Device Class Definition applies to all devices or functions embedded in composite devices that are used to manipulate audio, voice, and sound-related functionality. 
This includes both audio data (analog and digital) and the functionality that is used to directly control the audio environment, such as Volume and Tone Control. The Audio Device Class does not include functionality to operate transport mechanisms that are related to the reproduction of audio data, such as tape transport mechanisms or CD-ROM drive control. Handling of MIDI data streams over the USB is directly related to audio and thus covered in this document.
1.2 Purpose
The purpose of this document is to describe the minimum capabilities and characteristics an audio device must support to comply with the USB. This document also provides recommendations for optional features.
1.3 Related Documents
· Universal Serial Bus Specification, 1.0 final draft revision (also referred to as the USB Specification). In particular, see Section 9, "USB Device Framework."
· Universal Serial Bus Device Class Definition for Audio Data Formats (referred to in this document as USB Audio Data Formats).
· Universal Serial Bus Device Class Definition for Terminal Types (referred to in this document as USB Audio Terminal Types).
· ANSI S1.11-1986 standard.
· MPEG-1 standard ISO/IEC 11172-3:1993.
· MPEG-2 standard ISO/IEC 13818-3, Feb. 20, 1997.
· Digital Audio Compression Standard (AC-3), ATSC A/52, Dec. 20, 1995 (available from http://www.atsc.org).
· ANSI/IEEE-754 floating-point standard.
· ISO/IEC 958 International Standard Digital Audio Interface and Annexes.
· ISO/IEC 1937 standard.
· ITU G.711 standard.
1.4 Terms and Abbreviations
This section defines terms used throughout this document. For additional terms that pertain to the Universal Serial Bus, see Section 2, "Terms and Abbreviations," in the USB Specification.
Audio Channel Cluster: Group of logical audio channels that carry tightly related synchronous audio information. A stereo audio stream is a typical example of a two-channel audio channel cluster.
Audio Control Attribute: Parameter of an Audio Control. Examples are Current, Minimum, Maximum and Resolution attributes of a Volume Control.
Audio Control: Logical object that is used to manipulate a specific audio property. Examples are Volume Control, Mute Control, etc.
Audio data stream: Transport medium that can carry audio information.
Audio Function: Independent part of a USB device that deals with audio-related functionality.
Audio Interface Collection (AIC): Grouping of a single AudioControl interface, zero or more AudioStreaming interfaces and zero or more MIDIStreaming interfaces that together constitute a complete interface to an audio function.
AudioControl interface (ACI): USB interface used to access the Audio Controls inside an audio function.
AudioStreaming interface (ASI): USB interface used to transport audio streams into or out of the audio function.
Entity: Addressable logical object inside an audio function.
Extension Unit (XU): Applies an undefined process to a number of logical input channels.
Feature Unit (FU): Provides basic audio manipulation on the incoming logical audio channels.
FUD: Acronym for Feature Unit Descriptor.
Input Pin: Logical input connection to an Entity. Carries a single audio channel cluster.
Input Terminal (IT): Receptacle for audio information flowing into the audio function.
ITD: Acronym for Input Terminal Descriptor.
Logical Audio Channel: Logical transport medium for a single audio channel. Makes abstraction of the physical properties and formats of the connection.
Is usually identified by spatial location. Examples are Left channel, Right Surround channel, etc.
MIDIStreaming interface (MSI): USB interface used to transport MIDI data streams into or out of the audio function.
Mixer Unit (MU): Mixes a number of logical input channels into a number of logical output channels.
MUD: Acronym for Mixer Unit Descriptor.
OTD: Acronym for Output Terminal Descriptor.
Output Pin: Logical output connection to an Entity. Carries a single audio channel cluster.
Output Terminal (OT): An outlet for audio information flowing out of the audio function.
Processing Unit (PU): Applies a predefined process to a number of logical input channels.
PUD: Acronym for Processing Unit Descriptor.
Selector Unit (SU): Selects from a number of input audio channel clusters.
SUD: Acronym for Selector Unit Descriptor.
https://w.atwiki.jp/usb_audio/pages/57.html
原文:Audio Devices Rev. 2.0 Spec and Adopters Agreement(ZIP) USB Device Class Definition for Audio Devices Release 2.0 May 31, 2006 11 Table A-16 Decoder Type Codes.......................................................................................135 Table A-17 Clock Source Control Selectors....................................................................135 Table A-18 Clock Selector Control Selectors..................................................................136 Table A-19 Clock Multiplier Control Selectors.................................................................136 Table A-20 Terminal Control Selectors............................................................................136 Table A-21 Mixer Control Selectors..................................................................................136 Table A-22 Selector Control Selectors.............................................................................137 Table A-23 Feature Unit Control Selectors......................................................................137 Table A-24 Reverberation Effect Unit Control Selectors................................................138 Table A-25 Reverberation Effect Unit Control Selectors................................................138 Table A-26 Modulation Delay Effect Unit Control Selectors..........................................139 Table A-27 Dynamic Range Compressor Effect Unit Control Selectors.......................139 Table A-28 Up/Down-mix Processing Unit Control Selectors........................................140 Table A-29 Dolby Prologic Processing Unit Control Selectors.....................................140 Table A-30 Stereo Extender Processing Unit Control Selectors...................................141 Table A-31 Extension Unit Control Selectors..................................................................141 Table A-32 AudioStreaming Interface Control Selectors...............................................141 Table A-33 Encoder Control Selectors.............................................................................142 Table A-34 MPEG Decoder Control Selectors.................................................................142 Table A-35 AC-3 Decoder Control Selectors....................................................................143 Table A-36 WMA Decoder Control Selectors...................................................................143 Table A-37 DTS Decoder Control Selectors.....................................................................143 Table A-38 Endpoint Control Selectors............................................................................144 USB Device Class Definition for Audio Devices Release 2.0 May 31, 2006 12 List of Figures Figure 3-1 Audio Function Global View..............................................................................18 Figure 3-2 Inside the Audio Function.................................................................................25 Figure 3-3 Input Terminal Icon............................................................................................28 Figure 3-4 Output Terminal Icon.........................................................................................29 Figure 3-5 Mixer Unit Icon....................................................................................................29 Figure 3-6 Selector Unit Icon...............................................................................................30 Figure 3-7 Feature Unit 
Icon................................................................................................30 Figure 3-8 Sampling Rate Converter Unit Icon..................................................31 Figure 3-9 PEQS Effect Unit Icon........................................................................32 Figure 3-10 Reverberation Effect Unit Icon........................................................32 Figure 3-11 Modulation Delay Effect Unit Icon..................................................33 Figure 3-12 Dynamic Range Compressor Transfer Characteristic.................33 Figure 3-13 Dynamic Range Compressor Effect Unit Icon...............................33 Figure 3-14 Up/Down-mix Processing Unit Icon...............................................34 Figure 3-15 Dolby Prologic Processing Unit Icon.............................................35 Figure 3-16 Stereo Extender Processing Unit Icon...........................................35 Figure 3-17 Extension Unit Icon..........................................................................36 Figure 3-18 Clock Source Icon............................................................................37 Figure 3-19 Clock Selector Icon..........................................................................37 Figure 3-20 Clock Multiplier Icon........................................................................37 Figure 4-1 Mixer internals....................................................................................56
1 Introduction
1.1 Scope
The Audio Device Class Definition applies to all devices or functions embedded in composite devices that are used to manipulate audio, voice, and sound-related functionality. This includes both audio data (analog and digital) and the functionality that is used to directly control the audio environment, such as Volume and Tone Control. The Audio Device Class does not include functionality to operate transport mechanisms that are related to the reproduction of audio data, such as tape transport mechanisms or CD-ROM drive control. Handling of MIDI data streams over the USB is directly related to audio and thus covered in this document.
1.2 Purpose
The purpose of this document is to describe the minimum capabilities and characteristics an audio device must support to comply with the USB. This document also provides recommendations for optional features.
1.3 Related Documents
• Universal Serial Bus Specification, Revision 2.0 (referred to in this document as the USB Specification). In particular, see Chapter 5, "USB Data Flow Model" and Chapter 9, "USB Device Framework."
• Universal Serial Bus Device Class Definition for Audio Data Formats (referred to in this document as USB Audio Data Formats).
• Universal Serial Bus Device Class Definition for Terminal Types (referred to in this document as USB Audio Terminal Types).
• ANSI S1.11-1986 standard.
• MPEG-1 standard ISO/IEC 11172-3:1993.
• MPEG-2 standard ISO/IEC 13818-3, Feb. 20, 1997.
• Digital Audio Compression Standard (AC-3), ATSC A/52A, Aug. 20, 2001 (available from http://www.atsc.org).
• ANSI/IEEE-754 floating-point standard.
• ISO/IEC 60958 International Standard Digital Audio Interface and Annexes.
• ISO/IEC 61937 standard.
1.4 Terms and Abbreviations
This section defines terms used throughout this document. For additional terms that pertain to the Universal Serial Bus, see Chapter 2, "Terms and Abbreviations," in the USB Specification.
Audio Channel Cluster: Group of logical audio channels that carry tightly related synchronous audio information. A stereo audio stream is a typical example of a two-channel audio channel cluster.
Audio Control Attribute: Parameter of an Audio Control. Examples are Current, Minimum, Maximum and Resolution attributes of a Volume Control.
Audio Control: Logical object that is used to manipulate a specific audio property. Examples are Volume Control, Mute Control, etc.
Audio data stream: Transport medium that can carry audio information.
Audio Function: Independent part of a USB device that deals with audio-related functionality.
Audio Interface Collection (AIC): Grouping of a single AudioControl interface, zero or more AudioStreaming interfaces and zero or more MIDIStreaming interfaces that together constitute a complete interface to an audio function.
AudioControl interface (ACI): USB interface used to access the Audio Controls inside an audio function.
AudioStreaming interface (ASI): USB interface used to transport audio streams into or out of the audio function.
Effect Unit (EU): Provides advanced audio manipulation on the incoming logical audio channels.
Entity: Addressable logical object inside an audio function.
Extension Unit (XU): Applies an undefined process to a number of logical input channels.
Feature Unit (FU): Provides basic audio manipulation on the incoming logical audio channels.
FUD: Acronym for Feature Unit Descriptor.
Input Pin: Logical input connection to an Entity. Carries a single audio channel cluster.
Input Terminal (IT): Receptacle for audio information flowing into the audio function.
ITD: Acronym for Input Terminal Descriptor.
Logical Audio Channel: Logical transport medium for a single audio channel. Makes abstraction of the physical properties and formats of the connection. Is usually identified by spatial location. Examples are Left channel, Right Surround channel, etc.
MIDIStreaming interface (MSI): USB interface that may be used to transport MIDI data streams into or out of the audio function.
Mixer Unit (MU): Mixes a number of logical input channels into a number of logical output channels.
MUD: Acronym for Mixer Unit Descriptor.
OTD: Acronym for Output Terminal Descriptor.
Output Pin: Logical output connection to an Entity. Carries a single audio channel cluster.
Output Terminal (OT): An outlet for audio information flowing out of the audio function.
Processing Unit (PU): Applies a predefined process to a number of logical input channels.
PUD: Acronym for Processing Unit Descriptor.
Selector Unit (SU): Selects from a number of input audio channel clusters.
SUD: Acronym for Selector Unit Descriptor.
Terminal: Addressable logical object inside an audio function that represents a connection to the audio function's outside world.
Unit: Addressable logical object inside an audio function that represents a certain audio subfunctionality.
XUD: Acronym for Extension Unit Descriptor.