https://w.atwiki.jp/usb_audio/pages/24.html
Source: Audio Data Formats 1.0 (PDF)

USB Device Class Definition for Audio Data Formats, Release 1.0, March 18, 1998

(tail of a parameter block continued from the previous page)

Offset  Field       Size  Value   Description
0       bLowScale   1     Number  The setting for the attribute of the low-level Scaling Control.
1       bHighScale  1     Number  The setting for the attribute of the high-level Scaling Control.

2.4 Type III Formats

These formats are based upon the IEC1937 standard. The IEC1937 standard describes a method to transfer non-PCM encoded audio bitstreams over an IEC958 digital audio interface, together with the transfer of the accompanying "Channel Status" and "User Data." The IEC958 standard specifies a widely used method of interconnecting digital audio equipment with two-channel linear PCM audio. The IEC1937 standard describes a way in which the IEC958 interface shall be used to convey non-PCM encoded audio bitstreams for consumer applications. The same basic techniques used in IEC1937 are reused here to convey non-PCM encoded audio bitstreams over a Type III formatted audio stream.

2.4.1 Type III Format Type Descriptor

The Type III Format Type is identical to the Type I PCM Format Type, set up for two-channel 16-bit PCM data. It therefore uses two audio subframes per audio frame. The subframe size is two bytes and the bit resolution is 16 bits. The Type III Format Type descriptor is identical to the Type I Format Type descriptor but with the bNrChannels field set to two, the bSubframeSize field set to two, and the bBitResolution field set to 16. All the techniques used to correctly transport Type I PCM formatted streams over USB apply equally to Type III formatted streams. The non-PCM encoded audio bitstreams that are transferred within the basic 16-bit data area of the IEC1937 subframes (time-slots 12 [LSB] to 27 [MSB]) are placed unaltered in the two available 16-bit audio subframes per audio frame of the Type III formatted USB stream. The additional information in the IEC1937 subframes (channel status, user bits, etc.) is discarded.
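The repackaging rule above — take the 16-bit payload occupying time-slots 12 (LSB) through 27 (MSB) of each IEC1937 subframe and place it unaltered into a 16-bit USB audio subframe, discarding channel status and user bits — can be sketched as follows. Modeling a subframe as a 32-bit word in which bit n corresponds to time-slot n, and the helper names, are illustrative assumptions, not part of the specification.

```c
#include <stdint.h>

/* Illustrative model: an IEC958/IEC1937 subframe as a 32-bit word where
 * bit n corresponds to time-slot n. The 16-bit non-PCM payload then
 * occupies bits 12 (LSB) through 27 (MSB). */
static inline uint16_t iec1937_payload(uint32_t subframe)
{
    return (uint16_t)((subframe >> 12) & 0xFFFFu);
}

/* Repackage one audio frame (a pair of subframes) into the two 16-bit
 * audio subframes of the Type III USB stream; all other subframe
 * information (channel status, user bits) is simply dropped. */
static inline void type3_pack_frame(uint32_t sub_a, uint32_t sub_b,
                                    uint16_t usb_frame[2])
{
    usb_frame[0] = iec1937_payload(sub_a);
    usb_frame[1] = iec1937_payload(sub_b);
}
```

The 16-bit payload passes through unchanged, which is why all Type I transport techniques apply to Type III streams as well.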
Refer to the IEC1937 standard for a detailed description of the exact contents of the subframes. The layout of the Type III Format Type descriptor is given here for clarity. All preassigned fields have been filled in.

Table 2-23: Type III Format Type Descriptor

Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 8+(ns*3)
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  FORMAT_TYPE_III. Constant identifying the Format Type the AudioStreaming interface is using.
4       bNrChannels         1     Number    Indicates the number of 'virtual' physical channels in the audio data stream. Must be set to two.
5       bSubframeSize       1     Number    The number of bytes occupied by one audio subframe. Must be set to 2.
6       bBitResolution      1     Number    The number of effectively used bits from the available bits in an audio subframe.
7       bSamFreqType        1     Number    Indicates how the sampling frequency can be programmed. 0: Continuous sampling frequency. 1..255: The number of discrete sampling frequencies supported by the isochronous data endpoint of the AudioStreaming interface (ns).
8...                                        See sampling frequency tables, below.

Depending on the value in the bSamFreqType field, the layout of the next part of the descriptor is as shown in the following tables.

Table 2-24: Continuous Sampling Frequency

Offset  Field          Size  Value   Description
8       tLowerSamFreq  3     Number  Lower bound in Hz of the sampling frequency range for this isochronous data endpoint.
11      tUpperSamFreq  3     Number  Upper bound in Hz of the sampling frequency range for this isochronous data endpoint.

Table 2-25: Discrete Number of Sampling Frequencies

Offset  Field        Size  Value   Description
8       tSamFreq[1]  3     Number  Sampling frequency 1 in Hz for this isochronous data endpoint.
…           …             …     …       …
8+(ns-1)*3  tSamFreq[ns]  3     Number  Sampling frequency ns in Hz for this isochronous data endpoint.

Note: In the case of adaptive isochronous data endpoints that support only a discrete number of sampling frequencies, the endpoint must at least tolerate ±1000 PPM inaccuracy on the reported sampling frequencies.

3 Adding New Audio Data Formats

Adding new Audio Data Formats to this specification is achieved by proposing a fully documented Audio Data Format to the Audio Device Class Working Group. Upon acceptance, the group will register the new Audio Data Format (assigning it a unique wFormatTag) and update this document accordingly. This process will also guarantee that new releases of generic USB audio drivers will support the newly registered Audio Data Formats. It is always possible to use vendor-specific definitions if the above procedure is considered unsatisfactory.

Appendix A.
Additional Audio Device Class Codes

A.1 Audio Data Format Codes

A.1.1 Audio Data Format Type I Codes

Table A-1: Audio Data Format Type I Codes

Name              wFormatTag
TYPE_I_UNDEFINED  0x0000
PCM               0x0001
PCM8              0x0002
IEEE_FLOAT        0x0003
ALAW              0x0004
MULAW             0x0005

A.1.2 Audio Data Format Type II Codes

Table A-2: Audio Data Format Type II Codes

Name               wFormatTag
TYPE_II_UNDEFINED  0x1000
MPEG               0x1001
AC-3               0x1002

A.1.3 Audio Data Format Type III Codes

Table A-3: Audio Data Format Type III Codes

Name                                             wFormatTag
TYPE_III_UNDEFINED                               0x2000
IEC1937_AC-3                                     0x2001
IEC1937_MPEG-1_Layer1                            0x2002
IEC1937_MPEG-1_Layer2/3 or IEC1937_MPEG-2_NOEXT  0x2003
IEC1937_MPEG-2_EXT                               0x2004
IEC1937_MPEG-2_Layer1_LS                         0x2005
IEC1937_MPEG-2_Layer2/3_LS                       0x2006

A.2 Format Type Codes

Table A-4: Format Type Codes

Format Type Code       Value
FORMAT_TYPE_UNDEFINED  0x00
FORMAT_TYPE_I          0x01
FORMAT_TYPE_II         0x02
FORMAT_TYPE_III        0x03

A.3 Format-Specific Control Selectors

A.3.1 MPEG Control Selectors

Table A-5: MPEG Control Selectors

Control Selector          Value
MPEG_CONTROL_UNDEFINED    0x00
MP_DUAL_CHANNEL_CONTROL   0x01
MP_SECOND_STEREO_CONTROL  0x02
MP_MULTILINGUAL_CONTROL   0x03
MP_DYN_RANGE_CONTROL      0x04
MP_SCALING_CONTROL        0x05
MP_HILO_SCALING_CONTROL   0x06

A.3.2 AC-3 Control Selectors

Table A-6: AC-3 Control Selectors

Control Selector      Value
AC_CONTROL_UNDEFINED  0x00
AC_MODE_CONTROL       0x01
AC_DYN_RANGE_CONTROL  0x02
AC_SCALING_CONTROL    0x03
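For reference, the wFormatTag assignments in Tables A-1 through A-3 can be captured as C constants. The values come directly from the tables; the macro spellings are adapted to legal C identifiers (hyphens and slashes removed), which is the only liberty taken here.

```c
/* wFormatTag values, Type I (Table A-1) */
#define FMT_TYPE_I_UNDEFINED        0x0000
#define FMT_PCM                     0x0001
#define FMT_PCM8                    0x0002
#define FMT_IEEE_FLOAT              0x0003
#define FMT_ALAW                    0x0004
#define FMT_MULAW                   0x0005

/* wFormatTag values, Type II (Table A-2) */
#define FMT_TYPE_II_UNDEFINED       0x1000
#define FMT_MPEG                    0x1001
#define FMT_AC3                     0x1002  /* "AC-3" in Table A-2 */

/* wFormatTag values, Type III (Table A-3): IEC1937-carried streams */
#define FMT_TYPE_III_UNDEFINED      0x2000
#define FMT_IEC1937_AC3             0x2001
#define FMT_IEC1937_MPEG1_LAYER1    0x2002
#define FMT_IEC1937_MPEG1_LAYER23   0x2003  /* also MPEG-2 NOEXT */
#define FMT_IEC1937_MPEG2_EXT       0x2004
#define FMT_IEC1937_MPEG2_L1_LS     0x2005
#define FMT_IEC1937_MPEG2_L23_LS    0x2006
```

Note how the high nibble encodes the format type family (0x0xxx Type I, 0x1xxx Type II, 0x2xxx Type III).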
https://w.atwiki.jp/usb_audio/pages/22.html
Source: Audio Data Formats 1.0 (PDF)

USB Device Class Definition for Audio Data Formats, Release 1.0, March 18, 1998

(Continuous Sampling Frequency table, continued from the previous page)

Offset  Field          Size  Value   Description
8       tLowerSamFreq  3     Number  Lower bound in Hz of the sampling frequency range for this isochronous data endpoint.
11      tUpperSamFreq  3     Number  Upper bound in Hz of the sampling frequency range for this isochronous data endpoint.

Table 2-3: Discrete Number of Sampling Frequencies

Offset      Field         Size  Value   Description
8           tSamFreq[1]   3     Number  Sampling frequency 1 in Hz for this isochronous data endpoint.
…           …             …     …       …
8+(ns-1)*3  tSamFreq[ns]  3     Number  Sampling frequency ns in Hz for this isochronous data endpoint.

Note: In the case of adaptive isochronous data endpoints that support only a discrete number of sampling frequencies, the endpoint must at least tolerate ±1000 PPM inaccuracy on the reported sampling frequencies.

2.2.6 Supported Formats

The following paragraphs list all currently supported Type I Audio Data Formats.

2.2.6.1 PCM Format

The PCM (Pulse Coded Modulation) format is the most commonly used audio format to represent audio data streams. The audio data is not compressed and uses a signed two's-complement fixed-point format. It is left-justified (the sign bit is the Msb) and data is padded with trailing zeros to fill the remaining unused bits of the subframe. The binary point is located to the right of the sign bit so that all values lie within the range [-1, +1).

2.2.6.2 PCM8 Format

The PCM8 format is introduced to be compatible with the legacy 8-bit wave format. Audio data is uncompressed and uses 8 bits per sample (bBitResolution = 8). In this case, data is unsigned fixed-point, left-justified in the audio subframe, Msb first. The range is [0, 255].

2.2.6.3 IEEE_FLOAT Format

The IEEE_FLOAT format is based on the ANSI/IEEE-754 floating-point standard. Audio data is represented using the basic single-precision format. The basic single-precision number is 32 bits wide and has an 8-bit exponent and a 24-bit mantissa.
Both mantissa and exponent are signed numbers, but neither is represented in two's-complement format. The mantissa is stored in sign-magnitude format and the exponent in biased form (also called excess-n form). In biased form, there is a positive integer (called the bias) which is subtracted from the stored number to get the actual number. For example, in an eight-bit exponent, the bias is 127. To represent 0, the number 127 is stored. To represent -100, 27 is stored. An exponent of all zeroes and an exponent of all ones are both reserved for special cases, so in an eight-bit field, exponents of -126 to +127 are possible. In the basic floating-point format, the mantissa is assumed to be normalized so that the most significant bit is always one, and therefore is not stored. Only the fractional part is stored. The 32-bit IEEE-754 floating-point word is broken into three fields. The most significant bit stores the sign of the mantissa, the next group of 8 bits stores the exponent in biased form, and the remaining 23 bits store the magnitude of the fractional portion of the mantissa. For further information, refer to the ANSI/IEEE-754 standard. The data is conveyed over USB using 32 bits per sample (bBitResolution = 32; bSubframeSize = 4).

2.2.6.4 ALaw Format and µLaw Format

Starting from 12- or 16-bit linear PCM samples, simple compression down to 8 bits per sample (one byte per sample) can be achieved by using logarithmic companding. The compressed audio data uses 8 bits per sample (bBitResolution = 8). Data is signed fixed-point, left-justified in the subframe, Msb first. The compressed range is [-128, +127]. The difference between ALaw and µLaw compression lies in the formulae used to achieve the compression. Refer to the ITU G.711 standard for further details.
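The sign/exponent/fraction layout walked through above can be made concrete with a small decomposition helper. This is an illustrative sketch; the struct and function names are not from the specification.

```c
#include <stdint.h>
#include <string.h>

/* The three fields of an IEEE-754 single-precision word, as described
 * in Section 2.2.6.3. */
struct ieee754_fields {
    uint32_t sign;      /* bit 31: sign of the mantissa                  */
    int32_t  exponent;  /* bits 30..23, with the bias of 127 removed     */
    uint32_t fraction;  /* bits 22..0: fractional part of the mantissa
                           (the leading 1 is implicit and not stored)    */
};

static struct ieee754_fields decompose(float f)
{
    uint32_t w;
    struct ieee754_fields out;
    memcpy(&w, &f, sizeof w);   /* reinterpret the 32-bit word safely */
    out.sign     = w >> 31;
    out.exponent = (int32_t)((w >> 23) & 0xFF) - 127;  /* biased form */
    out.fraction = w & 0x7FFFFF;
    return out;
}
```

For 1.0f the stored word is 0x3F800000: sign 0, stored exponent 127 (actual exponent 0), fraction 0 — matching the biased-form example in the text.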
2.3 Type II Formats

Type II formats are used to transmit non-PCM encoded audio data in bitstreams that consist of a sequence of encoded audio frames.

2.3.1 Encoded Audio Frames

An encoded audio frame is a sequence of bits that contains an encoded representation of one or more physical audio channels. The encoding takes place over a fixed number of audio samples. Each encoded audio frame contains enough information to entirely reconstruct (albeit not losslessly) the audio samples encoded in the encoded audio frame. No information from adjacent encoded audio frames is needed during decoding. The number of samples used to construct one encoded audio frame depends on the encoding scheme. (For MPEG, the number of samples per encoded audio frame (nf) is 384 for Layer I or 1152 for Layer II. For AC-3, the number of samples is 1536.) In most cases, the encoded audio frame represents multiple physical audio channels. The number of bits per encoded audio frame may be variable. The content of the encoded audio frame is defined according to the implemented encoding scheme. Where applicable, the bit ordering shall be MSB first, relative to existing standards of serial transmission or storage of that encoding scheme. An encoded audio frame represents an interval longer than the USB frame time of 1 ms. This is typical of audio compression algorithms that use psycho-acoustic or vocal-tract parametric models.

Note: It is important to make a clear distinction between an audio frame (see Section 2.2.3, "Audio Frame") and an encoded audio frame. The overloaded use of the term audio frame could cause confusion. Therefore, this specification will always use the qualifier 'encoded' to refer to MPEG or AC-3 encoded audio frames.

2.3.2 Audio Bitstreams

An encoded audio bitstream is a concatenation of a potentially very large number of encoded audio frames, ordered according to ascending time. Subsequent encoded audio frames are independent and can be decoded separately.
2.3.3 USB Packets

Encoded audio bitstreams are packetized when transported over an isochronous pipe. Each USB packet contains only part of a single encoded audio frame. Packet sizes are determined according to the short-packet protocol. The encoded audio frame is broken down into a number of packets, each containing wMaxPacketSize bytes, except for the last packet, which may be smaller and contains the remainder of the encoded audio frame. If the MaxPacketsOnly bit D7 in the bmAttributes field of the class-specific endpoint descriptor is set, the last (short) packet must be padded with zero bytes to wMaxPacketSize length. No USB packet may contain bits belonging to different encoded audio frames. If the encoded audio frame length is not a multiple of 8 bits, the last byte in the last packet is padded with zero bits. The decoder must ignore all padded extra bits and bytes. Consecutive encoded audio frames are separated by at least one Transfer Delimiter. A Transfer Delimiter must be sent in all consecutive USB frames until the next encoded audio frame is due. The above rules guarantee that a new encoded audio frame always starts on a USB packet boundary.

2.3.4 Bandwidth Allocation

The encoded audio frame time tf equals the number of audio samples per encoded audio frame nf divided by the sampling rate fs of the original audio samples:

    tf = nf / fs

The allocated bandwidth for the pipe must accommodate the largest possible encoded audio frame to be transmitted within an encoded audio frame time. This should take into account the Transfer Delimiter requirement and any differences between the time base of the stream and the USB frame timer. The device may choose to consume more bandwidth than necessary (by increasing the reported wMaxPacketSize) to minimize the time needed to transmit an entire encoded audio frame. This can be used to enable early decoding and therefore minimize system latency.
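The frame-time relation tf = nf / fs can be checked numerically with the sample counts given in Section 2.3.1 (MPEG Layer I: 384, Layer II: 1152, AC-3: 1536). The packet-size helper below is an illustrative lower bound only, not a normative sizing rule from the specification.

```c
/* Encoded audio frame time in microseconds: tf = nf / fs.
 * nf: samples per encoded audio frame; fs: sampling rate in Hz. */
static long frame_time_us(long nf, long fs)
{
    return (nf * 1000000L) / fs;
}

/* Illustrative lower bound on wMaxPacketSize: the largest encoded audio
 * frame (in bytes) must fit into the whole 1 ms USB frames available
 * within one encoded audio frame time. Transfer Delimiters and clock
 * differences, which the text says must also be accounted for, are
 * deliberately ignored in this sketch. */
static long min_packet_bytes(long max_frame_bytes, long nf, long fs)
{
    long usb_frames = frame_time_us(nf, fs) / 1000;  /* 1 ms per USB frame */
    return (max_frame_bytes + usb_frames - 1) / usb_frames;  /* ceiling */
}
```

At 48 kHz this gives 8 ms per Layer I frame, 24 ms per Layer II frame, and 32 ms per AC-3 frame — all longer than the 1 ms USB frame time, as Section 2.3.1 notes.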
2.3.5 Timing

The timing reference point is the beginning of an encoded audio frame. Therefore, the USB packet that contains the first bits (usually the encoded audio frame sync word) of the encoded audio frame is used as a timing reference in USB space. This USB packet is called the reference packet. The transmission of the reference packet of an encoded audio frame should begin at the target playback time of that frame (minus the endpoint's reported delay) rounded to the nearest USB frame time. This guarantees that, at the receiving end, the arrival of subsequent reference packets matches the encoded audio frame time tf as closely as possible.

2.3.6 Type II Format Type Descriptor

The Type II Format Type descriptor starts with the usual three fields bLength, bDescriptorType and bDescriptorSubtype. The bFormatType field indicates this is a Type II descriptor. The wMaxBitRate field contains the maximum number of bits per second this interface can handle. It is a measure of the buffer size available in the interface. The wSamplesPerFrame field contains the number of non-PCM encoded audio samples contained within a single encoded audio frame. The sampling frequency capabilities of the endpoint are reported using the bSamFreqType field and following fields.

Table 2-4: Type II Format Type Descriptor

Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 9+(ns*3)
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  FORMAT_TYPE_II. Constant identifying the Format Type the AudioStreaming interface is using.
4       wMaxBitRate         2     Number    Indicates the maximum number of bits per second this interface can handle. Expressed in kbits/s.
6       wSamplesPerFrame    2     Number    Indicates the number of PCM audio samples contained in one encoded audio frame.
(Table 2-4, continued)
8       bSamFreqType        1     Number    Indicates how the sampling frequency can be programmed. 0: Continuous sampling frequency. 1..255: The number of discrete sampling frequencies supported by the isochronous data endpoint of the AudioStreaming interface (ns).
9...                                        See sampling frequency tables, below.

Depending on the value in the bSamFreqType field, the layout of the next part of the descriptor is as shown in the following tables.

Table 2-5: Continuous Sampling Frequency

Offset  Field          Size  Value   Description
9       tLowerSamFreq  3     Number  Lower bound in Hz of the sampling frequency range for this isochronous data endpoint.
12      tUpperSamFreq  3     Number  Upper bound in Hz of the sampling frequency range for this isochronous data endpoint.

Table 2-6: Discrete Number of Sampling Frequencies

Offset      Field         Size  Value   Description
9           tSamFreq[1]   3     Number  Sampling frequency 1 in Hz for this isochronous data endpoint.
…           …             …     …       …
9+(ns-1)*3  tSamFreq[ns]  3     Number  Sampling frequency ns in Hz for this isochronous data endpoint.

Note: In the case of adaptive isochronous data endpoints that support only a discrete number of sampling frequencies, the endpoint must at least tolerate ±1000 PPM inaccuracy on the reported sampling frequencies.

2.3.7 Rate Feedback

If the isochronous data endpoint needs explicit rate feedback (adaptive source, asynchronous sink), the feedback pipe shall report the number of equivalent PCM audio samples. The host will accumulate this data and start transmission of an encoded audio frame whenever the current number of samples exceeds the number of samples per encoded audio frame. The remainder is kept in the accumulator.

2.3.8 Supported Formats

The following sections list all currently supported Type II Audio Data Formats. Format-specific descriptors and format-specific requests are explained in more detail.
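The feedback accumulation described in Section 2.3.7 — accumulate the reported equivalent PCM sample counts, start the next encoded audio frame whenever the total reaches the samples-per-frame count, and carry the remainder — can be sketched like this (the struct and function names are illustrative):

```c
#include <stdint.h>
#include <stdbool.h>

/* Host-side accumulator for explicit rate feedback (Section 2.3.7). */
struct rate_accumulator {
    uint32_t samples;            /* accumulated PCM-equivalent samples */
    uint32_t samples_per_frame;  /* wSamplesPerFrame for this format   */
};

/* Feed one feedback report. Returns true when transmission of the next
 * encoded audio frame should start; the remainder stays accumulated. */
static bool feedback_report(struct rate_accumulator *acc, uint32_t reported)
{
    acc->samples += reported;
    if (acc->samples >= acc->samples_per_frame) {
        acc->samples -= acc->samples_per_frame;
        return true;
    }
    return false;
}
```

For an MPEG Layer II stream (1152 samples per encoded audio frame), three reports of 500 samples trigger one frame and leave 348 samples in the accumulator.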
2.3.8.1 MPEG Format

In the current specification, only MPEG decoding aspects are considered. Real-time MPEG encoding peripherals are not (yet) available and consequently are not covered by this specification.

2.3.8.1.1 MPEG Format-Specific Descriptor

The wFormatTag field is a duplicate of the wFormatTag field in the class-specific AudioStreaming interface descriptor. The same field is used here to identify the format-specific descriptor.

The bmMPEGCapabilities bitmap field describes the capabilities of the MPEG decoder built into the AudioStreaming interface. Some general information must be retrieved from the Format Type-specific descriptor. For instance, the sampling frequencies supported by the decoder are reported through the Format Type-specific descriptor. This includes the ability of the decoder to handle low sampling frequencies (16 kHz, 22.05 kHz and 24 kHz) besides the standard 32 kHz, 44.1 kHz and 48 kHz sampling frequencies.

Bits D2..0 of the bmMPEGCapabilities field are used to indicate which layers this decoder is capable of processing. The different layers relate to the different algorithms that are used during encoding and decoding.

Bit D3 indicates that the decoder can only process the MPEG-1 base stream. Therefore, only Left and Right channels will be output.

Bit D4 indicates that the decoder can handle MPEG-2 streams that contain two independent stereo pairs instead of the normal 3/2 encoding scheme. This bit is only applicable for MPEG-2 decoders.

Bit D5 indicates that the decoder supports the MPEG dual channel mode. In this case, the MPEG-1 base stream does not contain Left and Right channels of a stereo pair but instead contains two independent mono channels. One of these channels can be selected through the proper request (Dual Channel Control) and reproduced over the Left and Right output channels simultaneously.

Bit D6 indicates that the decoder supports the DVD MPEG-2 augmentation to 7.1 channels instead of the standard 5.1 channels.
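The bmMPEGCapabilities bit positions described above can be written out as masks. Only the bit positions and meanings are taken from the text; the macro names, and the assignment of Layers I, II, and III to bits D0, D1, and D2 respectively, are assumptions for illustration (the text only says D2..0 indicate the supported layers).

```c
/* bmMPEGCapabilities bits, positions per Section 2.3.8.1.1. */
#define MPEG_CAP_LAYER_MASK        0x0007  /* D2..0: supported layers      */
#define MPEG_CAP_LAYER_I           0x0001  /* assumed: D0 = Layer I        */
#define MPEG_CAP_LAYER_II          0x0002  /* assumed: D1 = Layer II       */
#define MPEG_CAP_LAYER_III         0x0004  /* assumed: D2 = Layer III      */
#define MPEG_CAP_MPEG1_ONLY        0x0008  /* D3: MPEG-1 base stream only  */
#define MPEG_CAP_MPEG2_2ND_STEREO  0x0010  /* D4: two independent stereo
                                              pairs (MPEG-2 decoders only) */
#define MPEG_CAP_DUAL_CHANNEL      0x0020  /* D5: MPEG dual channel mode   */
#define MPEG_CAP_MPEG2_7_1         0x0040  /* D6: DVD MPEG-2 augmentation
                                              to 7.1 channels              */
```

A driver would test these masks against the descriptor byte, e.g. `(caps & MPEG_CAP_DUAL_CHANNEL)` before exposing the Dual Channel Control.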
https://w.atwiki.jp/usb_audio/pages/64.html
Source: Audio Devices Rev. 2.0 Spec and Adopters Agreement (ZIP)

Universal Serial Bus Device Class Definition for Audio Data Formats, Release 2.0, May 31, 2006

(continued from the previous page:) ...Setting and encoded data streams (IEC61937) in another Alternate Setting of the interface. Note however that the external connection could also be vendor specific (like a parallel data interface).

2.3.4.1 Type IV Format Type Descriptor

The bFormatType field indicates this is a Type IV descriptor.

Table 2-5: Type IV Format Type Descriptor

Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 4
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  FORMAT_TYPE_IV. Constant identifying the Format Type the AudioStreaming interface is using.

2.3.4.2 Type IV Supported Formats

This specification supports all Audio Data Formats on an external connection that are defined on a USB pipe (Type I, II, and III). See Section 2.3.1.7, "Type I Supported Formats", Section 2.3.2.8, "Type II Supported Formats", and Section 2.3.3.2, "Type III Supported Formats". The bit allocations in the bmFormats field of the class-specific AS interface descriptor for the different Type IV Audio Data Formats can be found in Appendix A.2.4, "Audio Data Format Type IV Bit Allocations."

2.4 Extended Audio Data Formats

Extended Audio Data Formats add support for a Packet Header to the previously defined Simple Audio Data Formats Type I, II, and III. For the Extended Audio Data Format Type I, an additional optional synchronous Control Channel is defined.

2.4.1 Extended Type I Formats

Extended Audio Data Format Type I adds support for both a Packet Header and a synchronous Control Channel to the Simple Type I Format definition. All three elements (Packet Header, audio data, and Control Channel) of an Extended Type I packet are optional. The Extended Format Type I descriptor (see further) indicates which elements are present.
It is therefore possible to provide only a Control Channel, without Packet Header or audio data. The following figure further illustrates the concept.

[Figure 2-3: Extended Type I Format]

Each Virtual Frame Packet (VFP) can start with an optional Packet Header. If Packet Headers are used, they must be present in every VFP. The length of the Packet Header must be the same for every VFP. The Packet Header is then followed by a number of Extended Audio Slots. An Extended Audio Slot is the concatenation of a Control Word, followed by the Type I Audio Slot. The Control Channel therefore consists of a stream of Control Words, where each Control Word is synchronous to its associated Audio Slot. There are as many Control Channel Words per VFP as there are Audio Slots in the VFP. The byte size of the Control Words is independent of the Audio Subslot size and is the same for each Audio Slot.

2.4.1.1 Extended Type I Format Type Descriptor

The first part of the Extended Type I Format Type descriptor is identical to the Simple Type I Format Type descriptor (see Section 2.3.1.6, "Type I Format Type Descriptor"). Three additional fields are added to describe the Packet Header and the Control Channel. The bHeaderLength field indicates the number of bytes contained in the Packet Header. The bControlSize field indicates the size in bytes of each Control Channel Word in the stream. The bSideBandProtocol field contains a constant identifying the Side Band Protocol that is used for the Packet Header and Control Channel. This specification defines a number of Side Band Protocols (see Section 2.4.4, "Side Band Protocols"). If the Packet Header is not used, then the bHeaderLength field must be set to 0. Likewise, if the Control Channel is not implemented, then the bControlSize field must be set to 0.
If the stream does not contain actual audio data, the bNrChannels, bmChannelConfig and iChannelNames fields in the class-specific AS Interface descriptor (see the USB Audio Device Class document) must all be set to 0.

Table 2-6: Extended Type I Format Type Descriptor

Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 9
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  EXT_FORMAT_TYPE_I. Constant identifying the Format Type the AudioStreaming interface is using.
4       bSubslotSize        1     Number    The number of bytes occupied by one audio subslot. Can be 1, 2, 3 or 4.
5       bBitResolution      1     Number    The number of effectively used bits from the available bits in an audio subslot.
6       bHeaderLength       1     Number    Size of the Packet Header, in bytes.
7       bControlSize        1     Number    Size of the Control Channel Words, in bytes.
8       bSideBandProtocol   1     Constant  Constant identifying the Side Band Protocol used for the Packet Header and Control Channel content.

2.4.2 Extended Type II Formats

Extended Audio Data Format Type II adds support for a Packet Header to the Simple Type II Format definition. The elements (Packet Header and audio data) of an Extended Type II packet are optional. The Extended Format Type II descriptor (see further) indicates which elements are present. It is therefore possible to provide only a Packet Header without audio data. The following figure further illustrates the concept.

[Figure 2-4: Extended Type II Format]

Each Virtual Frame Packet (VFP) can start with an optional Packet Header. If Packet Headers are used, they must be present in every VFP. The length of the Packet Header must be the same for every VFP. The Packet Header is then followed by the actual encoded audio frame data.
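The 9-byte Extended Type I Format Type descriptor laid out in Table 2-6 above maps directly onto a packed C struct. The struct name is illustrative; the field names and sizes follow the table. Packing matters because descriptors are defined as raw byte sequences.

```c
#include <stdint.h>

/* Byte-for-byte layout of the Extended Type I Format Type descriptor
 * (Table 2-6). All fields are single bytes, so the total size is 9. */
struct __attribute__((packed)) ext_type1_format_desc {
    uint8_t bLength;            /* 9                                  */
    uint8_t bDescriptorType;    /* CS_INTERFACE                       */
    uint8_t bDescriptorSubtype; /* FORMAT_TYPE                        */
    uint8_t bFormatType;        /* EXT_FORMAT_TYPE_I                  */
    uint8_t bSubslotSize;       /* 1, 2, 3 or 4                       */
    uint8_t bBitResolution;     /* effectively used bits per subslot  */
    uint8_t bHeaderLength;      /* 0 if no Packet Header              */
    uint8_t bControlSize;       /* 0 if no Control Channel            */
    uint8_t bSideBandProtocol;  /* Side Band Protocol constant        */
};
```

Per the rules above, a stream carrying only audio data (no Packet Header, no Control Channel) would report bHeaderLength = 0 and bControlSize = 0.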
2.4.2.1 Extended Type II Format Type Descriptor

The first part of the Extended Type II Format Type descriptor is identical to the Simple Type II Format Type descriptor (see Section 2.3.2.6, "Type II Format Type Descriptor"). Two additional fields are added to describe the Packet Header. The bHeaderLength field indicates the number of bytes contained in the Packet Header. The bSideBandProtocol field contains a constant identifying the Side Band Protocol that is used for the Packet Header. This specification defines a number of Side Band Protocols (see Section 2.4.4, "Side Band Protocols"). If the Packet Header is not used, then the bHeaderLength field must be set to 0. If the stream does not contain actual audio data, the bNrChannels, bmChannelConfig and iChannelNames fields in the class-specific AS Interface descriptor (see the USB Audio Device Class document) must all be set to 0.

Table 2-7: Extended Type II Format Type Descriptor

Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 10
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  EXT_FORMAT_TYPE_II. Constant identifying the Format Type the AudioStreaming interface is using.
4       wMaxBitRate         2     Number    Indicates the maximum number of bits per second this interface can handle. Expressed in kbits/s.
6       wSamplesPerFrame    2     Number    Indicates the number of PCM audio samples contained in one encoded audio frame.
8       bHeaderLength       1     Number    Size of the Packet Header, in bytes.
9       bSideBandProtocol   1     Constant  Constant identifying the Side Band Protocol used for the Packet Header content.

2.4.3 Extended Type III Formats

Extended Audio Data Format Type III adds support for a Packet Header to the Simple Type III Format definition.
The elements (Packet Header and audio data) of an Extended Type III packet are optional. The Extended Format Type III descriptor (see further) indicates which elements are present. It is therefore possible to provide only a Packet Header without audio data. The following figure further illustrates the concept.

[Figure 2-5: Extended Type III Format]

Each Virtual Frame Packet (VFP) can start with an optional Packet Header. If Packet Headers are used, they must be present in every VFP. The length of the Packet Header must be the same for every VFP. The Packet Header is then followed by the actual encoded audio frame data.

2.4.3.1 Extended Type III Format Type Descriptor

The first part of the Extended Type III Format Type descriptor is identical to the Simple Type III Format Type descriptor (see Section 2.3.3.1, "Type III Format Type Descriptor"). Two additional fields are added to describe the Packet Header. The bHeaderLength field indicates the number of bytes contained in the Packet Header. The bSideBandProtocol field contains a constant identifying the Side Band Protocol that is used for the Packet Header. This specification defines a number of Side Band Protocols (see Section 2.4.4, "Side Band Protocols"). If the Packet Header is not used, then the bHeaderLength field must be set to 0. If the stream does not contain actual audio data, the bNrChannels, bmChannelConfig and iChannelNames fields in the class-specific AS Interface descriptor (see the USB Audio Device Class document) must all be set to 0.

Table 2-8: Extended Type III Format Type Descriptor

Offset  Field               Size  Value     Description
0       bLength             1     Number    Size of this descriptor, in bytes: 8
1       bDescriptorType     1     Constant  CS_INTERFACE descriptor type.
2       bDescriptorSubtype  1     Constant  FORMAT_TYPE descriptor subtype.
3       bFormatType         1     Constant  EXT_FORMAT_TYPE_III.
Constant identifying the Format Type the AudioStreaming interface is using.
4       bSubslotSize        1     Number    The number of bytes occupied by one audio subslot. Must be set to two.
5       bBitResolution      1     Number    The number of effectively used bits from the available bits in an audio subslot.
6       bHeaderLength       1     Number    Size of the Packet Header, in bytes.
7       bSideBandProtocol   1     Constant  Constant identifying the Side Band Protocol used for the Packet Header content.

2.4.4 Side Band Protocols

This specification currently defines a single Side Band Protocol. Additional Protocols can be added later if needed.

2.4.4.1 Presentation Timestamp Side Band Protocol

The Presentation Timestamp protocol only uses the Packet Header to convey high-resolution time information over the isochronous pipe. The Packet Header is 12 bytes in size. It must occur at the start of each VFP. Bit D0 in the bmFlags field indicates whether this is a valid timestamp (D0 = 0b1) or a repeated or non-valid timestamp (D0 = 0b0). When D0 is set to zero, the time fields of the Packet Header must be ignored. The qNanoSeconds field indicates the time T at which the first sample in the VFP needs to be rendered with respect to the start of the stream (T = 0). The qNanoSeconds field can range from 0 to 2^63-1 ns (bit 63 is considered to be a sign bit and must be set to zero). It is up to the entity that generates the timestamp to decide to which accuracy the timestamp will be rendered.

Table 2-9: Hi-Res Presentation TimeStamp Layout

Offset  Field    Size  Value   Description
0       bmFlags  4     Bitmap  D30..0: Reserved. Must be set to 0. D31: Valid.
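The 12-byte Presentation Timestamp Packet Header (a 4-byte bmFlags bitmap followed by the 8-byte qNanoSeconds field) can be modeled as a packed struct. Names are illustrative; note that the prose in Section 2.4.4.1 places the Valid flag at bit D0, and the mask below follows the prose — treat that bit position as an assumption.

```c
#include <stdint.h>
#include <stdbool.h>

/* 12-byte Packet Header of the Presentation Timestamp Side Band
 * Protocol. Field names follow Table 2-9; the struct is a sketch. */
struct __attribute__((packed)) presentation_timestamp {
    uint32_t bmFlags;       /* bitmap; Valid flag assumed at D0 per
                               the prose in Section 2.4.4.1          */
    uint64_t qNanoSeconds;  /* render time T of the first sample in
                               the VFP, in ns, relative to stream
                               start; 0..2^63-1 (bit 63 must be 0)   */
};

#define PTS_FLAG_VALID 0x00000001u  /* assumed position: D0 */

/* The time fields must be ignored when the Valid flag is not set. */
static bool pts_is_valid(const struct presentation_timestamp *h)
{
    return (h->bmFlags & PTS_FLAG_VALID) != 0;
}
```

The packed attribute is essential here: without it, most ABIs would pad the struct to 16 bytes and break the 12-byte wire layout.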
https://w.atwiki.jp/usb_audio/pages/28.html
Source: Audio Terminal Types 1.0 (PDF)

USB Device Class Definition for Terminal Types, Release 1.0, March 18, 1998

(Terminal Types table, continued from the previous page)

Terminal Type             Code    I/O  Description
MiniDisk                  0x0706  I/O  Minidisk player.
Analog Tape               0x0707  I/O  Analog audio tape.
Phonograph                0x0708  I    Analog vinyl record player.
VCR Audio                 0x0709  I    Audio track of VCR.
Video Disc Audio          0x070A  I    Audio track of VideoDisc player.
DVD Audio                 0x070B  I    Audio track of DVD player.
TV Tuner Audio            0x070C  I    Audio track of TV tuner.
Satellite Receiver Audio  0x070D  I    Audio track of satellite receiver.
Cable Tuner Audio         0x070E  I    Audio track of cable tuner.
DSS Audio                 0x070F  I    Audio track of DSS receiver.
Radio Receiver            0x0710  I    AM/FM radio receiver.
Radio Transmitter         0x0711  O    AM/FM radio transmitter.
Multi-track Recorder      0x0712  I/O  A multi-track recording system.
Synthesizer               0x0713  I    Synthesizer.

3 Adding New Terminal Types

Adding new Terminal Types to this specification is achieved by proposing a fully documented Terminal Type to the Audio Device Class Working Group. Upon acceptance, the group will register the new Terminal Type (assigning it a unique Terminal Type Code) and update this document accordingly. This process will also guarantee that new releases of generic USB audio drivers will support the newly registered Terminal Types. It is always possible to use vendor-specific definitions if the above procedure is considered unsatisfactory.
https://w.atwiki.jp/758tetudou/pages/24.html
FGO Astolfo
https://w.atwiki.jp/kemotar/pages/526.html
Baudin
Elvaan ♂, NPC-only face (boy), Upper Jeuno G-8
Family: elder sister (Audia)
A boy worried about his elder sister. He has been given the "Davoi Village Crest" as a protective charm.
Related events: the quests "姉ちゃんを助けて" ("Save My Sister") and "署名を集めろ" ("Gather Signatures"), among others.
Representative lines:
"Huh, that clock tower? Galmut once showed me the inside of the clock tower. It reeked of grease, but he looked so cool."
"What would anyone want coeurl meat for, you ask? Well, the truth is, my sister's gone all strange..."
"Sniff... Sis... I really don't want anyone to see her like that... I just don't know what to do anymore."
Related notes:
He looks after his sister Audia, who changed completely one day without warning. She is really a very kind sister, and he cannot understand how this happened. Those around him call it a curse, but he believes she is not the kind of person to earn anyone's grudge. He is frustrated that her fiancé Albrecht does nothing but pray at the upper cathedral, and cannot hide his discontent. Still very much a child himself, he seems full of anxiety and fear. He knows someone who secretly harbors feelings for Audia (and puts it, rather quaintly, as that person being "sweet on" her).
Related entries: Audia, Albrecht, Alista
https://w.atwiki.jp/sevenlives/pages/2383.html
source? video Audio Data API? HTML5
https://w.atwiki.jp/usb_audio/pages/27.html
Original (PDF): Audio Terminal Types 1.0
USB Device Class Definition for Terminal Types, Release 1.0, March 18, 1998

1 Introduction
The intention of this document is to describe in detail all the Terminal Types that are supported by the Audio Device Class. This document is considered an integral part of the Audio Device Class Specification, although subsequent revisions of this document are independent of the revision evolution of the main Audio Device Class Specification. This is to easily accommodate the addition of new Terminal Types without impeding the core Audio Device Class Specification.

1.1 Scope
The Audio Device Class Definition applies to all devices or functions embedded in composite devices. All audio signals inside an audio function start at an Input Terminal, pass through some Units, and leave the function through an Output Terminal. Units can manipulate the signal in various ways. Terminals represent the connections of the function to the outside world. As part of the Terminal descriptor, the wTerminalType field specifies the vendor's suggested use of the Terminal. For example, a pair of speakers is a more suitable target for music output than a telephone line. This feature allows a vendor to ensure that applications use the device in a consistent and meaningful way.

1.2 Related Documents
· Universal Serial Bus Specification, 1.0 final draft revision (also referred to as the USB Specification). In particular, see Chapter 9, "USB Device Framework".
· Universal Serial Bus Device Class Definition for Audio Data Formats (referred to in this document as USB Audio Data Formats).
· Universal Serial Bus Device Class Definition for Terminal Types (referred to in this document as USB Audio Terminal Types).
· ANSI S1.11-1986 standard.
· MPEG-1 standard ISO/IEC 11172-3, 1993.
· MPEG-2 standard ISO/IEC 13818-3, Feb. 20, 1997.
· Digital Audio Compression Standard (AC-3), ATSC A/52, Dec. 20, 1995 (available from http://www.atsc.org).
· ANSI/IEEE-754 floating-point standard.
· ISO/IEC 958 International Standard Digital Audio Interface and Annexes.
· ISO/IEC 1937 standard.
· ITU G.711 standard.

1.3 Terms and Abbreviations
None.

2 Terminal Types
The following is a list of possible Terminal Types. This list is non-exhaustive and will only be expanded in the future.

2.1 USB Terminal Types
These Terminal Types describe Terminals that handle signals carried over the USB, usually through isochronous pipes. These Terminal Types are valid for both Input and Output Terminals.

Table 2-1 USB Terminal Types
Terminal Type        Code    I/O  Description
USB Undefined        0x0100  I/O  USB Terminal, undefined Type.
USB streaming        0x0101  I/O  A Terminal dealing with a signal carried over an endpoint in an AudioStreaming interface. The AudioStreaming interface descriptor points to the associated Terminal through the bTerminalLink field.
USB vendor specific  0x01FF  I/O  A Terminal dealing with a signal carried over a vendor-specific interface. The vendor-specific interface descriptor must contain a field that references the Terminal.

2.2 Input Terminal Types
These Terminal Types describe Terminals that are designed to record sounds. They either are physically part of the audio function or can be assumed to be connected to it in normal operation. These Terminal Types are valid only for Input Terminals.

Table 2-2 Input Terminal Types
Terminal Type                Code    I/O  Description
Input Undefined              0x0200  I    Input Terminal, undefined Type.
Microphone                   0x0201  I    A generic microphone that does not fit under any of the other classifications.
Desktop microphone           0x0202  I    A microphone normally placed on the desktop or integrated into the monitor.
Personal microphone          0x0203  I    A head-mounted or clip-on microphone.
Omni-directional microphone  0x0204  I    A microphone designed to pick up voice from more than one speaker at relatively long ranges.
Microphone array             0x0205  I    An array of microphones designed for directional processing using host-based signal processing algorithms.
Processing microphone array  0x0206  I    An array of microphones with an embedded signal processor.

2.3 Output Terminal Types
These Terminal Types describe Terminals that produce audible signals that are intended to be heard by the user of the audio function. They either are physically part of the audio function or can be assumed to be connected to it in normal operation. These Terminal Types are only valid for Output Terminals. The distinction between headphones, desktop speakers, and room speakers may be used by applications to select different 3D signal processing algorithms.

Table 2-3 Output Terminal Types
Terminal Type                  Code    I/O  Description
Output Undefined               0x0300  O    Output Terminal, undefined Type.
Speaker                        0x0301  O    A generic speaker or set of speakers that does not fit under any of the other classifications.
Headphones                     0x0302  O    A head-mounted audio output device.
Head Mounted Display Audio     0x0303  O    The audio part of a VR head mounted display. The Associated Interfaces descriptor can be used to reference the HID interface used to report the position and orientation of the HMD.
Desktop speaker                0x0304  O    Relatively small speaker or set of speakers normally placed on the desktop or integrated into the monitor. These speakers are close to the user and have limited stereo separation.
Room speaker                   0x0305  O    Larger speaker or set of speakers that are heard well anywhere in the room.
Communication speaker          0x0306  O    Speaker or set of speakers designed for voice communication.
Low frequency effects speaker  0x0307  O    Speaker designed for low frequencies (subwoofer). Not capable of reproducing speech or music.
2.4 Bi-directional Terminal Types
These Terminal Types describe an Input and an Output Terminal for voice communication that are closely related. They should be used together for bi-directional voice communication. They may be used separately for input only or output only. These types require two Terminal descriptors. Both have the same type. The two Terminals are linked together through the bAssocTerminal fields in their respective Terminal descriptors. The Associated Interfaces descriptor can be used to reference a HID interface for conferencing functions.

Table 2-4 Bi-directional Terminal Types
Terminal Type                    Code    I/O  Description
Bi-directional Undefined         0x0400  I/O  Bi-directional Terminal, undefined Type.
Handset                          0x0401  I/O  Hand-held bi-directional audio device.
Headset                          0x0402  I/O  Head-mounted bi-directional audio device.
Speakerphone, no echo reduction  0x0403  I/O  A hands-free audio device designed for host-based echo cancellation.
Echo-suppressing speakerphone    0x0404  I/O  A hands-free audio device with echo suppression capable of half-duplex operation.
Echo-canceling speakerphone      0x0405  I/O  A hands-free audio device with echo cancellation capable of full-duplex operation.

2.5 Telephony Terminal Types
These Terminal Types describe Terminals that connect to the PSTN or PBX. Initiating calls and monitoring call progress will be done through an associated interface, which may be Communication, HID, or Vendor-Specific class. These Terminals are bi-directional and follow the rules for bi-directional Terminals.

Table 2-5 Telephony Terminal Types
Terminal Type        Code    I/O  Description
Telephony Undefined  0x0500  I/O  Telephony Terminal, undefined Type.
Phone line           0x0501  I/O  May be an analog telephone line jack, an ISDN line, a proprietary PBX interface, or a wireless link.
Telephone            0x0502  I/O  Device can be used as a telephone.
When not in use as a telephone, the handset is used as a bi-directional audio device.
Down Line Phone      0x0503  I/O  A standard telephone set connected to the device. When not in use as a telephone, it can be used as a bi-directional audio device.

2.6 External Terminal Types
These Terminal Types describe external resources and connections that do not fit under the categories of Input or Output Terminals because they do not necessarily translate acoustic signals to or from the user of the computer. Most of them may be either Input or Output Terminals.

Table 2-6 External Terminal Types
Terminal Type              Code    I/O  Description
External Undefined         0x0600  I/O  External Terminal, undefined Type.
Analog connector           0x0601  I/O  A generic analog connector.
Digital audio interface    0x0602  I/O  A generic digital audio interface.
Line connector             0x0603  I/O  An analog connector at standard line levels. Usually uses 3.5mm.
Legacy audio connector     0x0604  I/O  An input connector assumed to be connected to the line-out of the legacy audio system of the host computer. Used for backward compatibility.
S/PDIF interface           0x0605  I/O  An S/PDIF digital audio interface. The Associated Interface descriptor can be used to reference an interface used for controlling special functions of this interface.
1394 DA stream             0x0606  I/O  An interface to audio streams on a 1394 bus.
1394 DV stream soundtrack  0x0607  I/O  An interface to the soundtrack of an A/V stream on a 1394 bus.

2.7 Embedded Function Terminal Types
These Terminal Types represent connections to internal audio sources or sinks in a device. All have associated interfaces for control. These interfaces may be HID or other classes (vendor-specific, mass storage for CD-ROM, etc.). Devices capable of both playback and recording should follow the rules for bi-directional Terminals.
Table 2-7 Embedded Terminal Types
Terminal Type                   Code    I/O  Description
Embedded Undefined              0x0700  I/O  Embedded Terminal, undefined Type.
Level Calibration Noise Source  0x0701  O    Internal noise source for level calibration (MPEG decoding, Dolby Prologic™, AC-3, etc.).
Equalization Noise              0x0702  O    Internal noise source for measurements.
CD player                       0x0703  I    Audio compact disc player or CD-ROM capable of audio playback.
DAT                             0x0704  I/O  Digital Audio Tape.
DCC                             0x0705  I/O  Digital Compact Cassette.
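As an illustration of how a host might use the wTerminalType field, here is a small lookup over a handful of the Terminal Type codes tabulated above. The dictionary and helper function are hypothetical code, not part of the specification; only the codes, names, and I/O directions are taken from the tables.

```python
# Subset of Terminal Type codes from Tables 2-1 through 2-7.
TERMINAL_TYPES = {
    0x0101: ("USB streaming", "I/O"),
    0x0201: ("Microphone", "I"),
    0x0301: ("Speaker", "O"),
    0x0302: ("Headphones", "O"),
    0x0501: ("Phone line", "I/O"),
    0x0605: ("S/PDIF interface", "I/O"),
    0x0703: ("CD player", "I"),
}

def describe_terminal(w_terminal_type: int) -> str:
    """Render a wTerminalType value as a human-readable string."""
    name, io = TERMINAL_TYPES.get(w_terminal_type,
                                  ("Unknown or vendor-specific", "?"))
    return f"0x{w_terminal_type:04X} [{io}] {name}"
```

For example, `describe_terminal(0x0301)` identifies a generic Speaker Output Terminal, while an unregistered code falls through to the vendor-specific branch, mirroring the spec's advice that vendor-specific definitions remain possible.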
https://w.atwiki.jp/audiosound/pages/12.html
Overwhelming bass, yet with the underlying strength to keep the mids and highs from being dragged down by it. Highly satisfying.
audio-technica SOLID BASS closed-back on-ear portable headphones, black (ATH-WS55X BK)
An Audio-Technica headphone equipped with the SOLID BASS SYSTEM and marketed as bass-oriented. Clamping force is moderate and the fit is slightly tight. It folds, so portability is good. As for sound quality, the deep, weighty low end solidly underpins the soundstage, which is simply pleasant. The mids and highs are well separated rather than dragged down by the bass; while slightly edgy, the sound is clear and consistent. There is an excellent sense of openness, and the pursuit of expressiveness that does not lean too heavily on the bass is commendable. The one thing that bothers me is a kind of air pocket between the low and mid ranges that can feel slightly unnatural. Even so, the solid imaging and expressiveness that keep vocals from being buried in the bass deserve credit. The sound suits my personal taste, so my assessment may be generous, but at this street price it is more than good value. For anyone who wants to enjoy rock and pop firmly supported by the bass, this headphone is recommended.
https://w.atwiki.jp/usb_audio/pages/31.html
Original (PDF): Audio Device Document 1.0
USB Device Class Definition for Audio Devices, Release 1.0, March 18, 1998

Terminal — Addressable logical object inside an audio function that represents a connection to the audio function's outside world.
Unit — Addressable logical object inside an audio function that represents a certain audio subfunctionality.
XUD — Acronym for Extension Unit Descriptor.

2 Management Overview
The USB is very well suited for transport of audio (voice and sound). PC-based voice telephony is one of the major drivers of USB technology. In addition, the USB has more than enough bandwidth for sound, even high-quality audio. Many applications related to voice telephony, audio playback, and recording can take advantage of the USB. In principle, a versatile bus specification like the USB provides many ways to propagate and control digital audio. For the industry, however, it is very important that audio transport mechanisms be well defined and standardized on the USB. Only in this way can interoperability be guaranteed among the many possible audio devices on the USB. Standardized audio transport mechanisms also help to keep software drivers as generic as possible. The Audio Device Class described in this document satisfies those requirements. It is written and revised by experts in the audio field. Other device classes that address audio in some way should refer to this document for their audio interface specification. An essential issue in audio is synchronization of the data streams. Indeed, the smallest artifacts are easily detected by the human ear. Therefore, a robust synchronization scheme for isochronous transfers has been developed and incorporated in the USB Specification. The Audio Device Class definition adheres to this synchronization scheme to transport audio data reliably over the bus.
This document contains all necessary information for a designer to build a USB-compliant device that incorporates audio functionality. It specifies the standard and class-specific descriptors that must be present in each USB audio function. It further explains the use of class-specific requests that allow for full audio function control. A number of predefined data formats are listed and fully documented. Each format defines a standard way of transporting audio over USB. However, provisions have been made so that vendor-specific audio formats and compression schemes can be handled.

3 Functional Characteristics
In many cases, audio functionality does not exist as a standalone device. It is one capability that, together with other functions, constitutes a "composite" device. A perfect example of this is a CD-ROM player, which can incorporate video, audio, data storage, and transport control. The audio function is thus located at the interface level in the device class hierarchy. It consists of a number of interfaces grouping related pipes that together implement the interface to the audio function. Audio functions are addressed through their audio interfaces. Each audio function has a single AudioControl interface and can have several AudioStreaming and MIDIStreaming interfaces. The AudioControl (AC) interface is used to access the audio Controls of the function, whereas the AudioStreaming (AS) interfaces are used to transport audio streams into and out of the function. The MIDIStreaming (MS) interfaces are used to transport MIDI data streams into and out of the audio function. The collection of the single AudioControl interface and the AudioStreaming and MIDIStreaming interfaces that belong to the same audio function is called the Audio Interface Collection (AIC). A device can have multiple Audio Interface Collections active at the same time.
These Collections are used to control multiple independent audio functions located in the same composite device.

Note: All MIDI-related information is grouped in a separate document, Universal Serial Bus Device Class Definition for MIDIStreaming Interfaces, that is considered part of this specification.

3.1 Audio Interface Class
The Audio Interface class groups all functions that can interact with USB-compliant audio data streams. All functions that convert between analog and digital audio domains can be part of this class. In addition, those functions that transform USB-compliant audio data streams into other USB-compliant audio data streams can be part of this class. Even analog audio functions that are controlled through USB belong to this class. In fact, for an audio function to be part of this class, the only requirement is that it expose one AudioControl interface. No further interaction with the function is mandatory, although most functions in the audio interface class will support one or more optional AudioStreaming interfaces for consuming or producing one or more isochronous audio data streams. The Audio Interface class code is assigned by the USB. For details, see Section A.1, "Audio Interface Class Code."

3.2 Audio Interface Subclass and Protocol
The Audio Interface class is divided into Subclasses that can be further qualified by the Interface Protocol code. However, at this moment, the Interface Protocol is not used and must be set to 0x00. All audio functions are part of a certain Subclass. The following three Subclasses are currently defined in this specification:
· AudioControl Interface Subclass
· AudioStreaming Interface Subclass
· MIDIStreaming Interface Subclass
The assigned codes can be found in Sections A.2, "Audio Interface Subclass Codes" and A.3, "Audio Interface Protocol Codes" of this specification. All other Subclass codes are unused and reserved, except code 0xFF, which is by specification reserved for vendor-specific extensions.
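A sketch of how a host might classify an interface using these rules. The numeric values (class code 0x01, subclass codes 0x01–0x03) come from the spec's Appendix A, which is not reproduced in this excerpt, so treat them as assumptions when working from this text alone; the 0x00 protocol requirement and the 0xFF vendor-specific reservation are stated above.

```python
USB_CLASS_AUDIO = 0x01          # Audio Interface class code (Appendix A.1)
AUDIO_SUBCLASSES = {
    0x01: "AudioControl",
    0x02: "AudioStreaming",
    0x03: "MIDIStreaming",
}

def classify_audio_interface(b_class, b_subclass, b_protocol):
    """Map bInterfaceClass/SubClass/Protocol to a subclass name.

    Returns None for non-audio interfaces; raises if the (currently
    unused) Interface Protocol is not 0x00 as the spec requires.
    """
    if b_class != USB_CLASS_AUDIO:
        return None                # not an audio interface
    if b_protocol != 0x00:
        raise ValueError("Interface Protocol must be 0x00 in this release")
    if b_subclass == 0xFF:
        return "Vendor-specific"
    return AUDIO_SUBCLASSES.get(b_subclass, "Reserved")
```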
3.3 Audio Synchronization Types
Each isochronous audio endpoint used in an AudioStreaming interface belongs to a synchronization type as defined in Section 5 of the USB Specification. The following sections briefly describe the possible synchronization types.

3.3.1 Asynchronous
Asynchronous isochronous audio endpoints produce or consume data at a rate that is locked either to a clock external to the USB or to a free-running internal clock. These endpoints cannot be synchronized to a start of frame (SOF) or to any other clock in the USB domain.

3.3.2 Synchronous
The clock system of synchronous isochronous audio endpoints can be controlled externally through SOF synchronization. Such an endpoint must do one of the following:
· Slave its sample clock to the 1 ms SOF tick.
· Control the rate of USB SOF generation so that its data rate becomes automatically locked to SOF.

3.3.3 Adaptive
Adaptive isochronous audio endpoints are able to source or sink data at any rate within their operating range. This implies that these endpoints must run an internal process that allows them to match their natural data rate to the data rate that is imposed at their interface.

3.4 Inter-Channel Synchronization
An important issue when dealing with audio, and 3-D audio in particular, is the phase relationship between different physical audio channels. Indeed, the virtual spatial position of an audio source is directly related to and influenced by the phase differences that are applied to the different physical audio channels used to reproduce the audio source. Therefore, it is imperative that USB audio functions respect the phase relationship among all related audio channels. However, the responsibility for maintaining the phase relation is shared among the USB host software, hardware, and all of the audio peripheral devices or functions.
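In practice, the synchronization type of an isochronous endpoint is encoded in bits 3..2 of its bmAttributes field. That bit layout comes from the USB Specification rather than this excerpt, so a sketch of decoding it:

```python
# Bits 3..2 of an isochronous endpoint's bmAttributes (per the USB
# Specification, not this excerpt): 00 = No synchronization,
# 01 = Asynchronous, 10 = Adaptive, 11 = Synchronous.
SYNC_TYPES = {0b00: "None", 0b01: "Asynchronous",
              0b10: "Adaptive", 0b11: "Synchronous"}

def endpoint_sync_type(bm_attributes: int) -> str:
    """Decode the synchronization type from bmAttributes."""
    return SYNC_TYPES[(bm_attributes >> 2) & 0b11]
```

For example, `bmAttributes = 0x05` (isochronous transfer type in bits 1..0, sync bits `01`) decodes to "Asynchronous".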
To provide a manageable phase model to the host, an audio function is required to report its internal delay for every AudioStreaming interface. This delay is expressed in number of frames (ms) and is due to the fact that the audio function must buffer at least one frame's worth of samples to effectively remove packet jitter within a frame. Furthermore, some audio functions will introduce extra delay because they need time to correctly interpret and process the audio data streams (for example, compression and decompression). However, it is required that an audio function introduce only an integer number of frames of delay. In the case of an audio source function, this implies that the audio function must guarantee that the first sample it fully acquires after SOFn (start of frame n) is the first sample of the packet it sends over USB during frame (n+d), where d is the audio function's internal delay expressed in ms. The same rule applies for an audio sink function: the first sample in the packet, received over USB during frame n, must be the first sample that is fully reproduced during frame (n+d). By following these rules, phase jitter is limited to ±1 audio sample. It is up to the host software to synchronize the different audio streams by scheduling the correct packets at the correct moment, taking into account the internal delays of all audio functions involved.

3.5 Audio Function Topology
To be able to manipulate the physical properties of an audio function, its functionality must be divided into addressable Entities. Two types of such generic Entities are identified and are called Units and Terminals. Units provide the basic building blocks to fully describe most audio functions. Audio functions are built by connecting together several of these Units.
A Unit has one or more Input Pins and a single Output Pin, where each Pin represents a cluster of logical audio channels inside the audio function. Units are wired together by connecting their I/O Pins according to the required topology. In addition, the concept of a Terminal is introduced. There are two types of Terminals. An Input Terminal (IT) is an Entity that represents a starting point for audio channels inside the audio function. An Output Terminal (OT) represents an ending point for audio channels. From the audio function's perspective, a USB endpoint is a typical example of an Input or Output Terminal. It either provides data streams to the audio function (IT) or consumes data streams coming from the audio function (OT). Likewise, a Digital-to-Analog converter built into the audio function is represented as an Output Terminal in the audio function's model. Connection to the Terminal is made through its single Input or Output Pin. Input Pins of a Unit are numbered starting from one up to the total number of Input Pins on the Unit. The Output Pin number is always one. Terminals have only one Input or Output Pin, which is always numbered one. The information traveling over I/O Pins is not necessarily of a digital nature. It is perfectly possible to use the Unit model to describe fully analog or even hybrid audio functions. The mere fact that I/O Pins are connected together is a guarantee (by construction) that the protocol and format used over these connections (analog or digital) are compatible on both ends. Every Unit in the audio function is fully described by its associated Unit Descriptor (UD). The Unit Descriptor contains all necessary fields to identify and describe the Unit. Likewise, there is a Terminal Descriptor (TD) for every Terminal in the audio function. In addition, these descriptors provide all necessary information about the topology of the audio function. They fully describe how Terminals and Units are interconnected.
This specification describes the following seven different types of standard Units and Terminals, which are considered adequate to represent most audio functions available today and in the near future:
· Input Terminal
· Output Terminal
· Mixer Unit
· Selector Unit
· Feature Unit
· Processing Unit
· Extension Unit
The ensemble of UDs and TDs provides a full description of the audio function to the Host. A generic audio driver should be able to fully control the audio function, except for the functionality represented by Extension Units. Those require vendor-specific extensions to the audio class driver. The descriptors are further detailed in Section 4, "Descriptors" of this document. Inside a Unit, functionality is further described through audio Controls. A Control typically provides access to a specific audio property. Each Control has a set of attributes that can be manipulated or that present additional information on the behavior of the Control. A Control can have the following four attributes:
· Current setting attribute
· Minimum setting attribute
· Maximum setting attribute
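The topology model above (addressable Entities, Units with numbered Input Pins and a single Output Pin, Terminals with one pin) can be sketched as a small, hypothetical data structure. The `Entity` class and `trace_upstream` helper are illustrations only, not spec-defined structures; the trace follows only the first Input Pin of each Unit for simplicity.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """One addressable Entity: a Terminal or a Unit."""
    entity_id: int
    kind: str                                    # e.g. "Feature Unit"
    sources: list = field(default_factory=list)  # upstream entity IDs
                                                 # (one per Input Pin)

def trace_upstream(entities, output_terminal_id):
    """Walk from an Output Terminal back toward an Input Terminal,
    following each Entity's first Input Pin (illustrative only)."""
    path, eid = [], output_terminal_id
    while eid is not None:
        ent = entities[eid]
        path.append(ent.kind)
        eid = ent.sources[0] if ent.sources else None
    return path

# A minimal topology: USB stream in -> Feature Unit -> DAC out.
topology = {
    1: Entity(1, "Input Terminal"),
    2: Entity(2, "Feature Unit", [1]),
    3: Entity(3, "Output Terminal", [2]),
}
```

Tracing from the Output Terminal yields the chain Output Terminal → Feature Unit → Input Terminal, which is exactly the interconnection information the Unit and Terminal Descriptors are designed to convey.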