https://w.atwiki.jp/usb_audio/pages/41.html
Original: Audio Device Document 1.0 (PDF)

USB Device Class Definition for Audio Devices, Release 1.0, March 18, 1998

4.6.1 AS Isochronous Audio Data Endpoint Descriptors

The standard and class-specific audio data endpoint descriptors provide pertinent information on how audio data streams are communicated to the audio function. In addition, specific endpoint capabilities and properties are reported.

4.6.1.1 Standard AS Isochronous Audio Data Endpoint Descriptor

The standard AS isochronous audio data endpoint descriptor is identical to the standard endpoint descriptor defined in Section 9.6.4, “Endpoint,” of the USB Specification and further expanded as defined in the Universal Serial Bus Class Specification. D7 of the bEndpointAddress field indicates whether the endpoint is an audio source (D7 = 1) or an audio sink (D7 = 0). The bmAttributes field bits are set to reflect the isochronous type of the endpoint. The synchronization type is indicated by D3..2 and must be set to Asynchronous, Adaptive or Synchronous. For further details, refer to Section 5.10.4.1, “Synchronous Type,” of the USB Specification.

Table 4-20: Standard AS Isochronous Audio Data Endpoint Descriptor

Offset | Field | Size | Value | Description
0 | bLength | 1 | Number | Size of this descriptor, in bytes: 9
1 | bDescriptorType | 1 | Constant | ENDPOINT descriptor type
2 | bEndpointAddress | 1 | Endpoint | The address of the endpoint on the USB device described by this descriptor. The address is encoded as follows: D7: Direction (0 = OUT endpoint, 1 = IN endpoint); D6..4: Reserved, reset to zero; D3..0: The endpoint number, determined by the designer.
3 | bmAttributes | 1 | Bit Map | D3..2: Synchronization type (01 = Asynchronous, 10 = Adaptive, 11 = Synchronous); D1..0: Transfer type (01 = Isochronous). All other bits are reserved.
4 | wMaxPacketSize | 2 | Number | Maximum packet size this endpoint is capable of sending or receiving when this configuration is selected. This is determined by the audio bandwidth constraints of the endpoint.
6 | bInterval | 1 | Number | Interval for polling endpoint for data transfers, expressed in milliseconds. Must be set to 1.
7 | bRefresh | 1 | Number | Reset to 0.
8 | bSynchAddress | 1 | Endpoint | The address of the endpoint used to communicate synchronization information if required by this endpoint. Reset to zero if no synchronization pipe is used.

4.6.1.2 Class-Specific AS Isochronous Audio Data Endpoint Descriptor

The bmAttributes field indicates which endpoint-specific Controls this endpoint supports through bits D6..0. Bit D7 is reserved to indicate whether the endpoint always needs USB packets of wMaxPacketSize length (D7 = 1) or whether it can handle short packets (D7 = 0). In any case, the endpoint is required to support null packets. This bit must be used by the Host software to determine whether the driver should pad all potential short packets (except null packets) with zero bytes to wMaxPacketSize length before sending them to an OUT endpoint. Likewise, when receiving data from an IN endpoint, the Host software must be prepared to receive more bytes than expected and discard the superfluous zero bytes.

The bLockDelayUnits and wLockDelay fields are used to indicate to the Host how long it takes for the clock recovery circuitry of this endpoint to lock and reliably produce or consume the audio data stream. This information can be used by the Host to take appropriate action so that no meaningful data gets lost during the locking period (for instance, by sending digital silence during the lock period). Depending on the implementation, the locking period can be a fixed amount of time or can be proportional to the sampling frequency; in the latter case, it usually takes a fixed number of samples to become locked. To accommodate both cases, the bLockDelayUnits field indicates whether the wLockDelay field is expressed in time (milliseconds) or number of samples.
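As a sketch of how the fields just described come together on the wire, the following packs the 7-byte class-specific descriptor of Table 4-21. The CS_ENDPOINT (0x25) and EP_GENERAL (0x01) codes are the usual audio-class constants and the helper name is illustrative; verify the codes against the appendix of the specification.

```python
import struct

# Audio-class descriptor codes (assumed values; check the spec appendix).
CS_ENDPOINT = 0x25
EP_GENERAL = 0x01

def cs_as_audio_endpoint_descriptor(sampling_freq_control=False,
                                    pitch_control=False,
                                    max_packets_only=False,
                                    lock_delay_units=0,
                                    lock_delay=0):
    """Pack the 7-byte class-specific AS isochronous audio data
    endpoint descriptor (Table 4-21)."""
    bm_attributes = ((0x01 if sampling_freq_control else 0)   # D0: Sampling Frequency
                     | (0x02 if pitch_control else 0)         # D1: Pitch
                     | (0x80 if max_packets_only else 0))     # D7: MaxPacketsOnly
    return struct.pack('<BBBBBH',
                       7,                 # bLength
                       CS_ENDPOINT,       # bDescriptorType
                       EP_GENERAL,        # bDescriptorSubtype
                       bm_attributes,
                       lock_delay_units,  # 1 = milliseconds, 2 = decoded PCM samples
                       lock_delay)        # wLockDelay, little-endian
```

For an asynchronous endpoint, lock_delay_units and lock_delay would both be zero, as required by the note that follows.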
Note: Some implementations may use locking strategies that do not lead to either a fixed-time or a fixed-number-of-samples lock delay. In this case, a worst-case value can be reported back to the Host.

The bLockDelayUnits and wLockDelay fields are only applicable for synchronous and adaptive endpoints. For asynchronous endpoints, the clock is generated internally in the audio function and is completely independent. In this case, bLockDelayUnits and wLockDelay must be set to zero.

Table 4-21: Class-Specific AS Isochronous Audio Data Endpoint Descriptor

Offset | Field | Size | Value | Description
0 | bLength | 1 | Number | Size of this descriptor, in bytes: 7
1 | bDescriptorType | 1 | Constant | CS_ENDPOINT descriptor type.
2 | bDescriptorSubtype | 1 | Constant | EP_GENERAL descriptor subtype.
3 | bmAttributes | 1 | Bit Map | A bit in the range D6..0 set to 1 indicates that the mentioned Control is supported by this endpoint. D0: Sampling Frequency; D1: Pitch; D6..2: Reserved. Bit D7 (MaxPacketsOnly) indicates a requirement for wMaxPacketSize packets.
4 | bLockDelayUnits | 1 | Number | Indicates the units used for the wLockDelay field: 0 = Undefined; 1 = Milliseconds; 2 = Decoded PCM samples; 3..255 = Reserved
5 | wLockDelay | 2 | Number | Indicates the time it takes this endpoint to reliably lock its internal clock recovery circuitry. Units used depend on the value of the bLockDelayUnits field.

4.6.2 AS Isochronous Synch Endpoint Descriptor

This descriptor is present only when one or more isochronous audio data endpoints of the adaptive source type or the asynchronous sink type are implemented.

4.6.2.1 Standard AS Isochronous Synch Endpoint Descriptor

The isochronous synch endpoint descriptor is identical to the standard endpoint descriptor defined in Section 9.6.4, “Endpoint,” of the USB Specification and further expanded as defined in the Universal Serial Bus Class Specification.
The bmAttributes field bits are set to reflect the isochronous type and synchronization type of the endpoint.

Table 4-22: Standard AS Isochronous Synch Endpoint Descriptor

Offset | Field | Size | Value | Description
0 | bLength | 1 | Number | Size of this descriptor, in bytes: 9
1 | bDescriptorType | 1 | Constant | ENDPOINT descriptor type.
2 | bEndpointAddress | 1 | Endpoint | The address of the endpoint on the USB device described by this descriptor. The address is encoded as follows: D7: Direction (0 = OUT endpoint for sources, 1 = IN endpoint for sinks); D6..4: Reserved, reset to zero; D3..0: The endpoint number, determined by the designer.
3 | bmAttributes | 1 | Bit Map | D3..2: Synchronization type (00 = None); D1..0: Transfer type (01 = Isochronous). All other bits are reserved.
4 | wMaxPacketSize | 2 | Number | Maximum packet size this endpoint is capable of sending or receiving when this configuration is selected.
6 | bInterval | 1 | Number | Interval for polling endpoint for data transfers, expressed in milliseconds. Must be set to 1.
7 | bRefresh | 1 | Number | This field indicates the rate at which an isochronous synchronization pipe provides new synchronization feedback data. This rate must be a power of 2; therefore only the power is reported back, and the range of this field is from 1 (2 ms) to 9 (512 ms).
8 | bSynchAddress | 1 | Endpoint | Must be reset to zero.

4.6.2.2 Class-Specific AS Isochronous Synch Endpoint Descriptor

There is no class-specific AS isochronous synch endpoint descriptor.

5 Requests

5.1 Standard Requests

The Audio Device Class supports the standard requests described in Section 9, “USB Device Framework,” of the USB Specification. The Audio Device Class places no specific requirements on the values for the standard requests.

5.2 Class-Specific Requests

Most class-specific requests are used to set and get audio-related Controls.
These Controls fall into two main groups: those that manipulate the audio function Controls, such as volume, tone, selector position, etc., and those that influence data transfer over an isochronous endpoint, such as the current sampling frequency.

· AudioControl Requests. Control of an audio function is performed through the manipulation of the attributes of individual Controls that are embedded in the Units of the audio function. The class-specific AudioControl interface descriptor contains a collection of Unit descriptors, each indicating which Controls are present in every Unit. AudioControl requests are always directed to the single AudioControl interface of the audio function. The request contains enough information (Unit ID, Channel Number, and Control Selector) for the audio function to decide where a specific request must be routed. The same request layout can be used for vendor-specific requests to Extension Units. However, they are not covered by this specification.

· AudioStreaming Requests. Control of the class-specific behavior of an AudioStreaming interface is performed through manipulation of either interface Controls or endpoint Controls. These can be either standard Controls, as defined in this specification, or vendor-specific. In either case, the same request layout can be used. AudioStreaming requests are directed to the recipient where the Control resides. This can be either the interface or its associated isochronous endpoint.

The Audio Device Class supports these additional class-specific requests:

· Memory Requests. Every addressable Entity in the audio function (Terminal, Unit, and endpoint) can expose a memory-mapped interface that provides the means to generically manipulate the Entity. Vendor-specific Control implementations could be based on this type of request.

· Get Status Requests. The Get Status request is a general query to an Entity in the AudioControl interface or one of the AudioStreaming interfaces and does not manipulate Controls.
In principle, all requests are optional. If an audio function does not support a certain request, it must indicate this by stalling the control pipe when that request is issued to the function. However, if a certain Set request is supported, the associated Get request must also be supported. Get requests may be supported without the associated Set request being supported. The rest of this section describes the class-specific requests used to manipulate both audio Controls and endpoint Controls.

5.2.1 Request Layout

The following paragraphs describe the general structure of the Set and Get requests. Subsequent paragraphs detail the use of the Set/Get requests for the different request types.
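As a sketch of how the routing information mentioned above (Unit ID, Channel Number, Control Selector) is placed in a control request, the helper below builds the 8-byte SETUP packet for a Set request to the AudioControl interface. The field layout (Control Selector and Channel Number in wValue, Entity ID and interface number in wIndex) and the 0x21 / SET_CUR = 0x01 codes follow the request layout detailed later in the specification; treat them as assumptions to verify there.

```python
import struct

SET_CUR = 0x01   # class-specific request code (assumed; see the spec appendix)

def audiocontrol_setup_packet(entity_id, interface_number,
                              control_selector, channel_number, w_length):
    """Build the 8-byte SETUP packet for a Set request aimed at a Unit
    or Terminal through the AudioControl interface: the Control
    Selector and Channel Number travel in wValue, the Entity ID and
    interface number in wIndex."""
    bm_request_type = 0x21   # host-to-device, class request, interface recipient
    w_value = (control_selector << 8) | channel_number
    w_index = (entity_id << 8) | interface_number
    return struct.pack('<BBHHH', bm_request_type, SET_CUR,
                       w_value, w_index, w_length)
```

For example, a request addressed to a hypothetical Unit with ID 5, channel 1, on AudioControl interface 0 carries wIndex = 0x0500 and the chosen Control Selector in the high byte of wValue.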
https://w.atwiki.jp/sm64susc18summer/pages/12.html
You MUST use "ALWAYS display timer", NOT "Display Timer in Star-Grab timing". If you don't use "ALWAYS display timer", your record will be rejected.

☆ Stage: Rainbow Ride (Level ★★☆, normal)
☆ Challenge Name: "SWITCHLESS"
☆ Description
- This challenge uses no special GS codes.
☆ Pre-Conditions
- You must have the special triple jump (Yoshi jump) equipped.
- You must NOT activate the purple switch (shown in the picture).
☆ Main Conditions
- None
☆ End Timing
- Get the "Tricky Triangles!" star (timed by IGT).
☆ Time Condition
- You must get 14.9x or lower IGT.
- Example: 14.8x OK, 14.9x OK, 15.0x FAIL, 15.1x FAIL
- Calculation: 13.16 (my test) + 1.80 (correction) = 14.96 → 14.9x
☆ Leaderboard
- HERE
https://w.atwiki.jp/usb_audio/pages/19.html
Original: Audio Devices Rev. 2.0 Spec and Adopters Agreement (ZIP)

Universal Serial Bus Device Class Definition for Audio Devices, Release 2.0, May 31, 2006

Scope of This Release

This document is the Release 2.0 of this device class definition.

Contributors

Geert Knapen (Editor), Philips Applied Technologies, AppTech-USA, 1101 McKay Drive M/S 16, San Jose, CA 95131 USA, Phone: +1 (408) 474-8774, E-mail: geert.knapen(at)philips.com
Mike Kent, Roland Corporation
Kaoru Ishimine, Roland Corporation
Shoichi Kojima, Roland Corporation
Robert Gilsdorf, Creative Labs
Daniel (D.J.) Sisolak, Microsoft Corporation
Jack Unverferth, Microsoft Corporation
Niel Warren, Apple Computer, Inc.
Len Layton, C-Media Electronics
Mark Cookson, M-Audio

Revision History

Revision | Date | Filename | Author | Description
1.7 | Sep. 3, 02 | Audio17.doc | USB-IF DWG | Initial version. Based on Audio10.doc. This version will be used to capture the areas where the spec needs adjustments. Areas are indicated by comments.
1.7a | Oct. 24, 02 | Audio17a.doc | Geert Knapen | Areas are identified where changes need to be made. Some minor changes already introduced.
1.7b | Oct. 24, 02 | Audio17b.doc | Geert Knapen | Intermediate version
1.7c | Dec. 10, 02 | Audio17c.doc | Geert Knapen | Discussions from 12-18-2002 f2f meeting captured. Additional comments added.
1.7d | Feb. 3, 03 | Audio17d.doc | Geert Knapen | Changes from 1.7c accepted. Additional changes introduced.
1.7e | Feb. 19, 03 | Audio17e.doc | Geert Knapen | Introduced physical vs. logical channel cluster
1.7f | Feb. 19, 03 | Audio17f.doc | Geert Knapen | Accepted all changes in 1.7e. Fixed some typos.
1.7g | Jun. 2, 03 | Audio17g.doc | Geert Knapen | Major overhaul with the introduction of the RANGE attribute.
1.7h | Jun. 3, 03 | Audio17h.doc | Geert Knapen | Accepted all changes
1.7i | Jul. 10, 03 | Audio17i.doc | Geert Knapen | Introduced clock domain, interface association descriptor
1.7j | Jul. 10, 03 | Audio17j.doc | Geert Knapen | Accepted all changes
1.7k | Sep. 8, 03 | Audio17k.doc | Geert Knapen | Introduced Function Subclass codes, extended interrupt usage, cleaned up clock domains and removed clock domain group concept. Replaced by Clock Source Entity.
1.7l | Sep. 10, 03 | Audio17l.doc | Geert Knapen | Accepted all the changes
1.7m | Sep. 15, 03 | Audio17m.doc | Geert Knapen | Cleaned up Interrupt description
1.7n | Sep. 30, 03 | Audio17n.doc | Geert Knapen | Accepted all changes
1.7o | Sep. 30, 03 | Audio17o.doc | Geert Knapen | Major rewrite w.r.t. Controls.
1.7p | Nov. 05, 03 | Audio17p.doc | Geert Knapen | Accepted all the changes. Added bit pairs for indicating Control availability
1.7q | Nov. 07, 03 | Audio17q.doc | Geert Knapen | Introduced the new concept of controlling sampling frequency
1.7r | Dec. 01, 03 | Audio17r.doc | Geert Knapen | Accepted all the changes
1.7s | Dec. 10, 03 | Audio17s.doc | Geert Knapen | Changed physical-logical cluster mapping. Added explanation on binding between physical buttons and Audio Controls
1.7t | Feb. 04, 04 | Audio17t.doc | Geert Knapen | Accepted all changes
1.7u | Feb. 05, 04 | Audio17u.doc | Geert Knapen | Introduced Effect Unit. Regrouped some PUs into the EU concept. Added Parametric EQ as an EU. Accepted all changes
1.7v | Mar. 30, 04 | Audio17v.doc | Geert Knapen | Full proof-read. Changed formatting and wording throughout the document
1.7w | Mar. 30, 04 | Audio17w.doc | Geert Knapen | Accepted all the changes. Added new Function Categories. Added physical cluster descriptor to AS interface descriptor.
1.7x | Apr. 13, 04 | Audio17x.doc | Geert Knapen | Accepted all the changes. Added new Function Categories. Added support for encoders and decoders.
1.7y | Apr. 28, 04 | Audio17y.doc | Geert Knapen | Accepted all the changes.
1.7z | May 15, 04 | Audio17z.doc | Geert Knapen | Added some fields to encoder descriptors.
1.8 | May 26, 04 | Audio18.doc | Geert Knapen | Accepted all changes and promoted to 1.8 level
1.8a | Sep. 15, 04 | Audio18a.doc | Geert Knapen | Corrected some errors in table offsets etc. as indicated by Len Layton (C-Media). Identified the need to address ASR converter Unit
1.8b | Mar. 15, 05 | Audio18b.doc | Geert Knapen | Minor editorial changes
1.8c | Aug. 10, 05 | Audio18c.doc | Geert Knapen | Minor editorial changes
1.8d | Aug. 16, 05 | Audio18d.doc | Geert Knapen | Accepted editorial changes, based on F2F meeting review. Added and accepted an ID field for all encoder and decoder descriptors. This ID must also be passed into the requests that address the encoder or decoder.
1.8e | Aug. 17, 05 | Audio18e.doc | Geert Knapen | Redid the encoder sections. Added generic latency support. Added SRC Unit.
1.8f | Aug. 31, 05 | Audio18f.doc | Geert Knapen | Fixed some heading levels. Added DTS.
1.8g | Sep. 02, 05 | Audio18g.doc | Geert Knapen | Added Encoder and Decoder Error Codes. Accepted all the changes.
1.9RC1 | Sep. 02, 05 | Audio19RC1.doc | Geert Knapen | Republished unchanged as 1.9RC1.
1.9RC2 | Oct. 05, 05 | Audio19RC2.doc | Geert Knapen | Made several small editorial changes. Accepted all the changes.
1.9 | Oct. 07, 05 | Audio19.doc | Geert Knapen | Promoted to 1.9 without change.
2.0RC1 | May 19, 06 | Audio20RC1.doc | Geert Knapen | Addressed and accepted some minor changes. Declared this document as the Release Candidate for the 2.0 version.
2.0 | May 31, 06 | Audio20.doc | Geert Knapen | Added new Intellectual Property Disclaimer. Final version.

Copyright © 1997-2006 USB Implementers Forum, Inc. All rights reserved.

INTELLECTUAL PROPERTY DISCLAIMER

A LICENSE IS HEREBY GRANTED TO REPRODUCE THIS SPECIFICATION FOR INTERNAL USE ONLY. NO OTHER LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, IS GRANTED OR INTENDED HEREBY.
USB-IF AND THE AUTHORS OF THIS SPECIFICATION EXPRESSLY DISCLAIM ALL LIABILITY FOR INFRINGEMENT OF INTELLECTUAL PROPERTY RIGHTS RELATING TO IMPLEMENTATION OF INFORMATION IN THIS SPECIFICATION. USB-IF AND THE AUTHORS OF THIS SPECIFICATION ALSO DO NOT WARRANT OR REPRESENT THAT SUCH IMPLEMENTATION(S) WILL NOT INFRINGE THE INTELLECTUAL PROPERTY RIGHTS OF OTHERS.

THIS SPECIFICATION IS PROVIDED “AS IS” AND WITH NO WARRANTIES, EXPRESS OR IMPLIED, STATUTORY OR OTHERWISE. ALL WARRANTIES ARE EXPRESSLY DISCLAIMED. USB-IF, ITS MEMBERS AND THE AUTHORS OF THIS SPECIFICATION PROVIDE NO WARRANTY OF MERCHANTABILITY, NO WARRANTY OF NON-INFRINGEMENT, NO WARRANTY OF FITNESS FOR ANY PARTICULAR PURPOSE, AND NO WARRANTY ARISING OUT OF ANY PROPOSAL, SPECIFICATION, OR SAMPLE. IN NO EVENT WILL USB-IF, MEMBERS OR THE AUTHORS BE LIABLE TO ANOTHER FOR THE COST OF PROCURING SUBSTITUTE GOODS OR SERVICES, LOST PROFITS, LOSS OF USE, LOSS OF DATA OR ANY INCIDENTAL, CONSEQUENTIAL, INDIRECT, OR SPECIAL DAMAGES, WHETHER UNDER CONTRACT, TORT, WARRANTY, OR OTHERWISE, ARISING IN ANY WAY OUT OF THE USE OF THIS SPECIFICATION, WHETHER OR NOT SUCH PARTY HAD ADVANCE NOTICE OF THE POSSIBILITY OF SUCH DAMAGES.

NOTE: VARIOUS USB-IF MEMBERS PARTICIPATED IN THE DRAFTING OF THIS SPECIFICATION. CERTAIN OF THESE MEMBERS MAY HAVE DECLINED TO ENTER INTO A SPECIFIC AGREEMENT LICENSING INTELLECTUAL PROPERTY RIGHTS THAT MAY BE INFRINGED IN THE IMPLEMENTATION OF THIS SPECIFICATION. PERSONS IMPLEMENT THIS SPECIFICATION AT THEIR OWN RISK.

Dolby™, AC-3™, Pro Logic™ and Dolby Surround™ are trademarks of Dolby Laboratories, Inc. All other product names are trademarks, registered trademarks, or service marks of their respective owners.

Please send comments via electronic mail to audio-chair(at)usb.org
https://w.atwiki.jp/usb_audio/pages/34.html
Original: Audio Device Document 1.0 (PDF)

USB Device Class Definition for Audio Devices, Release 1.0, March 18, 1998

Table 3-1: Status Word Format

Offset | Field | Size | Value | Description
0 | bStatusType | 1 | Bitmap | D7: Interrupt Pending; D6: Memory Contents Changed; D5..4: Reserved; D3..0: Originator (0 = AudioControl interface, 1 = AudioStreaming interface, 2 = AudioStreaming endpoint, 3..15 = Reserved)
1 | bOriginator | 1 | Number | ID of the Terminal, Unit, interface, or endpoint that reports the interrupt.

3.7.2 AudioStreaming Interface

AudioStreaming interfaces are used to interchange digital audio data streams between the Host and the audio function. They are optional. An audio function can have zero or more AudioStreaming interfaces associated with it, each possibly carrying data of a different nature and format.

Each AudioStreaming interface can have at most one isochronous data endpoint. This construction guarantees a one-to-one relationship between the AudioStreaming interface and the single audio data stream related to the endpoint. In some cases, the isochronous data endpoint is accompanied by an associated isochronous synch endpoint for synchronization purposes. The isochronous data endpoint is required to be the first endpoint in the AudioStreaming interface. The synch endpoint always follows its associated data endpoint.

An AudioStreaming interface can have alternate settings that can be used to change certain characteristics of the interface and underlying endpoint. A typical use of alternate settings is to provide a way to change the bandwidth requirements an active AudioStreaming interface imposes on the USB. By incorporating a low-bandwidth or even zero-bandwidth alternate setting for each AudioStreaming interface, a device offers the Host software the option to temporarily relinquish USB bandwidth by switching to this low-bandwidth alternate setting. If such an alternate setting is implemented, it must be the default alternate setting (alternate setting zero).
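Returning to the status word at the top of this section, Table 3-1 can be decoded mechanically. A minimal sketch (function name and dictionary layout are illustrative, not from the spec):

```python
def parse_status_word(data: bytes):
    """Decode the 2-byte status word of Table 3-1."""
    b_status_type, b_originator = data[0], data[1]
    originator_types = {0: 'AudioControl interface',
                        1: 'AudioStreaming interface',
                        2: 'AudioStreaming endpoint'}
    return {
        'interrupt_pending': bool(b_status_type & 0x80),        # D7
        'memory_contents_changed': bool(b_status_type & 0x40),  # D6
        'originator_type': originator_types.get(b_status_type & 0x0F,
                                                'Reserved'),    # D3..0
        'originator_id': b_originator,                          # bOriginator
    }
```

For instance, a status word of 0x81 0x05 reports a pending interrupt from the AudioStreaming interface whose ID is 5.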
A zero-bandwidth alternate setting can be implemented by specifying zero endpoints in the standard AudioStreaming interface descriptor. All other interface and endpoint descriptors (both standard and class-specific) need not be specified in this case.

The AudioStreaming interface is essentially used to provide an access point for the Host software (drivers) to manipulate the behavior of the physical interface it represents. Therefore, even external connections to the audio function (S/PDIF interface, analog input, etc.) can be represented by an AudioStreaming interface so that the Host software can control certain aspects of those connections. This type of AudioStreaming interface has no associated USB endpoints; the related audio data stream does not use USB as a transport medium. In addition, the concept of dynamic interfaces as described in the Universal Serial Bus Class Specification can be used to notify the Host software that changes have occurred on the external connection. This is analogous to switching alternate settings on an AudioStreaming interface with USB endpoints, except that the switch is now device-initiated instead of Host-initiated.

As an example, consider an S/PDIF connection to an audio function. If nothing is connected to this external S/PDIF interface, the AudioStreaming interface is idle and reports itself as being dynamic and non-configured (bInterfaceClass = 0x00). If the user connects a standard IEC958 signal to the audio function, the S/PDIF receiver inside the audio function detects this and notifies the Host that the AudioStreaming interface has switched to its IEC958 mode (alternate setting x). If, on the other hand, an IEC1937 signal carrying MPEG-encoded audio is connected, the AudioStreaming interface switches to the appropriate setting (alternate setting y) to handle the MPEG decoding process.
For every isochronous OUT or IN endpoint defined in any of the AudioStreaming interfaces, there must be a corresponding Input or Output Terminal defined in the audio function. For the Host to fully understand the nature and behavior of the connection, it must take into account the interface- and endpoint-related descriptors as well as the Terminal-related descriptor.

3.7.2.1 Isochronous Audio Data Stream Endpoint

In general, the data streams that are handled by an isochronous audio data endpoint do not necessarily map directly to the logical channels that exist within the audio function. As an example, consider a “stereo” audio data stream that contains audio data encoded in Dolby Prologic format. Although there is only one data stream, carrying interleaved samples for Left and Right (or more precisely LT and RT), these two channels carry information for four logical channels (Left, Right, Center, and Surround). Other examples include cases in which multiple logical audio channels are compressed into a single data stream. The format of such a data stream can be entirely different from the native format of the logical channels (for example, 256 Kbits/s MPEG-1 stereo audio as opposed to 176.4 Kbytes/s 16-bit stereo 44.1 kHz audio).

Therefore, to describe the data transfer at the endpoint level correctly, the notion of logical channel is replaced by the notion of audio data stream. It is the responsibility of the AudioStreaming interface that contains the OUT endpoint to convert between the audio data stream and the embedded logical channels before handing the data over to the Input Terminal. In many cases, this conversion process involves some form of decoding. Likewise, the AudioStreaming interface that contains the IN endpoint must convert logical channels from the Output Terminal into an audio data stream, often using some form of encoding.
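For uncompressed PCM streams, the conversion between logical channels and the audio data stream reduces to plain (de)interleaving of samples. A minimal sketch (the helper names and the list-of-lists representation are illustrative; a real driver works on byte buffers in the negotiated sample format):

```python
def interleave_cluster(channels):
    """Interleave per-channel sample sequences into the single audio
    data stream carried by the isochronous endpoint."""
    # All channels in a cluster share sampling frequency and bit
    # resolution, so they must have the same length.
    assert len({len(ch) for ch in channels}) == 1
    stream = []
    for frame in zip(*channels):   # one sample per channel, in cluster order
        stream.extend(frame)
    return stream

def deinterleave_stream(stream, n_channels):
    """Inverse operation: split the stream back into logical channels."""
    return [stream[i::n_channels] for i in range(n_channels)]
```

For a two-channel cluster, interleaving [L0, L1, ...] and [R0, R1, ...] yields the familiar L0 R0 L1 R1 ... sample ordering.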
Consequently, requests to control properties that exist within an audio function, such as volume or mute, cannot be sent to the endpoint in an AudioStreaming interface. An AudioStreaming interface operates on audio data streams and is unaware of the number of logical channels it eventually serves. Instead, these requests must be directed to the proper audio function's Units or Terminals via the AudioControl interface.

As already mentioned, an AudioStreaming interface can have zero or one isochronous audio data endpoint. If multiple synchronous audio channels must be communicated between Host and audio function, they must be clustered into one audio channel cluster by interleaving the individual audio data, and the result can be directed to the single endpoint. Furthermore, a single synch endpoint, if needed, can service the entire cluster. In this way, a minimum number of endpoints is consumed to transport related data streams. If an audio function needs more than one cluster to operate, each cluster is directed to the endpoint of a separate AudioStreaming interface belonging to the same Audio Interface Collection (all servicing the same audio function). If there is a need to manipulate a number of AudioStreaming interfaces as a whole, these interfaces can be tied together. The techniques for associating interfaces described in the Universal Serial Bus Class Specification should be used to create the binding.

3.7.2.2 Isochronous Synch Endpoint

For adaptive audio source endpoints and asynchronous audio sink endpoints, an explicit synch mechanism is needed to maintain synchronization during transfers. For details about synchronization, see Section 5, “USB Data Flow Model,” in the USB Specification and the relevant parts of the Universal Serial Bus Class Specification. The information carried over the synch path consists of a 3-byte data packet.
These three bytes contain the Ff value in a 10.14 format as described in Section 5.10.4.2, “Feedback,” of the USB Specification. Ff represents the average number of samples the endpoint must produce or consume per frame to match the desired sampling frequency Fs exactly.

A new Ff value is available every 2^(10 - P) ms (frames), where P can range from 1 to 9, inclusive. The sample clock Fs is always derived from a master clock Fm in the device. P is related to the ratio between those clocks through the following relationship:

    Fm = Fs × 2^P

In worst-case conditions, only Fs is available and Fm = Fs, giving P = 1 because one can always use phase information to resolve the estimation of Fs within half a clock cycle.

An adaptive audio source IN endpoint is accompanied by an associated isochronous synch OUT endpoint that carries Ff. An asynchronous audio sink OUT endpoint is accompanied by an associated isochronous synch IN endpoint. For adaptive IN endpoints and asynchronous OUT endpoints, the standard endpoint descriptor provides the bSynchAddress field to establish a link to the associated synch endpoint. It contains the address of the synch endpoint. The bSynchAddress field of the synch standard endpoint descriptor must be set to zero.

As indicated earlier, a new Ff value is available every 2^(10 - P) frames, with P ranging from 1 to 9. The bRefresh field of the synch standard endpoint descriptor is used to report the exponent (10 - P) to the Host. It can range from 9 down to 1 (512 ms down to 2 ms).

3.7.2.3 Audio Channel Cluster Format

An audio channel cluster is a grouping of logical audio channels that share the same characteristics, like sampling frequency, bit resolution, etc. Channel numbering in the cluster starts with channel one, up to the number of channels in the cluster. The virtual channel zero is used to address a master Control in a Unit, effectively influencing all the channels at once.
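Returning to the synch endpoint for a moment: the 3-byte Ff feedback packet can be encoded and decoded as shown below. This is a sketch; little-endian wire order is assumed, and the helper names are illustrative. A 10.14 fixed-point value is simply the real value scaled by 2^14.

```python
def encode_ff_10_14(ff_samples_per_frame: float) -> bytes:
    """Encode the feedback value Ff (average samples per 1 ms frame)
    into the 3-byte 10.14 fixed-point packet carried by the synch
    endpoint (little-endian byte order assumed)."""
    fixed = round(ff_samples_per_frame * (1 << 14))  # scale to 10.14
    return fixed.to_bytes(3, 'little')

def decode_ff_10_14(packet: bytes) -> float:
    """Inverse operation performed by the receiving side."""
    return int.from_bytes(packet, 'little') / (1 << 14)
```

For example, a 44.1 kHz stream nominally needs Ff = 44.1 samples per frame; the encoded value then differs from 44.1 by less than one 10.14 quantization step (2^-14 samples per frame).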
The maximum number of independent channels in an audio channel cluster is limited to 254. Indeed, channel zero is used to reference the master channel, and code 0xFF (255) is used in requests to indicate that the request parameter block holds values for all available addressed Controls. For further details, refer to Section 5.2.2, “AudioControl Requests,” and the sections that follow, describing the second form of requests.

In many cases, each channel in the audio cluster is also tied to a certain location in the listening space. A trivial example of this is a cluster that contains Left and Right logical audio channels. To be able to describe more complex cases in a manageable fashion, this specification imposes some limitations and restrictions on the ordering of logical channels in an audio channel cluster. There are twelve predefined spatial locations:

· Left Front (L)
· Right Front (R)
· Center Front (C)
· Low Frequency Enhancement (LFE) [super woofer]
· Left Surround (LS)
· Right Surround (RS)
· Left of Center (LC) [in front]
· Right of Center (RC) [in front]
· Surround (S) [rear]
· Side Left (SL) [left wall]
· Side Right (SR) [right wall]
· Top (T) [overhead]

If there are logical channels present in the audio channel cluster that correspond to some of the previously defined spatial positions, then they must appear in the order specified in the above list. For instance, if a cluster contains the logical channels Left, Right and LFE, then channel 1 is Left, channel 2 is Right, and channel 3 is LFE.

To characterize an audio channel cluster, a cluster descriptor is introduced.
This descriptor is embedded within one of the following descriptors:

· Input Terminal descriptor
· Mixer Unit descriptor
· Processing Unit descriptor
· Extension Unit descriptor

The cluster descriptor contains the following fields:

· bNrChannels: a number that specifies how many logical audio channels are present in the cluster.

· wChannelConfig: a bit field that indicates which spatial locations are present in the cluster. The bit allocations are as follows:
§ D0: Left Front (L)
§ D1: Right Front (R)
§ D2: Center Front (C)
§ D3: Low Frequency Enhancement (LFE)
§ D4: Left Surround (LS)
§ D5: Right Surround (RS)
§ D6: Left of Center (LC)
§ D7: Right of Center (RC)
§ D8: Surround (S)
§ D9: Side Left (SL)
§ D10: Side Right (SR)
§ D11: Top (T)
§ D15..12: Reserved

Each bit set in this bit map indicates that there is a logical channel in the cluster that carries audio information destined for the indicated spatial location. The channel ordering in the cluster must correspond to the ordering imposed by the above list of predefined spatial locations. If there are more channels in the cluster than there are bits set in the wChannelConfig field (i.e. bNrChannels > [Number_Of_Bits_Set]), then the first [Number_Of_Bits_Set] channels take the spatial positions indicated in wChannelConfig. The remaining channels have ‘non-predefined’ spatial positions (positions that do not appear in the predefined list). If none of the bits in wChannelConfig are set, then all channels have non-predefined spatial positions. If one or more channels have non-predefined spatial positions, their spatial location description can optionally be derived from the iChannelNames field.

· iChannelNames: index to a string descriptor that describes the spatial location of the first non-predefined logical channel in the cluster. The spatial locations of all remaining logical channels must be described by string descriptors with indices that immediately follow the index of the descriptor of the first non-predefined channel.
Therefore, iChannelNames inherently describes an array of string descriptor indices, ranging from iChannelNames to (iChannelNames + (bNrChannels - [Number_Of_Bits_Set]) - 1).

Example 1: An audio channel cluster that carries Dolby Prologic logical channels has the following cluster descriptor:

Table 3-2: Dolby Prologic Cluster Descriptor
Offset | Field | Size | Value | Description
0 | bNrChannels | 1 | 4 | There are 4 logical channels in the cluster.
1 | wChannelConfig | 2 | 0x0107 | Left, Right, Center and Surround are present.
3 | iChannelNames | 1 | Index | Because there are no non-predefined logical channels, this index must be set to 0.

Example 2: A hypothetical audio channel cluster inside an audio function could carry Left, Left Surround, Left of Center, and two auxiliary channels, each containing a different weighted mix of the Left, Left Surround and Left of Center channels. The corresponding cluster descriptor would be:

Table 3-3: Left Group Cluster Descriptor
Offset | Field | Size | Value | Description
0 | bNrChannels | 1 | 5 | There are 5 logical channels in the cluster.
1 | wChannelConfig | 2 | 0x0051 | Left, Left Surround, Left of Center and two undefined channels are present. (bNrChannels > [Number_Of_Bits_Set])
3 | iChannelNames | 1 | Index | Optional index of the first non-predefined string descriptor.

Optional string descriptors:
String(Index) = ‘Left Down Mix 1’
String(Index+1) = ‘Left Down Mix 2’

3.7.2.4 Audio Data Format
The format used to transport audio data over the USB is entirely determined by the code located in the wFormatTag field of the class-specific interface descriptor. Therefore, each defined Format Tag must document in detail the audio data format it uses. Consequently, format-specific descriptors are needed to fully describe the format.
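The index arithmetic above can be sketched in Python. This is an illustrative sketch only: `channel_layout` and `SPATIAL_NAMES` are hypothetical names, and the descriptor fields are assumed to be available as plain integers rather than parsed from a real descriptor.

```python
# Hypothetical helper (not part of the specification): maps the cluster
# descriptor fields to the predefined spatial names plus the range of
# string-descriptor indices for any non-predefined channels.
SPATIAL_NAMES = ["L", "R", "C", "LFE", "LS", "RS", "LC", "RC", "S", "SL", "SR", "T"]

def channel_layout(bNrChannels, wChannelConfig, iChannelNames):
    # Channels with predefined positions, in the mandated D0..D11 order.
    named = [name for bit, name in enumerate(SPATIAL_NAMES)
             if wChannelConfig & (1 << bit)]
    # Remaining channels are non-predefined; their names come from string
    # descriptors iChannelNames .. iChannelNames + remaining - 1.
    remaining = bNrChannels - len(named)
    indices = list(range(iChannelNames, iChannelNames + remaining)) if remaining > 0 else []
    return named, indices
```

For Table 3-3 (bNrChannels = 5, wChannelConfig = 0x0051), the three predefined channels come out as L, LS and LC, and the two down-mix channels are described by the two consecutive string descriptors starting at iChannelNames.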
For details about the predefined Format Tags and associated data formats and descriptors, see the separate document, USB Audio Data Formats, that is considered part of this specification. Vendor-specific protocols must be fully documented by the manufacturer.
https://w.atwiki.jp/cohstatsjp/pages/292.html
Vehicle: Kettenkrad

Contents
1 Kettenkrad Veterancy
2 Tactics
3 History
4 Built From
4.1 Panzer Elite Headquarters
5 Doctrinal Abilities
5.1 Rudimentary Repair
5.2 Kettenkrad Camouflage
5.3 Booby Traps
6 Vehicle Abilities
6.1 Vehicle Cover

Kettenkrad
Health 90, Max Speed 7.5, Sight 55, Cost 165, Acceleration 14, Detection 30/10, Time 25, Deceleration 9, Hotkey S, Population 1, Rotation 120, Target Type vehicle_22x, Upkeep 2.688, Crush Human, Critical Type supply_truck, Crush Mode Rear, Damage Enabled true

Kettenkrad Veterancy
Level | Received Damage | Received Accuracy | Maximum Speed | Maximum Health | Vet-Exp
1 | 0.9 | 0.9 | 1.15 | 1.15 | 8
2 | 0.95 | 0.95 | 1.05 | 1.15 | 22
3 | 0.95 | 0.95 | 1.05 | 1.15 | 38

Tactics
Kettenkrads can quickly capture command points. Depending on which Doctrine you use, the Kettenkrad has many abilities available to it. Kettenkrads are good to use if you want to spot snipers (a camouflaged Kettenkrad can spot one as well). Kettenkrads can sometimes be destroyed by a US Jeep; however, the US player needs great skill and micro to pull it off. The doctrine choice can be discovered by looking at what kind of tools are on the back of the Kettenkrad. Kettenkrads with the Camouflage ability (obtained through Luftwaffe Tactics) can be used as scouts, similar to camouflaged snipers or the American Air Recon ability. Especially in smaller matches with few chokepoints, having 2 Kettenkrads is helpful for harassing the enemy economy. Use them to cut off main supply sectors in 1v1 games (this tactic is extremely effective against the British, who field fewer units in the early game than the Americans).

History
The Kettenkrad, short for Kettenkraftrad (Tracked Motorcycle), was a light utility vehicle for transport and towing. It was more mobile in off-road conditions than a motorcycle, and could tow equipment trailers and light guns such as the 3.7cm PaK36 anti-tank gun.
It was also sometimes used as the main transport for motorized scout units in Panzer and Panzergrenadier divisions.

Built From
Panzer Elite Headquarters
Health 1500, Target Type building, Cost 500, Critical Type panel_building, Time 61
Effects: Deploy Pioneers and escalate Battle Phases to call in additional Reinforcements.
See Structure: Panzer Elite Headquarters for details.

Doctrinal Abilities
Rudimentary Repair
Activation targeted, Duration 0, Target tp_entity_and_squad_entity, Recharge 0, Hotkey E
Effects: Low-quality repair of a Structure or Vehicle for a nominal Resource cost.
See Ability: Rudimentary Repair for details.

Kettenkrad Camouflage
Activation toggle, Duration 0, Target tp_any, Recharge 10, Hotkey C
Effects: Camouflaged units can only be detected if they engage in combat or if an enemy unit is in close proximity. While Camouflage is active, movement speed is reduced.
See Ability: Kettenkrad Camouflage for details.

Booby Traps
Activation targeted, Target tp_any, Recharge 25, Hotkey Y
Effects: Wire strategic points and ambient buildings to explode when enemies begin capturing the point or enter the building.
See Ability: Booby Traps for details.

Vehicle Abilities
Vehicle Cover
Activation always_on, Target tp_any, Recharge 0
See Ability: Vehicle Cover for details.

Retrieved from http://coh-stats.com/Vehicle_Kettenkrad
https://w.atwiki.jp/fieds_labo4/pages/17.html
A Game of Life knock-off

An exercise with the Flex 3 SDK, based on “First ActionScript 3: building the Game of Life” (http://codezine.jp/article/detail/627). I don't fully understand the Game of Life yet ^^;
When the Timer value was small, clicks weren't picked up reliably. Maybe a problem with my PC?
Repeating START-STOP-CLEAR-START increases allocated memory by 300K each time. Fatal @@; where is the bug...

Creature.as

package {
	import flash.display.*;
	import flash.events.*;
	/**
	 * ...
	 * @author ss
	 */
	public class Creature extends Sprite {
		// embed image - relative path
		[Embed(source="../image/dead2.gif")]
		private static const ImageDead:Class;
		[Embed(source="../image/live.gif")]
		private static const ImageLive:Class;
		// bitmap status
		private var cur_bmp:Bitmap = null;
		private var isLive:Boolean = false;

		public function Creature() {
			setDead();
			addEventListener(MouseEvent.CLICK, onClick);
		}
		private function onClick(evt:MouseEvent):void {
			(isLive) ? setDead() : setLive();
		}
		private function removeBmp():void {
			if (cur_bmp != null) removeChild(cur_bmp);
		}
		private function changeImage(bmpClass:Class):void {
			removeBmp();
			cur_bmp = new bmpClass() as Bitmap;
			addChild(cur_bmp);
			isLive = (bmpClass == ImageDead) ? false : true;
		}
		public function setLive():void { changeImage(ImageLive); }
		public function setDead():void { changeImage(ImageDead); }
		public function getAlive():Boolean { return isLive; }
	}
}

Main.as

package {
	import flash.display.*;
	import flash.events.*;
	import flash.text.*;
	import flash.utils.Timer;
	/**
	 * ...
	 * @author ss
	 */
	public class Main extends Sprite {
		[Embed(source="../image/start.gif")]
		private static const StartImg:Class;
		[Embed(source="../image/stop.gif")]
		private static const StopImg:Class;
		[Embed(source="../image/clear.gif")]
		private static const ClearImg:Class;

		private var map:Array;
		private const MAX_ROWS:int = 20;
		private const MAX_COLS:int = 32;
		private var ns_timer:Timer;
		private var startBtn:SimpleButton;
		private var stopBtn:SimpleButton;
		private var clearBtn:SimpleButton;

		public function Main():void {
			if (stage) init();
			else addEventListener(Event.ADDED_TO_STAGE, init);
			pre_process();
		}
		private function init(e:Event = null):void {
			removeEventListener(Event.ADDED_TO_STAGE, init);
			// entry point
		}
		private function pre_process():void {
			var btnimg1:Bitmap = new StartImg();
			var btnimg2:Bitmap = new StopImg();
			var btnimg3:Bitmap = new ClearImg();
			// SimpleButton(upState:DisplayObject = null, overState:DisplayObject = null,
			//              downState:DisplayObject = null, hitTestState:DisplayObject = null)
			startBtn = new SimpleButton(btnimg1, btnimg1, btnimg1, btnimg1);
			stopBtn = new SimpleButton(btnimg2, btnimg2, btnimg2, btnimg2);
			clearBtn = new SimpleButton(btnimg3, btnimg3, btnimg3, btnimg3);
			startBtn.x = 0; startBtn.y = 0;
			stopBtn.x = 80; stopBtn.y = 0;
			clearBtn.x = 160; clearBtn.y = 0;
			// map object declaration
			map = new Array();
			for (var i:int = 0; i < (MAX_ROWS * MAX_COLS); i++) map[i] = new Creature();
			//
			addChild(startBtn);
			addChild(stopBtn);
			addChild(clearBtn);
			startBtn.addEventListener(MouseEvent.CLICK, startBtnClickListener);
			stopBtn.addEventListener(MouseEvent.CLICK, stopBtnClickListener);
			clearBtn.addEventListener(MouseEvent.CLICK, clearBtnClickListener);
		}
		private function startBtnClickListener(e:MouseEvent):void {
			// startBtn.enabled = false;
			LifeGame();
			removeChild(clearBtn);
			removeChild(startBtn);
		}
		private function stopBtnClickListener(e:MouseEvent):void {
			ns_timer.stop();
			// removeEventListener(TimerEvent.TIMER, onTick);
			// startBtn.enabled = true;
			addChild(startBtn);
			addChild(clearBtn);
		}
		private function clearBtnClickListener(e:MouseEvent):void {
			for (var i:int = 0; i < (MAX_ROWS * MAX_COLS); i++) {
				// image clear
				//if (map[i].getAlive() == true) {
				//	map[i].setDead();
				//	addChild(map[i]);
				//}
				if (map[i].getAlive() == true) map[i].setDead();
				addChild(map[i]);
			}
		}
		public function LifeGame():void {
			//map = new Array();
			for (var i:int = 0; i < (MAX_ROWS * MAX_COLS); i++) {
				var x:int = i % MAX_COLS;
				var y:int = i / MAX_COLS;
				map[i].x = x * map[i].width;
				map[i].y = y * map[i].height + 30;
				// the comparison operator was lost in the scrape; > 0.7 (~30% alive) is assumed
				if (Math.random() > 0.7) map[i].setLive();
				addChild(map[i]);
			}
			ns_timer = new Timer(1000);
			ns_timer.addEventListener(TimerEvent.TIMER, onTick);
			ns_timer.start();
		}
		private function onTick(eve:TimerEvent):void {
			var i_pos:int = Math.floor(Math.random() * 640);
			var x_pos:int = i_pos % MAX_COLS;
			var y_pos:int = i_pos / MAX_COLS;
			var live_counter:int = 0;
			// x-1,y-1
			if ((x_pos > 0) && (y_pos > 0)) {
				if (map[(i_pos - 33)].getAlive() == true) live_counter++;
			}
			// x,y-1
			if (y_pos > 0) {
				if (map[(i_pos - 32)].getAlive() == true) live_counter++;
			}
			// x+1,y-1
			if ((x_pos < 31) && (y_pos > 0)) {
				if (map[(i_pos - 31)].getAlive() == true) live_counter++;
			}
			// x-1,y
			if (x_pos > 0) {
				if (map[(i_pos - 1)].getAlive() == true) live_counter++;
			}
			// x+1,y
			if (x_pos < 31) {
				if (map[(i_pos + 1)].getAlive() == true) live_counter++;
			}
			// x-1,y+1
			if ((x_pos > 0) && (y_pos < 19)) {
				if (map[(i_pos + 31)].getAlive() == true) live_counter++;
			}
			// x,y+1
			if (y_pos < 19) {
				if (map[(i_pos + 32)].getAlive() == true) live_counter++;
			}
			// x+1,y+1
			if ((x_pos < 31) && (y_pos < 19)) {
				if (map[(i_pos + 33)].getAlive() == true) live_counter++;
			}
			if (map[i_pos].getAlive() == true) { // live
				if ((live_counter == 2) || (live_counter == 3)) map[i_pos].setLive();
				else map[i_pos].setDead();
			} else { // dead
				if (live_counter == 3) map[i_pos].setLive();
				else map[i_pos].setDead();
			}
		}
	}
}
https://w.atwiki.jp/api_programming/pages/234.html
Automate/Documentation/Values - LlamaLab

Values
Automate supports the following value types: Null, Number, Text, Array, Dictionary.

Null
Null is a special keyword denoting an undefined/missing value.

Number
Numbers are stored internally as double-precision 64-bit IEEE 754 floating point values. See Arithmetic operators.

Number literal
Numbers can be represented in expressions with the following literals:
· decimal (base-10), with or without a fractional part and exponent: 123.45
· hexadecimal (base-16), using the 0x prefix: 0xCAFEBABE
· binary (base-2), using the 0b prefix, e.g. 0b00110011

Text
Text, or string, is a sequence of characters.

Text literal
Text is enclosed in double quotes: "Hello world". In addition to ordinary characters, you can also include special characters within text literals:
{expression} String interpolation, see below.
\b Backspace
\f Form feed
\n Newline
\r Carriage return
\t Tab
\' Single quote/Apostrophe
\" Double quote
\\ Backslash
\{ Avoids interpretation of a left curly bracket as the start of a string interpolation
\uXXXX The Unicode character specified by the four hexadecimal digits XXXX. For example, \u00A9 is the Unicode sequence for the copyright symbol.

String interpolation
String interpolation is a way to construct a string containing values that are evaluated at run time. Each “interpolation” inside a text literal is wrapped in curly brackets: "1 times 3 is {1*3}". To format the inserted value, add a function name after the expression: "1 times 3 is {1*3;numberFormat}". Any additional arguments are passed to the function as text: "Today is {now;dateFormat;MMM dd}".

Array
An array is a container object that holds a dynamic number of values of any type. Each item in an array is called an element, and is accessed by its numerical integer index. The index is zero-based: the first element has index 0, the last element has index length - 1. A negative index accesses the array from the end (length + index). To access an array, use the subscript operator, the length operator and the for each block.
To modify an array, use the array add block, array remove block and array set block.

Array literal
An array literal is a list of zero or more expressions, each of which represents an array element, enclosed in square brackets [ ]:
[ 1, "two", 3.0, null, dict ]

Dictionary
A dictionary is a container composed of "key-value" pairs called entries. Each key may appear at most once. Keys are text only; non-text values, including null, are converted to text. Values may be of any type. Each entry can also have an associated conversion type, used when communicating with apps supporting other value types. To access a dictionary, use the subscript operator, the length operator and the for each block. To modify a dictionary, use the dictionary put block and dictionary remove block.

Dictionary literal
A dictionary literal is a list of zero or more entries, enclosed in curly brackets { }:
{ "a": 1, "b" as int: 3.333, "c" as uri: "http://llamalab.com" }

Dictionary conversion types
Automate itself supports only four value types: number, text, array and dictionary. Other apps, including the Android OS, may support additional value types, so entry values exchanged with other apps may need to be converted. To specify what type a value should be converted to, use the as keyword after the key: "link" as uri: "http://llamalab.com". The following conversion types are allowed: Boolean, BooleanArray, Bundle, BundleArray, BundleList, Byte, ByteArray, Char, CharArray, CharSequence, CharSequenceArray, CharSequenceList, ComponentName, ComponentNameArray, ComponentNameList, Double, DoubleArray, Float, FloatArray, Int, IntArray, IntList, Intent, IntentArray, IntentList, Long, LongArray, Short, ShortArray, String, StringArray, StringList, Uri, UriArray, UriList.
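The {expression;function;arguments} interpolation rule described above can be sketched in Python. This is a toy illustration, not Automate's implementation; `interpolate` and its parameters are hypothetical names, and escape sequences such as \{ are not handled.

```python
import re

def interpolate(text, env=None, functions=None):
    """Toy sketch of Automate-style {expr;func;arg...} interpolation:
    evaluate the expression, then optionally pass the result through a
    named formatting function, with any extra parts passed as text."""
    env = env or {}
    functions = functions or {}

    def repl(match):
        parts = match.group(1).split(";")
        # First part is the expression, e.g. "1*3".
        value = eval(parts[0], {"__builtins__": {}}, env)
        # Optional second part names a formatting function; the rest
        # are its arguments, passed as text.
        if len(parts) > 1:
            value = functions[parts[1]](value, *parts[2:])
        return str(value)

    return re.sub(r"\{([^{}]+)\}", repl, text)
```

For example, interpolate("1 times 3 is {1*3}") substitutes the evaluated expression, and supplying a functions mapping lets a second segment such as ;numberFormat post-process the value.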
https://w.atwiki.jp/reisiki/pages/34.html
stg/game/Scene0.h ver090130

#ifndef __STG_GAME_SCENE0_H__
#define __STG_GAME_SCENE0_H__

#include <windows.h>
#include <d3d9.h>
#include <d3dx9.h>
#include <boost/smart_ptr.hpp>
#include "siki/Siki.h"
#include "siki/d3d/Scene.h"
#include "siki/d3d/CustomVertex.h"

namespace stg {
namespace game {

class Scene0 : public siki::d3d::Scene {
public:
	Scene0();
	~Scene0();
	virtual bool SetDevice(sharedptr<IDirect3D9> d3dobj, sharedptr<IDirect3DDevice9> device);
	virtual bool Update();
	virtual bool Draw(sharedptr<IDirect3D9> d3dobj, sharedptr<IDirect3DDevice9> device);
private:
	siki::d3d::TransformedVertex *m_v;
	IDirect3DVertexBuffer9 *m_pvertex;
	boost::shared_ptr<IDirect3DVertexBuffer9> m_vertices;
};

} // namespace game
} // namespace stg

#endif // __STG_GAME_SCENE0_H__

stg/game/Scene0.cpp ver090130

#include "../StdAfx.h"
#include <boost/smart_ptr.hpp>
#include <boost/signal.hpp>
#include "Scene0.h"

namespace stg {
namespace game {

Scene0::Scene0() : siki::d3d::Scene(1), m_pvertex(0) {
	static siki::d3d::TransformedVertex v[4] = {
		{300.0f, 200.0f, 0.0f, 1.0f, 0xffffffff, 0.0f, 0.0f},
		{300.0f, 300.0f, 0.0f, 1.0f, 0xffffffff, 0.0f, 1.0f},
		{200.0f, 200.0f, 0.0f, 1.0f, 0xffffffff, 1.0f, 0.0f},
		{200.0f, 300.0f, 0.0f, 1.0f, 0xffffffff, 1.0f, 1.0f}
	};
	m_v = v;
}

Scene0::~Scene0() {
}

bool Scene0::SetDevice(sharedptr<IDirect3D9> d3dobj, sharedptr<IDirect3DDevice9> device) {
	siki::d3d::TransformedVertex v[4] = {
		{300.0f, 200.0f, 0.0f, 1.0f, 0xffffffff, 0.0f, 0.0f},
		{300.0f, 300.0f, 0.0f, 1.0f, 0xffffffff, 0.0f, 1.0f},
		{200.0f, 200.0f, 0.0f, 1.0f, 0xffffffff, 1.0f, 0.0f},
		{200.0f, 300.0f, 0.0f, 1.0f, 0xffffffff, 1.0f, 1.0f}
	};
	//IDirect3DVertexBuffer9* pVertex;
	HRESULT hr = device->CreateVertexBuffer(
		sizeof(siki::d3d::TransformedVertex) * 4,
		D3DUSAGE_WRITEONLY,
		D3DFVF_TRANSFORMED,
		D3DPOOL_MANAGED,
		&m_pvertex,
		NULL
	);
	if (!SUCCEEDED(hr)) {
		return false;
	}
	void *pData;
	hr = m_pvertex->Lock(0, sizeof(siki::d3d::TransformedVertex) * 4, (void**)&pData, 0);
	if (hr == D3D_OK) {
		memcpy(pData, v, sizeof(siki::d3d::TransformedVertex) * 4);
		m_pvertex->Unlock();
	}
	return true;
}

bool Scene0::Update() {
	return true;
}

bool Scene0::Draw(sharedptr<IDirect3D9> d3dobj, sharedptr<IDirect3DDevice9> device) {
	device->SetStreamSource(0, m_pvertex, 0, sizeof(siki::d3d::TransformedVertex));
	device->SetFVF(D3DFVF_TRANSFORMED);
	device->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
	return true;
}

} // namespace game
} // namespace stg
https://w.atwiki.jp/minecraftinbackrooms/pages/11.html
Level 0
Subtitle: The Lobby

Warning
A hazardous zone has currently appeared at what the MEG classifies as "Point C-32". Because of the danger, it should generally not be entered. Hazardous objects, such as cars that appear to have no-clipped out of The Frontroom, have also been confirmed, so explore with caution. There is a report that a wanderer who was exploring vanished somewhere. A staircase has appeared, and the space beyond it seems to be very unstable.

Overview
Level 0 is a space resembling the back rooms of a retail store. Its wallpaper and floor are yellowed, and fluorescent lights are installed on the ceiling. However, there are also dark areas with no fluorescent lights at all. The walls sometimes bear power outlets, or claw marks left by something. Unusual structures can occasionally be found on Level 0. A spatial disappearance that occurred on Level 0 revealed that Level 0 has a layered structure; the structure of the lower layer differs from what we know well. In addition, the MEG currently appears to have a temporary base set up on this level, with many members gathered there.

Items
Item / How to obtain
Almond water: purchased from wanderers for 20G
Energy bar (opened): purchased from wanderers for 25G
Steel ingot: obtained from barrels on the upper layer

Entities
Bacteria

Facilities
Almond water shop

Entrances
No-clipping out of The Frontroom brings you to Level 0.
On Level 1, climbing the corridor stairs without doing anything moves you to Level 0.
On Level 0.7, finding and entering the door marked "Restart" moves you to Level 0.
On Level 34, exiting through a manhole moves you to Level 0.
On Level 36, boarding Flight FL-000 brings you to Level 0.
On Level 188, no-clipping through a window showing a view of yellowed wallpaper brings you to Level 0.
On Level 368, falling from the bridge makes you no-clip into Level 0.
On Level 9223372036854775807, riding the elevator moves you to Level 0.
In The End, running Tutorial.exe on the PC brings you to Level 0.
In The Hub, entering the "0" door brings you to Level 0.

Exits
Find and enter an emergency exit to move to Level 1.
Find a small hole in a wall and go through it to no-clip into Level 0.1.
Find and enter a red door to move to Level 0.7.
No-clip through the ground to move to Level 0.7.
No-clip through a window-like structure to reach Level 188.
No-clip through a black wall to move to Level -1.
Fall through a 1 m² hole in the ground to reach Level FUN.
Find a space made up of white and black and enter the door there to reach Level 0.9223372036854775807.
Find and enter a door fitted with a pressure plate to move to The Manila Room.
Find a laboratory-like room and go through the door inside it to move to The Hub.
Find a space eroded by a bamboo-green color and no-clip into it to reach the Ohutonroom.
Break the floor with a pickaxe made of an unknown alloy to be sent to The Airplane.
Enter the red door on the upper floor marked "Exit" in green to reach Level 0.01.
No-clip through a wall that blends in with the surrounding outer walls to move to Level SB.

References
https://backrooms.fandom.com/ja/wiki/Level_0_(2)
http://backrooms-wiki.wikidot.com/level-0