https://w.atwiki.jp/ddrreplay/pages/316.html
"on the bounce" score & movie board, page 1; "on the bounce" score & movie board, page 2
https://w.atwiki.jp/legendofnorrath/pages/317.html
SS | Title: Concordance Of Research | Type: Quest | Faction: - | Attribute: - | Archetype: Mage | Level: - | Game Text: - | Card Number: 1U-(Uncommon, Oathbound) | Lore: -
https://w.atwiki.jp/asigami/pages/1135.html
Song: INSERTiON | Artist: NAOKI underground | Version: 5th | Difficulty: Difficult 9 | BPM: 110-225 | NOTES/FREEZE(SHOCK): 374/0
Groove radar: STREAM 52, VOLTAGE 40, AIR 14, FREEZE 0, CHAOS 14
Charts: Difficult (9) / Expert (13)
Chart: http://livedoor.blogimg.jp/yanmar195/imgs/e/f/eff44154.png
Video: https://www.youtube.com/watch?v=fxDGpsqunp4
Notes: BPM progression: 139 - (stop) - 110 - (+2 per beat) - 140 - (-2 per beat) - 130 - (-4 per beat) - 118 - (+4 per beat) - 150 - 139 - 165 - (+5 per beat) - 225. The main BPM is 139. The mid-song BPM changes are very fine-grained, which makes the chart hard to step. An eighth-note run of seven steps that includes crossover-style ("biji") steps appears.
https://w.atwiki.jp/ddrdp/pages/810.html
on the bounce (Difficult)
Song: on the bounce | Artist: neuras | Folder: X | Difficulty: Difficult 9 | BPM: 150 | NOTES/FA(SA): 269/9
Groove radar: STREAM 56, VOLTAGE 50, AIR 14, FREEZE 29, CHAOS 27
Charts: Basic (6) / Difficult (9) / Expert (14) / Challenge (15)
Attributes: crossovers, wide placements, jacks, tricky rhythm
Chart: http://eba502.web.fc2.com/fumen/ddr/x/onbounce_8t.html
Comments:
- Off-beats, dotted eighths, and other irregular rhythms are alive and well even in this Difficult chart. -- Anonymous (2010-08-31 21:03:58)
- Knight's-move eighth notes and the back-and-forth dotted eighths at the end make the body swinging quite harsh for this level range. The worst part is the eighth-note triple of 2P→ ←1P ←; the tricky rhythm also means one misstep can easily throw off your balance, so be careful. -- Anonymous (2012-02-09 02:43:33)
- ↑ Sorry, that should have been 2P→ 2P← 1P→. -- Anonymous (2012-02-15 02:18:06)
- Promoted from 8 to 9 in X2. Despite being a Difficult chart, it is a tricky one with many off-beats and awkward placements. A good chart for practicing footwork and timing accuracy at low difficulty. -- Anonymous (2013-09-23 22:09:02)
https://w.atwiki.jp/matchmove/pages/94.html
Motion Capture and Face Tracking

SynthEyes offers the exciting capability to do full body and facial motion capture using conventional video or film cameras. STOP! Unless you know how to do supervised tracking and understand moving-object tracking, you will not be able to do motion tracking. The material here builds upon that earlier material; it is not repeated here, because it would be exactly that, a repetition.

First, why and when is motion capture necessary? The moving-object tracking discussed previously is very effective for tracking a head when the face is not doing all that much, or when trackable points have been added in places that do not move with respect to one another (forehead, jaws, nose). The moving-object mode is good for making animals talk, for example. By contrast, motion capture is used when the motion of the moving features themselves is to be determined and then applied to an animated character. For example, use motion capture of an actor reading a script to apply the same expressions to an animated character. Moving-object tracking requires only one camera, while motion capture requires several calibrated cameras.

Second, we need to establish a few very important points: this is not the kind of capability that you can learn on the fly as you do that important shoot, with the client breathing down your neck. This is not the kind of thing for which you can expect to glance at this manual for a few minutes and be a pro; your head will explode. This is not the sort of thing you can expect to apply to some musty old archival footage, or using that old VHS camera at night in front of a flickering fireplace. This is not something where you can set up a shoot for a couple of days, leave it around with small children or animals climbing on it, and get anything usable whatsoever. This is not the sort of thing where you can take a SynthEyes export into your animation software and expect all your work to be done, with just a quick render to come.
And this is not the sort of thing that is going to produce the results of a $250,000 custom full-body motion capture studio with 25 cameras. With all those dire warnings out of the way, what is the good news? If you do your homework, do your experimentation ahead of time, set up technically solid cameras and lighting, read the SynthEyes manual so you have a fair understanding of what the SynthEyes software is doing, and understand your 3-D package well enough to set up your character or face rigging, you should be able to get excellent results. In this manual, we'll work through a sample facial capture session. The techniques and issues are the same for full-body capture, though of course the tracking marks and overall camera setup for body capture must be larger and more complex.

Introduction

To perform motion capture of faces or bodies, you will need at least two cameras trained on the performer from different angles. Since the performer's head or limbs are rotating, the tracking features may rotate out of view of the first two cameras, so you may need additional cameras to shoot more views from behind the actor. The fields of view of the cameras must be large enough to encompass the entire motion that the actor will perform, without the cameras tracking the performer (OK, experts can use SynthEyes for motion capture even when the cameras move, but only with care). You will need to perform a calibration process ahead of time to determine the exact position and orientation of the cameras with respect to one another (assuming they are not moving). We'll show you one way to achieve this, using some specialized but inexpensive gear. Very important: you'll have to ensure that nobody knocks the cameras out of calibration while you shoot calibration or live-action footage, or between takes. You'll also need to be able to resynchronize the footage of all the cameras in post. We'll tell you one way to do that.
Generally the performer will have tracker markers attached, to ensure the best possible and most reliable data capture. The exception would be if one of the camera views must also be used as part of the final shot, for example a talking head that will have an extreme helmet added. In this case, markers can be used where they will be hidden by the added effect, and in locations not permitting trackers, either natural facial features can be used (HD or film source!) or markers can be used and then removed as an additional effect. After you solve the calibration and tracking in SynthEyes, you will wind up with a collection of trajectories showing the path through space of each individual feature. When you do moving-object tracking, the trackers are all rigidly connected to one another, but in motion capture, each tracker follows its own individual path. You will bring all these individual paths into your animation package and will need to set up a rigging system that makes your character move in response to the tracker paths. That rigging might consist of expressions, Look At controllers, etc.; it's up to you and your animation package.

Camera Types

Since each camera's field of view must encompass the entire performance (unless there are many overlapping cameras), at any given time the actor usually occupies a small portion of the frame. This makes progressive DV, HD, or film source material strongly suggested. Progressive-scan cameras are strongly recommended, to avoid the factor-of-two loss of vertical resolution due to interlacing. This is especially important since the tracking markers are typically small and can slip between scan lines. While it may make operations simpler, the cameras do not have to be the same kind, have the same aspect ratio, or have the same frame rate. Resist the urge to use that old consumer-grade analog videotape camera as one of the cameras; the recording process will not be stable enough for good results.
Lens distortion will substantially complicate calibration and processing. To minimize distortion, use high-quality lenses, and do not operate them near their maximum field of view, where distortion is largest. Do not try to squeeze into a small studio space.

Camera Placement

The camera placements must address two opposing factors: the cameras should be far apart, to produce a large parallax disparity with good depth perception, yet the cameras should be close together, so that they can simultaneously observe as many trackers as possible. You'll probably need to experiment with placement to gain experience, keeping in mind the performance to be delivered. Cameras do not have to be placed in any discernible pattern. If the performance warrants it, you might want coverage from up above or down below. If any cameras will move during the performance, they will need a visible set of stationary tracking markers to recover their trajectory in the usual fashion. This will reduce accuracy compared to a carefully calibrated stationary camera.

Lighting

Lighting should be sufficient to keep the markers well illuminated, avoiding shadowing, and should allow the shutter time of the cameras to be kept as low as possible, consistent with good image quality.

Calibration Requirements and Fixturing

In order for motion capture footage to be solved, the camera positions, orientations, and fields of view must be determined, independent of the "live" footage, as accurately as possible. To do this, we will use a process based on moving-object tracking: a calibration object is moved in the field of view of all the cameras and tracked simultaneously. To get the most data quickly and easily, we constructed a prop we call a "porcupine" out of a 4-inch Styrofoam ball, 20-gauge plant stem wires, and small 7 mm colored pom-pom balls, all obtained from a local craft shop for under $5.
Lengths of wire were cut to varying lengths, stuck into the ball, and a pom-pom glued to the end of each using a hot glue gun. In retrospect, it would have been cleverer to space two balls along the support wire as well, to help set up a coordinate system. The porcupine is hung by a support wire in the location of the performer's head, then rotated as it is recorded simultaneously from each camera. The porcupine's colored pom-poms can be viewed virtually all the time, even as they spin around to the back, except for the occasional occlusion. Similar fixtures can be built for larger motion capture scenarios, perhaps using dolly track to carry a wire frame. It is important that the individual trackable features on the fixture not move with respect to one another; their rigidity is required for standard object tracking. The path of the calibration fixture does not particularly matter.

Camera Synchronization

The timing relationship between the different cameras must be established. Ideally, all the cameras would be gen-locked together, snapping each image at exactly the same time. Instead, there are a variety of possibilities, which can be arranged and communicated to SynthEyes during the setup process. Motion capture has a special solver mode on the Solver Panel: Individual Mocap. In this mode, the second drop-down list changes from a directional hint to a control for camera synchronization. If the cameras are all video cameras, they can be gen-locked together to all take pictures identically. This situation is called "Sync Locked." If you have a collection of video cameras without gen-lock, they will all take pictures at exactly the same (crystal-controlled) rate; however, one camera may always be taking pictures a bit before another, and a third camera may always be taking pictures at yet a different time than the other two. This option is "Crystal Sync." If you have a film camera, it might run a little more or a little less than 24 fps, not particularly synchronized to anything.
This will be referred to as "Loose Sync." In a capture setup with multiple cameras, one can always be considered to be Sync Locked and serve as a reference. If it is a video camera, other video cameras are in Crystal Sync, and any film camera would be Loose Sync. If you have a film camera that will be used in the final shot, it should be considered the sync reference, with Sync Locked, and any other cameras are in Loose Sync. The beginning and end of each camera's view of the calibration sequence and the performance sequence must be identified to the nearest frame. This can be achieved with a clapper board or electronic slate. The low-budget approach is to use a flashlight or laser-pointer flash to mark the beginning and end of the shot.

Camera Calibration Process

We're ready to start the camera calibration process, using the two shot sequences LeftCalibSeq and RightCalibSeq. You can start SynthEyes and do a File/New for the left shot, then Add Shot to bring in the second. Open both with Interlace=Yes, as unfortunately both shots are interlaced. Even though these are moving-object shots, for calibration they will be solved as moving-camera shots. You can see from these shots how the timing calibration was carried out. The shots were cropped right before the beginning of the starting flash and right after the ending flash, to make it obvious what had been done. Normally, you should crop after the starting flash and before the ending flash. On your own shots, you can use the Image Preprocessing panel's region-of-interest capability to reduce memory consumption, which helps handle long shots from multiple cameras. You should supervise-track a substantial fraction of the pom-poms in each camera view; you can then solve each camera to obtain a path of the camera appearing to orbit the stationary pom-poms. Next, we will need to set up a set of links between corresponding trackers in the two shots.
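The low-budget flash-based timing calibration described above can be automated in post: scan each camera's clip for the brightness spike of the flashlight and crop to the frames between the spikes. A minimal sketch, assuming frames are available as 8-bit grayscale NumPy arrays (the loader, threshold, and synthetic clip below are illustrative, not part of SynthEyes):

```python
# Sketch: locate start/end flash frames in a clip by mean luminance.
import numpy as np

def flash_frames(frames, sigma=4.0):
    """Return frame indices whose mean brightness spikes far above the clip's norm."""
    means = np.array([f.mean() for f in frames])
    baseline, spread = means.mean(), means.std()
    return [i for i, m in enumerate(means) if m > baseline + sigma * spread]

# Example with synthetic footage: dim frames, with flashes at frames 3 and 57.
rng = np.random.default_rng(0)
clip = [rng.integers(20, 40, (8, 8), dtype=np.uint8) for _ in range(60)]
clip[3] = np.full((8, 8), 250, dtype=np.uint8)   # start flash
clip[57] = np.full((8, 8), 250, dtype=np.uint8)  # end flash
print(flash_frames(clip))  # → [3, 57]
```

Once the flash frames are known for each camera, cropping every clip just after its start flash and just before its end flash aligns all cameras to the nearest frame, as the manual suggests.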
The links must always be from the Camera02 trackers to a Camera01 tracker. This can be achieved in at least three different ways.

Matching Plan A: Temporary Alignment

This is probably easiest, and we may offer a script to do the grunt work in the future. Begin by assigning a temporary coordinate system for each camera, using the same pom-poms and ordering for each camera. It is most useful to keep the porcupine axis upright (which is where pom-poms along the support wire would come in useful, if available); in this shot, three at the very bottom of the porcupine were suitable. With matching constraints for each camera, when you re-solve, you will obtain matching pairs of tracker points, one from each camera, located very close to one another. Now, with the Coordinate System panel open, Camera02 active, and the Top view selected, you can click on each of Camera02's tracker points, and then alt-click (or command-click) on the corresponding Camera01 point, setting up all the links. As you complete the linking, you should remove the initial temporary constraints from Camera02.

Matching Plan B: Side by Side

In this plan, you can use the Camera Perspective viewport configuration. Make Camera01 active, and in the perspective window, right-click and Lock to current camera with Camera01's imagery; then make Camera02 active for the camera view. Now the camera and perspective views show the two shots simultaneously. (Experts: you can open multiple perspective windows and configure each for a different shot.) You can now click the trackers in the camera (Camera02) view and alt-click the matching (Camera01) tracker in the perspective window, establishing the links. Reminder: the coordinate system control panel must be open for linking. This will take a little mental rotation to establish the right correspondences; the colors of the various pom-poms will help.
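Under Plan A's matching temporary coordinate systems, corresponding trackers solve to nearly coincident 3-D points, so the pairing amounts to nearest-neighbor matching. A sketch of that idea (the point dictionaries and tracker names are hypothetical; in SynthEyes itself the links are made by clicking in the Top view):

```python
# Sketch: link each Camera02 tracker to the nearest Camera01 tracker in 3-D,
# assuming both cameras were solved in matching temporary coordinate systems.
import math

def nearest_links(cam2_points, cam1_points):
    """Map each Camera02 tracker name to the closest Camera01 tracker name."""
    links = {}
    for name2, p2 in cam2_points.items():
        best = min(cam1_points, key=lambda n1: math.dist(p2, cam1_points[n1]))
        links[name2] = best
    return links

cam1 = {"Tracker1": (0.0, 0.0, 1.0), "Tracker2": (1.0, 0.2, 0.0)}
cam2 = {"Tracker51": (0.02, -0.01, 0.98), "Tracker52": (0.97, 0.21, 0.05)}
print(nearest_links(cam2, cam1))
# → {'Tracker51': 'Tracker1', 'Tracker52': 'Tracker2'}
```

This is the "grunt work" a future script might do; a robust version would also reject pairs whose distance exceeds some tolerance, since an unmatched tracker should stay unlinked.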
Matching Plan C: Cross-Link by Name

This plan is probably more trouble than it is worth for calibration, but can be an excellent choice for the actual shots. You assign names to each of the pom-poms, so that the names differ only by the first character, then use the Track/Cross-Link by Name menu item to establish the links. It is a bit of a pain to come up with different names for the pom-poms, and to do it identically for the two views, but this might be more reasonable for other calibration scenarios where it is more obvious which point is which.

Completing the Calibration

We're now ready to complete the calibration process. Change Camera02 to the Indirectly solving mode on the Solver panel. Note that the initial position of Camera01 is going to stay fixed, controlling the overall positions of all the cameras. If you want it in some particular location, you can remove the constraints from it, reset its path from the 3-D panel, then move it around to a desired location. Solve the shot, and you have two orbiting cameras remaining at a fixed relative orientation as they orbit. Run the Motion Capture Camera Calibration script from the Script menu, and the orbits will be squished down to single locations. Camera01 will be stationary at its initial location, and Camera02 will be jittering around another location, showing the stability of the offset between the two. The first frame of Camera02's position is actually an average relative position over the entire shot; it is this location we will later use. You should save this calibration scene file (porcupine.sni); it will be the starting point for tracking the real footage. The calibration script also produces a script_output.txt file in a user-specific folder that lists the calibration data.

Body and Facial Tracking Marks

Markers will make tracking faster, easier, and more accurate. On the face, markers might be little Avery dots from an office supply store, "magic marker" spots, pom-poms with rubber cement(?), mascara, or grease paint.
Note that small colored dots tend to lose their coloration in video images, especially with motion blur, so make sure there is a luminance difference. Single-pixel-sized spots are less accurate than those that are several pixels across. Markers should be placed on the face in locations that reflect the underlying musculature and the facial rigging they must drive. Be sure to include markers on comparatively stationary parts of the head. For body tracking, a typical approach is to put the performer in a black outfit (such as Under Armour) and attach table-tennis balls to the joints as tracking features. To achieve enough visibility, placing balls on both the top and bottom of the elbow may be necessary. Because the markers must be placed on the outside of the body, away from the true joint locations, character rigging will have to take this into account.

Preparation for Two-Dimensional Tracking

We're ready to begin tracking the actual performance footage. Open the final calibration scene file. Open the 3-D panel. For each camera, select the camera in the select-by-name drop-down list. Then hit Blast and answer Yes to store the field-of-view data as well. Then hit Reset twice, answering Yes to remove keys from the field-of-view track also. The result of this little dance is to take the solved camera paths (as modified by the script) and make them the initial position and orientation for each camera, with no animation (since the cameras are not actually moving). Next, replace the shot for each camera with LeftFaceSeq and RightFaceSeq. Again, these shots have been cropped based on the light flashes, which would normally be removed completely. Set the End Frame for each shot to its maximum possible value. If necessary, use an animated region of interest on the Image Preprocessing panel so that you can keep both shots in RAM simultaneously. Hit Control-A and Delete to delete all the old trackers.
Set each Lens to Known to lock the field of view, and set the solving mode of each camera to Disabled, since the cameras are fixed at their calibrated locations. We need a placeholder object to hold all the individual trackers: create a moving object, Object01, for Camera01, then a moving object, Object02, for Camera02. On the Solving Panel, set Object01 and Object02 to the Individual Mocap solving mode, and set the synchronization mode right below that.

Two-Dimensional Tracking

You can now track both shots, creating the trackers in Object01 and Object02 for the respective shots. If you don't track all the markers, at least be sure to track a given marker either in both shots or in neither, as a half-tracked marker will not help. The Hand-Held: Use Others mode may be helpful here for the rapid facial motions. Frequent keying will be necessary when the motion causes motion blur to appear and disappear (a lot of uniform light and a short shutter time will minimize this).

Linking the Shots

After completing the tracking, you must set up links. The easiest approach will probably be to set up side-by-side camera and perspective views. Again, you should link the Object02 trackers to the Object01 trackers, not the other way around. Doing the linking by name can also be helpful, since the trackers should have fairly obvious names such as Nose or Left Inner Eyebrow.

Solving

You're ready to solve, and the Solve step should be very routine, producing paths for each of the linked trackers. The final file is facetrk.sni. Afterwards, you can start checking the trackers. You can scrub through the shot in the perspective window, orbiting around the face. You can check the error curves and XYZ paths in the graph editor. By switching to the Sort by Error mode, you can sequence through the trackers starting from those with the highest error.
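The rule that every marker must be tracked in both shots or in neither can be checked mechanically before solving. A small sketch, where the tracker name sets are illustrative stand-ins for the Object01/Object02 tracker lists:

```python
# Sketch: report "half-tracked" markers, i.e. markers tracked in only
# one of the two shots, which cannot contribute to the mocap solve.
def half_tracked(object01_names, object02_names):
    """Return markers that appear in only one of the two shots, sorted."""
    return sorted(set(object01_names) ^ set(object02_names))

obj1 = {"Nose", "Chin", "Left Inner Eyebrow"}
obj2 = {"Nose", "Chin", "Right Cheek"}
print(half_tracked(obj1, obj2))
# → ['Left Inner Eyebrow', 'Right Cheek']
```

Any name reported here should either be tracked in the other shot as well, or deleted before the solve.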
Exports

Rigging

When you export a scene with individual trackers, each of them will have a key frame on each frame of the shot, animating the tracker path. It is up to you to determine a method of rigging your character to take advantage of the animated tracker paths. The method chosen will depend on your character and your animation software package. It is likely you will need some expressions (formulas) and some Look At controls. For full-body motion capture, you will need to take into account the offsets from the tracking markers (i.e., the balls) to the actual joint locations.

Modeling

You can use the calculated point locations to build models. However, the animation of the vertices will not be carried forward into the meshes you build. Instead, when you do a Convert to Mesh operation in the perspective window, the current tracker locations are frozen at that frame. If desired, you can repeat the object-building process on different frames to build up a collection of morph-target meshes.
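One simple way to account for the marker-to-joint offsets mentioned above: when balls were attached on both sides of a joint (such as the top and bottom of the elbow), the midpoint of the pair is a crude estimate of the joint center. A sketch under that assumption; the trajectories are hypothetical, and a real rig would refine the estimate per character:

```python
# Sketch: estimate a joint trajectory as the frame-by-frame midpoint of
# two opposing surface markers (e.g. balls above and below the elbow).
def joint_path(top_path, bottom_path):
    """Average two marker trajectories frame by frame."""
    return [tuple((a + b) / 2.0 for a, b in zip(p, q))
            for p, q in zip(top_path, bottom_path)]

top = [(0.0, 1.10, 0.0), (0.1, 1.12, 0.0)]     # marker above the elbow
bottom = [(0.0, 0.90, 0.0), (0.1, 0.88, 0.0)]  # marker below the elbow
print(joint_path(top, bottom))
```

The resulting path can then drive an expression or Look At constraint on the character's elbow, instead of the raw surface-marker paths.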
https://w.atwiki.jp/pathofexile12/pages/230.html
Contents: Details & Characteristics / Changes by Gem Level / How to Obtain / Enchantments / Related Links

Convocation (Minion, Spell, Duration). Mana cost: 6-13. Cooldown: 3.0 seconds.
Recalls all minions that are following you to your location, and grants them a temporary life regeneration effect.
Per 1% quality: 1% increased Skill Effect Duration; 1% increased Cooldown Recovery Speed.
Base duration is 2 seconds.
Regenerate (0.7-1.65)% of Life per second.

Details & Characteristics

Changes by Gem Level (the two unlabeled columns appear to be the required character level and the required attribute value):

Gem Level | Req. Level | Req. Attribute | Mana Cost | life_regeneration_rate_per_minute_%
1 | 24 | 58 | 6 | 42%
2 | 27 | 64 | 7 | 45%
3 | 30 | 71 | 7 | 48%
4 | 33 | 77 | 8 | 51%
5 | 36 | 83 | 8 | 54%
6 | 39 | 90 | 9 | 57%
7 | 42 | 96 | 9 | 60%
8 | 45 | 102 | 10 | 63%
9 | 48 | 109 | 10 | 66%
10 | 50 | 113 | 10 | 69%
11 | 52 | 117 | 11 | 72%
12 | 54 | 121 | 11 | 75%
13 | 56 | 125 | 11 | 78%
14 | 58 | 130 | 11 | 81%
15 | 60 | 134 | 12 | 84%
16 | 62 | 138 | 12 | 87%
17 | 64 | 142 | 12 | 90%
18 | 66 | 146 | 13 | 93%
19 | 68 | 151 | 13 | 96%
20 | 70 | 155 | 13 | 99%
21 | 72 | 159 | 13 | 102%
22 | 74 | 159 | 14 | 105%
23 | 76 | 159 | 14 | 108%
24 | 78 | 159 | 14 | 111%
25 | 80 | 159 | 15 | 114%
26 | 82 | 159 | 15 | 117%
27 | 84 | 159 | 15 | 120%
28 | 86 | 159 | 15 | 123%
29 | 88 | 159 | 16 | 126%
30 | 90 | 159 | 16 | 129%
31 | 91 | 159 | 16 | 131%
32 | 92 | 159 | 16 | 132%
33 | 93 | 159 | 16 | 134%
34 | 94 | 159 | 17 | 135%
35 | 95 | 159 | 17 | 137%
36 | 96 | 159 | 17 | 138%
37 | 97 | 159 | 17 | 140%
38 | 98 | 159 | 17 | 141%
39 | 99 | 159 | 17 | 143%
40 | 100 | 159 | 17 | 144%
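A quick sanity check on the regeneration column above, assuming it is simply the gem's per-second life regeneration times 60 (the conversion is mine; the game text states the per-second value):

```python
# Sketch: the gem grants (0.7-1.65)% of Life per second; the table's
# column is the same value expressed per minute.
def per_minute(per_second_pct):
    """Convert a per-second regeneration percentage to per minute."""
    return per_second_pct * 60

assert abs(per_minute(0.7) - 42) < 1e-9    # matches the gem level 1 row: 42%
assert abs(per_minute(1.65) - 99) < 1e-9   # matches the gem level 20 row: 99%
```

The level 1 and level 20 rows (42% and 99% per minute) agree exactly with the stated 0.7% and 1.65% per second, which supports that reading of the column.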
https://w.atwiki.jp/ddrdp/pages/1687.html
Do The Evolution (Expert)
Song: Do The Evolution | Artist: TAG feat. ERi | Folder: 2014 | Difficulty: Expert 13 | BPM: 148 | NOTES/FA(SA): 341/20
Groove radar: STREAM 63, VOLTAGE 67, AIR 25, FREEZE 53, CHAOS 40
Charts: Basic (5) / Difficult (9) / Expert (13) / Challenge (-)
Attributes: crossovers, jacks, tricky rhythm
Chart video: http://www.nicovideo.jp/watch/sm23650307
Comments:
- As with the Expert charts of other flip-side songs, the blatantly single-play look is just a facade. Sixteenth notes are relatively plentiful: eighths coming out of sixteenth-note doubles, eighths mixed with sixteenth-note triples, and the jacks right before the chorus. The eighth-note placements are about right for the level, or slightly weak, but because of the sixteenths the chart may feel rhythmically tricky. -- Anonymous (2014-03-27 00:06:05)
- Incredibly fun. -- Anonymous (2014-06-02 02:34:08)
https://w.atwiki.jp/pathofexile12/pages/356.html
Cheap Construction is a unique Viridian Jewel.
Contents: How to Obtain / Details & Characteristics / Related Links
Cheap Construction (Viridian Jewel)
10% reduced Trap Duration
Can have up to 1 additional Trap placed at a time
"Why waste the good stuff on something that's going to blow up?"
How to Obtain (besides ordinary drops, via card exchanges and the like):
Item / Quantity required
The Garish Power: 4
The Eye of the Dragon: 10
Arrogance of the Vaal: 8
Jack in the Box: 4
Related Links: English wiki: https://pathofexile.gamepedia.com/Cheap_Construction; Unique Jewel list
https://w.atwiki.jp/eventhf/pages/17.html
Contents: Overview / Option list

Overview: Modules obtained as loot from destroyed enemies, or purchased from the Merchant, have a chance of carrying an option (an extra effect). Options on enemy-dropped modules can also be negative, but unlocking More Loot From Enemies in the skill tree increases drop quantity, the quality of the attached options, and even the drop rate of large modules.

Option list (tiers, worst to best: Red-2 / Red-1 / Red / Green / Purple / Gold):
Area Of Effect Range: -50% / -30% / -20% / +25% / +60% / +100%
Cooldown Time: +100% / +50% / +20% / -10% / -25% / -50%
Damage: -50% / -30% / -20% / +20% / +50% / +100%
Damage & Cooldown Time (damage, cooldown): (-60%, -30%) / (-30%, -20%) / (-15%, -10%) / (+40%, +10%) / (+100%, +25%) / (+200%, +50%)
Defense: -50% / -30% / -20% / +20% / +50% / +100%
Drones Speed: Green +20% / Purple +50% / Gold +80%
Energy Capacity: -50% / -30% / -20% / +20% / +50% / +100%
Energy Cost: +100% / +50% / +20% / -10% / -25% / -50%
Energy Recharge Rate: -50% / -30% / -20% / +10% / +25% / +50%
Engine Power: -50% / -30% / -20% / +10% / +25% / +50%
Hit Points: -5 / -3 / -1 / +1 / +3 / +5
Projectile Speed: -50% / -30% / -20% / +10% / +25% / +50%
Projectile Speed & Damage (speed, damage): (-30%, -20%) / (-20%, -15%) / (-10%, -10%) / (+20%, -10%) / (+50%, -20%) / (+100%, -25%)
Projectile Weight: +150% / +100% / +50% / -20% / -50% / -80%
Range: -50% / -30% / -20% / +10% / +50% / +80%
Weight: +100% / +50% / +20% / -20% / -40% / -50%
https://w.atwiki.jp/gurps/pages/590.html
Kromm's Collection of Optional GURPS Rules was a website of optional rules for GURPS 3rd Edition by Sean Punch, also known as "Dr. Kromm." The site no longer exists; instead, cached copies can be found on the Internet Archive Wayback Machine.
Contents: Overview / Site structure
Overview: Kromm's Collection of Optional GURPS Rules contains optional rules for GURPS 3rd Edition, so-called "house rules." Although they were house rules of their time, some of them were later consulted in the design of GURPS 4th Edition and adopted into it.
Site structure: Because these pages are dug out of the archive, they can take a while to load; wait patiently while "Loading..." is displayed.
- Kromm's Collection of Optional GURPS Rules: the cached copy found on the Internet Archive Wayback Machine.
- Alternative IQ Calculation: a house rule for determining a character's IQ.
- Attribute Recognition
- Extended Magery
- Generalizing the Armor-Piercing Enhancement
- Magical Ammunition
- New Spells
- Quick n' Dirty Addendum for Naval Combat at TL3-
- The Silver Sword of Saint Allannon
- Spell Research
- Weapon Master Expanded
- Cinematic Fast Learning
Of these, Alternative IQ Calculation has a Japanese translation, "もうひとつの知力計算法," posted by ELIZA with permission from Dr. Kromm.