Stabilization

In this section, we'll go into SynthEyes' stabilization system in depth, and describe some of the nifty things that can be done with it. If we wanted, we could have a single button "Stabilize this!" that would quickly and reliably do a bad job almost all the time. If that's what you're looking for, there are some other software packages that will be happy to oblige. In SynthEyes, we have provided a rich toolset to get outstanding results in a wide variety of situations.

You might wonder why we've buried such a wonderful and significant capability quite so far into the manual. The answer is simple: in the hopes that you've actually read some of the manual, because effectively using the stabilizer will require that you know a number of SynthEyes concepts, and how to use the SynthEyes tracking capabilities. If this is the first section of the manual that you're reading, great, thanks for reading this, but you'll probably need to check out some of the other sections too. At the least, you have to read the Stabilization quick-start. Also, be sure to check the web site for the latest tutorials on stabilization. We apologize in advance for some of the rant content of the following sections, but it's really in your best interest!

Why SynthEyes Has a Stabilizer

The simple and ordinary need for stabilization arises when you are presented with a shot that is bouncing all over the place, and you need to clean it up into a solid, professional-looking shot. That may be all that is needed, or you might need to track it and add 3-D effects also. Moving-camera shots can be challenging to shoot, so having software stabilization can make life easier. Or, you may have some film scans which are to be converted to HD or SD TV resolution, and effects added. People of all skill levels have been using a variety of ad-hoc approaches to address these tasks, sometimes using software designed for this, and sometimes using or abusing compositing software.
Sometimes, presumably, this all goes well. But many times it does not: a variety of problem shots have been sent to SynthEyes tech support which are just plain bad. You can look at them and see they have been stabilized, and not in a good way. We have developed the SynthEyes stabilizer not only to stabilize shots, but to try to ensure that it is done the right way.

How NOT to Stabilize

Though it is relatively easy to rig up a node-based compositor to shift footage back and forth to cancel out a tracked motion, this creates a fundamental problem: most imaging software, including you, expects the optic center of an image to fall at the center of that image. Otherwise, it looks weird; the fundamental camera geometry is broken. The optic center might also be called the vanishing point, center of perspective, back focal point, or center of lens distortion.

For example, think of shooting some footage out of the front of your car as you drive down a highway. Now cut off the right quarter of all the images and look at the sequence. It will be 4:3 footage, but it's going to look strange; the optic center is going to be off to the side. If you combine off-center footage with additional rendered elements, they will have the optic axis at their center, and combined with the different center of the original footage, they will look even worse.

So when you stabilize by translating an image in 2-D (and usually zooming a little), you've now got an optic center moving all over the place. Right at the point you've stabilized, the image looks fine, but the corners will be flying all over the place. It's a very strange effect, it looks funny, and you can't track it right. If you don't know what it is, you'll look at it and think it looks funny, but not know what has hit you.

Recommendation: if you are going to be adding effects to a shot, you should ask to be the one to stabilize or pan/scan it also. We've given you the tool to do it well, and avoid mishap.
That's always better than having someone else mangle it, and having to explain later why the shot has problems, or why you really need the original un-stabilized source by yesterday.

In-Camera Stabilization

Many cameras now feature built-in stabilization, using a variety of operating principles. These stabilizers, while fine for shooting baby's first steps, may not be fine at all for visual effects work. Electronic stabilization uses additional rows and columns of pixels, then shifts the image in 2-D, just like the simple but flawed 2-D compositing approach. These are clearly problematic. One type of optical stabilizer apparently works by putting the camera imaging CCD chip on a little platform with motors, zipping the camera chip around rapidly so it catches the right photons. As amazing as this is, it is clearly just the 2-D compositing approach. Another optical stabilizer type adds a small moving lens in the middle of the collection of simple lenses comprising the overall zoom lens. Most likely, the result is equivalent to a 2-D shift in the image plane. A third type uses prismatic elements at the front of the lens. This is more likely to be equivalent to re-aiming the camera, and thus less hazardous to the image geometry. Doubtless additional types are in use and will appear, and it is difficult to know their exact properties. Some stabilizers seem to have a tendency to intermittently jump when confronted with smooth motions. One mitigating factor for in-camera stabilizers, especially electronic, is that the total amount of offset they can accommodate is small; the less they can correct, the less they can mess up.

Recommendation: it is probably safest to keep camera stabilization off when possible, and keep the shutter time (angle) short to avoid blur, except when the amount of light is limited. Electronic stabilizers have trouble with limited light, so that type might have to be off anyway.
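Both the 2-D compositing approach and most in-camera stabilizers above amount to a pure 2-D shift of the frame. A toy numerical sketch of why that breaks the camera geometry (the frame size and jitter values here are invented for illustration; this is not SynthEyes code):

```python
# Toy illustration of why 2-D shift stabilization breaks camera geometry.
# Frame size and per-frame jitter values are invented for the example.
width, height = 1920, 1080
center = (width / 2, height / 2)   # where imaging software assumes the optic axis is

# Tracked 2-D jitter of the stabilized point, in pixels, per frame.
jitter = [(0, 0), (14, -9), (-22, 5), (31, 18)]

for dx, dy in jitter:
    # Shifting the frame by (-dx, -dy) pins the tracked point in place...
    optic_center = (center[0] - dx, center[1] - dy)
    # ...but the true optic center now wanders by the full jitter amount.
    print(f"optic center at {optic_center}, off-center by ({-dx}, {-dy}) pixels")
```

Any CG element rendered for these frames will assume the optic axis is at (960, 540) on every frame, while the shifted footage has it somewhere else on every frame; that mismatch is the "corners flying all over the place" effect.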
3-D Stabilization

To stabilize correctly, you need 3-D stabilization that performs "keystone correction" (like a projector does), re-imaging the source at an angle. In effect, your source image is projected onto a screen, then re-shot by a new camera looking in a somewhat different direction with a smaller field of view. Using a new camera keeps the optic center at the center of the image. In order to do this correctly, you always have to know the field of view of the original camera. Fortunately, SynthEyes can tell us that.

Stabilization Concepts

Point of Interest (POI). The point of interest is the fixed point that is being stabilized. If you are pegging a shot, the point of interest is the one point on the image that never moves.

POI Deltas (Adjust tab). These values allow you to intentionally move the POI around, either to help reduce the amount of zoom required, or to achieve a particular framing effect. If you create a rotation, the image rotates around the POI.

Stabilization Track. This is roughly the path the POI took; it is a direction in 3-D space, described by pan/tilt/roll angles, basically where the camera (POI) was looking (except that the POI isn't necessarily at the center of the image).

Reference Track. This is the path in 3-D we want the POI to take. If the shot is pegged, then this track is just a single set of values, repeated for the duration of the shot.

Separate Field of View Track. The image preparation system has its own field of view track. The image prep's FOV will be larger than the main FOV, because the image prep system sees the entire input image, while the main tracking and solving works only on the smaller stabilized sub-window output by image prep. Note that an image prep FOV is needed only for stabilization, not for pixel-level adjustments, downsampling, etc. The Get Solver FOV button transfers the main FOV track to the stabilizer.

Separate Distortion Track. Similarly, there is a separate lens distortion track.
The image prep's distortion can be animated, while the main distortion cannot. Either the image prep distortion or the main distortion should always be zero; they should never both be nonzero simultaneously. The Get Solver Distort button transfers the main distortion value (from solving or the Lens-panel alignment lines) to the stabilizer, and begs you to let it clear the main distortion value afterwards.

Stabilization Zoom. The output window can only be a portion of the size of the input image. The more jiggle, the smaller the output portion must be, to be sure that it does not run off the edge of the input (see the Padded mode of the image prep window to see this in action). The zoom factor reflects the ratio of the input and output sizes, and also what is happening to the size of a pixel. At a zoom ratio of 1, the input and output windows and pixels are the same size. At a zoom ratio of 2, the output is half the size of the input, and each incoming pixel has to be stretched to become two pixels in the output, which will look fairly blurry. Accordingly, you want to keep the zoom value down in the 1.1-1.3 region. After an Auto-Scale, you can see the required zoom on the Adjust panel.

Re-sampling. There's nothing that says we have to produce the same size image going out as coming in. The Output tab lets you create a different output format, though you will have to consider what effect it has on image quality. Re-sampling 3K down to HD sounds good; but re-sampling DV up to HD will come out blurry, because the original picture detail is not there.

Interpolation Filter. SynthEyes has to create new pixels "in-between" the existing ones. It can do so with different kinds of filtering to prevent aliasing, ranging from the default Bi-Linear to the most complex 3-Lanczos. The bi-linear filter is fastest but produces the softest image. The Lanczos filters take longer, but are sharper, although this can be a drawback if the image is noisy.

Tracker Paths.
One or more trackers are combined to form the stabilization track. The trackers' 2-D paths follow the original footage. After stabilization, they will not match the new stabilized footage. There is a button, Apply to Trkers, that adjusts the tracker paths to match the new footage; but again, they then match that particular footage, and they must be restored to match the original footage (with Remove f/Trkers) before making any later changes to the stabilization. If you mess up, you either have to return to an earlier saved file, or re-track.

Overall Process

We're ready to walk through the stabilization process. You may want to refer to the Image Preprocessor Reference.

· Track the features required for stabilization: either a full auto-track, supervised tracking of particular features to be stabilized, or a combination.
· If possible, solve the shot, either for full 3-D or as a tripod shot, even if it is not truly nodal. The resulting 3-D point locations will make the stabilization more accurate, and it is the best way to get an accurate field of view.
· If you have not solved the shot, manually set the Lens FOV on the Image Preprocessor's Lens tab (not the main Lens panel) to the best available value. If you do set up the main lens FOV, you can import it to the Lens tab.
· On the Stabilization tab, select a stabilization mode for translation and/or rotation. This will build the stabilization track automatically if there isn't one already (as if the Get Tracks button was hit), and import the lens FOV if the shot is solved.
· Adjust the frequency spinner as desired.
· Hit the Auto-Scale button to find the required stabilization zoom.
· Check the zoom on the Adjust tab; using the Padded view, make any additional adjustments to the stabilization activity to minimize the required zoom, or achieve the desired shot framing.
· Output the shot. If only stabilized footage is required, you are done.
· Update the scene to use the new imagery, and either re-track or update the trackers to account for the stabilization.
· Get a final 3-D or tripod solve and export to your animation or compositing package for further effects work.

There are two main kinds of shots, and stabilization for them: shots focusing on a subject, which is to remain in the frame, and traveling shots, where the content of the image changes as new features are revealed.

Stabilizing on a Subject

Often a shot focuses on a single subject, which we want to stabilize in the frame, despite the shaky motion of the camera. Example shots of this type include:

· The camera person walking towards a mark on the ground, to be turned into a cliff edge for a reveal.
· A job site to receive a new building, shot from a helicopter orbiting overhead.
· A camera car driving by a house, focusing on the house.

To stabilize these shots, you will identify or create several trackers in the vicinity of the subject, and with them selected, select the Peg mode on the Translation list on the Stabilize tab. This will cause the point of interest to remain stationary in the image for the duration of the shot. You may also stabilize and peg the image rotation. Almost always, you will want to stabilize rotation; it may or may not be pegged. You may find it helpful to animate the stabilized position of the point of interest, in order to minimize the zoom required (see below), and also to enliven a shot somewhat. Some car commercials are shot from a rig that shows both the car and the surrounding countryside as the car drives; they look a bit surreal because the car is completely stationary, having been pegged exactly in place. No real camera rig is that perfect!

Stabilizing a Traveling Shot

Other shots do not have a single subject, but continue to show new imagery.
For example:

· A camera car, with the camera facing straight ahead.
· A forward-facing camera in a helicopter flying over terrain.
· A camera moving around the corner of a house to reveal the backyard behind it.

In such shots, there is no single feature to stabilize. Select the Filter mode for the stabilization of translation, and maybe rotation. The result is similar to the stabilization done in-camera, though in SynthEyes you can control it and have keystone correction. When the stabilizer is filtering, the Cut Frequency spinner is active. Any vibratory motion below that frequency (in cycles per second) is preserved, and vibratory motion above that frequency is greatly reduced or eliminated. You should adjust the spinner based on the type of motion present, and the degree of stabilization required. A camera mounted on a car with a rigid mount, such as a StickyPod, will have only higher-frequency residual vibration, and a larger value can be used. A hand-held shot will often need a frequency around 0.5 Hz to be smooth.

Note: when using filter-mode stabilization, the length of the shot matters. If the shot is too short, it is not possible to accurately control the frequency and distinguish between vibration and the desired motion, especially at the beginning and end of the shot. Using a longer version of the take will allow more control, even if much of the stabilized shot is cut after stabilization.

Minimizing Zoom

The more zoom required to stabilize a shot, the less image quality will result, which is clearly bad. Can we minimize the zoom, and maximize image quality? Of course, and SynthEyes provides the controllability to do so. Stabilizing a shot has considerable flexibility: the shot can be stable in lots of different ways, with different amounts of zoom required. We want a shot that everyone agrees is stable, but minimizes the effect on quality.
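The filter-mode behavior described earlier amounts to low-pass filtering the stabilization track: motion below the cut frequency becomes the reference track, and the high-frequency remainder is what the stabilizer removes. A minimal sketch with an invented pan track and a simple moving-average filter (SynthEyes' actual filter is not specified here):

```python
import math

frame_rate = 24.0          # frames per second
cut_hz = 0.5               # cut frequency typical for a hand-held shot

# Invented pan track: a slow 0.1 Hz camera move plus 4 Hz hand-held jitter.
n = 240
track = [math.sin(2 * math.pi * 0.1 * t / frame_rate)
         + 0.2 * math.sin(2 * math.pi * 4.0 * t / frame_rate)
         for t in range(n)]

def smooth(xs, w):
    """Moving-average low-pass filter, window w frames (a stand-in for the real filter)."""
    half = w // 2
    out = []
    for i in range(len(xs)):
        seg = xs[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

# A window spanning one period of the cut frequency passes slower motion
# and strongly attenuates faster motion.
reference = smooth(track, int(frame_rate / cut_hz))
jitter = [a - b for a, b in zip(track, reference)]   # what stabilization removes

# The split is least trustworthy near the ends of the shot, where the filter
# window runs off the data: the reason the text recommends a longer take.
```

With these numbers, the residual `jitter` is dominated by the 4 Hz component while the slow 0.1 Hz move survives in `reference`, which is exactly the preserved/removed split the Cut Frequency spinner controls.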
Fortunately, we have the benefit of foresight, so we can correct a problem in the middle of a shot, anticipating it long before it occurs, and provide an apparently stable result.

Animating POI

The basic technique is to animate the position of the point of interest within the frame. If the shot bumps left suddenly, there are fewer pixels available on the left side of the point of interest to maintain its relative position in the output image, and a higher zoom will be required. If we have already moved the point of interest to the left, fewer pixels are required, and less zoom is required.

Earlier, in the Stabilization Quick Start, we remarked that the 28% zoom factor obtained by animating the rotation could be reduced further. We'll continue that example here to show how. Re-do the quick start to completion, go to frame 178, with the Adjust tab open, in Padded display mode, with the make-key button turned on. From the display, you can see that the red output-area rectangle is almost at the edge of the image. Grab the purple point-of-interest crosshair, and drag the red rectangle up into the middle of the image. Now everything is a lot safer. If you switch to the Stabilize tab and hit Auto-Scale, the red rectangle enlarges; there is less zoom, as the Adjust tab shows. Only 15% zoom is now required. By dragging the POI/red rectangle, we reduced zoom.

You can see that what we did amounted to moving the POI. Hit Undo twice, and switch to the Final view. Drag the POI down to the left, until the Delta U/V values are approximately 0.045 and -0.035. Switch back to the Padded view, and you'll see you've done the same thing as before. The advantage of the Padded view is that you can more easily see what you are doing, though you can get a similar effect in the Final view by increasing the margin to about 0.25, where you can see the dashed outline of the source image.
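The zoom savings in this walkthrough follow from simple geometry. A 1-D sketch (invented offsets and a simplified model, not SynthEyes' actual computation): if the worst-case offset of the output window from the frame center is a fraction e of the half-width, the window can be at most (1 - e) half-widths wide, so the required zoom is 1/(1 - e); keying the POI toward the bumps shrinks the worst-case residual offset, and the zoom with it.

```python
def required_zoom(offsets):
    """Zoom needed so an output window centered on the POI stays inside the frame.

    Offsets are fractions of the frame half-width (simplified 1-D model)."""
    e = max(abs(x) for x in offsets)
    return 1.0 / (1.0 - e)

raw = [0.00, 0.12, -0.22, 0.09]        # invented per-frame POI offsets
print(required_zoom(raw))              # the worst frame dominates, ~1.28

# Animating the POI (Delta U/V keys) toward the bumps leaves only the
# residual to be absorbed by zoom.
keys = [0.00, 0.05, -0.12, 0.05]
residual = [x - k for x, k in zip(raw, keys)]
print(required_zoom(residual))         # ~1.11
```

This also shows the trade-off discussed next: the keys themselves re-introduce motion, so taken to the extreme (keys equal to the raw offsets) you would need no zoom at all, and have removed no shake at all.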
If you close the Image Prep dialog and play the shot, you will see the effect of moving the POI: a very stable shot, though the apparent subject changes over time. It can make for a more interesting shot and more creative decisions.

Too Much of a Good Thing?

To be most useful, you can scrub through your shot and look for the worst frame, where the output rectangle has the most missing, and adjust the POI position on that frame. After you do that, there will be some other frame which is now the worst frame. You can go and adjust that too, if you want. As you do this, the zoom required will get less and less. There is a downside: as you do this, you are creating more of the shakiness you are trying to get rid of. If you keep going, you could get back to no zoom required, but all the original shakiness, which is of course senseless. Usually, you will only want to create two or three keys at most, unless the shot is very long. But exactly where you stop is a creative decision based on the allowable shakiness and quality impact.

Auto-Scale Capabilities

The Auto-Scale button can automate the adjustment process for you, as controlled by the Animate listbox and Maximum auto-zoom settings. With Animate set to Neither, Auto-Scale will pick the smallest zoom required to avoid missing pieces on the output image sequence, up to the specified maximum value. If that maximum is reached, there will be missing sections. If you change the Animate setting to Translate, though, Auto-Scale will automatically add delta U/V keys, animating the POI position, any time the zoom would have to exceed the maximum. Rewind to the beginning of the shot, and control-right-click the Delta-U spinner, clearing all the position keys. Change the Animate setting to Translate, reduce the Maximum auto-zoom to 1.1, then click Auto-Scale. SynthEyes adds several keys to achieve the maximum 10% zoom.
If you play back the sequence, you will see the shot shifting around a bit; 10% is probably too low, given the amount of jitter in the shot to begin with. The Auto-Scale button can also animate the zoom track, if enabled with the Animate setting. The result is equivalent to a zooming camera lens, and you must be sure to note that in the main lens panel setting if you will 3-D solve the shot later. This is probably only useful when there is a lot of resolution available to begin with, and the point of interest approaches the boundary of the image at the end of the shot. Keep in mind that the Auto-Scale functionality is relatively simple. By considering the purpose of the shot, as well as the nature of any problems in it, you should often be able to do better.

Tweaking the Point of Interest

This is different than moving it! When the selected trackers are combined to form the single overall stabilization track, SynthEyes examines the weight of each tracker, as controlled from the main Tracker panel. This allows you to shift the position of the point of interest (POI) within a group of trackers, which can be handy. Suppose you want to stabilize at the location of a single tracker, but you want to stabilize the rotation as well. With a single tracker, rotation cannot be stabilized. If you select two trackers, you can stabilize the rotation, but without further action, the point of interest will be sitting between the two trackers, not at the location of the one you care about. To fix this, select the desired POI tracker in the main viewport, and increase its weight value to the maximum (currently 10). Then, select the other tracker(s), and reduce the weight to the minimum (0.050). This will put the POI very close to your main tracker. If you play with the weights a bit, you can make the POI go anywhere within a polygon formed by the trackers.
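In 2-D terms, the effect of the weights is just a weighted average of the tracker positions. A sketch with invented normalized positions and the weight limits quoted above (a simplification; as noted next, the real POI is a 3-D location):

```python
def weighted_poi(points, weights):
    """POI as the weight-weighted average of 2-D tracker positions (simplified)."""
    total = sum(weights)
    x = sum(w * px for (px, _), w in zip(points, weights)) / total
    y = sum(w * py for (_, py), w in zip(points, weights)) / total
    return (x, y)

trackers = [(0.30, 0.50), (0.70, 0.50)]       # invented normalized positions

print(weighted_poi(trackers, [1.0, 1.0]))     # equal weights: midpoint of the two
print(weighted_poi(trackers, [10.0, 0.050]))  # max vs. min weight: POI hugs tracker 1
```

With the 10.0 vs. 0.050 weights, the POI lands within about 0.002 of the first tracker, which is the "very close to your main tracker" behavior described above.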
But do not be surprised if the resulting POI seems to be sliding on the image: the POI is really a 3-D location, and usually the combination of the trackers will not be on the surface (unless they are all in the same plane). If this is a problem for what you want to do, you should create a supervised tracker at the desired POI location and use that instead. If you have adjusted the weights, and later want to re-solve the scene, you should set the weights back to 1.0 before solving. (Select them all, then set the weight to 1.)

Resampling and Film to HDTV Pan/Scan Workflow

If you are working with filmed footage, often you will need to pull the actual usable area from the footage: the scan is probably roughly 4:3, but the desired final output is 16:9 or 1.85 or even 2.35, so only part of the filmed image will be used. A director may select the desired portion to achieve a desired framing for the shot. Part of the image may be vignetted and unusable. The image must be cropped to pull out the usable portion of the image with the correct aspect ratio. This cropping operation can be performed as the film is scanned, so that only the desired framing is scanned; clearly this minimizes the scan time and disk storage. But there is an important reason to scan the entire frame instead: the optic center must remain at the center of the image. If the scanning is done without paying attention, it may be off center, and almost certainly will be if the framing is driven by directorial considerations. If the entire frame is scanned, or at least most of it, then you can use SynthEyes's stabilization software to perform keystone correction, and produce properly centered footage. As a secondary benefit, you can do pan and scan operations to stabilize the shots, or achieve moving framing that would be difficult to do during scanning. With the more complete scan, the final decision can be deferred or changed later in production.
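The keystone correction used here and in 3-D stabilization is, mathematically, a rotation homography: each pixel is back-projected through the original camera, rotated into the new camera's orientation, and re-projected with a smaller field of view, keeping the optic center at the image center. A sketch under an assumed simple pinhole model, with invented FOV and rotation values (SynthEyes' internal implementation is not documented here):

```python
import numpy as np

def intrinsics(fov_deg, width, height):
    """Pinhole intrinsic matrix with the optic center at the image center."""
    f = (width / 2) / np.tan(np.radians(fov_deg) / 2)
    return np.array([[f, 0, width / 2],
                     [0, f, height / 2],
                     [0, 0, 1.0]])

width, height = 1920, 1080
K_src = intrinsics(60.0, width, height)   # original camera (FOV known from solving)
K_dst = intrinsics(52.0, width, height)   # re-shoot camera: smaller FOV ~ the stabilization zoom

# Small corrective rotation (a 1-degree pan) supplied by the stabilizer.
a = np.radians(1.0)
R = np.array([[np.cos(a), 0, np.sin(a)],
              [0,         1, 0],
              [-np.sin(a), 0, np.cos(a)]])

# Rotation-only homography: back-project, rotate, re-project.
H = K_dst @ R @ np.linalg.inv(K_src)

# Where does the source image center land in the stabilized output?
p = H @ np.array([width / 2, height / 2, 1.0])
print(p[:2] / p[2])
```

With these invented numbers the source center lands about 34 pixels right of the output center, and off-center pixels move by slightly different amounts: the keystone warp that a plain 2-D shift cannot produce.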
The Output tab on the Image Preparation dialog controls resampling, allowing you to output a different image format than that coming in. The incoming resolution should be at least as large as the output resolution, for example, a 3K 4:3 film scan for a 16:9 HDTV image at 1920x1080p. This will allow enough latitude to pull out smaller subimages. If you are resampling from a larger resolution to a smaller one, you should use the Blur setting to minimize aliasing effects (moiré bands). You should consider how much of the source image you are using before blurring. If you have a zoom factor of 2 into a 3K shot, the effective pixel count being used is only 1.5K, so you probably would not blur if you are producing 1920x1080p HD. Due to the nature of SynthEyes' integrated image preparation system, the re-sampling, keystone correction, and lens un-distortion all occur simultaneously in the same pass. This is a vastly improved situation compared to a typical node-based compositor, where the image will be resampled and degraded at each stage.

Changing Shots, and Creating Motion in Stills

You can use the stabilization system to adjust the framing of shots in post-production, or to create motion from still images (the Ken Burns effect). To use the stabilizing engine you have to be stabilizing, so simply animating the Delta controls will not let you pan and scan, without the following trick: delete any trackers, click the Get Tracks button, and then turn on the Translation channel of the stabilizer. This turns on the stabilizer, making the Delta channels work, without doing any actual stabilization. You must enter a reasonable estimate of the lens field of view. If it is a moving-camera or tripod-mode shot, you can track it first to determine the field of view. Remember to delete the trackers before beginning the mock stabilization. If you are working from a still, you can use the single-frame alignment tool to determine the field of view.
You will need to use a text editor to create an IFL file that contains the desired number of copies of your original file name.

Stabilization and Interlacing

Interlaced footage presents special problems for stabilization, because jitter in the positioning between the two fields is equivalent to jitter in camera position, which we're trying to remove. Because the two different fields are taken at different points in time (1/30th or 1/25th of a second apart, regardless of shutter time), it is impossible for man or machine to determine what exactly happened, in general. Stabilizing interlaced footage will sacrifice a factor of two in vertical resolution.

Best approach: if at all possible, shoot progressive instead of interlaced footage. This is a good rule whenever you expect to add effects to a shot.

Fallback approach: stabilize slow-moving interlaced shots as if they were progressive. Stabilize rapidly-moving interlaced shots as interlaced.

To stabilize interlaced shots, SynthEyes stabilizes each sequence of fields independently. Note that within the image preparation subsystem, some animated tracks are animated by the field, and some are animated by the frame.

Frame: levels, color/hue, distortion/scale, ROI
Field: FOV, cut frequency, Delta U/V, Delta Rot, Delta Zoom

When you are animating a frame-animated item on an interlaced shot, if you set a key on one field (say 10), you will see the same key on the other field (say 11). This simplifies the situation, at least on these items, if you change a shot from interlaced to progressive or "yes" mode, or back.

Avoid Slowdowns Due to Missing Keyframes

While you are working on stabilizing a shot, you will be re-fetching frames from the source imagery fairly often, especially when you scrub through a shot to check the stabilization. If the source imagery is a QuickTime or AVI that does not have many (or any!)
keyframes, random access into the shot will be slow, since the codec will have to decompress all the frames from the last keyframe to get to the one that is needed. This can require repeatedly decompressing the entire shot. It is not a SynthEyes problem, or even specific to stabilizing, but is a problem with the choice of codec settings. If this happens (and it is not uncommon), you should save the movie as an image sequence (with no stabilization), and Shot/Change Shot Images to that version instead. Alternatively, you may be able to assess the situation using the Padded display, turning the update mode to Neither, then scrubbing through the shot.

After Stabilizing

Once you've finished stabilizing the shot, you should write it back out to disk using the Save Sequence button on the Output tab. It is also possible to save the sequence through the Perspective window's Preview Movie capability. Each method has its advantages, but using the Save Sequence button will generally be better for this purpose: it is faster; does less to the images; allows you to write the 16-bit version; and allows you to write the alpha channel. However, it does not overlay inserted test objects like the Preview Movie does. You can use the stabilized footage you write for downstream applications such as 3dsmax and Maya. But before you export the camera path and trackers from SynthEyes, you have a little more work to do. The tracker and camera paths in SynthEyes correspond to the original footage, not the stabilized footage, and they are substantially different. Once you close the Image Preparation dialog, you'll see that the trackers are doing one thing, and the now-stable image doing something else. You should always save the stabilizing SynthEyes scene file at this point, for future use in the event of changes. You can then do a File/New, open the stabilized footage, track it, then export the 3-D scene matching the stabilized footage.
But… if you have already done a full 3-D track on the original footage, you can save time. Click the Apply to Trkers button on the Output tab. This will apply the stabilization data to the existing trackers. When you close the Image Prep dialog, the 2-D tracker locations will line up correctly, though the 3-D X's will not yet. Go to the solver panel, and re-solve the shot (Go!), and the 3-D positions and camera path will line up correctly again. (If you really wanted to, you could probably use Seed Points mode to speed up this re-solve.)

Important: if you later decide you want to change the stabilization parameters without re-tracking, you must not have cleared the stabilizer. Hit the Remove f/Trkers button BEFORE making any changes, to get back to the original tracking data. Otherwise, if you Apply twice, or Remove after changes, you will just create a mess. Also, the Blip data is not changed by the Apply or Remove buttons, and it is not possible to Peel any blip trails, which correspond to the original image coordinates, after completing stabilization and hitting Apply. So you must either do all peeling first; remove, peel, and reapply the stabilization; or retrack later if necessary.

Flexible Workflows

Suppose you have written out a stabilized shot, and adjusted the tracker positions to match the new shot. You can solve the shot, export it, and play around with it in general. If you need to, you can pop the stabilization back off the trackers, adjust the stabilization, fix the trackers back up, and re-solve, all without going back to earlier scene files and thus losing later work. That's the kind of flexibility we like. There's only one slight drawback: each time you save and close the file, then reopen it, you're going to have to wait while the image prep system recomputes the stabilized image. That might be only a few seconds, or it might be quite a while for a long film shot.
It's pretty stupid, when you consider that you've already written the complete stabilized shot to disk!

Approach 1: do a Shot/Change Shot Images to the saved stabilized shot, and reset the image prep system from the Preset Manager. This will let you work quickly from the saved version, but you must be sure to save this scene file separately, in case you need to change the stabilization later for some reason. And of course, going back to that saved file would mean losing later work.

Approach 2: create an image prep preset ("stab") for the full stabilizer settings. Create another image prep preset ("quick"), and reset it. Do the Shot/Change Shot Images. Now you've got it both ways: fast loading, and if you need to go back and change the stabilization, switch back to the first ("stab") preset, remove the stabilization from the trackers, change the shot imagery back to the original footage, then make your stabilization changes. You'll then need to re-write the new stabilized footage, re-apply it to the trackers, etc.

Approach 1 is clearly simpler and should suffice for most simple situations. But if you need the flexibility, Approach 2 will give it to you.
https://w.atwiki.jp/kobapan/pages/240.html
install node.js
Visit Node.js, or:

$ wget http://nodejs.org/dist/v0.12.0/node-v0.12.0.tar.gz
$ tar xf node-v0.12.0.tar.gz
$ cd node-v0.12.0
$ ./configure
$ make
$ sudo make install

update npm

$ sudo npm install npm -g

install Grunt CLI

$ sudo npm install -g grunt-cli

install grunt-bake

$ cd path/to/your/project
$ npm install grunt-bake --save-dev

create Gruntfile.js in your project root:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    bake: {
      your_target: {
        files: {
          // files: to: from, ...
          "index.html": "app/index.html",
          "mobile.html": "app/mobile.html"
        }
      }
    }
  });

  // Load the plugin
  grunt.loadNpmTasks("grunt-bake");

  // Default task(s).
  grunt.registerTask("default", ["bake"]);
};

create app/index.html:

<!--(bake includes/head.html title="おらホームページ")-->
<!--(bake includes/foot.html)-->

create app/includes/head.html:

<html>
<head>
<title>{{title}}</title>
</head>
<body>
<!--(bake contents.html)-->

create app/includes/foot.html:

</body>
</html>

create app/includes/contents.html:

<div id="container">hello</div>

run grunt:

$ grunt

and this bake task will create index.html:

<html>
<head>
<title>おらホームページ</title>
</head>
<body>
<div id="container">hello</div>
</body>
</html>

References:
Getting started - Grunt: The JavaScript Task Runner
MathiasPaumgarten/grunt-bake

grunt-rsync
jedrichards/grunt-rsync

$ npm install grunt-rsync

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    bake: {
      // bake config
    },
    rsync: {
      options: {
        exclude: ["app", "node_modules", "README.txt", "package.json", "Gruntfile.js", ".htaccess"],
        recursive: true,
        syncDest: false  // do not delete files on the destination that are missing from the source
      },
      dist: {
        options: {
          src: "./",             // source directory
          dest: "~/www",         // destination directory
          host: "username@host"  // destination host
          // make sure your private key is usable: ~/.ssh/id_rsa
        }
      }
    }
  });

  grunt.loadNpmTasks("grunt-rsync");

  // Default task(s).
  grunt.registerTask("default", ["bake", "rsync"]);
};
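The key thing the bake task does above is substitute the attributes given in the `<!--(bake ...)-->` comment (such as `title`) into the `{{...}}` placeholders of the included file. As a rough illustration of just that substitution step (a simplified sketch, not grunt-bake's actual implementation; the `bake` function here is hypothetical):

```javascript
// Minimal sketch of placeholder substitution: every {{name}} in the
// template is replaced by the matching attribute value, if one exists.
function bake(template, attrs) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in attrs ? attrs[name] : match);
}

// The head.html include from above, filled in with the title attribute:
console.log(bake("<title>{{title}}</title>", { title: "おらホームページ" }));
// → <title>おらホームページ</title>
```

Placeholders with no matching attribute are left untouched, which matches how a missing attribute simply fails to fill the slot.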
https://w.atwiki.jp/220yearsafterlove/pages/51.html
http://20yearsafterlove.blog111.fc2.com/blog-entry-305.html
https://w.atwiki.jp/mrfrtech/pages/53.html
Market Scenario

In its research report, Market Research Future (MRFR) asserts that the AI in Construction Market 2020 is slated to grow exponentially over the review period, securing a considerable market valuation of USD 2.01 billion and a healthy 35% CAGR over the review period. The novel coronavirus has actually caused the AI in construction market to open new avenues for firms that are on the lookout for solutions that are reliable, efficiently managed, scalable, and subscription-based, so that they can remain more focused on their core business. The AI in construction market is bearing less impact from the COVID-19 outbreak than most other segments of the tech world. In a nutshell, the COVID-19 impact on managed services has been fruitful, with market growth stronger than before. Given the prevalent lockdown situation, managed services vendors are now investing heavily in remote-centric worker solutions, which can make the market highly resilient in the coming years, even as the world rushes to achieve a COVID-19 breakthrough.

Request a Free Sample @ https://www.marketresearchfuture.com/sample_request/6035

Segmentation

The AI in construction market is differentiated by component, technology, organization size, deployment, stage, and application. On the basis of stage, the market is segmented into construction, pre-construction, and post-construction. Based on component, the AI in construction market is bifurcated into solutions and services. The solution segment is categorized as demand forecasting, virtual assistant, revenue estimation, design planning, predictive maintenance, and others. The service sub-segment comprises implementation services, training and consulting, and other support services. In terms of technology, the market is segregated into machine learning and deep learning, neural networks, and natural language processing (NLP). Based on deployment, the market is divided into on-cloud and on-premises.
Based on organization size, the market is bifurcated into large enterprises and small and medium enterprises (SMEs). On the basis of application, the market is categorized as project management, schedule management, risk management, equipment management, building information management, and supply chain management.

Competitive Outlook

The major market players operating in the global market, as identified by MRFR, are Oracle Corporation (U.S.), IBM Corporation (U.S.), SAP SE (Germany), Alice Technologies (U.S.), Microsoft Corporation (U.S.), Autodesk (U.S.), Aurora Computer Services (U.K.), eSUB (U.S.), Smartvid.io (U.S.), and Building System Planning (U.S.). Some other market players involved in the AI in construction market are Jaroop, Deepomatic, Lili.Ai, Predii, Assignar, Coins Global, Beyond Limits, Doxel, Askporter, Bentley Systems, Plangrid, and Renoworks Software.

Regional Analysis

The geographical overview of the global market has been analyzed across four major regions: Asia Pacific, North America, Europe, and the rest of the world. In the building industry, North America is believed to see substantial growth in AI, with the U.S. and Canada being the sector's leading countries. Regional domination is due to increased investment by companies such as IBM Corporation and Oracle Corporation, which invest directly in research and development of technologies such as neural networks and machine learning. However, Asia Pacific is also expected to experience a strong market growth rate. The leading countries in this field are China, Japan, South Korea, and India. The market growth is due to rising demand in the region to improve smart city projects, which require better facilities that boost the real estate sector.

Table of Contents
1 Executive Summary
2 Scope of the Report
2.1 Market Definition
2.2 Scope of the Study
2.2.1 Research Objectives
2.2.2 Assumptions and Limitations
2.3 Market Structure
Continued…
Browse Full Report Details @ https://www.marketresearchfuture.com/reports/ai-in-construction-market-6035

List of Tables
Table 1 Global AI in Construction Market, by Region, 2020-2027
Table 2 North America AI in Construction Market, by Country, 2020-2027
Table 3 Europe AI in Construction Market, by Country, 2020-2027
Continued…

List of Figures
Figure 1 Global AI in Construction Software Market Segmentation
Figure 2 Forecast Methodology
Figure 3 Porter's Five Forces Analysis of the Global AI in Construction Software Market
Continued…

Trending MRFR Reports:
https://ictmrfr.blogspot.com/2022/04/geofencing-market-companies-growth-with.html
https://blogfreely.net/pranali004/telecom-expense-management-market-size-impressive-cagr-changing-business-scope
https://postheaven.net/pranali004/financial-app-industry-impressive-cagr-changing-business-needs-scope-of
https://market-research-future.tribe.so/post/openstack-service-market-research-impressive-cagr-changing-scope-of-current--6263de46791566c10c79891e
https://www.scutify.com/articles/2022-04-24-infrastructure-as-a-service-industry-cagr-changing-business-scope-of-current-and-future-industry-

About Market Research Future

Market Research Future (MRFR) has created a niche in the world of market research. It is counted among the top market research companies that offer well-researched and updated market research reports and insights to businesses of all sizes. What sets us apart is our super-responsive team that offers quality work while keeping clients abreast of the prospective challenges and opportunities in various markets. Our team is adept in its space and patiently listens to every client. The best part is that they know their work inside out and possess the expertise to guide clients in the right direction and achieve results on a tight deadline. We are a one-stop solution for all your data research needs. Our team does not believe in a "one size fits all" approach to creating a report that is detailed and concise.
We handle 13 industry verticals, including Healthcare, Chemicals and Materials, Information and Communications Technology, Semiconductor and Electronics, Energy and Power, Food, Beverages and Nutrition, Automobile, Consumer and Retail, Aerospace and Defense, Industrial Automation and Equipment, Packaging and Transport, Construction, and Agriculture. With our unique approach for every market report, we aim to reach the zenith in qualitative business intelligence and syndicated market research.

Contact
Market Research Future (Part of Wantstats Research and Media Private Limited)
99 Hudson Street, 5th Floor
New York, NY 10013
United States of America
+1 628 258 0071 (US)
+44 2035 002 764 (UK)
Email: sales@marketresearchfuture.com
Website: https://www.marketresearchfuture.com
https://w.atwiki.jp/api_programming/pages/145.html
http://developer.garmin.com/downloads/connect-iq/monkey-c/doc/Toybox/Attention.html

Module: Toybox::Attention

The Attention module allows for making pre-defined sounds. Not all devices support this API.
Since: 1.0.0
App Types: Widget, App

Defined Under Namespace
Classes: VibeProfile

Constant Summary
Supported Devices: All except vivoactive
TONE_KEY = 0: Indicates that a key was pressed. Since 1.0.0
TONE_START = 1: Indicates that an activity has started. Since 1.0.0
TONE_STOP = 2: Indicates that an activity has stopped. Since 1.0.0
TONE_MSG = 3: Indicates that a message is available. Since 1.0.0
TONE_ALERT_HI = 4: An alert ending with a high note. Since 1.0.0
TONE_ALERT_LO = 5: An alert ending with a low note. Since 1.0.0
TONE_LOUD_BEEP = 6: A loud beep. Since 1.0.0
TONE_INTERVAL_ALERT = 7: Indicates a change in interval. Since 1.0.0
TONE_ALARM = 8: Indicates an alarm has triggered. Since 1.0.0
TONE_RESET = 9: Indicates that the activity was reset. Since 1.0.0
TONE_LAP = 10: Indicates that the user has completed a lap. Since 1.0.0
TONE_CANARY = 11: An annoying sound to get the user's attention. Since 1.0.0
TONE_TIME_ALERT = 12: An alert that a time threshold has been met. Since 1.0.0
TONE_DISTANCE_ALERT = 13: An alert that a distance threshold has been met. Since 1.0.0
TONE_FAILURE = 14: Indicates that the activity was a failure. Since 1.0.0
TONE_SUCCESS = 15: Indicates that the activity was a success. Since 1.0.0
TONE_POWER = 16: The power-on tone. Since 1.0.0
TONE_LOW_BATTERY = 17: Indicates that the device has low battery power. Since 1.0.0
TONE_ERROR = 18: Indicates an error occurred. Since 1.0.0

Method Summary
(Object) backlight(onOff): Turn the backlight on or off
(Object) playTone(tone): Play a tone
(Object) vibrate(vibe): Use the vibe motor
Method Details

(Object) backlight(onOff)
Turn the backlight on or off.
Parameters: onOff (Boolean): true to turn on the backlight, false otherwise.
Since: 1.0.0
Supported Devices: All devices

(Object) playTone(tone)
Play a tone.
Parameters: tone: TONE_XXX value to play.
Since: 1.0.0
Supported Devices: All except vivoactive

(Object) vibrate(vibe)
Use the vibe motor.
Parameters: vibe (Array): Array of VibeProfile objects to play in sequence. Maximum of 8 supported.
Since: 1.0.0
Supported Devices: All non-Edge devices
https://w.atwiki.jp/reshia/pages/12.html
HTML Basics

The basics are tags. It all starts with attaching marks to text.

First Steps

Begin by attaching "marks" to a piece of text. Suppose you have the following sentence:

My blog started in January 2005.

If you want clicking the words "My blog" to take the reader to your blog, mark up the sentence like this:

<a href="http://whoinside.blog3.fc2.com/">My blog</a> started in January 2005.

Viewed in a web browser, the sentence appears as:

My blog started in January 2005.

If you additionally want to color "January 2005" red, do the following:

<a href="http://whoinside.blog3.fc2.com/">My blog</a> started in <span style="color:red">January 2005</span>.

Tags

A "mark" attached to text like this is called a "tag". Tags have the following structure. The basic form is:

<element-name>content</element-name>

The element name states the kind of tag: for example, "a" to create a link, "img" to embed an image, "span" to style text. Some tags, such as "a", are not fully specified by the element name alone ("a" needs to indicate the link target). In that case, specify an "attribute":

<element-name attribute-name="attribute-value">content</element-name>

There are also attributes that consist of only an "attribute value":

<element-name attribute-value>content</element-name>
https://w.atwiki.jp/mydefrag_jp/pages/18.html
Original: http://www.mydefrag.com/Scripts-FileBoolean.html
Updated: 2010/12/12 (the date the original text covered here was copied)

( )
Combine file booleans into a single boolean.
Syntax: ( FILEBOOLEAN )
Example:
FileSelect
  Size(10000000,0) and ( Name("*.zip") or Name("*.arj") )
FileActions
  ...
FileEnd
See also: FileSelect FileBoolean FileActions

All
Select all the items (files, directories) that have not yet been placed in a previous zone.
Syntax: All
Example:
FileSelect
  All
FileActions
  ...
FileEnd
See also: FileSelect FileBoolean FileActions

Archive
Select all the items that have the "archive" attribute set (yes) or not set (no). Applications use this attribute to mark files for backup or removal.
Syntax: Archive(yes) / Archive(no)
Example:
FileSelect
  # Select all the items that have the "archive" attribute.
  Archive(yes)
FileActions
  ....
FileEnd
See also: FileSelect FileBoolean FileActions

AverageFragmentSize
Select all the items that have an average number of bytes per fragment between the minimum (first number) and the maximum (second number). If the second number is zero then the maximum is infinity. For example, if an item is 300 bytes in size and has 3 fragments then it has an average fragment size of 100 bytes.
Syntax: AverageFragmentSize(NUMBER , NUMBER)
Example:
FileSelect
  # Select all the items that have an average fragment size between 100 and 1000 bytes.
  AverageFragmentSize(100,1000)
FileActions
  ....
FileEnd
See also: FileSelect FileBoolean FileActions

Compressed
Select all the items that have the "compressed" attribute set (yes) or not set (no). For a file, the attribute indicates whether the file is compressed by the built-in Windows compression. For directories, the attribute is the default for new files (directories themselves cannot be compressed).
Syntax: Compressed(yes) / Compressed(no)
Example:
FileSelect
  # Select all the items that are compressed with the built-in Windows compression.
  Compressed(yes)
FileActions
  ....
FileEnd
See also: FileSelect FileBoolean FileActions

CreationDate
Select all the items that were created between the minimum time (first parameter) and the maximum time (second parameter). If the first parameter is empty then the minimum time is the beginning of time. If the second parameter is empty then the maximum time is infinity. Note: the creation date can be newer than the last-changed date, for example when a file was downloaded, or unpacked from an archive (such as zip or arj).
Syntax: CreationDate(DATETIME , DATETIME)
Example:
FileSelect
  # Select all the items that were created less than 10 days ago.
  CreationDate(10 days ago,now)
FileActions
  ....
FileEnd
See also: FileSelect FileBoolean FileActions

Directory
Select all the directories (yes) or all the other files (no). Please note that this boolean does not select the files in a directory, but the directory itself. Directories and files are separate entities. Directories cannot be moved (defragmented, optimized) on FAT32 volumes. This is a known limitation of the Windows defragmentation API and not a bug in MyDefrag. Moving directories is slower than moving files of the same size, presumably because Windows has to update indexes and links in the MFT.
Syntax: Directory(yes) / Directory(no)
Example:
FileSelect
  # Select all the directories.
  Directory(yes)
FileActions
  ....
FileEnd
See also: FileSelect FileBoolean FileActions

DirectoryName
Select all directories whose name matches the STRING, plus all the files and subdirectories below those directories. The STRING can contain the wildcards "*" (zero or more of any character) and "?" (any single character). The STRING must not contain slashes or backslashes: it is compared against the name of every file, and names (unlike paths) contain no slashes. This function sees all the hardlinked filenames of an item as a single item (a file with two names, existing in different places at the same time, but really the same file). The log file shows the first name found, so it may appear as if the function has selected some wrong items. This function does not follow softlinks (junctions and symbolic links).
Syntax: DirectoryName(STRING)
Example:
FileSelect
  # Select everything in the "Program Files" directory.
  DirectoryName("Program Files")
FileActions
  ....
FileEnd
See also: DirectoryPath FileName FullPath FileSelect FileBoolean FileActions

DirectoryPath
Select all directories whose full path matches the STRING, plus all the files and subdirectories below those directories. The STRING can contain the wildcards "*" (zero or more of any character) and "?" (any single character). This boolean is very similar to DirectoryName(), but somewhat slower because it compares against the full path rather than just the directory name. The STRING is compared with and must match the full path of the directories. Make sure the mask matches the drive letter; a directory path looks something like "c:\windows\System32". Note that there is no trailing backslash. This function sees all the hardlinked filenames of an item as a single item (a file with two names, existing in different places at the same time, but really the same file). The log file shows the first name found, so it may appear as if the function has selected some wrong items. This function does not follow softlinks (junctions and symbolic links).
Syntax: DirectoryPath(STRING)
Example:
FileSelect
  # Select everything in the "?:\Program Files" directory.
  DirectoryPath("?:\Program Files")
FileActions
  ....
FileEnd
See also: DirectoryName FileName FullPath FileSelect FileBoolean FileActions

Encrypted
Select all the items that have the "encrypted" attribute set (yes) or not set (no). For a file, the attribute indicates whether the file is encrypted by the built-in Windows encryption. For directories, the attribute is the default for new files (directories themselves cannot be encrypted).
Syntax: Encrypted(yes) / Encrypted(no)
Example:
FileSelect
  # Select all the items that have the "encrypted" attribute.
  Encrypted(yes)
FileActions
  ....
FileEnd
See also: FileSelect FileBoolean FileActions
https://w.atwiki.jp/aciii/pages/127.html
Desmond Files 9

+ The Temple
2012/10/30 08:39, United States: Turin, "The Temple"
What is hidden inside?

+ The Mission
2012/10/30 17:22, United States: Turin, "The Mission"
No body text. The attached mail is the same as the one from William viewable in AC3 (he admits that events inside the Animus are easy to get drawn into, then calls on the whole team to concentrate on the mission), so a translation is omitted.

+ Happy Halloween!
2012/10/31 12:00, United States: Turin, "Happy Halloween!"
No body text. The attached mail is the same as the one from Shaun viewable in AC3 (reporting that Juno has recently been appearing frequently and then vanishing right away, and closing with a P.S. of "Happy Halloween!"), so a translation is omitted.

+ Haytham's Amulet
2012/11/01 15:07, United States: Turin, "Haytham's Amulet"
No body text. The attached mail is the same as the one from Shaun viewable in AC3 (explaining that Haytham's amulet is a kind of Piece of Eden, but unlike the other Pieces it has no function other than serving as a key), so a translation is omitted.

+ Shaking the Tail
2012/11/02 15:07, United States: Stillwater, "Shaking the Tail"
No body text. The handheld device in the attached image probably belongs to Gavin. The text messages displayed on it:

From William M.: Gavin, what's the word? Received 2012-10-31 05:29
On the move. Working my tail off. Good news is: They have no clue where you are. Sent 2012-11-02 00:52
Good luck! Received 2012-11-02 04:49

+ Assassins 12, Templars 2
2012/11/03 05:32, United States: New Orleans, "Assassins 12, Templars 2"
Gavin reports the current situation to William; he is apparently still on the move. No further body text. The text messages displayed:

From William M.: Good luck! Received 2012-11-02 04:49
Safe and sound. Word from HQ: Florence team scheduled to make contact in 48 hours. Sadly, Marco didn't make it. Last week's score: A 12, T 2.
Sent 2012-11-03 01:13
Damn. Received 2012-11-03 05:32

+ Sowing Seeds
2012/11/04 04:08, United States: Turin, "Sowing Seeds"
No body text. The full attached mail follows. "Her" is probably Desmond's mother?

From: HQ03
Subject: Sowing Seeds
Sent: Nov. 4th, 2012 04:08
To: William M.
William,
We put Gavin's plan into motion. As of now, all available teams are working to create as much noise as they can. We have 5 operations whose sole purpose is to keep the Templars busy. Hopefully, the seeds they're sowing will keep them off your back.
BTW, we contacted her. She was glad to learn Desmond is safe, and with you.

+ So Far So Good
2012/11/05 01:32, United States: Turin, "So Far So Good"
No body text. The attached mail is the same as the one from Rebecca viewable in AC3 (as the dates show, Desmond has been in the Animus for several days now, but his vitals are normal, perhaps because of his contact with Subject 16, so there is no cause for concern; she will keep watching him nonetheless), so a translation is omitted.

+ Father and Son
2012/11/06 10:18, United States: Turin, "Father and Son"
No body text. The video is the same as the one in AC3's modern-day sequences (William asks Desmond to continue the search in the Animus; Desmond shoots back that William should get in the Animus himself since it is his ancestor too, the argument escalates, and when Desmond says William is no different from the Templars, William punches him).

+ Top Gun
2012/11/08 17:37, Italy: Rome, "Top Gun"
No body text. The tablet in the attached image shows an Abstergo agent evaluation report (an assessment by one Ctibor Hašek of Otso Berg, an ex-special-forces man). On the five-point scale he scores nearly full marks in every category except creativity and resourcefulness. A translation of the remarks section only:

Strengths
Background: the agent's military (special forces) experience is a decisive asset.
Obedience: follows orders without question.
Leadership: though lacking in cunning, he inspires loyalty in others.
Dedication: the agent is completely committed to our cause.
Potential: could become 1S personnel.

Weaknesses
The agent has a young (3-year-old) daughter; a potential liability.

Notes
The agent is ready for greater challenges.
The agent should be given leadership responsibilities.
The agent should be given Level 5 assignments.
https://w.atwiki.jp/220yearsafterlove/pages/46.html
Anyway, joking aside, it's about time I seriously got myself a girlfriend, so I'm going to use a dating site. Come to think of it, I used to work as a paid shill for a dating site back in the day... http://20yearsafterlove.blog111.fc2.com/blog-entry-299.html