https://w.atwiki.jp/ultimatecastellan/pages/60.html
CAPTURE — Castle Interior PD

This page is under construction.

Round | Top | Bottom | Left | Right
1 | / | グリムスティックス | / | /
2 | ホラーイェタリアンボウ | グリムジエロガン ロード | / | /
3 | ホラーイェタリアンボウ | グリムジエロガン ロード | フィンドラスプチンハンター | フィンドスケルトンビショップ
4 | ホラーイェタリアンボウ | グリムジエロガン ロード | フィンドラスプチンハンター | フィンドラスプチンソーサラー
5 | グリムデサイア | グリムジエロガン ロード | ホラーデッドアイ | フィンドラスプチンソーサラー
6 | グリムデサイア | グリムジエロガン ロード | ホラーデッドアイ | フィンド ミンタカ
7 | グリムエリス | グリムデサイア | ホラージエロガンアーチャー | フィンド ミンタカ
8 | グリムエリス | グリムデサイア | ホラージエロガンアーチャー | フィンド デビルアスロン
9 | グリムシトゥースバーサーカー | グリムエリス | ホラーシートゥスハンター | フィンド デビルアスロン
10 | グリムシトゥースバーサーカー | グリムエリス | ホラーシートゥスハンター | フィンド トレガード

Notes:
- Colored cells (■) in the original table mark recommended kill spots for pairs. (For Round 6, left and right make little difference.)
- "/" marks positions where no enemies appear.
- Lightning resistance is recommended for Rounds 3-9, and curse resistance for Round 10. (Rounds 1, 2, and 6 do not need lightning resistance, but swapping gear is a hassle, so leaving it equipped is recommended.)
- Entries in parentheses have an unconfirmed top/bottom position.
https://w.atwiki.jp/multiwinia_jp/pages/33.html
If you spot mistakes or have questions, please post in the relevant thread on 2ch.

Line | Key | Original | Half-Japanese | Full-Japanese
2677 | multiwinia_cts_newstatue | New Statue! | 新しい石像だ! | 新しい石像だ!
2678 | multiwinia_cts_statuecapture | *T captured a Statue! | *Tが石像をキャプチャーした! | *Tが石像をキャプチャーした!
2679 | multiwinia_option_2_2_0 | No | No | いいえ
2680 | multiwinia_option_2_2_1 | Yes | Yes | はい
2681 | multiwinia_option_2_3_0 | No | No | いいえ
2682 | multiwinia_option_2_3_1 | Yes | Yes | はい
2683 | multiwinia_option_2_4_0 | Random | ランダム | ランダム
2684 | multiwinia_option_2_4_1 | Weighted | 偏る | 偏る
https://w.atwiki.jp/live2ch/pages/422.html
Elgato Game Capture HD60 / 2019-02-04 (Mon) 23:12:16

For basic knowledge about capture boards, see the "Capture boards" page. For how to pick one, see "How to choose a capture board".

Highly rated overseas: record PS4-generation games at 1080p/60fps

Elgato Game Capture HD60 (hereafter Game Capture HD60) is one product in the Game Capture HD series.

▲ The thin, compact, HDMI-equipped Game Capture HD60 (link: Amazon)

1080p/60fps recording over USB 2.0
The device fully supports 1080p/60fps, and it achieves 1080p/60fps recording over a USB 2.0 connection. That is reassuring if you are wary of USB 3.0 products.

Capture while playing on a big-screen TV
A pass-through output function lets you record or stream on the PC while the game is displayed on a TV. Because your play is unaffected by the capture delay, gameplay stays comfortable.

Rewind and record gameplay retroactively
If you want to re-watch a great moment you just saw, or you forgot to start recording, the Flashback recording function lets you scrub backwards through the video and record from there. You will never miss a decisive moment again.

A compact design that fits the hand
The unit has smooth, rounded edges and sits comfortably in the palm. It is also thin and small, so it takes up little space.

Contents: Operating environment and specifications / A TV or PC monitor is required / Only HDMI consoles can be connected / Accessories / Sample video / Use for live streaming / Comparison with other products / Installing the software / Connecting the console / Detailed usage / The author's impressions (good points, bad points) / Related pages

Operating environment and specifications

Elgato Game Capture HD60 (check the price on Amazon)
- Connection: USB 2.0
- Encoding type: hardware encoding
- Video input: HDMI
- Supported OS: Windows 7 SP1 or later, Mac OS X 10.11 or later
- Notes: software is a download

A TV or PC monitor is required
The key point in the specifications is that this is a hardware-encoding device. As explained in "How to choose a capture board", latency countermeasures matter for this type; concretely, you use the pass-through output built into the Game Capture HD60. What you need is a TV or PC display with an HDMI input (*1). Connect the Game Capture HD60 to the TV with an HDMI cable. If everything is connected correctly, the console's video and audio come out of both the PC and the TV. Play while watching the game on the TV; that completes the latency countermeasure, and you can play on the big screen as usual.

Only HDMI consoles can be connected
Only consoles with an HDMI output can be connected; consoles without one cannot. Also, because the Game Capture HD60 does not support HDCP, connecting a PS3 or PS Vita TV will not display or record any video. For workarounds, see "Understanding HDCP".

Console | HDMI connection | Notes
PS4 | ○ | Turn HDCP off in the PS4 settings (details)
PS3 | × | Workable with an HDMI splitter (see below)
PS2 | × |
Switch | ○ | No problems over HDMI
Wii U | ○ | No problems over HDMI
Wii | × |
Xbox One | ○ | No problems over HDMI
Xbox 360 | ○ | No problems over HDMI
PSP-3000/2000 | × |
PS Vita TV | × | Workable with an HDMI splitter
iOS devices | ○ | No problems

Accessories
The Game Capture HD60's accessories are as follows. No CD-ROM is included; the software is downloaded from the official site (see below).

Sample video
Some footage is provided for reference. To play it at high quality, open the video on YouTube and choose "1080p60 HD" from the gear icon at the bottom right. Note that quality drops at the point a video is uploaded to YouTube.
▲ From the PS4 version of METAL GEAR SOLID V: GROUND ZEROES (click to enlarge).

Use for live streaming
Before buying, be aware that the Game Capture HD60 can be awkward for live streaming because of latency. By design, game video and audio are delayed on the PC screen, so you have to delay your microphone audio to match. The setup becomes somewhat complicated and may be difficult for beginners.
For details, see "Live streaming and capture boards". If that article seems difficult, it is safer not to use the Game Capture HD60 for streaming; use the Game Capture HD60 S instead.

Comparison with other products
A quick comparison with similar products; skip this if you already own a Game Capture HD60. The Game Capture HD is the model released before the HD60. The important difference is the component-video input: the Game Capture HD can connect a PS3 over component and thereby avoid HDCP, which the Game Capture HD60 cannot do. If you plan to connect a PS3, the Game Capture HD is the simpler choice (*2).

▲ Elgato Game Capture HD

Model | Component input | 1080p/60fps recording | Instant Gameview | PC connection
Game Capture HD | ○ | × | × | USB 2.0
Game Capture HD60 | × | ○ | × | USB 2.0
Game Capture HD60 S | × | ○ | ○ | USB 3.0
Game Capture HD60 Pro | × | ○ | ○ | PCI Express

Another important difference between the two models is 1080p/60fps support: the Game Capture HD records at most 1080p/30fps, while the Game Capture HD60 records up to 1080p/60fps. Over HDMI, image quality itself is the same on both models.
The Game Capture HD60 Pro and Game Capture HD60 S are distinguished by their Instant Gameview feature, which keeps latency small enough that you can play while watching the game on the PC screen, so pass-through output is not strictly necessary. They are recommended over the Game Capture HD60.

▲ Game Capture HD60 Pro (left) and Game Capture HD60 S (right)

Installing the software
1. Connect the Game Capture HD60 to the PC over USB. To be safe, use a USB 2.0 port rather than USB 3.0; it may not work correctly on USB 3.0.
2. Go to the official site and click "Download" under "Game Capture for Windows". On Windows 7, download version 3.2 from the linked page.
3. Double-click the downloaded "GameCaptureSetup_xxx.msi" ("xxx" is the version).
4. Step through the screens.
5. The installation completes.

Connecting the console
Connect the console and the TV to the Game Capture HD60. The unit has two HDMI ports, "IN" and "OUT"; the console goes into "IN", so do not mix them up.
To connect a PS3 (or PS Vita TV), use the "AstroAI 4K HDMI splitter, 1 input 2 outputs" (link: Amazon) to work around HDCP.
▲ Connect the PS3 to the splitter's "IN", and the splitter's "OUT" to the Game Capture HD60's "IN".
To connect an iPhone or other iOS device over HDMI, a Lightning - Digital AV adapter (link: Amazon) is also required. See "Showing an iPhone screen on a PC with a capture board".

Detailed usage
Next, configure things so the game screen appears on the PC. Once it does, you will want to understand how to use pass-through output to avoid latency, how to record, how to add your voice to videos, and how to stream.
For details, see "How to use the Elgato Game Capture HD series".
▲ Image from the PS4 version of METAL GEAR SOLID V: THE PHANTOM PAIN (Konami Digital Entertainment).

The author's impressions of the Game Capture HD60

Good points:
- One of the few products achieving 1080p/60fps over USB 2.0; safe even on PCs without USB 3.0.
- Pass-through output makes lag countermeasures easy; comfortable play on a TV screen.
- Flashback recording is convenient; there is no need to keep recording constantly. The AVT-C875 has a similar feature, but the HD60's is easier to use and more polished.
- Extremely compact; even smaller than the Game Capture HD.

Bad points:
- If you will not record at 1080p/60fps, there is no reason to buy the Game Capture HD60; the Game Capture HD is enough.
- HDMI-only, so you must accept that limitation.
- Not easy to use for streaming on Niconico Live.
- The bundled software's editing feature is only simple cut editing; of limited use.

Related pages:
- If the capture board shows no video or audio: fixes for missing video or sound in the capture software
- If the capture board is unstable: fixes to try
- If the capture board is not recognized by the PC: fixes to try when the capture device cannot be found
- Recommended free and paid editing software for game commentary: AviUtl, PowerDirector, Vegas Pro
- How to use AviUtl: a free video editor
- What you need for game streaming, for any streaming site
- How to put Skype call audio into a video: three approaches
- What to do when the microphone barely picks up your voice
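The streaming caveat above (microphone audio must be delayed to match the game audio, which arrives late on the PC) amounts to shifting the mic track later in time by the capture latency. A hypothetical sketch of that idea, assuming raw PCM samples and a known constant delay; the function name and numbers are invented for illustration:

```python
def delay_samples(samples, delay_ms, sample_rate=48000):
    """Prepend silence so the mic track lines up with the delayed game audio."""
    pad = [0] * (sample_rate * delay_ms // 1000)
    return pad + list(samples)

# Toy numbers: 1 ms at 8 kHz = 8 samples of leading silence.
mic = [0.5, -0.5, 0.25]
aligned = delay_samples(mic, delay_ms=1, sample_rate=8000)
print(len(aligned))  # 11
```

In practice the streaming software applies this offset for you; the point is only that mic and game audio share one timeline once the mic is shifted by the capture delay.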
https://w.atwiki.jp/sampleisbest/pages/399.html
Development environment: Apache Flex SDK 4.12.1, FlashDevelop 4.6.2.5
Runtime environment: Microsoft Windows 8.1 (64-bit)
Project type: ActionScript 3 / AS3 Project
Project name: FlightCam
https://w.atwiki.jp/utvipper/pages/40.html
Capture The Flag

Abbreviated CTF. Players split into red and blue teams, take the flag from the enemy base, and carry it back to their own flag. In principle you just go grab the flag, but attacking the enemy base alone is suicide, so attack with at least two people. Even if you grab the flag solo, you then have to handle both offense and defense by yourself, so two or more people lighten the load.

Offense matters, but defense is just as important. If an enemy slips into your base the moment you look away and nobody is home, you are in trouble, so it is a good idea to keep a few players waiting in your own base. When waiting, standing somewhere conspicuous is not effective; CTF maps have plenty of hiding spots, and waiting there usually keeps you unseen, though enemies do occasionally flush out a hiding place.

The translocator (see Weapons) can be used in CTF, so use it freely: it makes you harder to hit, and warping into an enemy kills them instantly, so it is very useful.
https://w.atwiki.jp/live2ch/pages/502.html
Elgato Game Capture HD60 S / 2020-05-26 (Tue) 19:54:02

For basic knowledge about capture boards, see the "Capture boards" page. For how to pick one, see "How to choose a capture board".

Low latency for comfortable play: the definitive Game Capture HD series product

Elgato Game Capture HD60 S (hereafter Game Capture HD60 S) is the first product in the Game Capture HD series to adopt a USB 3.0 connection.

▲ The low-latency Elgato Game Capture HD60 S (link: Amazon)

The console and this product are connected over HDMI.

Contents: Features / Operating environment and specifications / Accessories / USB / Installing the software / Supported consoles / Connecting consoles / Detailed usage / The author's impressions / Related pages

Features

Full 1080p/60fps support
Games that run at 1080p/60fps can be captured at 1080p/60fps as-is, at high quality; ideal for capturing PS4 games.

Low latency with Instant Gameview
With the Game Capture HD60 S, latency is small enough that you can play while watching the game on the PC screen. The Game Capture HD/HD60 could not do this because their latency was too large; the HD60 S's Instant Gameview feature improves the latency problem.

Pass-through output to a big-screen TV
That said, even while playing on the PC screen the latency is not zero, and in some game genres you may still feel it. In that case, use the pass-through output function; all you need is an HDMI cable and a TV (a PC monitor also works).
▲ If you play while watching the game on the TV, there is at least no latency added by the Game Capture HD60 S, and you can record and stream in this state.

Flashback recording: rewind and record gameplay retroactively
If a decisive moment or a scene you want to re-watch appears while you are not recording, there is no need to worry. With Flashback recording you can scrub backwards through the video, like a replay in a sports broadcast, re-watch the moment, and record it.
▲ Click any point on the seek bar to rewind playback. In the screenshot, the video from 5 minutes 46 seconds earlier is shown; clicking the record button starts recording from that point.

Easy game commentary
Several features for game commentary are built in. For example, you can add your voice while recording, and with a simple setup you can stream to Twitch or YouTube Live. The Game Capture HD60 S works for both recording and live streaming.

Operating environment and specifications

Elgato Game Capture HD60 S
- Connection: USB 3.0
- Encoding type: software encoding
- Video input: HDMI
- Supported OS: Windows 10 (64-bit) (*1), Mac OS X 10.11.4 or later

Windows 7 not supported
For Windows PCs, the supported OS is Windows 10 (64-bit); Windows 7 is not supported. There are user reviews on Amazon saying it worked on Windows 7, but the details are unknown (reference). The official site states that every other product in the Game Capture HD series supports Windows 7, so if your PC runs Windows 7 it is safer to upgrade to Windows 10 or choose a different product.

Accessories
One USB cable and one HDMI cable are included. As described below, the software is downloaded from the official site, so there is no CD-ROM.

USB
The Game Capture HD60 S connects to the PC's USB 3.0 port with the included USB cable; it does not work on a USB 2.0 port. The device-side connector is USB Type-C, which is reversible, so it plugs in easily even in a dark room.

Installing the software
To use the Game Capture HD60 S, connect it to the PC and install the software:
1. Connect the product to a USB 3.0 port on the PC (it does not work on USB 2.0).
2. Go to the official site and click "Download" under "Game Capture for Windows".
3. Double-click the downloaded "GameCaptureSetup_xxx.msi".
4. Step through the screens.
5. The installation completes.

Supported consoles
The Game Capture HD60 S has an HDMI input, so any console with HDMI output can be connected (e.g. PS4, Wii U). The PS3 is the exception: the HD60 S does not support HDCP, so a PS3 connected over HDMI will not display a picture. For workarounds, see "Understanding HDCP". The table below summarizes which consoles can be connected. iOS devices (e.g. iPhone) need the adapter described below.

Console | HDMI connection | Notes
PS4 | ○ | Turn HDCP off in the PS4 settings
PS3 | × | Workable with an HDMI splitter (see below)
PS2 | × |
Switch | ○ | No problems over HDMI
Switch Lite | × |
Wii U | ○ | No problems over HDMI
Wii | × |
Xbox One | ○ | No problems over HDMI
Xbox 360 | ○ | No problems over HDMI
PSP-3000/2000 | × |
PS Vita TV | × | Workable with an HDMI splitter
iOS devices | ○ | No problems

Connecting consoles
PS4, Switch, Wii U, etc.: connect PS4, Switch, Wii U, Xbox One, or Xbox 360 directly to the Game Capture HD60 S with an HDMI cable.
PS3: an HDCP workaround is needed. For example, get the "KanaaN HDMI splitter, 1 input 2 outputs, 4K-capable" (link: Amazon) and connect the PS3 to it.
iOS devices: to connect an iPhone or other iOS device over HDMI, a Lightning - Digital AV adapter (link: Amazon) is also required. See "Showing an iPhone screen on a PC with a capture board".

Detailed usage
For detailed usage of the Game Capture HD60 S and how to do game commentary, see the pages below; only a brief introduction is given here. For details, see "How to use the Elgato Game Capture HD series".

Displaying the game screen
Turn on the console and launch the bundled capture software, and the game screen appears. No settings normally need to be changed; the picture should show up with the defaults. If it does not, see the linked article above.
▲ Image from the PS4 version of Assassin's Creed Syndicate (Ubisoft).
To display the game at 60fps, open the settings and check "Allow 60 fps video preview". The default preview is 30fps, so this change is required.

Adding your voice while recording
To make commentary videos for YouTube or Niconico: (1) connect a microphone to the PC, (2) change the settings in the bundled capture software, and (3) record the game. See the linked article above for details.

Live streaming
Game streaming generally uses streaming software: an application used to broadcast live, sending the game video, game audio, and mic audio in real time. The standard choices to remember are OBS Studio and XSplit. For the basics of game streaming, see the pages below; each explains the streaming-software setup for one site.

Site | Guide | Notes
Niconico Live | (link) |
TwitCasting | (link) |
Twitch | (link) | Recommended
YouTube Live / YouTube Gaming | (link) | Recommended
OPENREC | (link) |

To learn how to show the game screen in OBS Studio, including example settings for the Game Capture HD60 S, see "Configuring a video capture device in OBS Studio". That page can wait; it is for intermediate and advanced users. The bundled software's own streaming feature also makes streaming easy, in which case no separate streaming software is needed. For details, see "How to use the Elgato Game Capture HD series".

The author's impressions of the Game Capture HD60 S
Basically the same impressions as with the Game Capture HD60 Pro. Some people dislike USB 3.0 connections, but personally I do not mind. Within the Game Capture HD series this is the easiest product to use: being able to play on the PC screen over a USB connection is a very large advantage, and I cannot go back to the Game Capture HD/HD60.
In my environment, the capture software froze and crashed when starting a recording, and when starting a stream from the bundled software (Ver. 3.2). After some trial and error, setting "Stream Command encoder" to "Software (built-in)" in the settings made it work correctly, so I note that here just in case.

Related pages:
- [2019 edition] Four recommended capture boards the author always uses: a reference when you cannot decide
- If the capture board shows no video or audio: fixes for missing video or sound in the capture software
- If the capture board is unstable: fixes to try
- If the capture board is not recognized by the PC: fixes to try when the capture device cannot be found
- Recommended free and paid editing software for game commentary: AviUtl, PowerDirector, Vegas Pro
- How to use AviUtl: a free video editor
- What you need for game streaming, for any streaming site
- How to put Skype call audio into a video: three approaches
- What to do when the microphone barely picks up your voice
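The Flashback recording feature described above behaves, conceptually, like a ring buffer: only the most recent stretch of frames is kept in storage, and playback can jump backwards anywhere within it. A minimal illustrative sketch of the idea (the class name and capacity are hypothetical; this is not Elgato's actual implementation):

```python
from collections import deque

class FlashbackBuffer:
    """Keep only the most recent `capacity` frames so playback can rewind."""

    def __init__(self, capacity):
        # Old frames drop off automatically once capacity is reached.
        self.frames = deque(maxlen=capacity)

    def push(self, frame):
        self.frames.append(frame)

    def rewind(self, n):
        """Return the frame n steps before the newest one."""
        return self.frames[-1 - n]

buf = FlashbackBuffer(capacity=4)
for f in ["f1", "f2", "f3", "f4", "f5", "f6"]:
    buf.push(f)

print(buf.rewind(0))  # newest frame: f6
print(buf.rewind(2))  # two frames back: f4
```

Because the buffer is bounded, memory use stays constant no matter how long the session runs; "recording from 5 minutes 46 seconds ago" is just copying frames out of the buffer starting at that offset.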
https://w.atwiki.jp/ultimatecastellan/pages/529.html
CAPTURE — Deep-floor debuffs

Boss names are abbreviations, in Japanese syllabary order. The debuff names and effects are those shown on screen when you mouse over the debuff. A ★ marks debuffs that can be removed with Seg's Remove.

Boss (abbr.) | Mark | Debuff | Effect
アガシ | | none |
イリピア | | Cold | Movement speed reduced
エシメド | | Lightning | Movement speed reduced
エリアデン | | Bind | Movement speed reduced, attack speed reduced
エリップス | | none |
オリエド | | Banish | Unable to act, resists all damage, HP recovery increased ★
クレイ | | Bind | Movement speed reduced, attack speed reduced
コライガー | | Bind | Movement speed reduced
サイフ | | Shock | Stunned
サキュバス | | Destruction / Flame | Defense reduced / continuous HP loss
タイダー | | Bind | Movement speed reduced, attack speed reduced
ダビフ | | Bind / Shock | Movement speed reduced, attack speed reduced / stunned
デュスマブル | | none |
ファブニル | | Curse | Sleep
フィストロム | | Flame | Continuous HP loss
フォーザリー | | none |
プタフ | | Flame | Continuous MP loss
フリーズカーン | | none |
ホラス | | none |
メブスタ | | Destruction | Defense reduced
レイドン | | Weakness | Defense reduced
https://w.atwiki.jp/matchmove/pages/94.html
Motion Capture and Face Tracking

SynthEyes offers the exciting capability to do full-body and facial motion capture using conventional video or film cameras.

STOP! Unless you know how to do supervised tracking and understand moving-object tracking, you will not be able to do motion tracking. The material here builds upon that earlier material; it is not repeated here because it would be exactly that: a repetition.

First, why and when is motion capture necessary? The moving-object tracking discussed previously is very effective for tracking a head when the face is not doing all that much, or when trackable points have been added in places that do not move with respect to one another (forehead, jaws, nose). The moving-object mode is good for making animals talk, for example. By contrast, motion capture is used when the motion of the moving features themselves is to be determined and then applied to an animated character: for example, use motion capture of an actor reading a script to apply the same expressions to an animated character. Moving-object tracking requires only one camera, while motion capture requires several calibrated cameras.

Second, we need to establish a few very important points. This is not the kind of capability that you can learn on the fly as you do that important shoot, with the client breathing down your neck. This is not the kind of thing for which you can expect to glance at this manual for a few minutes and be a pro; your head will explode. This is not the sort of thing you can expect to apply to some musty old archival footage, or using that old VHS camera at night in front of a flickering fireplace. This is not something where you can set up a shoot for a couple of days, leave it around with small children or animals climbing on it, and get anything usable whatsoever. This is not the sort of thing where you can take a SynthEyes export into your animation software and expect all your work to be done, with just a quick render to come.
And this is not the sort of thing that is going to produce the results of a $250,000 custom full-body motion capture studio with 25 cameras.

With all those dire warnings out of the way, what is the good news? If you do your homework, do your experimentation ahead of time, set up technically solid cameras and lighting, read the SynthEyes manual so you have a fair understanding of what the SynthEyes software is doing, and understand your 3-D package well enough to set up your character or face rigging, you should be able to get excellent results. In this manual, we'll work through a sample facial capture session. The techniques and issues are the same for full-body capture, though of course the tracking marks and overall camera setup for body capture must be larger and more complex.

Introduction

To perform motion capture of faces or bodies, you will need at least two cameras trained on the performer from different angles. Since the performer's head or limbs are rotating, the tracking features may rotate out of view of the first two cameras, so you may need additional cameras to shoot more views from behind the actor. The fields of view of the cameras must be large enough to encompass the entire motion the actor will perform, without the cameras tracking the performer (OK, experts can use SynthEyes for motion capture even when the cameras move, but only with care). You will need to perform a calibration process ahead of time to determine the exact position and orientation of the cameras with respect to one another (assuming they are not moving). We'll show you one way to achieve this, using some specialized but inexpensive gear.

Very important: you'll have to ensure that nobody knocks the cameras out of calibration while you shoot calibration or live-action footage, or between takes. You'll also need to be able to resynchronize the footage of all the cameras in post; we'll tell you one way to do that.
Generally the performer will have tracker markers attached, to ensure the best possible and most reliable data capture. The exception would be if one of the camera views must also be used as part of the final shot, for example a talking head that will have an extreme helmet added. In this case, markers can be used where they will be hidden by the added effect; in locations not permitting trackers, either natural facial features can be used (HD or film source!), or markers can be used and removed as an additional effect.

After you solve the calibration and tracking in SynthEyes, you will wind up with a collection of trajectories showing the path through space of each individual feature. In moving-object tracking, the trackers are all rigidly connected to one another, but in motion capture each tracker follows its own individual path. You will bring all these individual paths into your animation package and will need to set up a rigging system that makes your character move in response to the tracker paths. That rigging might consist of expressions, Look At controllers, etc.; it's up to you and your animation package.

Camera Types

Since each camera's field of view must encompass the entire performance (unless there are many overlapping cameras), at any time the actor is usually a small portion of the frame. This makes progressive DV, HD, or film source material strongly suggested. Progressive-scan cameras are strongly recommended, to avoid the factor-of-two loss of vertical resolution due to interlacing. This is especially important since the tracking markers are typically small and can slip between scan lines. While it may make operations simpler, the cameras do not have to be the same kind, have the same aspect ratio, or have the same frame rate. Resist the urge to use that old consumer-grade analog videotape camera as one of the cameras: the recording process will not be stable enough for good results.
Lens distortion will substantially complicate calibration and processing. To minimize distortion, use high-quality lenses, and do not operate them near their maximum field of view, where distortion is largest. Do not try to squeeze into a small studio space.

Camera Placement

The camera placements must address two opposing factors: the cameras should be far apart, to produce a large parallax disparity with good depth perception, yet close together, so that they can simultaneously observe as many trackers as possible. You'll probably need to experiment with placement to gain experience, keeping in mind the performance to be delivered. Cameras do not have to be placed in any discernible pattern. If the performance warrants it, you might want coverage from up above, or down below. If any cameras will move during the performance, they will need a visible set of stationary tracking markers to recover their trajectory in the usual fashion; this will reduce accuracy compared to a carefully calibrated stationary camera.

Lighting

Lighting should be sufficient to keep the markers well illuminated, avoiding shadowing, and strong enough to keep the shutter time of the cameras as low as possible, consistent with good image quality.

Calibration Requirements and Fixturing

In order for motion-tracking footage to be solved, the camera positions, orientations, and fields of view must be determined, independent of the "live" footage, as accurately as possible. To do this, we will use a process based on moving-object tracking: a calibration object is moved in the field of view of all the cameras and tracked simultaneously. To get the most data fastest and easiest, we constructed a prop we call a "porcupine" out of a 4" Styrofoam ball, 20-gauge plant stem wires, and small 7 mm colored pom-pom balls, all obtained from a local craft shop for under $5.
Lengths of wire were cut to varying lengths, stuck into the ball, and a pom-pom glued to the end of each using a hot glue gun. In retrospect, it would have been cleverer to space two balls along the support wire as well, to help set up a coordinate system. The porcupine is hung by a support wire in the location of the performer's head, then rotated as it is recorded simultaneously from each camera. The porcupine's colored pom-poms can be viewed virtually all the time, even as they spin around to the back, except for the occasional occlusion. Similar fixtures can be built for larger motion capture scenarios, perhaps using dolly track to carry a wire frame. It is important that the individual trackable features on the fixture not move with respect to one another: their rigidity is required for the standard object tracking. The path of the calibration fixture does not particularly matter.

Camera Synchronization

The timing relationship between the different cameras must be established. Ideally, all the cameras would be gen-locked together, snapping each image at exactly the same time. Instead, there are a variety of possibilities, which can be arranged and communicated to SynthEyes during the setup process. Motion capture has a special solver mode on the Solver Panel: individual mocap. In this mode, the second dropdown list changes from a directional hint to a control for camera synchronization. If the cameras are all video cameras, they can be gen-locked together to all take pictures identically; this situation is called "Sync Locked." If you instead have a collection of video cameras without gen-lock, they will all take pictures at exactly the same (crystal-controlled) rate; however, one camera may always be taking pictures a bit before another, and a third camera may always be taking pictures at yet a different time than the other two. That option is "Crystal Sync." If you have a film camera, it might run a little more or a little less than 24 fps, not particularly synchronized to anything.
This will be referred to as "Loose Sync." In a capture setup with multiple cameras, one can always be considered Sync Locked and serve as a reference. If it is a video camera, other video cameras are in Crystal Sync, and any film camera would be Loose Sync. If you have a film camera that will be used in the final shot, it should be considered the sync reference, with Sync Locked, and any other cameras are in Loose Sync.

The beginning and end of each camera's view of the calibration sequence and the performance sequence must be identified to the nearest frame. This can be achieved with a clapper board or electronic slate. The low-budget approach is to use a flashlight or laser-pointer flash to mark the beginning and end of the shot.

Camera Calibration Process

We're ready to start the camera calibration process, using the two shot sequences LeftCalibSeq and RightCalibSeq. You can start SynthEyes and do a File/New for the left shot, and then Add Shot to bring in the second. Open both with Interlace=Yes, as unfortunately both shots are interlaced. Even though these are moving-object shots, for calibration they will be solved as moving-camera shots.

You can see from these shots how the timing calibration was carried out. The shots were cropped right before the beginning of the starting flash and right after the ending flash, to make it obvious what had been done. Normally, you should crop after the starting flash and before the ending flash. On your own shots, you can use the Image Preprocessing panel's region-of-interest capability to reduce memory consumption, to help handle long shots from multiple cameras.

You should supervise-track a substantial fraction of the pom-poms in each camera view; you can then solve each camera to obtain a path of the camera appearing to orbit the stationary pom-poms. Next, we will need to set up a set of links between corresponding trackers in the two shots.
The links must always be from the Camera02 trackers to a Camera01 tracker. This can be achieved in at least three different ways.

Matching Plan A: Temporary Alignment

This is probably easiest, and we may offer a script to do the grunt work in the future. Begin by assigning a temporary coordinate system for each camera, using the same pom-poms and ordering for each camera. It is most useful to keep the porcupine axis upright (which is where pom-poms along the support wire would come in useful, if available); in this shot, three at the very bottom of the porcupine were suitable. With matching constraints for each camera, when you re-solve you will obtain matching pairs of tracker points, one from each camera, located very close to one another. Now, with the Coordinate System panel open, Camera02 active, and the Top view selected, you can click on each of Camera02's tracker points and then alt-click (or command-click) on the corresponding Camera01 point, setting up all the links. As you complete the linking, you should remove the initial temporary constraints from Camera02.

Matching Plan B: Side by Side

In this plan, you can use the Camera Perspective viewport configuration. Make Camera01 active, and in the perspective window right-click and Lock to current camera with Camera01's imagery; then make Camera02 active for the camera view. Now the camera and perspective views show the two shots simultaneously. (Experts: you can open multiple perspective windows and configure each for a different shot.) You can now click the trackers in the camera (02) view and alt-click the matching (01) tracker in the perspective window, establishing the links. Reminder: the coordinate system control panel must be open for linking. This will take a little mental rotation to establish the right correspondences; the colors of the various pom-poms will help.
Matching Plan C: Cross Link by Name

This plan is probably more trouble than it's worth for calibration, but can be an excellent choice for the actual shots. You assign names to each of the pom-poms so that the names differ only by the first character, then use the Track/Cross-Link by Name menu item to establish the links. It is a bit of a pain to come up with different names for the pom-poms, and to do it identically for the two views, but this might be more reasonable for other calibration scenarios where it is more obvious which point is which.

Completing the Calibration

We're now ready to complete the calibration process. Change Camera02 to Indirectly solving mode on the Solver panel. Note that the initial position of Camera01 is going to stay fixed, controlling the overall positions of all the cameras. If you want it in some particular location, you can remove the constraints from it, reset its path from the 3-D panel, then move it around to a desired location.

Solve the shot, and you have two orbiting cameras remaining at a fixed relative orientation as they orbit. Run the Motion Capture Camera Calibration script from the Script menu, and the orbits will be squished down to single locations. Camera01 will be stationary at its initial location, and Camera02 will be jittering around another location, showing the stability of the offset between the two. The first frame of Camera02's position is actually an average relative position over the entire shot; it is this location we will later use. You should save this calibration scene file (porcupine.sni); it will be the starting point for tracking the real footage. The calibration script also produces a script_output.txt file in a user-specific folder that lists the calibration data.

Body and Facial Tracking Marks

Markers will make tracking faster, easier, and more accurate. On the face, markers might be little Avery dots from an office supply store, "magic marker" spots, pom-poms with rubber cement(?), mascara, or grease paint.
Note that small colored dots tend to lose their coloration in video images, especially with motion blur, so make sure there is a luminance difference. Single-pixel-sized spots are less accurate than those that are several pixels across. Markers should be placed on the face in locations that reflect the underlying musculature and the facial rigging they must drive. Be sure to include markers on comparatively stationary parts of the head. For body tracking, a typical approach is to put the performer in a black outfit (such as UnderArmour) and attach table-tennis balls as tracking features onto the joints. To achieve enough visibility, placing balls on both the top and bottom of the elbow may be necessary. Because the markers must be placed on the outside of the body, away from the true joint locations, character rigging will have to take this into account.

Preparation for Two-Dimensional Tracking

We're ready to begin tracking the actual performance footage. Open the final calibration scene file, then open the 3-D panel. For each camera, select the camera in the select-by-name dropdown list. Then hit Blast, and answer Yes to store the field-of-view data as well. Then hit Reset twice, answering Yes to remove keys from the field-of-view track also. The result of this little dance is to take the solved camera paths (as modified by the script) and make them the initial position and orientation for each camera, with no animation (since the cameras aren't actually moving). Next, replace the shot for each camera with LeftFaceSeq and RightFaceSeq. Again, these shots have been cropped based on the light flashes, which would normally be removed completely. Set the End Frame for each shot to its maximum possible value. If necessary, use an animated ROI on the Image Preprocessing panel so that you can keep both shots in RAM simultaneously. Hit Control-A, then Delete, to delete all the old trackers.
Set each Lens to Known, to lock the field of view, and set the solving mode of each camera to Disabled, since the cameras are fixed at their calibrated locations. We need a placeholder object to hold all the individual trackers: create a moving object, Object01, for Camera01, then a moving object, Object02, for Camera02. On the Solver Panel, set Object01 and Object02 to the Individual mocap solving mode, and set the synchronization mode right below that.

Two-Dimensional Tracking

You can now track both shots, creating the trackers in Object01 and Object02 for the respective shots. If you don't track all the markers, at least be sure to track a given marker either in both shots or in neither, as a half-tracked marker will not help. The Hand-Held: Use Others mode may be helpful here for the rapid facial motions. Frequent keying will be necessary when the motion causes motion blur to appear and disappear (a lot of uniform light and a short shutter time will minimize this).

Linking the Shots

After completing the tracking, you must set up links. The easiest approach will probably be to set up side-by-side camera and perspective views. Again, you should link the Object02 trackers to the Object01 trackers, not the other way around. Doing the linking by name can also be helpful, since the trackers should have fairly obvious names such as Nose or Left Inner Eyebrow.

Solving

You're ready to solve, and the Solve step should be very routine, producing paths for each of the linked trackers. The final file is facetrk.sni. Afterwards, you can start checking the trackers. You can scrub through the shot in the perspective window, orbiting around the face, and you can check the error curves and XYZ paths in the graph editor. By switching to Sort by Error mode, you can sequence through the trackers starting from those with the highest error.
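The cross-link-by-name idea used above (and in Matching Plan C) pairs up trackers whose names match once the leading shot-identifying character is dropped, linking each second-shot tracker to its first-shot counterpart. A tiny sketch of that matching logic, with invented tracker names (this mirrors the menu item's behavior conceptually; it is not SynthEyes code):

```python
def cross_link_by_name(cam1_trackers, cam2_trackers):
    """Pair trackers whose names match after dropping the first character.

    Returns {second-shot tracker: first-shot tracker}, matching the rule
    that links always go from the second shot's trackers to the first's.
    """
    by_suffix = {name[1:]: name for name in cam1_trackers}
    return {
        name: by_suffix[name[1:]]
        for name in cam2_trackers
        if name[1:] in by_suffix
    }

links = cross_link_by_name(["1Nose", "1Chin"], ["2Nose", "2Brow"])
print(links)  # {'2Nose': '1Nose'}
```

Unmatched trackers (here "1Chin" and "2Brow") are simply left unlinked, which is also what you want for half-tracked markers.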
Exports

Rigging

When you export a scene with individual trackers, each of them will have a key frame on every frame of the shot, animating the tracker path. It is up to you to determine a method of rigging your character to take advantage of the animated tracker paths; the method chosen will depend on your character and your animation software package. It is likely you will need some expressions (formulas) and some Look-At controls. For full-body motion capture, you will need to take into account the offsets from the tracking markers (i.e. the balls) to the actual joint locations.

Modeling

You can use the calculated point locations to build models. However, the animation of the vertices will not be carried forward into the meshes you build. Instead, when you do a Convert to Mesh operation in the perspective window, the current tracker locations are frozen at that frame. If desired, you can repeat the object-building process on different frames to build up a collection of morph-target meshes.
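The marker-to-joint offset mentioned above can be handled very simply when a joint carries a marker pair, such as the balls on the top and bottom of the elbow: the markers sit on opposite sides of the limb, so their midpoint is a reasonable estimate of the joint center. A sketch of that idea, assuming a hypothetical `Vec3` type rather than any particular package's math library:

```cpp
// Minimal 3-D point type for illustration.
struct Vec3 { double x, y, z; };

// Estimate the true joint center from two surface markers placed on
// opposite sides of the joint (e.g. top and bottom of the elbow) by
// taking their midpoint. Rigging expressions in the target package
// would evaluate something equivalent per frame.
Vec3 jointCenter(const Vec3& top, const Vec3& bottom) {
    return Vec3{ (top.x + bottom.x) * 0.5,
                 (top.y + bottom.y) * 0.5,
                 (top.z + bottom.z) * 0.5 };
}
```

For a joint with only one marker, the same correction would instead push the marker position inward along the limb surface normal by roughly half the limb thickness; either way, the rig consumes corrected joint positions, not raw marker positions.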
https://w.atwiki.jp/ushi-sun/pages/2.html
PSP CAPTURE SITE. This site mainly covers walkthroughs for PSP games. Titles covered: Shin Sangoku Musou, Shin Sangoku Musou 2nd Evolution, Geki Sengoku Musou.
https://w.atwiki.jp/intensity/pages/29.html
//-----------------------------------------------------------------------------
// $Id: DecklinkCaptureDlg.cpp,v 1.9 2006/04/11 01:11:07 ivanr Exp $
//
// Desc: DirectShow capture sample
//
// Copyright (c) Blackmagic Design 2005. All rights reserved.
//-----------------------------------------------------------------------------

#include "stdafx.h"
#include "DecklinkCapture.h"
#include "DecklinkCaptureDlg.h"

#include <initguid.h>

// TODO: move this to a lib
#include "DecklinkSample_uuids.h"

#undef lstrlenW

#ifdef _DEBUG
#define new DEBUG_NEW
#endif

#define WM_GRAPHNOTIFY  WM_APP+1    // for Filter Graph event notification

//-----------------------------------------------------------------------------
// CAboutDlg
//-----------------------------------------------------------------------------
// CAboutDlg dialog used for App About
class CAboutDlg : public CDialog
{
public:
    CAboutDlg();

// Dialog Data
    enum { IDD = IDD_ABOUTBOX };

protected:
    virtual void DoDataExchange(CDataExchange* pDX);    // DDX/DDV support

// Implementation
protected:
    DECLARE_MESSAGE_MAP()
};

CAboutDlg::CAboutDlg() : CDialog(CAboutDlg::IDD)
{
}

void CAboutDlg::DoDataExchange(CDataExchange* pDX)
{
    CDialog::DoDataExchange(pDX);
}

BEGIN_MESSAGE_MAP(CAboutDlg, CDialog)
END_MESSAGE_MAP()

//-----------------------------------------------------------------------------
// CDecklinkCaptureDlg dialog
//-----------------------------------------------------------------------------

//-----------------------------------------------------------------------------
// Constructor
//
CDecklinkCaptureDlg::CDecklinkCaptureDlg(CWnd* pParent /*=NULL*/)
    : CDialog(CDecklinkCaptureDlg::IDD, pParent)
    , m_pIVW(NULL)
{
    m_hIcon = AfxGetApp()->LoadIcon(IDR_MAINFRAME);
}

//-----------------------------------------------------------------------------
// DoDataExchange
//
void CDecklinkCaptureDlg::DoDataExchange(CDataExchange* pDX)
{
    CDialog::DoDataExchange(pDX);
    DDX_Control(pDX, IDC_COMBO_VIDEOFORMATS, m_videoFormatCtrl);
    DDX_Control(pDX, IDC_COMBO_AUDIOFORMATS, m_audioFormatCtrl);
    DDX_Control(pDX, IDC_STATIC_PREVIEW, m_preview);
    DDX_Control(pDX, IDC_EDIT_CAPTUREFILE, m_captureFileCtrl);
    DDX_Control(pDX, IDC_COMBO_COMPRESSION, m_compressionCtrl);
    DDX_Control(pDX, IDC_COMBO_VIDEODEVICE, m_videoDeviceCtrl);
    DDX_Control(pDX, IDC_COMBO_AUDIODEVICE, m_audioDeviceCtrl);
}

BEGIN_MESSAGE_MAP(CDecklinkCaptureDlg, CDialog)
    ON_WM_SYSCOMMAND()
    ON_WM_PAINT()
    ON_WM_QUERYDRAGICON()
    //}}AFX_MSG_MAP
    ON_CBN_SELCHANGE(IDC_COMBO_VIDEOFORMATS, OnCbnSelchangeComboVideoformats)
    ON_CBN_SELCHANGE(IDC_COMBO_AUDIOFORMATS, OnCbnSelchangeComboAudioformats)
    ON_BN_CLICKED(IDC_CHECK_AUDIOMUTE, OnBnClickedCheckAudiomute)
    ON_BN_CLICKED(IDC_BUTTON_BROWSE, OnBnClickedButtonBrowse)
    ON_BN_CLICKED(IDC_BUTTON_CAPTURE, OnBnClickedButtonCapture)
    ON_BN_CLICKED(IDC_BUTTON_STOP, OnBnClickedButtonStop)
    ON_CBN_SELCHANGE(IDC_COMBO_COMPRESSION, OnCbnSelchangeComboCompression)
    ON_CBN_SELCHANGE(IDC_COMBO_VIDEODEVICE, OnCbnSelchangeComboVideodevice)
    ON_CBN_SELCHANGE(IDC_COMBO_AUDIODEVICE, OnCbnSelchangeComboAudiodevice)
END_MESSAGE_MAP()

//-----------------------------------------------------------------------------
// CDecklinkCaptureDlg message handlers
//-----------------------------------------------------------------------------

//-----------------------------------------------------------------------------
// OnInitDialog
// Called before the dialog is displayed, use this message handler to initialise
// our app
BOOL CDecklinkCaptureDlg::OnInitDialog()
{
    CDialog::OnInitDialog();

    // Add "About..." menu item to system menu.
    // IDM_ABOUTBOX must be in the system command range.
    ASSERT((IDM_ABOUTBOX & 0xFFF0) == IDM_ABOUTBOX);
    ASSERT(IDM_ABOUTBOX < 0xF000);

    CMenu* pSysMenu = GetSystemMenu(FALSE);
    if (pSysMenu != NULL)
    {
        CString strAboutMenu;
        strAboutMenu.LoadString(IDS_ABOUTBOX);
        if (!strAboutMenu.IsEmpty())
        {
            pSysMenu->AppendMenu(MF_SEPARATOR);
            pSysMenu->AppendMenu(MF_STRING, IDM_ABOUTBOX, strAboutMenu);
        }
    }

    // Set the icon for this dialog. The framework does this automatically
    // when the application's main window is not a dialog
    SetIcon(m_hIcon, TRUE);     // Set big icon
    SetIcon(m_hIcon, FALSE);    // Set small icon

    // create a basic capture graph and preview the incoming video
    m_pGraph = NULL;
    m_pVideoCapture = NULL;
    m_pAudioCapture = NULL;
    m_pVideoRenderer = NULL;
    m_pSmartT = NULL;
    m_pControl = NULL;
    m_pIVW = NULL;
    m_pMediaEvent = NULL;
    m_ROTRegister = 0;
    m_bAudioMute = FALSE;
    m_compressor = 0;
    m_bEnableCompressionCtrl = TRUE;
    m_captureFile = "<Select File>";

    // initialise default video media type
    ZeroMemory(&m_vihDefault, sizeof(m_vihDefault));
    m_vihDefault.AvgTimePerFrame = 333667;
    m_vihDefault.bmiHeader.biWidth = 720;
    m_vihDefault.bmiHeader.biHeight = 486;
    m_vihDefault.bmiHeader.biBitCount = 16;
    m_vihDefault.bmiHeader.biCompression = 'YVYU';

    // initialise default audio media type
    ZeroMemory(&m_wfexDefault, sizeof(m_wfexDefault));
    m_wfexDefault.nChannels = 2;    // the only field of interest

    // retrieve last state
    QueryRegistry();
    m_captureFileCtrl.SetWindowText(m_captureFile);
    EnableControls();

    // create a preview graph
    // add the filters that will be used by all the graphs; preview, uncompressed capture, dv capture,
    // mpeg capture and windows media capture
    HRESULT hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER, IID_IGraphBuilder, reinterpret_cast<void**>(&m_pGraph));
    if (SUCCEEDED(hr))
    {
#ifdef _DEBUG
        hr = CDSUtils::AddGraphToRot(m_pGraph, &m_ROTRegister);
#endif
        hr = m_pGraph->QueryInterface(IID_IMediaControl, reinterpret_cast<void**>(&m_pControl));
        if (SUCCEEDED(hr))
        {
            // locate the video capture devices
            hr = PopulateDeviceControl(&CLSID_VideoInputDeviceCategory, &m_videoDeviceCtrl);
            if (SUCCEEDED(hr))
            {
                hr = PopulateDeviceControl(&CLSID_AudioInputDeviceCategory, &m_audioDeviceCtrl);
                if (SUCCEEDED(hr))
                {
                    PWSTR pVideoName = (PWSTR)m_videoDeviceCtrl.GetItemData(m_videoDeviceCtrl.SetCurSel(0));
                    PWSTR pAudioName = (PWSTR)m_audioDeviceCtrl.GetItemData(m_audioDeviceCtrl.SetCurSel(0));
                    if (pVideoName && pAudioName)
                    {
                        hr = CDSUtils::AddFilter2(m_pGraph, CLSID_VideoInputDeviceCategory, pVideoName, &m_pVideoCapture);
                        if (SUCCEEDED(hr))
                        {
                            hr = CDSUtils::AddFilter2(m_pGraph, CLSID_AudioInputDeviceCategory, pAudioName, &m_pAudioCapture);
                            if (SUCCEEDED(hr))
                            {
                                PopulateVideoControl();     // populate the video format control with the video formats of the currently selected device
                                PopulateAudioControl();     // populate the audio format control with the audio formats of the currently selected device
                                PopulateCompressionControl();

                                // locate video screen renderer for the preview window
                                hr = CDSUtils::AddFilter(m_pGraph, CLSID_VideoRendererDefault, L"Video Renderer", &m_pVideoRenderer);
                                if (SUCCEEDED(hr))
                                {
                                    hr = CreatePreviewGraph();
                                }
                            }
                        }
                    }
                }
            }
        }
    }

    return TRUE;  // return TRUE unless you set the focus to a control
}

//-----------------------------------------------------------------------------
// DestroyWindow
// Called when the window is being destroyed, clean up and free all resources.
BOOL CDecklinkCaptureDlg::DestroyWindow()
{
    m_regUtils.Close();

#ifdef _DEBUG
    CDSUtils::RemoveGraphFromRot(m_ROTRegister);
#endif

    DestroyGraph();

    SAFE_RELEASE(m_pControl);

    // Hide Video Window and remove owner. This has to be done prior to
    // destroying any window that displays video/still.
    if (m_pIVW)
    {
        m_pIVW->put_Visible(OAFALSE);
        m_pIVW->put_Owner(NULL);
    }
    SAFE_RELEASE(m_pIVW);
    SAFE_RELEASE(m_pMediaEvent);
    SAFE_RELEASE(m_pVideoRenderer);
    SAFE_RELEASE(m_pAudioCapture);
    SAFE_RELEASE(m_pVideoCapture);
    SAFE_RELEASE(m_pGraph);

    // free mediatypes attached to format controls
    int count = m_videoFormatCtrl.GetCount();
    for (int item=0; item<count; ++item)
    {
        DeleteMediaType((AM_MEDIA_TYPE*)m_videoFormatCtrl.GetItemData(item));
    }

    count = m_audioFormatCtrl.GetCount();
    for (int item=0; item<count; ++item)
    {
        DeleteMediaType((AM_MEDIA_TYPE*)m_audioFormatCtrl.GetItemData(item));
    }

    // release the device names attached to the item's data
    count = m_videoDeviceCtrl.GetCount();
    for (item=0; item<count; ++item)
    {
        PWSTR pName = (PWSTR)m_videoDeviceCtrl.GetItemData(item);
        delete [] pName;
    }

    count = m_audioDeviceCtrl.GetCount();
    for (item=0; item<count; ++item)
    {
        PWSTR pName = (PWSTR)m_audioDeviceCtrl.GetItemData(item);
        delete [] pName;
    }

    return CDialog::DestroyWindow();
}

//-----------------------------------------------------------------------------
// OnSysCommand
//
void CDecklinkCaptureDlg::OnSysCommand(UINT nID, LPARAM lParam)
{
    if ((nID & 0xFFF0) == IDM_ABOUTBOX)
    {
        CAboutDlg dlgAbout;
        dlgAbout.DoModal();
    }
    else
    {
        CDialog::OnSysCommand(nID, lParam);
    }
}

//-----------------------------------------------------------------------------
// OnPaint
// If you add a minimize button to your dialog, you will need the code below
// to draw the icon. For MFC applications using the document/view model,
// this is automatically done for you by the framework.
void CDecklinkCaptureDlg::OnPaint()
{
    if (IsIconic())
    {
        CPaintDC dc(this);  // device context for painting

        SendMessage(WM_ICONERASEBKGND, reinterpret_cast<WPARAM>(dc.GetSafeHdc()), 0);

        // Center icon in client rectangle
        int cxIcon = GetSystemMetrics(SM_CXICON);
        int cyIcon = GetSystemMetrics(SM_CYICON);
        CRect rect;
        GetClientRect(&rect);
        int x = (rect.Width() - cxIcon + 1) / 2;
        int y = (rect.Height() - cyIcon + 1) / 2;

        // Draw the icon
        dc.DrawIcon(x, y, m_hIcon);
    }
    else
    {
        CDialog::OnPaint();
    }
}

//-----------------------------------------------------------------------------
// HandleGraphEvent
// At the moment we just read the event, discard it and release memory used to store it.
void CDecklinkCaptureDlg::HandleGraphEvent(void)
{
    LONG lEventCode, lEventParam1, lEventParam2;

    if (!m_pMediaEvent)
    {
        return;
    }

    while (SUCCEEDED(m_pMediaEvent->GetEvent(&lEventCode, reinterpret_cast<LONG_PTR*>(&lEventParam1), reinterpret_cast<LONG_PTR*>(&lEventParam2), 0)))
    {
        // just free memory associated with event
        m_pMediaEvent->FreeEventParams(lEventCode, lEventParam1, lEventParam2);
    }
}

//-----------------------------------------------------------------------------
// WindowProc
// Have to add our own message handling loop to handle events from the preview video
// window and to pass Window events onto it - this is so it redraws itself correctly etc.
LRESULT CDecklinkCaptureDlg::WindowProc(UINT message, WPARAM wParam, LPARAM lParam)
{
    switch (message)
    {
        case WM_GRAPHNOTIFY:
            HandleGraphEvent();
            break;
    }

    // Pass all msgs to video window. vid window exists as child of static
    // picture frame. This ensures video window redraws itself etc.
    if (m_pIVW)
    {
        m_pIVW->NotifyOwnerMessage(reinterpret_cast<LONG_PTR>(m_hWnd) /* from me */, message, wParam, lParam);
    }

    return CDialog::WindowProc(message, wParam, lParam);
}

//-----------------------------------------------------------------------------
// OnQueryDragIcon
// The system calls this function to obtain the cursor to display while the user drags
// the minimized window.
HCURSOR CDecklinkCaptureDlg::OnQueryDragIcon()
{
    return static_cast<HCURSOR>(m_hIcon);
}

//-----------------------------------------------------------------------------
// CreatePreviewGraph
// Create a graph to preview the input
// NOTE: There are many ways of building graphs, you could opt for the ICaptureGraphBuilder interface which would
// make things a lot simpler, however it doesn't always build the most efficient graphs.
HRESULT CDecklinkCaptureDlg::CreatePreviewGraph()
{
    HRESULT hr = S_OK;

    if (m_pGraph)
    {
        // locate smart-T
        // NOTE: The smart-T appears to hold references to its upstream connections even when its input pin
        // is disconnected. The smart-T has to be removed from the graph in order to clear these references which
        // is why the filter is enumerated and added every time the preview graph is built and removed whenever
        // it is destroyed.
        ASSERT(NULL == m_pSmartT);
        hr = CDSUtils::AddFilter(m_pGraph, CLSID_SmartTee, L"Smart Tee", &m_pSmartT);
        if (SUCCEEDED(hr))
        {
            // DV preview is slightly different to all other previews
            if (ENC_DV != m_compressionCtrl.GetItemData(m_compressionCtrl.GetCurSel()))
            {
                // uncompressed, mpeg and wm preview
                // create the following:
                //
                // Decklink Video Capture -> Smart-T -> AVI Decompressor -> Video Renderer
                // Decklink Audio Capture -> Default Audio Renderer
                //
                // render the preview pin on the smart-T filter
                // first connect the Decklink video capture pin to the smart-T
                hr = CDSUtils::ConnectFilters(m_pGraph, m_pVideoCapture, NULL, m_pSmartT, NULL);
                if (SUCCEEDED(hr))
                {
                    // now connect the preview pin of the smart-T to the video renderer
                    hr = CDSUtils::ConnectFilters(m_pGraph, m_pSmartT, L"Preview", m_pVideoRenderer, NULL);
                }
            }
            else
            {
                // DV Preview
                // create the following:
                //
                // Decklink Video Capture -> AVI Decompressor -> Smart-T -> Colour Space Converter -> Video Renderer
                // Decklink Audio Capture -> Default Audio Renderer
                //
                // this is a more efficient graph than created by the ICaptureGraphBuilder2 interface

                // add the AVI decompressor and colour space converter filters
                CComPtr<IBaseFilter> pAVIDecompressor = NULL;
                hr = CDSUtils::AddFilter(m_pGraph, CLSID_AVIDec, L"AVI Decompressor", &pAVIDecompressor);
                if (SUCCEEDED(hr))
                {
                    CComPtr<IBaseFilter> pColourSpaceConverter = NULL;
                    hr = CDSUtils::AddFilter(m_pGraph, CLSID_Colour, L"Color Space Converter", &pColourSpaceConverter);
                    if (SUCCEEDED(hr))
                    {
                        // connect the Decklink video capture pin to the AVI decompressor
                        hr = CDSUtils::ConnectFilters(m_pGraph, m_pVideoCapture, NULL, pAVIDecompressor, NULL);
                        if (SUCCEEDED(hr))
                        {
                            // connect AVI decompressor to the smart-T
                            hr = CDSUtils::ConnectFilters(m_pGraph, pAVIDecompressor, NULL, m_pSmartT, NULL);
                            if (SUCCEEDED(hr))
                            {
                                // connect the preview pin of the smart-T to the colour space converter
                                hr = CDSUtils::ConnectFilters(m_pGraph, m_pSmartT, L"Preview", pColourSpaceConverter, NULL);
                                if (SUCCEEDED(hr))
                                {
                                    // connect the colour space converter to the video renderer
                                    hr = CDSUtils::ConnectFilters(m_pGraph, pColourSpaceConverter, NULL, m_pVideoRenderer, NULL);
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    else
    {
        hr = E_POINTER;
    }

    if (SUCCEEDED(hr))
    {
        // the video path has been connected, initialise the preview window
        InitialiseVideoPreview();

        // optionally connect the audio path
        if (FALSE == m_bAudioMute)
        {
            // connect the Decklink audio capture pin to the mux
            hr = CDSUtils::RenderFilter(m_pGraph, m_pAudioCapture, L"Capture");
        }

        // run the graph so that we can preview the input video
        if (m_pControl)
        {
            hr = m_pControl->Run();
        }
        else
        {
            hr = E_POINTER;
        }
    }

    return hr;
}

//-----------------------------------------------------------------------------
// CreateCaptureGraph
// Create a graph to capture the input
HRESULT CDecklinkCaptureDlg::CreateCaptureGraph()
{
    HRESULT hr = S_OK;

    // tack the file writer onto the preview graph
    if (m_pGraph && m_pControl)
    {
        // stop the graph as we are about to modify it
        m_pControl->Stop();

        // remove the default audio renderer so the Decklink audio capture filter
        // can be connected to the AVI mux, we will not preview audio whilst capturing
        CComPtr<IPin> pIPinOutput = NULL;
        hr = CDSUtils::GetPin(m_pAudioCapture, L"Capture", &pIPinOutput);
        if (SUCCEEDED(hr))
        {
            // to disconnect both pins must be disconnected
            // find the pin connected to the Decklink audio capture pin
            CComPtr<IPin> pIPinConnection = NULL;
            hr = pIPinOutput->ConnectedTo(&pIPinConnection);
            if (SUCCEEDED(hr))
            {
                // disconnect the pins
                hr = m_pGraph->Disconnect(pIPinOutput);
                hr = m_pGraph->Disconnect(pIPinConnection);

                // get the owning filter of the downstream pin and remove it from the graph
                PIN_INFO pinInfo = {0};
                hr = pIPinConnection->QueryPinInfo(&pinInfo);
                if (SUCCEEDED(hr))
                {
                    if (pinInfo.pFilter)
                    {
                        hr = m_pGraph->RemoveFilter(pinInfo.pFilter);
                        pinInfo.pFilter->Release();
                    }
                }
            }
        }

        // retrieve the capture filename
        m_captureFileCtrl.GetWindowText(m_captureFile);

        // store filename
        USES_CONVERSION;
        WCHAR captureFile[MAX_PATH];
        wcsncpy(captureFile, A2W(m_captureFile), MAX_PATH);
        EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.SetString("CaptureFile", reinterpret_cast<const BYTE*>(captureFile), sizeof(captureFile)));

        // decide the type of capture graph to build
        switch (m_compressionCtrl.GetItemData(m_compressionCtrl.GetCurSel()))
        {
            default:
            case ENC_NONE:
                hr = CreateUncompressedCaptureGraph();
                break;

            case ENC_DV:
                hr = CreateDVCaptureGraph();
                break;

            case ENC_WM:
                hr = CreateWMCaptureGraph();
                break;
        }

        if (FAILED(hr))
        {
            // there was a problem building the capture graph, issue a message
            // and rebuild preview graph
            char buffer[128];
            StringCbPrintfA(buffer, sizeof(buffer), "The error 0x%08lx was detected when creating the capture graph with the following file name:\r\n\r\n\"%s\"", hr, m_captureFile);
            MessageBox(buffer, _T("Error"), MB_ICONERROR);
            OnBnClickedButtonStop();    // destroy broken capture graph, build preview graph and enable controls
        }
    }
    else
    {
        hr = E_POINTER;
    }

    return hr;
}

//-----------------------------------------------------------------------------
// CreateUncompressedCaptureGraph
// Create an optimum uncompressed capture graph
HRESULT CDecklinkCaptureDlg::CreateUncompressedCaptureGraph()
{
    HRESULT hr = S_OK;

    // uncompressed capture
    // locate the AVI mux and file writer filters and add them to the graph
    CComPtr<IBaseFilter> pAVIMux = NULL;
    hr = CDSUtils::AddFilter(m_pGraph, CLSID_AviDest, L"AVI Mux", &pAVIMux);
    if (SUCCEEDED(hr))
    {
        CComPtr<IBaseFilter> pFileWriter = NULL;
        hr = CDSUtils::AddFilter(m_pGraph, CLSID_FileWriter, L"File writer", &pFileWriter);
        if (SUCCEEDED(hr))
        {
            // set the output filename
            CComQIPtr<IFileSinkFilter, &IID_IFileSinkFilter> pIFS = pFileWriter;
            if (pIFS)
            {
                USES_CONVERSION;    // for T2W macro
                hr = pIFS->SetFileName(T2W(m_captureFile), NULL);
                if (SUCCEEDED(hr))
                {
                    // connect the smart-T capture pin to the mux
                    hr = CDSUtils::ConnectFilters(m_pGraph, m_pSmartT, L"Capture", pAVIMux, NULL);
                    if (SUCCEEDED(hr))
                    {
                        // connect the mux to the file writer
                        hr = CDSUtils::ConnectFilters(m_pGraph, pAVIMux, NULL, pFileWriter, NULL);
                        if (SUCCEEDED(hr))
                        {
                            // video path connected now optionally connect the audio path
                            if (FALSE == m_bAudioMute)
                            {
                                // connect the Decklink audio capture pin to the mux
                                hr = CDSUtils::ConnectFilters(m_pGraph, m_pAudioCapture, L"Capture", pAVIMux, NULL);
                            }

                            if (SUCCEEDED(hr))
                            {
                                m_pControl->Run();
                            }
                        }
                    }
                }
            }
        }
    }

    return hr;
}

//-----------------------------------------------------------------------------
// CreateDVCaptureGraph
// Create an optimum DV capture graph
// NOTE that this will only work for SD
HRESULT CDecklinkCaptureDlg::CreateDVCaptureGraph()
{
    HRESULT hr = S_OK;

    // locate the DV encoder, AVI mux and file writer filters and add them to the graph
    CComPtr<IBaseFilter> pDVEncoder = NULL;
    hr = CDSUtils::AddFilter(m_pGraph, CLSID_DVVideoEnc, L"DV Video Encoder", &pDVEncoder);
    if (SUCCEEDED(hr))
    {
        CComPtr<IBaseFilter> pAVIMux = NULL;
        hr = CDSUtils::AddFilter(m_pGraph, CLSID_AviDest, L"AVI Mux", &pAVIMux);
        if (SUCCEEDED(hr))
        {
            CComPtr<IBaseFilter> pFileWriter = NULL;
            hr = CDSUtils::AddFilter(m_pGraph, CLSID_FileWriter, L"File writer", &pFileWriter);
            if (SUCCEEDED(hr))
            {
                // set the output filename
                CComQIPtr<IFileSinkFilter, &IID_IFileSinkFilter> pIFS = pFileWriter;
                if (pIFS)
                {
                    USES_CONVERSION;    // for T2W macro
                    hr = pIFS->SetFileName(T2W(m_captureFile), NULL);
                    if (SUCCEEDED(hr))
                    {
                        // configure the DV encoder
                        CComQIPtr<IDVEnc, &IID_IDVEnc> pIDV = pDVEncoder;
                        if (pIDV)
                        {
                            // located a DV compression filter, set the format
                            int videoFormat, dvFormat, resolution;
                            hr = pIDV->get_IFormatResolution(&videoFormat, &dvFormat, &resolution, FALSE, NULL);
                            if (SUCCEEDED(hr))
                            {
                                ASSERT(DVENCODERFORMAT_DVSD == dvFormat);
                                ASSERT(DVENCODERRESOLUTION_720x480 == resolution);
                                if ((DVENCODERVIDEOFORMAT_NTSC == videoFormat) && (576 == m_vihDefault.bmiHeader.biHeight))
                                {
                                    // set the encoder to PAL if its NTSC
                                    videoFormat = DVENCODERVIDEOFORMAT_PAL;
                                    hr = pIDV->put_IFormatResolution(videoFormat, dvFormat, resolution, FALSE, NULL);
                                }
                                else if ((DVENCODERVIDEOFORMAT_PAL == videoFormat) && (486 == m_vihDefault.bmiHeader.biHeight))
                                {
                                    // set the encoder to NTSC if its PAL
                                    videoFormat = DVENCODERVIDEOFORMAT_NTSC;
                                    hr = pIDV->put_IFormatResolution(videoFormat, dvFormat, resolution, FALSE, NULL);
                                }
                            }
                        }

                        if (SUCCEEDED(hr))
                        {
                            // if the format is PAL, insert the Decklink field swap filter, PAL DV is the opposite
                            // field order to PAL SD
                            if (576 == m_vihDefault.bmiHeader.biHeight)
                            {
                                CComPtr<IBaseFilter> pPALFieldSwap = NULL;
                                hr = CDSUtils::AddFilter(m_pGraph, CLSID_DecklinkFieldSwap, L"Decklink PAL Field Swap", &pPALFieldSwap);
                                if (SUCCEEDED(hr))
                                {
                                    // connect the smart-T capture pin to the PAL field swap filter
                                    hr = CDSUtils::ConnectFilters(m_pGraph, m_pSmartT, L"Capture", pPALFieldSwap, NULL);
                                    if (SUCCEEDED(hr))
                                    {
                                        // connect the field swap filter to the DV encoder
                                        hr = CDSUtils::ConnectFilters(m_pGraph, pPALFieldSwap, NULL, pDVEncoder, NULL);
                                    }
                                }
                            }
                            else
                            {
                                // connect the smart-T capture pin to the DV Encoder
                                hr = CDSUtils::ConnectFilters(m_pGraph, m_pSmartT, L"Capture", pDVEncoder, NULL);
                            }

                            if (SUCCEEDED(hr))
                            {
                                // connect the DV encoder output to the AVI mux
                                hr = CDSUtils::ConnectFilters(m_pGraph, pDVEncoder, NULL, pAVIMux, NULL);
                                if (SUCCEEDED(hr))
                                {
                                    // connect the mux to the file writer
                                    hr = CDSUtils::ConnectFilters(m_pGraph, pAVIMux, NULL, pFileWriter, NULL);
                                    if (SUCCEEDED(hr))
                                    {
                                        // video path connected now optionally connect the audio path
                                        if (FALSE == m_bAudioMute)
                                        {
                                            // connect the Decklink audio capture pin to the mux
                                            hr = CDSUtils::ConnectFilters(m_pGraph, m_pAudioCapture, L"Capture", pAVIMux, NULL);
                                        }

                                        if (SUCCEEDED(hr))
                                        {
                                            m_pControl->Run();
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }

    return hr;
}

//-----------------------------------------------------------------------------
// CreateWMCaptureGraph
// Create an optimum Windows Media capture graph
HRESULT CDecklinkCaptureDlg::CreateWMCaptureGraph()
{
    HRESULT hr = S_OK;

    // locate the asf writer filter and add it to the graph
    CComPtr<IBaseFilter> pASFWriter = NULL;
    hr = CDSUtils::AddFilter(m_pGraph, CLSID_WMAsfWriter, L"WM ASF Writer", &pASFWriter);
    if (SUCCEEDED(hr))
    {
        // set the output filename
        CComQIPtr<IFileSinkFilter, &IID_IFileSinkFilter> pIFS = pASFWriter;
        if (pIFS)
        {
            USES_CONVERSION;    // for T2W macro
            hr = pIFS->SetFileName(T2W(m_captureFile), NULL);
            if (SUCCEEDED(hr))
            {
                hr = ConfigureWMEncoder(pASFWriter);
            }
        }

        if (SUCCEEDED(hr))
        {
            if (FALSE == m_bAudioMute)
            {
                // connect the Decklink audio capture pin to the ASF writer
                hr = CDSUtils::ConnectFilters(m_pGraph, m_pAudioCapture, pASFWriter, MEDIATYPE_Audio);
            }

            if (SUCCEEDED(hr))
            {
                // connect the smart-T capture pin to the ASF writer
                hr = CDSUtils::ConnectFilters(m_pGraph, m_pSmartT, pASFWriter, MEDIATYPE_Video);
                if (SUCCEEDED(hr))
                {
                    m_pControl->Run();
                }
            }
        }
    }

    return hr;
}

//-----------------------------------------------------------------------------
// ConfigureWMEncoder
// Configure the Windows Media encoder
HRESULT CDecklinkCaptureDlg::ConfigureWMEncoder(IBaseFilter* pASFWriter)
{
    HRESULT hr = S_OK;

    // modify the video output resolution of a system profile
    if (pASFWriter)
    {
        // simple system profile encoding
        CComQIPtr<IConfigAsfWriter, &IID_IConfigAsfWriter> pICW = pASFWriter;
        if (pICW)
        {
            //NOTE: You could just use the following for a default system profile
            //hr = pICW->ConfigureFilterUsingProfileGuid(WMProfile_XXX);    // RE: wmsysprf.h

            //NOTE: If you want video only capture you must modify the profile to remove the audio
            // otherwise encoding will fail

            // Load a system profile and modify the resolution of the video output
            // NOTE: The scope of the encoding is enormous, this just demonstrates how to change
            // the output video resolution from 320x240 to something larger.
            // Changing the resolution affects coding performance, it is likely that the encoder will
            // start to drop frames after a while. Using WM9 codecs will probably improve performance
            // and that has been left to the reader... ;o)
            //
            // get a profile manager
            CComPtr<IWMProfileManager> pIWMProfileManager = NULL;
            hr = WMCreateProfileManager(&pIWMProfileManager);
            if (SUCCEEDED(hr))
            {
                // load a system profile to modify
                CComPtr<IWMProfile> pIWMProfile = NULL;
                // NOTE: Any WMProfile_XXX could be used here, or create a custom profile from scratch
                hr = pIWMProfileManager->LoadProfileByID(WMProfile_V80_FAIRVBRVideo, &pIWMProfile);
                if (SUCCEEDED(hr))
                {
                    // search the streams for the video stream and attempt to modify the video size
                    DWORD cbStreams = 0;
                    hr = pIWMProfile->GetStreamCount(&cbStreams);
                    if (SUCCEEDED(hr))
                    {
                        IWMStreamConfig* pIWMStreamConfig = NULL;
                        GUID streamType = {0};
                        DWORD stream;

                        if (m_bAudioMute)
                        {
                            // remove the audio stream for video only capture
                            for (stream=0; stream<cbStreams; ++stream)
                            {
                                hr = pIWMProfile->GetStream(stream, &pIWMStreamConfig);
                                if (SUCCEEDED(hr))
                                {
                                    hr = pIWMStreamConfig->GetStreamType(&streamType);
                                    if (SUCCEEDED(hr))
                                    {
                                        if (MEDIATYPE_Audio == streamType)
                                        {
                                            if (SUCCEEDED(pIWMProfile->RemoveStream(pIWMStreamConfig)))
                                            {
                                                --cbStreams;
                                            }
                                            SAFE_RELEASE(pIWMStreamConfig);
                                            break;
                                        }
                                    }
                                }
                            }
                        }

                        for (stream=0; stream<cbStreams; ++stream)
                        {
                            hr = pIWMProfile->GetStream(stream, &pIWMStreamConfig);
                            if (SUCCEEDED(hr))
                            {
                                hr = pIWMStreamConfig->GetStreamType(&streamType);
                                if (SUCCEEDED(hr) && (MEDIATYPE_Video == streamType))
                                {
                                    // found the video stream
                                    CComQIPtr<IWMMediaProps, &IID_IWMMediaProps> pIWMMediaProps = pIWMStreamConfig;
                                    if (pIWMMediaProps)
                                    {
                                        // get the size of the media type
                                        WM_MEDIA_TYPE* pMediaType = NULL;
                                        DWORD cbMediaType = 0;
                                        hr = pIWMMediaProps->GetMediaType(pMediaType, &cbMediaType);
                                        if (SUCCEEDED(hr))
                                        {
                                            pMediaType = (WM_MEDIA_TYPE*)new char [cbMediaType];
                                            if (pMediaType)
                                            {
                                                hr = pIWMMediaProps->GetMediaType(pMediaType, &cbMediaType);
                                                if (SUCCEEDED(hr))
                                                {
                                                    BITMAPINFOHEADER* pbmih = NULL;
                                                    if (WMFORMAT_VideoInfo == pMediaType->formattype)
                                                    {
                                                        WMVIDEOINFOHEADER* pvih = (WMVIDEOINFOHEADER*)pMediaType->pbFormat;
                                                        pbmih = &pvih->bmiHeader;
                                                    }
                                                    else if (WMFORMAT_MPEG2Video == pMediaType->formattype)
                                                    {
                                                        WMVIDEOINFOHEADER2* pvih = (WMVIDEOINFOHEADER2*)&((WMMPEG2VIDEOINFO*)pMediaType->pbFormat)->hdr;
                                                        pbmih = &pvih->bmiHeader;
                                                    }

                                                    if (pbmih)
                                                    {
                                                        // modify the video dimensions, set the property, reconfigure the stream
                                                        // and then configure the ASF writer with this modified profile
                                                        pbmih->biWidth = 640;   // was 320;
                                                        pbmih->biHeight = 480;  // was 240;
                                                        pbmih->biSizeImage = pbmih->biWidth * pbmih->biHeight * pbmih->biBitCount / 8;  // NOTE: This calculation is not correct for all bit depths
                                                        hr = pIWMMediaProps->SetMediaType(pMediaType);
                                                        if (SUCCEEDED(hr))
                                                        {
                                                            // config the ASF writer filter to use this modified system profile
                                                            hr = pIWMProfile->ReconfigStream(pIWMStreamConfig);
                                                            if (SUCCEEDED(hr))
                                                            {
                                                                hr = pICW->ConfigureFilterUsingProfile(pIWMProfile);
                                                            }
                                                        }
                                                    }
                                                }

                                                delete [] (char*)pMediaType;
                                            }
                                        }
                                    }
                                }
                                SAFE_RELEASE(pIWMStreamConfig);
                            }
                        }
                    }
                }
            }

/*
            // modify other ASF writer properties
            IServiceProvider* pProvider = NULL;
            hr = pASFWriter->QueryInterface(IID_IServiceProvider, reinterpret_cast<void**>(&pProvider));
            if (SUCCEEDED(hr))
            {
                IID_IWMWriterAdvanced2* pWMWA2 = NULL;
                hr = pProvider->QueryService(IID_IID_IWMWriterAdvanced2, IID_IID_IWMWriterAdvanced2, reinterpret_cast<void**>(&pWMWA2));
                if (SUCCEEDED(hr))
                {
                    // set the deinterlace mode
                    pWMWA2->GetInputSetting(...);
                    SAFE_RELEASE(pWMWA2);
                }
                SAFE_RELEASE(pProvider);
            }
*/
        }
    }
    else
    {
        hr = E_INVALIDARG;
    }

    return hr;
}

//-----------------------------------------------------------------------------
// DestroyGraph
// Remove all intermediate filters, keep any Decklink and video render filters as
// these are used by all the graphs.
HRESULT CDecklinkCaptureDlg::DestroyGraph()
{
    HRESULT hr = S_OK;

    if (m_pGraph && m_pControl)
    {
        m_pControl->Stop();

        // release our outstanding reference on this filter so it can be removed from the graph
        SAFE_RELEASE(m_pSmartT);

        // retrieve the name of the capture device, don't remove it in this method
        PWSTR pNameVideoCapture = (PWSTR)m_videoDeviceCtrl.GetItemData(m_videoDeviceCtrl.GetCurSel());
        PWSTR pNameAudioCapture = (PWSTR)m_audioDeviceCtrl.GetItemData(m_audioDeviceCtrl.GetCurSel());

        CComPtr<IEnumFilters> pEnum = NULL;
        hr = m_pGraph->EnumFilters(&pEnum);
        if (SUCCEEDED(hr))
        {
            IBaseFilter* pFilter = NULL;
            while (S_OK == pEnum->Next(1, &pFilter, NULL))
            {
                FILTER_INFO filterInfo = {0};
                hr = pFilter->QueryFilterInfo(&filterInfo);
                if (SUCCEEDED(hr))
                {
                    SAFE_RELEASE(filterInfo.pGraph);
                    if ((NULL == wcsstr(filterInfo.achName, pNameVideoCapture)) && (NULL == wcsstr(filterInfo.achName, pNameAudioCapture)) && (NULL == wcsstr(filterInfo.achName, L"Video Renderer")))
                    {
                        hr = m_pGraph->RemoveFilter(pFilter);
                        if (SUCCEEDED(hr))
                        {
                            hr = pEnum->Reset();
                        }
                    }
                }
                SAFE_RELEASE(pFilter);
            }
        }
    }
    else
    {
        hr = E_POINTER;
    }

    return hr;
}

//-----------------------------------------------------------------------------
// InitialiseVideoPreview
// In short get the video screen renderer to draw into the picture control, which is our preview window
// the following code sets this up, in addition to adding the HandleGraphEvent and WindowProc methods
// read the DXSDK docos for more detailed information
void CDecklinkCaptureDlg::InitialiseVideoPreview(void)
{
    // modify the preview window
    if (m_pVideoRenderer)
    {
        if (NULL == m_pIVW)
        {
            if (SUCCEEDED(m_pVideoRenderer->QueryInterface(IID_IVideoWindow, reinterpret_cast<void**>(&m_pIVW))))
            {
                // get the window to handle redraws, etc
                // Set msg drain of VideoWindow to point to our dialog window. The dialog's
                // window procedure then handles events from the VideoWindow.
                HRESULT hr = m_pIVW->put_MessageDrain(reinterpret_cast<OAHWND>(m_hWnd));

                if (NULL == m_pMediaEvent)
                {
                    // Make graph send WM_GRAPHNOTIFY when it wants our attention see "Learning
                    // When an Event Occurs" in the DX9 documentation.
                    hr = m_pGraph->QueryInterface(IID_IMediaEventEx, reinterpret_cast<void**>(&m_pMediaEvent));
                    if (SUCCEEDED(hr))
                    {
                        hr = m_pMediaEvent->SetNotifyWindow(reinterpret_cast<OAHWND>(m_hWnd), WM_GRAPHNOTIFY, 0);
                    }
                }

                // object created for it.
                RECT rc;
                m_preview.GetClientRect(&rc);
                m_pIVW->SetWindowPosition(rc.left, rc.top, rc.right - rc.left, rc.bottom - rc.top);

                // VideoWindow is a child window of the bounding rect
                hr = m_pIVW->put_WindowStyle(WS_CHILD);
                hr = m_pIVW->put_Owner(reinterpret_cast<OAHWND>(m_preview.GetSafeHwnd()));
                hr = m_pIVW->SetWindowForeground(-1);
            }
        }
    }
}

//-----------------------------------------------------------------------------
// PopulateDeviceControl
// Fill device combo box with available devices of the specified category
HRESULT CDecklinkCaptureDlg::PopulateDeviceControl(const GUID* pCategory, CComboBox* pCtrl)
{
    HRESULT hr = S_OK;

    if (pCategory && pCtrl)
    {
        // first enumerate the system devices for the specifed class and filter name
        CComPtr<ICreateDevEnum> pSysDevEnum = NULL;
        hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, reinterpret_cast<void**>(&pSysDevEnum));
        if (SUCCEEDED(hr))
        {
            CComPtr<IEnumMoniker> pEnumCat = NULL;
            hr = pSysDevEnum->CreateClassEnumerator(*pCategory, &pEnumCat, 0);
            if (S_OK == hr)
            {
                IMoniker* pMoniker = NULL;
                bool Loop = true;
                while ((S_OK == pEnumCat->Next(1, &pMoniker, NULL)) && Loop)
                {
                    IPropertyBag* pPropBag = NULL;
                    hr = pMoniker->BindToStorage(0, 0, IID_IPropertyBag, reinterpret_cast<void**>(&pPropBag));
                    if (SUCCEEDED(hr))
                    {
                        VARIANT varName;
                        VariantInit(&varName);
                        hr = pPropBag->Read(L"FriendlyName", &varName, 0);
                        if (SUCCEEDED(hr))
                        {
                            size_t len = wcslen(varName.bstrVal) + 1;
                            PWSTR pName = new WCHAR [len];
                            StringCchCopyW(pName, len, varName.bstrVal);
                            CW2AEX<> buf(varName.bstrVal);
                            pCtrl->SetItemData(pCtrl->AddString(buf), (DWORD)pName);
                        }
                        VariantClear(&varName);

                        // contained within a loop, decrement the reference count
                        SAFE_RELEASE(pPropBag);
                    }
                    SAFE_RELEASE(pMoniker);
                }
            }
        }
    }
    else
    {
        hr = E_POINTER;
    }

    return hr;
}

//-----------------------------------------------------------------------------
// PopulateVideoControl
// Fill video format combo box with supported video formats using the IAMStreamConfig
// interface.
HRESULT CDecklinkCaptureDlg::PopulateVideoControl()
{
    HRESULT hr = S_OK;

    if (m_pVideoCapture)
    {
        // free mediatypes attached to format controls
        int count = m_videoFormatCtrl.GetCount();
        if (count)
        {
            for (int item=0; item<count; ++item)
            {
                DeleteMediaType((AM_MEDIA_TYPE*)m_videoFormatCtrl.GetItemData(item));
            }
            m_videoFormatCtrl.ResetContent();
        }

        // locate the video capture pin and QI for stream control
        CComPtr<IAMStreamConfig> pISC = NULL;
        hr = CDSUtils::FindPinInterface(m_pVideoCapture, &MEDIATYPE_Video, PINDIR_OUTPUT, IID_IAMStreamConfig, reinterpret_cast<void**>(&pISC));
        if (SUCCEEDED(hr))
        {
            // loop through all the capabilities (video formats) and populate the control
            int count, size;
            hr = pISC->GetNumberOfCapabilities(&count, &size);
            if (SUCCEEDED(hr))
            {
                if (sizeof(VIDEO_STREAM_CONFIG_CAPS) == size)
                {
                    AM_MEDIA_TYPE* pmt = NULL;
                    VIDEO_STREAM_CONFIG_CAPS vscc;
                    VIDEOINFOHEADER* pvih = NULL;

                    for (int index=0; index<count; ++index)
                    {
                        hr = pISC->GetStreamCaps(index, &pmt, reinterpret_cast<BYTE*>(&vscc));
                        if (SUCCEEDED(hr))
                        {
                            char buffer[128];
                            WORD PixelFormat;
                            float FrameRate;
                            ZeroMemory(buffer, sizeof(buffer));
                            pvih = (VIDEOINFOHEADER*)pmt->pbFormat;

                            char* pPixelFormatLUT[] = {"4:2:2", "4:4:4"};
                            if (pvih->bmiHeader.biBitCount == 16)
                                PixelFormat = 8;
                            else if (pvih->bmiHeader.biBitCount == 20)
                                PixelFormat = 10;
                            else
                                PixelFormat = pvih->bmiHeader.biBitCount;

                            // provide a useful description of the formats
                            if (486 == pvih->bmiHeader.biHeight)
                            {
                                if (417083 == pvih->AvgTimePerFrame)
                                {
                                    StringCbPrintfA(buffer, sizeof(buffer), "NTSC %d-bit %s (3:2 pulldown removal)", PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                                }
                                else
                                {
                                    StringCbPrintfA(buffer, sizeof(buffer), "NTSC %d-bit %s", PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                                }
                            }
                            else if (576 == pvih->bmiHeader.biHeight)
                            {
                                StringCbPrintfA(buffer, sizeof(buffer), "PAL %d-bit %s", PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                            }
                            else
                            {
                                char* pFrameRateFormat[] = {"%.2f", "%.0f"};
                                FrameRate = (float)(long)UNITS / pvih->AvgTimePerFrame;

                                if ((720 == pvih->bmiHeader.biHeight) && (59.94 < FrameRate))
                                {
                                    if ((FrameRate - (int)FrameRate) > 0.01)
                                    {
                                        StringCbPrintfA(buffer, sizeof(buffer), "HD720 %.2fp %d-bit %s (Overcranked 60p)", FrameRate, PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                                    }
                                    else
                                    {
                                        StringCbPrintfA(buffer, sizeof(buffer), "HD720 %.0fp %d-bit %s (Overcranked 60p)", FrameRate, PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                                    }
                                }
                                else if ((720 == pvih->bmiHeader.biHeight) && (59.94 >= FrameRate))
                                {
                                    if ((FrameRate - (int)FrameRate) > 0.01)
                                    {
                                        StringCbPrintfA(buffer, sizeof(buffer), "HD720 %.2fp %d-bit %s", FrameRate, PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                                    }
                                    else
                                    {
                                        StringCbPrintfA(buffer, sizeof(buffer), "HD720 %.0fp %d-bit %s", FrameRate, PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                                    }
                                }
                                else if ((1080 == pvih->bmiHeader.biHeight) && (50.00 <= FrameRate))
                                {
                                    if ((FrameRate - (int)FrameRate) > 0.01)
                                    {
                                        StringCbPrintfA(buffer, sizeof(buffer), "HD1080 %.2fi %d-bit %s", FrameRate, PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                                    }
                                    else
                                    {
                                        StringCbPrintfA(buffer, sizeof(buffer), "HD1080 %.0fi %d-bit %s", FrameRate, PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                                    }
                                }
                                else
                                {
                                    if ((FrameRate - (int)FrameRate) > 0.01)
                                    {
                                        StringCbPrintfA(buffer, sizeof(buffer), "HD1080 %.2fPsF %d-bit %s", FrameRate, PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                                    }
                                    else
                                    {
                                        StringCbPrintfA(buffer, sizeof(buffer), "HD1080 %.0fPsF %d-bit %s", FrameRate, PixelFormat, pPixelFormatLUT[(30 == PixelFormat)]);
                                    }
                                }
                            }

                            // add the item description to
combo box int n = m_videoFormatCtrl.AddString(buffer); // store media type pointer in item s data section m_videoFormatCtrl.SetItemData(n, (DWORD_PTR)pmt); // set default format if ((pvih- AvgTimePerFrame == m_vihDefault.AvgTimePerFrame) (pvih- bmiHeader.biWidth == m_vihDefault.bmiHeader.biWidth) (pvih- bmiHeader.biHeight == m_vihDefault.bmiHeader.biHeight) (pvih- bmiHeader.biBitCount == m_vihDefault.bmiHeader.biBitCount)) { m_videoFormatCtrl.SetCurSel(n); pISC- SetFormat(pmt); } } } } else { m_videoFormatCtrl.AddString("ERROR Unable to retrieve video formats"); } } } } else { hr = E_POINTER; } return hr; } //----------------------------------------------------------------------------- // PopulateAudioControl // Fill audio format combo box with supported audio formats using the IAMStreamConfig // interface. HRESULT CDecklinkCaptureDlg PopulateAudioControl() { HRESULT hr = S_OK; if (m_pAudioCapture) { // free mediatypes attached to format controls int count = m_audioFormatCtrl.GetCount(); if (count) { for (int item=0; item count; ++item) { DeleteMediaType((AM_MEDIA_TYPE*)m_audioFormatCtrl.GetItemData(item)); } m_audioFormatCtrl.ResetContent(); } // locate the audio capture pin and QI for stream control CComPtr IAMStreamConfig pISC = NULL; hr = CDSUtils FindPinInterface(m_pAudioCapture, MEDIATYPE_Audio, PINDIR_OUTPUT, IID_IAMStreamConfig, reinterpret_cast void** ( pISC)); if (SUCCEEDED(hr)) { // loop through all the capabilities (audio formats) and populate the control int count, size; hr = pISC- GetNumberOfCapabilities( count, size); if (SUCCEEDED(hr)) { if (sizeof(AUDIO_STREAM_CONFIG_CAPS) == size) { AM_MEDIA_TYPE* pmt = NULL; AUDIO_STREAM_CONFIG_CAPS ascc; WAVEFORMATEX* pwfex = NULL; for (int index=0; index count; ++index) { hr = pISC- GetStreamCaps(index, pmt, reinterpret_cast BYTE* ( ascc)); if (SUCCEEDED(hr)) { char buffer[32]; ZeroMemory(buffer, sizeof(buffer)); pwfex = (WAVEFORMATEX*)pmt- pbFormat; // provide a useful description of the formats if (1 == 
pwfex- nChannels) { StringCbPrintfA(buffer, sizeof(buffer), "%d channel, %2.1fkHz, %d-bit", (int)pwfex- nChannels, (float)pwfex- nSamplesPerSec / 1000, (int)pwfex- wBitsPerSample); } else { StringCbPrintfA(buffer, sizeof(buffer), "%d channels, %2.1fkHz, %d-bit", (int)pwfex- nChannels, (float)pwfex- nSamplesPerSec / 1000, (int)pwfex- wBitsPerSample); } // add the item description to combo box int n = m_audioFormatCtrl.AddString(buffer); // store media type pointer in item s data section m_audioFormatCtrl.SetItemData(n, (DWORD_PTR)pmt); // set default format if ((pwfex- wFormatTag == m_wfexDefault.wFormatTag) (pwfex- nChannels == m_wfexDefault.nChannels) (pwfex- nSamplesPerSec == m_wfexDefault.nSamplesPerSec) (pwfex- nAvgBytesPerSec == m_wfexDefault.nAvgBytesPerSec)) { m_audioFormatCtrl.SetCurSel(n); pISC- SetFormat(pmt); } } } } else { m_audioFormatCtrl.AddString("ERROR Unable to retrieve audio formats"); } } } } else { hr = E_POINTER; } return hr; } //----------------------------------------------------------------------------- // PopulateCompressionControl // Fill compression control with a selection of video compressors, locate the // encoders and add them to the combo box if they exist. 
HRESULT CDecklinkCaptureDlg::PopulateCompressionControl()
{
    int n = m_compressionCtrl.AddString("Uncompressed");
    m_compressionCtrl.SetItemData(n, (DWORD_PTR)ENC_NONE);

    // search for the DV encoder, MPEG encoder and WM encoder
    IBaseFilter* pFilter = NULL;
    HRESULT hr = CoCreateInstance(CLSID_DVVideoEnc, 0, CLSCTX_INPROC_SERVER, IID_IBaseFilter, reinterpret_cast<void**>(&pFilter));
    if (SUCCEEDED(hr))
    {
        n = m_compressionCtrl.SetCurSel(m_compressionCtrl.AddString("DV Video Encoder"));
        m_compressionCtrl.SetItemData(n, (DWORD_PTR)ENC_DV);
        SAFE_RELEASE(pFilter);
    }

    hr = CoCreateInstance(CLSID_WMAsfWriter, 0, CLSCTX_INPROC_SERVER, IID_IBaseFilter, reinterpret_cast<void**>(&pFilter));
    if (SUCCEEDED(hr))
    {
        n = m_compressionCtrl.SetCurSel(m_compressionCtrl.AddString("Windows Media Encoder"));
        m_compressionCtrl.SetItemData(n, (DWORD_PTR)ENC_WM);
        SAFE_RELEASE(pFilter);
    }

    m_compressionCtrl.SetCurSel(m_compressor);

    return S_OK;
}

//-----------------------------------------------------------------------------
// OnCbnSelchangeComboVideodevice
// Rebuild graph with selected capture device
void CDecklinkCaptureDlg::OnCbnSelchangeComboVideodevice()
{
    SAFE_RELEASE(m_pVideoCapture); // release our outstanding reference

    // remove intermediate filters; since the device selection has changed, the capture device will also be removed
    HRESULT hr = DestroyGraph();
    if (SUCCEEDED(hr))
    {
        // rebuild graph with new capture device selection
        PWSTR pName = (PWSTR)m_videoDeviceCtrl.GetItemData(m_videoDeviceCtrl.GetCurSel());
        if (pName)
        {
            hr = CDSUtils::AddFilter2(m_pGraph, CLSID_VideoInputDeviceCategory, pName, &m_pVideoCapture);
            if (SUCCEEDED(hr))
            {
                // as the device has changed, get the current operating format so that the control
                // can display this as the current selection
                CComPtr<IAMStreamConfig> pISC = NULL;
                hr = CDSUtils::FindPinInterface(m_pVideoCapture, &MEDIATYPE_Video, PINDIR_OUTPUT, IID_IAMStreamConfig, reinterpret_cast<void**>(&pISC));
                if (SUCCEEDED(hr))
                {
                    // get the current format of the device to set the current selection of the control
                    AM_MEDIA_TYPE* pamt = NULL;
                    hr = pISC->GetFormat(&pamt);
                    if (SUCCEEDED(hr))
                    {
                        if (pamt->pbFormat)
                        {
                            m_vihDefault = *(VIDEOINFOHEADER*)pamt->pbFormat;
                        }
                        DeleteMediaType(pamt);
                    }
                }

                hr = PopulateVideoControl(); // repopulate the control with formats from the new device
                if (SUCCEEDED(hr))
                {
                    hr = CreatePreviewGraph(); // rebuild the graph with the new device
                }
            }
        }
        else
        {
            hr = E_POINTER;
        }
    }
}

//-----------------------------------------------------------------------------
// OnCbnSelchangeComboAudiodevice
// Rebuild graph with selected capture device
void CDecklinkCaptureDlg::OnCbnSelchangeComboAudiodevice()
{
    SAFE_RELEASE(m_pAudioCapture); // release our outstanding reference

    // remove intermediate filters; since the device selection has changed, the capture device will also be removed
    HRESULT hr = DestroyGraph();
    if (SUCCEEDED(hr))
    {
        PWSTR pName = (PWSTR)m_audioDeviceCtrl.GetItemData(m_audioDeviceCtrl.GetCurSel());
        if (pName)
        {
            hr = CDSUtils::AddFilter2(m_pGraph, CLSID_AudioInputDeviceCategory, pName, &m_pAudioCapture);
            if (SUCCEEDED(hr))
            {
                // as the device has changed, get the current operating format so that the control
                // can display this as the current selection
                CComPtr<IAMStreamConfig> pISC = NULL;
                hr = CDSUtils::FindPinInterface(m_pAudioCapture, &MEDIATYPE_Audio, PINDIR_OUTPUT, IID_IAMStreamConfig, reinterpret_cast<void**>(&pISC));
                if (SUCCEEDED(hr))
                {
                    // get the current format of the device to set the current selection of the control
                    AM_MEDIA_TYPE* pamt = NULL;
                    hr = pISC->GetFormat(&pamt);
                    if (SUCCEEDED(hr))
                    {
                        if (pamt->pbFormat)
                        {
                            m_wfexDefault = *(WAVEFORMATEX*)pamt->pbFormat;
                        }
                        DeleteMediaType(pamt);
                    }
                }

                hr = PopulateAudioControl(); // repopulate the control with formats from the new device
                if (SUCCEEDED(hr))
                {
                    hr = CreatePreviewGraph(); // rebuild the graph with the new device
                }
            }
        }
        else
        {
            hr = E_POINTER;
        }
    }
}

//-----------------------------------------------------------------------------
// OnCbnSelchangeComboVideoformats
// Rebuild preview graph if format selection changed
void CDecklinkCaptureDlg::OnCbnSelchangeComboVideoformats()
{
    HRESULT hr = DestroyGraph();
    if (SUCCEEDED(hr))
    {
        // locate the video capture pin and QI for stream control
        CComPtr<IAMStreamConfig> pISC = NULL;
        hr = CDSUtils::FindPinInterface(m_pVideoCapture, &MEDIATYPE_Video, PINDIR_OUTPUT, IID_IAMStreamConfig, reinterpret_cast<void**>(&pISC));
        if (SUCCEEDED(hr))
        {
            // set the new media format
            AM_MEDIA_TYPE* pmt = (AM_MEDIA_TYPE*)m_videoFormatCtrl.GetItemData(m_videoFormatCtrl.GetCurSel());
            m_vihDefault = *(VIDEOINFOHEADER*)pmt->pbFormat;
            ASSERT(sizeof(VIDEOINFOHEADER) <= pmt->cbFormat);
            hr = pISC->SetFormat(pmt);
            if (SUCCEEDED(hr))
            {
                // save the new format
                EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.SetBinary("VideoFormat", reinterpret_cast<const BYTE*>(&m_vihDefault), sizeof(m_vihDefault)));

                // update compression control; we don't have an HD compression filter, so disable compression for HD formats
                if (576 < m_vihDefault.bmiHeader.biHeight)
                {
                    m_compressor = 0;
                    m_compressionCtrl.SetCurSel(m_compressor);
                    // save the new state
                    EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.SetBinary("VideoCompressor", reinterpret_cast<const BYTE*>(&m_compressor), sizeof(m_compressor)));
                    m_bEnableCompressionCtrl = FALSE;
                }
                else
                {
                    m_bEnableCompressionCtrl = TRUE;
                }
                EnableControls();

                // rebuild the graph
                hr = CreatePreviewGraph();
            }
        }
    }
}

//-----------------------------------------------------------------------------
// OnCbnSelchangeComboAudioformats
// Rebuild preview graph if format selection changed
void CDecklinkCaptureDlg::OnCbnSelchangeComboAudioformats()
{
    HRESULT hr = DestroyGraph();
    if (SUCCEEDED(hr))
    {
        // locate the audio capture pin and QI for stream control
        CComPtr<IAMStreamConfig> pISC = NULL;
        hr = CDSUtils::FindPinInterface(m_pAudioCapture, &MEDIATYPE_Audio, PINDIR_OUTPUT, IID_IAMStreamConfig, reinterpret_cast<void**>(&pISC));
        if (SUCCEEDED(hr))
        {
            // set the new media format
            AM_MEDIA_TYPE* pmt = (AM_MEDIA_TYPE*)m_audioFormatCtrl.GetItemData(m_audioFormatCtrl.GetCurSel());
            m_wfexDefault = *(WAVEFORMATEX*)pmt->pbFormat;
            ASSERT(sizeof(WAVEFORMATEX) == pmt->cbFormat);
            hr = pISC->SetFormat(pmt);
            if (SUCCEEDED(hr))
            {
                // save the new format
                EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.SetBinary("AudioFormat", reinterpret_cast<const BYTE*>(&m_wfexDefault), sizeof(m_wfexDefault)));

                // rebuild the graph
                hr = CreatePreviewGraph();
            }
        }
    }
}

//-----------------------------------------------------------------------------
// OnCbnSelchangeComboCompression
// Rebuild preview graph if compression selection changed
void CDecklinkCaptureDlg::OnCbnSelchangeComboCompression()
{
    HRESULT hr = DestroyGraph();
    if (SUCCEEDED(hr))
    {
        // save the new state
        m_compressor = m_compressionCtrl.GetCurSel();
        EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.SetBinary("VideoCompressor", reinterpret_cast<const BYTE*>(&m_compressor), sizeof(m_compressor)));

        // rebuild the graph
        hr = CreatePreviewGraph();
    }
}

//-----------------------------------------------------------------------------
// OnBnClickedCheckAudiomute
// Rebuild the capture graph to reflect the new audio setting
void CDecklinkCaptureDlg::OnBnClickedCheckAudiomute()
{
    CButton* pCheck = (CButton*)GetDlgItem(IDC_CHECK_AUDIOMUTE);
    if (pCheck)
    {
        m_bAudioMute = pCheck->GetState() & 0x0003;

        HRESULT hr = DestroyGraph();
        if (SUCCEEDED(hr))
        {
            // save the new state
            EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.SetBinary("AudioMute", reinterpret_cast<const BYTE*>(&m_bAudioMute), sizeof(m_bAudioMute)));

            // rebuild the graph, which reflects the new audio setting
            hr = CreatePreviewGraph();
        }
    }
}

//-----------------------------------------------------------------------------
// OnBnClickedButtonBrowse
// Create a file open dialog to browse for a file location
void CDecklinkCaptureDlg::OnBnClickedButtonBrowse()
{
    char BASED_CODE szFilters[] = "Windows Media Files|*.avi;*.asf;*.wmv|All Files (*.*)|*.*||";
    char* pExt[] = {"*.avi", "*.avi", "*.asf;*.wmv"};

    CFileDialog FileDlg(TRUE, "Windows Media Files", pExt[m_compressor], 0, szFilters, this);

    if (FileDlg.DoModal() == IDOK)
    {
        m_captureFile = FileDlg.GetPathName();
        m_captureFileCtrl.SetWindowText(m_captureFile);
    }
}

//-----------------------------------------------------------------------------
// OnBnClickedButtonCapture
// Create a capture graph and start capture
void CDecklinkCaptureDlg::OnBnClickedButtonCapture()
{
    HRESULT hr = CreateCaptureGraph();
    if (SUCCEEDED(hr))
    {
        if (m_pControl)
        {
            hr = m_pControl->Run();
            if (SUCCEEDED(hr))
            {
                DisableControls();
            }
        }
    }
}

//-----------------------------------------------------------------------------
// OnBnClickedButtonStop
// Stop capture and revert to preview
void CDecklinkCaptureDlg::OnBnClickedButtonStop()
{
    HRESULT hr = DestroyGraph();
    if (SUCCEEDED(hr))
    {
        hr = CreatePreviewGraph();
        if (SUCCEEDED(hr))
        {
            EnableControls();
        }
    }
}

//-----------------------------------------------------------------------------
// EnableControls
//
void CDecklinkCaptureDlg::EnableControls(void)
{
    CWnd* pWnd = GetDlgItem(IDC_COMBO_VIDEOFORMATS);
    pWnd->EnableWindow(TRUE);
    pWnd = GetDlgItem(IDC_COMBO_AUDIOFORMATS);
    pWnd->EnableWindow(TRUE);
    pWnd = GetDlgItem(IDC_CHECK_AUDIOMUTE);
    pWnd->EnableWindow(TRUE);
    pWnd = GetDlgItem(IDC_COMBO_COMPRESSION);
    m_bEnableCompressionCtrl = (576 < m_vihDefault.bmiHeader.biHeight) ? FALSE : TRUE; // don't have an HDV codec, so disable the compression control for HD formats
    pWnd->EnableWindow(m_bEnableCompressionCtrl);
    pWnd = GetDlgItem(IDC_EDIT_CAPTUREFILE);
    pWnd->EnableWindow(TRUE);
    pWnd = GetDlgItem(IDC_BUTTON_BROWSE);
    pWnd->EnableWindow(TRUE);
    pWnd = GetDlgItem(IDC_BUTTON_CAPTURE);
    pWnd->EnableWindow(TRUE);
    pWnd = GetDlgItem(IDC_BUTTON_STOP);
    pWnd->EnableWindow(FALSE);
}

//-----------------------------------------------------------------------------
// DisableControls
//
void CDecklinkCaptureDlg::DisableControls(void)
{
    CWnd* pWnd = GetDlgItem(IDC_COMBO_VIDEOFORMATS);
    pWnd->EnableWindow(FALSE);
    pWnd = GetDlgItem(IDC_COMBO_AUDIOFORMATS);
    pWnd->EnableWindow(FALSE);
    pWnd = GetDlgItem(IDC_CHECK_AUDIOMUTE);
    pWnd->EnableWindow(FALSE);
    pWnd = GetDlgItem(IDC_COMBO_COMPRESSION);
    pWnd->EnableWindow(FALSE);
    pWnd = GetDlgItem(IDC_EDIT_CAPTUREFILE);
    pWnd->EnableWindow(FALSE);
    pWnd = GetDlgItem(IDC_BUTTON_BROWSE);
    pWnd->EnableWindow(FALSE);
    pWnd = GetDlgItem(IDC_BUTTON_CAPTURE);
    pWnd->EnableWindow(FALSE);
    pWnd = GetDlgItem(IDC_BUTTON_STOP);
    pWnd->EnableWindow(TRUE);
}

//-----------------------------------------------------------------------------
// QueryRegistry
// Retrieve previous media formats from the registry
void CDecklinkCaptureDlg::QueryRegistry(void)
{
    if (ERROR_SUCCESS == m_regUtils.Open("DecklinkCaptureSample"))
    {
        EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.GetBinary("VideoFormat", reinterpret_cast<LPBYTE>(&m_vihDefault), sizeof(m_vihDefault)));
        EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.GetBinary("AudioFormat", reinterpret_cast<LPBYTE>(&m_wfexDefault), sizeof(m_wfexDefault)));
        EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.GetBinary("AudioMute", reinterpret_cast<LPBYTE>(&m_bAudioMute), sizeof(m_bAudioMute)));
        EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.GetBinary("VideoCompressor", reinterpret_cast<LPBYTE>(&m_compressor), sizeof(m_compressor)));

        WCHAR captureFile[MAX_PATH];
        ZeroMemory(captureFile, sizeof(captureFile));
        EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.GetString("CaptureFile", reinterpret_cast<LPBYTE>(captureFile), sizeof(captureFile)));
        m_captureFile = captureFile;
    }
    else
    {
        // create the key and registry values
        if (ERROR_SUCCESS == m_regUtils.Create("DecklinkCaptureSample"))
        {
            EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.SetBinary("VideoFormat", reinterpret_cast<const BYTE*>(&m_vihDefault), sizeof(m_vihDefault)));
            EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.SetBinary("AudioFormat", reinterpret_cast<const BYTE*>(&m_wfexDefault), sizeof(m_wfexDefault)));
            EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.SetBinary("AudioMute", reinterpret_cast<const BYTE*>(&m_bAudioMute), sizeof(m_bAudioMute)));
            EXECUTE_ASSERT(ERROR_SUCCESS == m_regUtils.SetBinary("VideoCompressor", reinterpret_cast<const BYTE*>(&m_compressor), sizeof(m_compressor)));
        }
    }

    // update mute audio check box control
    CButton* pButton = (CButton*)GetDlgItem(IDC_CHECK_AUDIOMUTE);
    pButton->SetCheck(m_bAudioMute);
}
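The format labels built in PopulateVideoControl derive a frame rate from the media type's AvgTimePerFrame (in 100 ns units) and print it with two decimals only when it is fractional (e.g. 59.94 vs 50). A minimal, SDK-free sketch of that labelling logic, usable outside DirectShow; the names kUnits and FrameRateLabel are illustrative, not part of the sample:

```cpp
#include <cstdio>
#include <string>

// 100 ns units per second, the same scale as DirectShow's REFERENCE_TIME (UNITS).
static const long long kUnits = 10000000LL;

// Build a frame-rate label from a frame duration in 100 ns units, mirroring the
// dialog's choice of "%.2f" for fractional rates and "%.0f" for integral ones.
std::string FrameRateLabel(long long avgTimePerFrame)
{
    char buffer[64];
    double frameRate = (double)kUnits / (double)avgTimePerFrame;
    if ((frameRate - (long)frameRate) > 0.01)
        std::snprintf(buffer, sizeof(buffer), "%.2f fps", frameRate);
    else
        std::snprintf(buffer, sizeof(buffer), "%.0f fps", frameRate);
    return buffer;
}
```

For example, an AvgTimePerFrame of 166833 (the 59.94 Hz NTSC-family rate) yields "59.94 fps", while 200000 (exactly 50 Hz) yields "50 fps"; 417083 is the 23.98 fps duration the sample tests for when labelling 3:2 pulldown removal.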