Brian
Everything posted by Brian

  1. It's possible it was even older than 9; I don't really remember, except for the part where I was creating the WAV from stems.
  2. Hmm, I think the last version a client was using was 9; I'll look into this. Although... I'm not scared of Audacity and doing it myself with provided stems!
  3. It's possible you could use Touch Designer from Derivative to create the "glue" interface Mike was talking about. Touch Designer is already set up to take input from the Kinect, and has modules for outputting IP commands and LOTS of other things. In minutes you can be controlling a 3D spaceship image from your accelerometer-enabled cell phone! It's incredibly flexible and made possible things like Skrillex's real-time giant dancing robot on the 2011 Mothership tour (I think that's what it was called). Derivative's website is full of examples involving Kinect work, and because of Touch Designer's flexibility it could possibly interface WatchOUT and Kinect for you. I am clueless on how, but I know the power of both packages and I could see it happening.
  4. In regards to "looping" your cards together over SDI for "ease" of setup... I would avoid this. You are potentially introducing delay between the inputs by varying the distance the signal has to travel before hitting each card. I would do this:
Single source > cable > DA > identical cables > capture cards
instead of this:
Single source > cable > card > cable > card > cable > card
Signal travels VERY fast, for sure... but going back to the old 5-wire RGBHV/VGA days for a rough example: if your red cable was shorter than the rest you would get "ghosting". The red signal would arrive ahead of the green, blue, and sync, and the processing would show it. Really there are more physics involved than that, but those are analog issues beyond this example. Yes, SDI is digital rather than analog, so you won't see color shifts like that, but I believe you run the risk of sync issues if you have an input that crosses Displays between multiple servers. Your sync hits card 1, which starts its capture, then loops out to card 2, which then starts its capture... On a soft-edge blend almost ANY sync difference between Displays/Servers will be noticeable. Just a thought. I can't help you with the rest, as I only have DVI capture card experience. Brian Lynn
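For what it's worth, the propagation delay in the cable itself is tiny compared to a video frame; the real risk in a loop-through chain is each card re-clocking and starting its capture at a slightly different moment, as the post above describes. A back-of-the-envelope sketch (my own numbers; the 0.85 velocity factor is an assumption for good coax):

```python
# Back-of-the-envelope check (my own numbers, not from the post):
# propagation delay in coax vs. the length of one 60 Hz video frame.
C = 299_792_458              # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.85       # assumed velocity factor for good SDI coax

def cable_delay_ns(length_m: float) -> float:
    """Nanoseconds for a signal to traverse length_m of cable."""
    return length_m / (C * VELOCITY_FACTOR) * 1e9

frame_ns = 1e9 / 60          # one 60 Hz frame, in nanoseconds
print(f"100 m of coax: ~{cable_delay_ns(100):.0f} ns")   # ~392 ns
print(f"one 60 Hz frame: ~{frame_ns / 1e6:.1f} ms")      # ~16.7 ms
```

So even a 100 m difference in cable length is tens of thousands of times smaller than a frame; cable-length matching matters far less than the capture-start timing of each card.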
  5. I feed my camera/cameras/camera switch SDI/HD-SDI into a BARCO ImageProHD (or similar scaler), convert it to 1080p@60 DVI (or any resolution/progressive scan rate, depending on the source), and bring that to my capture cards. This works very well for me. Using the DeInterlace option in WatchOUT adds an incredible amount of extra delay. The ImageProHD feed into WatchOUT is a pleasure to work with and adds minimal delay to a system that needs as little delay as possible. It also gives me incredible flexibility for multiple flavors of input, as the ImageProHD can handle a plethora of signal types and can sub-switch those signal types into my capture cards.
  6. This line scares me: "Note that the transmission range depends on the signal resolution, graphics card and display used." Distance should be independent of the graphics card and display, especially with DVI. It's a digital signal between two converters; the graphics card and display are invisible to the cable. Different resolutions require different bandwidth and therefore will greatly affect the distance, but the Kramer boxes shouldn't care what makes the signal as long as it's clean and strong enough to retransmit, and they shouldn't care what it's attached to on the other end; they simply output a signal to whatever is there. Even with EDID pass-through, which is a nice feature, the Kramer boxes shouldn't care what is generating and what is receiving the signal. I would call them out on that if I were talking to them directly. Gefen used to do this, and it was a forced sale for their overpriced in-house cables. Kramer claiming you have to use their cables is something I always shy away from.
My experience with DVI over CAT5 is limited. I have a few Gefen DVI>Cat5 boxes and they won't carry a signal very far. I use DVI>Cat5/6 for runs under 100 feet (approx. 30 m). Cat5 vs. Cat6, shielded vs. unshielded, doesn't seem to make a difference. Anything over 100 feet (30 m) we do with fiber optic. This may seem like an expensive option, but we've had great luck with the Gefen DVI FM 100. These un-marry the fiber cable from the ends so you can easily replace the fiber itself, and if I don't care about the color I can find good fiber surplus inexpensively. The biggest problem is making sure you have the proper connectors and cables, as there are many flavors of fiber. For extreme lengths that outdistance your CAT setup you could always rent fiber if buying is too expensive. In the USA we rent a lot of fiber from the big rental houses... quality is hit and miss (on the rental stock), and fiber can be picky, failing after abuse for seemingly silly reasons, so rent spares!
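To put rough numbers on the resolution-vs-bandwidth point, here's a quick sketch (my own approximation: 24-bit color with about 25% blanking overhead folded in; real DVI timings vary by standard, and this ignores TMDS 8b/10b encoding):

```python
def dvi_data_rate_gbps(width, height, refresh_hz,
                       bits_per_pixel=24, blanking_overhead=1.25):
    """Very rough uncompressed link data rate for a DVI signal, in Gbps."""
    pixel_clock = width * height * refresh_hz * blanking_overhead
    return pixel_clock * bits_per_pixel / 1e9

for w, h, hz in [(1280, 720, 60), (1400, 1050, 60), (1920, 1080, 60)]:
    print(f"{w}x{h}@{hz}: ~{dvi_data_rate_gbps(w, h, hz):.1f} Gbps")
```

Roughly doubling the pixel count roughly doubles the bit rate the extender has to carry cleanly, which is why range drops with resolution regardless of what's generating or receiving the signal.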
  7. From experience: trying to program custom resolutions is VERY difficult, so I would go with the suggestion from Jonas... You would be digging into writing a customized driver for your graphics card, and you don't want to get into that... Providing the custom EDID is easy with the proper hardware (e.g. DVI Parrot, Gefen DVI Detective, LightWare EDID Manager), but the driver side takes a hardcore understanding of driver programming. Save yourself the headache and go with Jonas' suggestion!
  8. Good to know 4 works... I think I'm maxed on my PCIe lanes though, bummer.
  9. Nic, do the files play or just completely fail? Can you play them outside of WatchOUT and see all 8 channels working? What audio interface are you using for playback? What version of WatchOUT?
The best luck I've had creating 8-channel wave files was with Audacity. From my clients I request clearly labeled 8-channel "stems". These import into Audacity easily (all the "stems" should be exactly the same length), and then I can either use a default output to interleave the 8-channel file or re-assign the channels to fit my setup (which I try to keep standard anyway). Everything I've gotten from clients that is pre-programmed 8-channel audio tends to fail unless they've used the same process I do. From what I've heard, the last version of Pro Tools can't even export a 7.1 wave file, but that might have just been the audio guy being lazy or not knowing his software; I don't do much audio creation. It's possible there are simply too many settings in the "pro" software that Audacity has omitted or streamlined, but Audacity has not failed me yet. And it's free!
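For anyone who would rather script it, the same stems-to-multichannel merge can be sketched with Python's standard-library wave module (my own illustration, not the Audacity workflow; it assumes equal-length mono stems at the same sample rate and bit depth, and channel order follows file order):

```python
import wave

def interleave_stems(stem_files, out_path):
    """Merge N equal-length mono WAV stems into one N-channel WAV.

    Output channel order follows the order of stem_files.
    """
    stems = [wave.open(name, "rb") for name in stem_files]
    try:
        nframes = stems[0].getnframes()
        width = stems[0].getsampwidth()   # bytes per sample (2 for 16-bit)
        rate = stems[0].getframerate()
        assert all(s.getnframes() == nframes for s in stems), \
            "all stems must be exactly the same length"
        frames = [s.readframes(nframes) for s in stems]
    finally:
        for s in stems:
            s.close()

    with wave.open(out_path, "wb") as out:
        out.setnchannels(len(stem_files))
        out.setsampwidth(width)
        out.setframerate(rate)
        # Interleave: one sample from each stem, in order, for every frame.
        out.writeframes(b"".join(
            frames[ch][i * width:(i + 1) * width]
            for i in range(nframes)
            for ch in range(len(stem_files))
        ))
```

Called as, say, `interleave_stems(["stem_1.wav", ..., "stem_8.wav"], "show_8ch.wav")` (file names hypothetical), it produces a standard interleaved multichannel WAV like Audacity's export.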
  10. Thanks Neil! I LOVE the AUX Timelines... They make live show timing soooooo much easier! No more waiting for a long cue to time out so I can start the next section if something runs really short...
Dan: I believe I can use both DVI inputs on my capture cards simultaneously. I will test this and get back to you with a confirmation. I believe I did this during my testing a few days ago, and I should be able to capture a sample and post a link... assuming I can post links... I hope I can...
The following link is a short video I made in the shop during testing. It doesn't show both DVI inputs working, but here is what it does show. What you're looking at is my Corio2 multiviewer monitor. The grid is the 16 channels of my servers (4 servers, 4 heads each) set up in a matrix: the top row is server 1, down to the bottom row for server 4. The feed is coming from a MacPro tower playing back a KiPro capture of a show we recently did. My WatchOUT setup is the same 4x4 matrix grid as the multiviewer monitor. Testing only, I was having fun lol. All 16 outputs are 1400x1050, but the multiviewer is stretching to FILL, so my 4:3 is stretched to 16:9 (hence some small distortion).
First I show a 16-up: all 16 outputs carrying the same signal. Second, a 4-up: each full image is made up of four outputs, two from one server and two from another. Third, a 1-up: my capture card input across all 16 outputs, so each Display is showing 1/16 of the input. Fourth, I add a Scale and Rotation tween to show that the image is truly crossing each output. Fifth, a quick animation. The STOP is intentional: I made the PIP grow and spin, stop and hold, then grow, spin, and opacity-fade to 0. I was stress testing my rig to see what I could throw at it before it started to act... odd... I really wouldn't sell my WatchOUT rig as being capable of this... but it is... http://vimeo.com/41111857 Hope you like it! Brian Lynn
  11. Presets? What is it you would like to create "presets" for? Copy/Paste and the ability to replace footage have kept me from feeling like presets would be an advantage, but with a little more information about what you are trying to accomplish with them, maybe we can point you in the proper direction, or possibly add to the "feature requests" thread?
  12. I use AUX Timelines almost exclusively at this point. Any show I've built will have zero content on the Main Timeline. My AUX Timelines break the flow down into the smallest parts possible, almost like PowerPoint slides: every look, every transition, almost every everything has its own AUX. If there is a need for a quick rearrangement of cues in a show (a presenter going early, cancelled, late, etc.) I can either rearrange* my AUXs very quickly compared to a linear Main Timeline, or simply cue the AUX Timelines out of order (I tend to stack my AUXs in order, Slide 1 on the bottom, last slide on top). If a video background needs to end early, I can do that by simply starting the next AUX on top of it! No more waiting for a very long cue to time out before you can jump in with a new one!
The other nice thing about AUX Timelines is, as stated above, that you can name them anything you want. Some stage managers I work with are very flexible about names and some are very strict; AUX Timelines let me keep everyone happy! They also have individual run times that report back in the Task window, which is nice. They also allow for a sort of manual "prepping": by pressing STOP on an inactive timeline you can get it ready to run. Using an AUX this way tends to give me better sync on starts than simply hitting PLAY first. Hitting STOP on an inactive AUX will cause its first frame to show, so I either gap the content or use an opacity tween to ramp the content in, unless there is some reason I want instant content, like cut-switching between camera inputs from capture cards.
*Rearranging the position of the AUXs requires an Update if you need to maintain the stacking order. You can change the AUX order in Production all you want, but if you need to change the Z positions of the AUXs themselves, so one comes in on top of another following your change, that needs an Update. Updating will cause WatchOUT to drop out, so be careful. I tend to make changes on my "backup" (non-live) system, update it, then switch over to it and make the same changes on my "main" system, which is now my non-live system. This is not very fun if you don't have something like a Spyder between you and the screens...
  13. If anyone cares, here is my real-world experience. Frame delay in live music is... well... at this point in technology it's just something we have to live with if you want to do more than basic production IMAG (Image Magnification: putting someone's face up on screen, real big). Minimizing it is about the best we can do. Watch some live music TV shows, like The Voice or American Idol or MTV Video Music Awards performances or any equivalent (I live in the USA, so I can really only give USA examples, I apologize)... They avoid showing IMAG shots on the TV cameras because of the delay; mostly you see eye candy on TV while the live audience has large IMAG screens in the house. Eminem played the MTV Video Music Awards, and during his performance he did a fist pump, swinging his fist up and down in a gavel-like motion. On the IMAG in the background the delay was very noticeable: as Eminem's fist went up in real life, it was going down on the IMAG. There was almost a full half second of delay between the live shot and when the IMAG actually got to the screen. The MTV set was also lots of very non-standard screens, probably processed very heavily behind the scenes before anything ever hit them, and as someone who does this for a living it was very obvious to me. That performance was a few years ago, but in 2012 it hasn't improved much.
The WatchOUT rack that I get to play with the most is very powerful: four very nice servers, 4-head ATI FirePro cards with S400 sync modules, Datapath Vision RGBE2S capture cards... The ability to frame-lock the graphics cards with the S400 modules has saved my hide a few times. That said, I don't see any noticeable difference in delay when I can genlock. The advantage of the S400 modules is greatly improved sync between outputs; it's a noticeable difference in certain situations. I went with the DVI inputs because I need full 1080p60, versus HD-SDI, which maxed out at 1080p30 or 1080i60.
With an SDI-native setup the delay that I am seeing would probably be reduced, as I am using a BARCO ImageProHD to convert SDI to DVI and that adds delay. But even going in direct with DVI there is still delay. I have not done a test to see how much actual delay there is, but I will after this show! It's less than half a second, I know that; probably under 10 frames. The only devices I've worked with that have no noticeable delay are production camera switchers, for instance the Kayak, BARCO FSN, Sony DFS700a, etc. These devices have almost NO delay... but they also tend to be very limited in what resolutions you can feed them and how much you can tweak (scale, PIP, etc.) an input; 480i, 720p, and 1080i are typically your only choices. The amazing power of WatchOUT and hardware devices like Encore and Spyder requires so much processing that delay is inevitable at the current speed of the silicon.
On May 15th and 19th I have the pleasure of attending (finally... a concert I'm going to watch and not work!!!! Yay!) The Wall presented by Roger Waters. Aside from the fact that Pink Floyd is my favorite band EVER, I am going to watch the actual production, and I am going to keep a very keen eye on how much delay I can see in any IMAG that is done. I am expecting a top-notch performance from a technical standpoint, and I would assume the delay from live cameras will be at a bare minimum. If I remember, I can/will report back here if you're curious how it looked! Brian Lynn
  14. Should I post more feature requests here??
1: The ability to add "display arrays". It would be nice to: select Add Array; then select the number of Displays for the array (2 or more); then select the resolution of the array's Displays (1400x1050, 1920x1080, etc.); then select % overlap or # of pixels of overlap (# of pixels would be better for me, but some might want %? I don't know...); then select horizontal or vertical layout; then select Done or Apply, and have WatchOUT add multiple Displays pre-aligned with the proper overlap. I can easily go back in and set the IPs and Display #s. Currently I just do this all by hand, and maybe there is a way to do this that I am totally missing. The show I am on right now is 8 Displays, but only 6 are active; the two inactive Displays are bookends for doing a seamless 360 screen. Being able to quickly set up this array through some settings and a procedural build in WatchOUT would make this very quick and prevent the silly mistakes that can happen while assembling the array manually.
2: The ability for a Video Proxy to automatically pick up its own Length. Currently I take one of my proxy clips, give it a .mpg extension, import it into WatchOUT, record the Length, delete the import, rename the clip, and then import the Video Proxy using the Length from the standalone clip import. It seems there should be an easier way. Again, it's possible that WatchOUT will do this somehow and my paranoia about always setting the Length has made me miss some automated process. I do like that the Add Video Proxy window holds the settings from the previous import; this prevents me from ending up with 7250x1050 clips imported as 800x600 because I missed a step. But I am afraid that without manually setting the Length, the new proxy will take on the Length setting of the previous proxy import and my video will be too short or too long. Thanks! Brian Lynn
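The layout math behind feature request #1 is simple enough to sketch (my own illustration of the procedural build, not anything WatchOUT exposes): each Display's position advances by its size minus the overlap.

```python
# Sketch of the "display array" placement math from the feature request.
# (My own illustration; WatchOUT does not expose an API like this.)

def display_positions(count, size_px, overlap_px, horizontal=True):
    """Return top-left (x, y) stage positions for a pre-aligned blend array.

    size_px is the display width for a horizontal array, or the display
    height for a vertical one. Each display advances by size minus overlap.
    """
    step = size_px - overlap_px
    if horizontal:
        return [(i * step, 0) for i in range(count)]
    return [(0, i * step) for i in range(count)]

# e.g. six 1400x1050 displays with a 200 px blend overlap:
print(display_positions(6, 1400, 200))
# -> [(0, 0), (1200, 0), (2400, 0), (3600, 0), (4800, 0), (6000, 0)]
```

With those numbers the total stage width comes out to 5 x 1200 + 1400 = 7400 px, which is the arithmetic you would otherwise do by hand for every array.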
  15. +1 Lloyd: black level adjustment outside of blended areas would be awesome for making dark content on blends look amazing... I can do this in some projectors or in a Spyder, but I don't always have that luxury!
  16. You can add new points and you get awesome control over them, but what I was shooting for is the ability to add points anywhere on the grid; currently it seems you can only place points at certain grid locations. Also the ability to move one point without pulling the others around it...
  17. I have four servers with four outputs each. Many times it's not possible to use only one machine for a blended screen... the next show I am on is a 6-projector seamless blend that will require 4 outputs from one server and 2 from an additional server. The show I did before this was 8 projectors blended onto a single surface and required all 4 outputs from 2 different servers! Thanks for the information... I get to have fun with my new capture cards today, I think! Neil: the AUX timelines are the only way I run shows! Every look, every video, just about every anything has its own AUX timeline, so I can stop it or start it any time I want and I don't have to worry about the linearity of the main timeline. The last four shows I've done had absolutely nothing on the main timeline! Thanks for the info, guys! Brian
  18. Also, being able to increase/decrease the number of grid points, and the ability to grab a single grid point and move it, would be huge... I can do this in my projectors and typically don't use the geometry correction in WatchOUT because it's too basic. The curves are nice, but with Christie Twist and BARCO Warp I feel like I have much more granular control. The ability to shift one point, or a row of points, would add amazing power to the geometry correction engine, especially when it comes to correcting projection mapping distortion.
  19. I have no experience with Crestron FO, but I've had great luck with Gefen products. Are you using EDID managers, or does your fiber allow for EDID talk-back (e.g. has copper in it, or two-way communication down the fiber)? Brian
  20. There should be no change in the video signal itself from WatchOUT. Assuming you are online with no issues, going from black to an image should not cause a refresh of the WatchOUT machine's output. Between offline and online, WatchOUT takes over the graphics card and a change could be seen at that point, but once you are online WatchOUT locks down that signal and only changes the content being played. Five minutes into a cue is a long time for that kind of glitch... Does it happen at the same time or point in the roll every time? Are you using an EDID manager? Is there any switching between the display machine and the projector, or is it a direct connection?
  21. The bad-key problem would show connection issues just after trying to go online; just to clarify that symptom vs. the originally posted issue. Thanks, Brian
  22. To update my previous response... I had a WatchOUT key, or two, go bad. I was able to talk to one server but not the other. The key was intermittent in operation and kept acting like it was being plugged/unplugged repeatedly. Replacing the key fixed the issue, and I have since exchanged all my keys for the new heavy-duty keys. No more connection problems for this show!
  23. I have not done inputs from external sources into WatchOUT for many years... with the giant leaps in computer inputs, it's time to look at this again to fulfill client requests. I have one question, a confirmation really, that I didn't think was covered very well, or at all, in the documentation, or maybe I just couldn't find it... I have 4 WO 5.2 servers. We are planning to get either dual DVI input or quad HD-SDI input cards for each server. I'm guessing that crossing blend regions with one of these inputs is possible, assuming that each input is set up consistently... For example, if I want input 1 to be my camera switch feed, that input should be #1 for all four servers, and that would allow that input's PIP to cross over between servers... Is this basic understanding correct? I couldn't find anything that directly addressed using an external source across multiple servers in a blend situation... Normally I feed an Encore or Spyder and this isn't an issue, but this next show wants some new stuff. Thanks! Brian
  24. Try running your free-run videos as AUX timelines. That will allow you to cue the video to any point you need. The free run won't start until the first time the video hits its end point... AUX timelines are an amazing tool that gives you granular control over what's going on. If you stack them properly you can layer AUX timelines, using separate ones for foreground and background. I hope this makes sense... I'm on my phone and the forums are not very mobile-friendly, but leave a note and I can give a better explanation once I'm out of show and willing to do something other than WO with my laptop!
  25. Oh please, no, not an iPad... waste of technology. The "countdown cue" is an interesting idea... The only time I need a countdown is when we are playing a specific linear video like an opening or part of a presentation. Thinking about it quickly in the middle of this setup I'm on, I am not at my best... but being able to set a media piece's properties to "requires countdown", and then have a countdown that follows the time marker and shows up within the visual strip that represents the clip itself... kind of hard to explain... Using a composition as a countdown is something I already do, although I have to set it up per clip with some effort. A quick radio box to turn a countdown on/off would be nice. Because I so heavily use AUX Timelines, maybe allow the AUX Timeline time counters to count down to a specific cue marker instead of counting total run time, as an option... this would totally cover at least -my- needs for a countdown...