Everything posted by JJ Myers

  1. Hi there. I think our firm, Pixel Mosaic, may be right up your alley: www.pixelmosaic.com We are located in Chicago, so Toronto is within reasonable proximity. Feel free to contact me off forum. You can reach me and our general inquiry staff at info@pixelmosaic.com.
  2. Glad to see more attention and information on this issue. Has anyone in this discussion had this happen on a version earlier than v6.2.2? That is the only version we have seen this happen on, so my suspicion is that v6.2.2 is when this became an issue. My other suspicion, after reading this, is that 2 different variables can factor into triggering this bug:
1. Environmental factors, as appears to have been the case for asmundboe
2. Defining your displays by IP address instead of name, which appears to be the culprit in our case
There are already several noted bugs involving displays defined by IP address, so that is an easy one to simply avoid from now on. Obviously, we may ultimately find that our experience was also environmentally influenced, but we had a pretty clear-cut test case/project that we would run defined by IP address and hit the bug at least once per day. There were even entries in the Windows Event Log that correlated with the time the bug occurred. We would then run the exact same test project defined by computer names instead - and find it to run for several days straight... and never invoke that bug. Ever since we discovered the issues surrounding IP address display definition, I have been a pretty strong advocate of getting the word out there. It takes a minute or 2 to re-define your displays by computer name and can avoid so many potential issues. It really is a no-brainer.
  3. Hi, I believe we have encountered what you are describing as well. Although it appears to me less like Watchpoint is minimized and more like it simply no longer displays anything at all - it looks more like Windows has assigned focus to something else. Our finding has been that, from a Task Manager standpoint, Watchpoint thinks it is running completely as normal. Yet we see the desktop and no evidence that Watchpoint is running (other than in Task Manager). If we go to Task Manager and click "Switch to", it restores back to displaying content. We also do not receive any notifications in Watchmaker, but the Windows Event Log does report some information about Watchpoint no longer being responsive (or something along those lines). Does that match your experience? I would think if Watchpoint were truly "minimizing", you would indeed get some sort of feedback on your production machine, because that would mean Watchpoint has switched to windowed mode.
If our issues are indeed the same, I can tell you we have witnessed your issue in a project void of NDI, virtual displays, and projection on 3D objects. Recently, we have found issues of ours to resolve when we define our displays using computer names instead of IP addresses. Here is the post that led to this discovery:
Ever since we redefined our displays via computer name, we have not encountered the "disappearance" issue again. We are running tests 24/7 at our office in an effort to add more certainty to our findings. I would say I am 99.999% certain of our findings as they relate to the "latency" issue. We had a decently reproducible test case for the "disappearance" issue as well, which we could no longer get to invoke after making the display definition edit. But the reproducibility of that test case was still much less than what we had for the "latency" issue, so I cannot say I am as certain. We will, of course, continue to run tests for as long as we can.
So my final question to you is - are you defining your displays via IP address or computer name? If IP address, perhaps make that edit and see if the issue resolves? Doing so will certainly help the knowledge base, and I can help make sure the word on this gets out to users.
  4. Ring the bells! We believe we may have found the culprit! Or at least, we have found something that prevents the test project from reproducing the bug. The test project in my post from October 2 uses IP address definition for 2D displays. I revised my project in the following way:
1. Set computer names and a cluster name for the 2 displays I have online
2. Set the cluster name in preferences
3. Defined all my 2D displays by computer name instead of IP address
I would never have thought such a revision would resolve this issue, but alas. I would by no means suggest that our results are conclusive, as they have only solved the reproducible case. For us to put our rubber stamp on this, we are going to run further prolonged testing to see if the latency issue is completely resolved. For us, that means several days of typical operator cueing. I would love to hear back from other users as to their findings with IP-address-defined displays versus computer-name-defined displays as they relate to this bug. And let's please spread the word! If our efforts can prevent even just one WO show from having a catastrophe, the effort will have been worth it. So please, by all means - pass it on!!
  5. It looks like spaces in show file names no longer work when trying to launch from script (file-based control). I swear it used to work once upon a time. Is there an escape sequence one can use in the script to force a space character (ASCII 32, I believe)? Or does file-based control require a file name without spaces? Sorry if there is already a topic on this - I couldn't find one.
  6. Hey Jim - you probably were never alerted to this, because I don't believe I properly quoted you in my above response. Just looking for some absolute clarification on the syntax for the command line argument. If you could post answers to my questions regarding the use of spaces and quotes, that would be great. Thx!
  7. Hey jfk, Can you clarify the syntax for that command line argument? You wrote "- WmAudioRegFix off". That exact string has a space between the dash and "W"... there shouldn't be a space, correct? Also, does the word "off" need to be encapsulated in quotes, or is it just a space and then "off"? Thx!
  8. So, we believe we have discovered more information relating to this phenomenon. We - at the very least - have something somewhat reproducible. The WO project for the reproducible case is at this link: https://www.dropbox.com/s/i6f12ixyzwglfde/Latency Bug.zip?dl=0 We also recorded a video that shows how to reproduce it: https://www.dropbox.com/s/krhh85m1e9z8b3c/IMG_2142.mov?dl=0
Here are our specs and setup for this test:
Display machines:
- Windows 7 64-bit Professional
- ASRock X99 WS-E LGA 2011-v3 Intel X99 SATA 6Gb/s USB 3.0 Extended ATX Intel Motherboard
- Intel Core i7-5930K Haswell-E 6-Core 3.5 GHz LGA 2011-v3 140W BX80648I75930K Desktop Processor
- SanDisk SSD PLUS 2.5" 120GB SATA III Internal Solid State Drive (SSD) SDSSDA-120G-G25 (OS)
- SanDisk Ultra II 2.5" 480GB SATA III Internal Solid State Drive (SSD) SDSSDHII-480G-G25 x 4 (Media, Hardware RAID 0)
- AMD FirePro W7100 100-505724 8GB 256-bit GDDR5 PCI Express 3.0 x16 Full height/full length single-slot Workstation Video Card
- ATI S400 sync card
Production machine:
- Windows 10 Professional
- Latest 8-core Mac Pro
In the video, we had 2 displays running 2 feeds each, all genlock synchronized via an NTSC composite signal. One machine was supplying Feeds 01 and 04, and the other was supplying Feeds 02 and 03. We set up both machines to sync from genlock (house sync) in this particular video. After recording this video, I remembered a discussion where the officially recommended approach was to set the assigned master to sync from genlock and then use a cat5 connection on the master S400 card to serve as the synchronization master to the slaved machine. For clarification, here are screen shots of the settings for that synchronization method: https://www.dropbox.com/sh/vvvww8eos3trnlq/AABdhXbxj6v1SbiGVfFTvppda?dl=0 We ran this same test with that setup and experienced better results (an increase in stability). We had a much more difficult time reproducing what you see in the video, but nonetheless we were able to get it to occur under these conditions as well. We currently have a system running 24/7 with automated cueing in this setup, so we can monitor prolonged usage under various types of synchronization and on various versions of WO.
We also ran this test on a system not using genlock, instead using Feed 01's display as the master timing source in FirePro, and then a cat5 connection on the master S400 card to serve as the synchronization master to the slaved machine. Same results of instability and latency as you see in the video. Here are the FirePro settings for this synchronization method: https://www.dropbox.com/sh/uxrynsjauhdpbb2/AAAFdbS4OYRI6w6xrkDuPYQUa?dl=0
I want to point out that we experienced not only cases where both machines were equally latent (as seen in the video), but also cases where the latency between the 2 machines was unequal... which of course is a far worse predicament to be in on a show. What we have never encountered is a scenario where individual feeds from a given machine are unequally latent in relation to each other. We also - at this point - can rule out the 2nd NIC and/or virtual displays as culprits in these particular instances, as both were factored out of these tests. I am going to supply this test project to Dataton support and hopefully get some answers to what we are encountering. I would love to have other members of the WO community run this same test and give feedback as to whether they experience the same results.
Perhaps something can ultimately be traced down to a piece of hardware, a tweak setting, or the like. Our current plan is to run these same tests on rolled-back versions until we get back to a level of stability we can trust. Once we get to that point, we are going to set that as our version to use for future shows until a new release remedies the problems we are encountering. We otherwise do not feel comfortable with stability in v6.2.2 (regardless of synchronization method). And we have encountered this latency bug in v6.1.6, so it is possible we will roll back all the way to v6.0.2. We will of course keep users informed of our results every step of the way.
  9. We have experienced similar problems. Some of the things we have encountered I have only seen since switching to v6.2.2. We are currently running tests to try to determine what is causing some new, odd behavior. The most perplexing thing I recently experienced: a display machine, which had been sitting completely idle in a static image state (JPEG and PNG images) for 15 minutes, suddenly gave focus away to the desktop. No report to production at all, because it seemed as if WP was completely unaware that focus had been taken away. AFAICT - from reviewing things like Task Manager - WP still thought it was in full screen, full system focus mode. I essentially had to click "Switch To" from Task Manager to restore full screen WP. We are now back at our shop re-evaluating the show file and are experiencing behavior similar to what you are describing... which I would not necessarily say is limited to v6.2.2, but we certainly are finding easier ways to reproduce it. I am hoping to package a reproducible case to send to support today.
  10. Just a recommendation, coming from someone who recently experienced a great deal of confusion over why I was not able to sign in... Allow 2 options for sign in - screen name or email address. I wasn't even aware of what my current screen name was, because - well, I had not been using it to log in, and I had not logged in for a long time (I had something set to remain logged in). And my keychain of course had nothing set for the site with my screen name. I had to click "forgot my password" and follow the email trail in order to sign in, and had to do that several times before I came across this discussion. And if you are giving a user the option to reset their password by the traditional "send notification to email on file" method anyway, why not just allow a user to log in with their email in the first place? For reasons...
A - You are in essence setting the security bar at one's email anyway
B - Login via email is about as standard as it comes these days... people are used to it
  11. Hey Keith - we have recently been doing some testing with ffmpeg and have found that our output files display significant frame loss when decoded in conventional players (QuickTime) or even with no decoding whatsoever (VLC), but look great in WO... or at least on par with files that we otherwise produce out of a traditional After Effects render queue. Is that consistent with your findings? Or are your remarks in this thread regarding quality more along the lines of things like color reproduction? Just curious. If the quality concerns with ffmpeg are really just a challenge of accurate preview outside of WO, it certainly seems like more of a solvable issue. As in, someone just needs to build a decent preview application for HAP files that we can provide to clients and vendors.
  12. Hey Walter. Many thanks for the information on your setup. It certainly helps on our journey toward unveiling answers to this bug. Our next effort is going to be a full show reconstruction that completely eliminates the use of VDs. Our thought is, if we go a week of continual usage in a non-VD show-like environment and do not invoke the bug, it gives us great plausibility that VDs are a factor. And then we perhaps take it a step further - single mapping instances of VDs. And so on.... Hey - there is not much more you can do when you are dealing with a highly consequential, hard-to-reproduce type of bug without the benefit of extensive knowledge of what is occurring under the hood.
  13. A simple means of taking a slice anywhere on the stage and remapping it to another part of the stage, but without the computational cost and latency that virtual displays incur. By using the term "slicing", you would be delineating the 2 features while introducing a feature under a term that is familiar to media server experts. It would be fair to then restrict 'slicing' from the extended capabilities of virtual displays, like mapping to planes of a 3D mesh. I am just looking for something that can take an area of the stage and duplicate it to multiple other areas of the stage, with the ability to scale. That is about 90% of what we end up using VDs for, but it currently sometimes comes at an unnecessary computational cost.
  14. Another thought - perhaps WO should ultimately add a feature that subscribes to the popular industry term "slicing"? That is essentially what I end up using VDs for quite a bit. I totally understand there was a particular usage Dataton had in mind for VDs, and that users like myself creating alternate usages for them may not be what was intended. Many media servers have a slicing feature where latency is negligible. I will go ahead and make an official request in the requests topic.
  15. So, we just discovered this today: https://www.dropbox.com/s/fksxu98ls771lni/Test 1 Capture.mp4?dl=0
What you are looking at is a WO project that contains 2x 2D displays (-y area of the stage) and 16x virtual displays (+y area of the stage). The VDs are named "VD A" - "VD P". There is a single mov file in the project that we use to measure latency. It is a 60fps file that displays the current frame number. One cue of the file is mapped directly to the rightmost 2D display (Display "2"). The other is mapped to "VD A". "VD A" then maps to "VD B", which maps to "VD C", and so on, until "VD P" finally maps to Display "1". As you can see, there are about 7-8 frames of latency between Displays "1" and "2". That would suggest that each VD is adding around 1/2 frame of latency in this particular arrangement (rough math sketched below). What I find incredibly fascinating is that by simply changing the arrangement of 2D displays and VDs on the stage, I get completely different results - both in the way Watchmaker behaves and in the amount of latency between Displays "1" and "2" (2-3 frames) - as seen in the 2nd test here: https://www.dropbox.com/s/kfsimgefz76nil4/Test 2 Capture.mp4?dl=0
To me, that would suggest the order in which the stage data is sampled in the application code (e.g. L>R, T>B) impacts the resulting latency. But I am just grasping at straws here, not knowing exactly how everything works "under the hood". Suffice to say, this discovery has me re-thinking our current strategies and usage of VDs. Until now, we had found tremendously powerful and unique uses for them: usage that provides very dynamic and randomly accessible ways to cue a show; usage that allowed us to build convenient "virtual multi-viewers" for other client and crew resources; usage that allowed us to be incredibly efficient and effective in creating mappings that could be used like "presets", which also mitigated the risk of programming errors. It would be great to get a response from Dataton here, so we can gain an understanding of how cascading VD mappings can impact things like latency. A vague, open-ended invitation, I know, but if we understood more of what is happening "under the hood", that would help influence our thinking when inventing creative ways to use these tools.
Anyway - circling back to this topic... After creating the 2 show files above, I have been unable to get the latency bug to rear its ugly head again over the last 2 days. And as for the reason I started exploring VD mappings as a potential contributor: I realized at one point that all the shows in which we have encountered the latency issue this topic was founded upon were shows in which we were using VDs in similar fashion. Don't get me wrong - we would never cascade a cue through 16 VDs before reaching its ultimate display destination. However, we have had cases where we would map a cue through 2 VDs before its end point. And there is a common thread here: latency. So without knowing more "under the hood" information, I think it is entirely plausible that what is witnessed in the projects above is related to the latency bug this topic addresses. My research will continue, and I will post more findings as I come across them. For anyone interested in exploring the projects above, here is a link to the entire package (including the captures of the tests): https://www.dropbox.com/sh/ktb3rk6f8q9029p/AABHjpQWh2UGz4D_TRwobIiWa?dl=0
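For anyone who wants to sanity-check that per-VD figure, here is a rough sketch of the arithmetic (plain Python; the ~1/2 frame per hop is just an average derived from our Test 1 capture, not a confirmed figure from Dataton):

    # Rough latency estimate for cascaded virtual displays (VDs).
    # Assumption: each VD hop adds ~0.5 frame, an average derived from
    # our Test 1 capture (16 hops -> 7-8 frames), not a confirmed figure.
    FPS = 60  # the test file is a 60 fps frame counter

    def cascade_latency(vd_hops, frames_per_hop=0.5):
        """Return (latency in frames, latency in milliseconds)."""
        frames = vd_hops * frames_per_hop
        return frames, frames * 1000.0 / FPS

    for hops in (1, 2, 16):
        frames, ms = cascade_latency(hops)
        print(f"{hops:2d} VD hop(s): ~{frames:.1f} frames = ~{ms:.0f} ms at {FPS} fps")
    # 16 hops -> ~8.0 frames = ~133 ms, in line with what Test 1 shows.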
  16. At our office, we are officially referring to this bug as the "UDP latency bug", on the assumption that it is related to the UDP play commands WO uses to launch cue activity on a timeline. We are in heavy test/R&D mode currently, in an effort to create a very reproducible case to provide to support, and in hopes of learning solutions, workarounds, etc... We were able to invoke the bug back at the shop using a very recent show file. We captured the bug in action on video. Here is a link to that: https://www.dropbox.com/s/pybjd45zvinw9h4/IMG_1351.m4v?dl=0 Pardon my excessive rambling over the capture, which steps on the reference to lost A/V sync (hindsight = 20/20). It took a couple of days to get it to invoke, and we are still very unsure what exact steps led to the invocation. So, we are going to continue to test, collect data, etc... in the hope that we will have more answers and feedback soon. In the meantime, I am happy to answer any questions anyone may have about the video at the link above. I am holding off on providing too much information on the above test until I have gained a better understanding of the "why" - all in an effort to keep information on this issue as relevant as possible, and to allow future inquiries to be as efficient, streamlined, and factual as possible.
  17. I myself am planning on using ffmpeg for future HAP encoding, considering that anything using the 32-bit QuickTime component on OS X will be reaching EOL very soon. In fact, the latest Adobe AME and AE OS X releases have already pulled HAP from the list in anticipation of the next version of OS X (which will no longer support any 32-bit components). There has been a backlash directed at Adobe from the live events community concerning the lack of warning to users and the lack of planning to have a solution in place ahead of time. Adobe has mentioned on various forums that they hear the response and intend to provide a solution in the future... for whatever that's worth. I am sorry, Keith, that I do not have any results to share with you at this time, as I am just getting started (though I'll put a starting-point sketch below). I will indeed let you know what I find as I experiment. I am hopeful about finding a way to get acceptable results out of ffmpeg, because the project really seems to have taken off and is a great contribution to the development community!
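For anyone else getting started, this is the sort of invocation we are experimenting with - a minimal sketch only, and it assumes your ffmpeg build was compiled with the hap encoder (check with "ffmpeg -h encoder=hap"):

    import subprocess

    # Minimal sketch: encode a HAP .mov with ffmpeg.
    # flavor can be hap, hap_alpha, or hap_q; chunks > 1 lets
    # decoders spread decompression across CPU threads.
    def encode_hap(src, dst, flavor="hap_q", chunks=4):
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-c:v", "hap", "-format", flavor, "-chunks", str(chunks),
             dst],
            check=True,
        )

    encode_hap("render.mov", "render_hapq.mov")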
  18. Are you really speaking for the entire WO user base here? Because I have been using WO since 2002, and I have never heard of a user who expects a cue not to load its first frame when flush to the playhead. I - for one - would be strongly opposed to that idea. If you need it to pre-load, that is what things like the pre-load field in the cue are for. It sounds like you are more interested in a program that works from a vertical cue stack rather than a timeline. Don't get me wrong - I too wish there were a vertical cue stack feature inherent in the production software. But that is a completely different argument. Anyone who has edited video using compositing software like After Effects, etc... - which is what the timeline aspect of WO is derived from - expects a cue on a layer to render a frame when the playhead is parked at its front. That is a pretty basic property of timeline-based editing. Please, user base - correct me where I am wrong!!
  19. I have done this many times. Get an ethernet shield with your Arduino kit and use the TCP/IP classes in Arduino to communicate with WO. From there, it is just standard TCP socket communication, sending data to WO according to its control protocol documentation (in the manual). You should research both the Arduino documentation on socket communication and the WO manual's control protocol sections. (See the sketch below for the general shape of the protocol side.)
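If it helps, the protocol side is easy to prototype from a desktop machine before porting it to the Arduino Ethernet client. Here is a rough sketch in Python - note that the port number, the authenticate step, and the command strings are from my memory of the display cluster protocol, so verify all of them against the manual for your WO version:

    import socket

    # Rough sketch of WATCHOUT display cluster control over TCP.
    # The port and command set below are assumptions from memory --
    # verify against the control protocol chapter of the WO manual.
    HOST = "192.168.0.10"  # hypothetical cluster master address
    PORT = 3039

    def send_command(sock, command):
        # Commands are plain ASCII lines terminated by a carriage return.
        sock.sendall((command + "\r").encode("ascii"))

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        send_command(sock, "authenticate 1")   # required before other commands
        send_command(sock, 'load "My Show"')   # names with spaces go in quotes
        send_command(sock, "run")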
  20. We completed our testing on this, using scenario #3. After reading RBeddig's comment, we decided to focus only on #3. I have seen enough results to be convinced that you can indeed frame lock multiple-format outputs, provided they are on different cards/machines and provided you are using external genlock as your means of frame locking the outputs. To confirm specs and info, here are the relevant software and hardware details from the test:
- W7100 AMD GFX cards in both display machines
- S400 cards in both display machines
- Ran one output at 1920x1200 and the other at 3840x1200
- Used NTSC 480i black as the genlock source
- Used a Barco S3 to butt the 2 WO sources up side by side and view them on a UHD destination
- Used WATCHOUT v6.1.6 for the testing
  21. I tried to find a current thread relevant to this topic, but could not, so I figured I would start a new one! I have never tried frame locking multiple mixed-format outputs with an S400 card. I remember hearing of some restrictions on doing so. What I cannot remember is whether I heard the restriction is in WO or in AMD's driver or both, whether the restriction is on a single card or extends to multiple cards, or whether the restriction only applies to frame locking cards over cat5 versus genlocking from a blackburst signal. So anyway, here are the questions/scenarios I am ultimately looking for answers to:
1. Can I have 2 different formats on a single card (where an S400 is present and available) and expect those 2 outputs to be frame locked? Example: a 3840x1080 output and a 1920x1080 output.
2. Let's say I have 2 separate cards in 2 separate computers. Both computers have S400 cards and matching GFX card models. All outputs for card #1 are set to 3840x1080 and all outputs for card #2 are set to 1920x1080. I then use the card-to-card frame lock method, where output #1 of card #1 is the master, both cards are connected via a cat5 cable, and all other outputs are set to sync to the master. Can I expect a successful frame lock in this scenario?
3. Same scenario as #2 above, but instead of internal frame locking of the 2 cards, I use external blackburst as the means of synchronization. Can I expect a successful frame lock in this scenario?
Even if responses here acknowledge success, I intend to fully test with an e2 and follow up on this thread with my results. My main goal here is to open the discussion and provide a place for future reference.
  22. Do you have any lengthy timelines? As in, are your main timeline, compositions, or tasks longer than 1-2 hrs? I have found that to be a culprit in unusually long updates. I once had a show where they wanted some projection mapping within a tent to reflect a sunrise-to-sunset cycle over a 6 hr party. So... naturally, I created a 6 hr timeline. When I went to take the show online to displays, the displays would actually time out, because the update was taking longer than (what appeared to be) a hard-coded timeout in the software. I had to figure out another game plan. I have since made sure my timelines are always 1 hr or less.
  23. It is indeed. I figured that would likely be the case.
  24. Has anyone had any experience playing back HDR 4K video content from WO? A client asked me yesterday if WO could play back HDR content, and I honestly did not know the answer. I had to do some googling and research just to further educate myself on HDR.
  25. This topic - by far - is the most challenging part of our group's (Fuse Creative LLC - Chicago) service offerings to our live events clients, both from an implementation standpoint and a decision standpoint. We have been dealing with this topic for many years and have tremendous amounts of experience worth sharing.
First of all - there are advantages and disadvantages to all the solution options out there. There is no one superior solution. Different scenarios can prove one solution better than another, and in some cases completely eliminate certain options.
Certainly, from a show-risk and simplified-implementation standpoint, using a multi-source switching system which incorporates double hardware scalers, such as the Barco e2, is the way to go. However, there is one major disadvantage to this solution: the processing involved in using a hardware scaler comes at the cost of latency. An e2 will add frames of latency to the signal - our experience is 1-2, depending on variables. For running media cues, the issue of latency can be moot or circumventable. However, for live sources (mainly live camera), the latency that an e2 adds can create a problem in a live event environment that is unfixable, as your show begins to look like an erroneous slip edit.
Do the math: say your capture card adds ~4 frames of delay and your e2 adds ~2 frames of delay. Right there, you are already at 6 frames of delay. LED processor in line? Add another 1-2 frames. Projectors using warp engine features (perhaps unbeknownst to you)? Add another 1-2 frames. Camera feed of the interlaced variety? Add additional frames for the need to de-interlace in WO. (See the budget sketch at the end of this post for how fast this adds up.) You may say - no problem, we have a multi-format camera switcher and we can simply set our system output to a progressive format. Well, guess what? The camera switcher needs to do some processing in order to accommodate the format conversion, which then adds latency. So... OK, let's set our cameras to 720p and have everything on the camera switcher run natively. Oh wait, the client would like their native show record format to be at least 1080i. OK... let's just use 1080p cameras! Oh wait... that has now killed the budget to do WO for the show. This is seriously the conundrum we encounter frequently, and at the end of the day we end up explaining to the client that some form of compromise needs to be made... or they need to increase the budget.
If you are playing to a large room and the audience is pushed way back, the latency can be a moot point, as the natural audio delay (by the time it reaches the audience) catches up with the video processing delay and the show no longer looks like our erroneous slip edit. But if you are playing to an audience that is anywhere near the screens (and we all know the ones cutting the checks like to sit in front), latency beyond 4 frames can easily start to look like that bad slip edit.
But wait - why would I need to bring any live cameras into WO if we have an e2 downstream? Well, we don't... that is, unless you have a client who would like a fancy DVE look for their live camera. Perhaps the producer has watched too much CNN or SportsCenter and insists on seeing a live camera feed swing around on its Y-axis, or follow and become shaped by a moving track matte in coordination with some motion GFX.
In such cases, where the client is insistent on a fancy presentation of live camera, the only option is to integrate live camera into WO (or whatever media server you choose to use). Also, the e2 is not cheap and requires another skilled position/operator. If you are a company like ours that focuses on media server playback, speaker presentation support, and content creation, adding the e2 to the list of services is a whole new world that requires a whole new level of investment in resources and training. Now we are having to sacrifice some of the time we would otherwise spend learning a new plugin or inventing a new trick in WO.
I personally am not a fan of the approach that sends different hardware channels of the same redundant signal to each projector of a stack (in the case of projection). It means all your WO channels are live at all times, doubling the number of channels you need to be actively managing and monitoring. It also means a single production machine must manage all channels in order to keep the 2 channels frame locked. Lose your production machine for whatever reason? You are now scrambling to bring another production machine online or using cluster control to run your cues. It also means ANY form of failure will have an impact on the audience. If any "B" WO channel crashes in this scenario, the result is a screen that is now half as bright (as a result of shuttering the projector with the bad signal). And if that projector is part of a blend, the projectionists will likely be forced to shutter ALL "B" projectors as the lesser of evils in the presentation to the audience. So... you have a scenario where the client just lost 50% of their screen brightness due to an error on a single WO "B" channel. If a "B" channel crashes in other approaches, no one in front of the big black curtain has any clue. This approach also adds more complexity in the implementation on the projection side of things. When projectionists have a stack of projectors, they typically expect the same redundant signal to both projectors. That simplifies things in a case where signal troubleshooting of some sort is necessary. If the distribution is via a simple hardware splitter or matrix router, the projectionist at least knows the origin of both signals is identical (in terms of timing, RGB values, etc...). If the signals are of 2 different origins, that can add to the complexity when troubleshooting.
More often, we use a matrix router with zero scaling/processing to manage primary/backup switching. We used to use Gefen Pro, but we have since started using Lightware. Yes, they are expensive, but there are some really good cost-effective rental options out there - where if you budget and quote your show right, you can phase the cost of the rental into the project with a good return of value and peace of mind. Lightware also makes some nice multi-card routers where you can have a system that supports a variety of computer signals. Gefen used to as well, but they have drastically changed their target market in recent years. If you spec a large enough router, the router can also be your means of distribution. In many cases, this router ends up essentially serving 3 purposes and otherwise eliminating a lot of extra peripherals like fiber transmitters and/or signal splitters. If you use top-of-the-line routers (I suggest staying away from stuff from companies like Kramer), you can get other great options like redundant power supplies and support for genlock.
Having a modular card-based router with a redundant power supply, a redundant WO system, and a strategic power plan can eliminate (or at least mitigate) single points of failure. Using WO systems with S400 cards and properly genlocking everything to a blackburst generator (including the router) will also give you a clean switch between your primary system and backup system. The disadvantages of this approach are not having another failsafe to go to (other than the backup WO system), like a still store, and just generally being live at all times. That can make things challenging - for instance - if your projectionists need grid but you need to work. There are ways to work around that (other than the obvious going offline to work), but they require some abstract thinking when planning your WO project. We have found ways to put ourselves in positions where the projectionists have grid, and our stage manager and we ourselves can still see what we are programming and thus continue to work on the show. We have also built some great custom software applications that allow us to fire multiple matrix switches at the push of a button, so that main/backup switching is quick and easy to get to.
In the end, my suggestion is to always choose something that delivers on the client's expectations while posing the least amount of show failure risk possible. Remember - show failure risk management is not always just about what your hardware could potentially do, but about what your operator could potentially do! And take a guess as to where most show failures occur - human or computer error? I don't think I even need to answer that. Operators must juggle running show cues and keeping their head in the show, all while monitoring and managing various system resources. If your operator is spending more time managing resources than keeping their head in the show... that poses a risk! Reduce distractions for the show operator and you will reduce your overall risk.
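And here is the budget sketch promised above, to make the frame math concrete (plain Python; the per-device frame counts are the rough worst-case ranges quoted earlier in this post, not measured constants - always measure your own chain):

    # Rough end-to-end latency budget for a live camera chain into WO.
    # Per-device frame counts are the rough worst-case ranges quoted
    # above, not measured constants -- measure your own chain.
    FPS = 59.94  # e.g. a 720p59.94 camera system

    chain = {
        "capture card": 4,
        "e2 (double scaler)": 2,
        "LED processor": 2,
        "projector warp engine": 2,
    }

    total = sum(chain.values())
    print(f"total: {total} frames = {total * 1000 / FPS:.0f} ms")
    # 10 frames at 59.94 fps is ~167 ms -- well past the ~4 frames
    # where a close audience starts to read it as a bad slip edit.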