Dataton Forum

Michael Voccola

Member · 9 posts · Birthday June 11

Contact Methods

  • Website URL: http://www.oppsav.com

Profile Information

  • Gender: Male
  • Location: Providence, RI
  1. I have been experimenting with other platforms over the last week with good success. Some of the findings can be found in this post: http://forum.dataton.com/topic/2434-emulating-spydere2-preset-recall-with-only-watchout/?p=10525
  2. I am posting this here because it is relevant to the thread; however, (once approved) I'll also quote it in the feature requests. Millumin has a great implementation of the "Looks" discussed in this thread via its "dashboard", which is more of a cue list than a timeline. The dashboard supports timelines as cues, and there is a way to advance through cues or jump directly to any cue in any order. When advancing or popping to cues, the software also allows a variety of transitions, which is exactly what I'm looking for, and the layers in Millumin behave very similarly to those in a screen management system like Spyder or E2. Finally, Millumin also allows super easy drop-in of video files for playback without three minutes of fussing with manual fade tweens. Here is a good example. For corporate events, Millumin is in many ways a much easier and more affordable system than WO, though it lacks many of WO's more advanced features - especially the scalability and overall flexibility. That said, I think there are a number of lessons to be learned from the way Millumin handles cues that would make WO superior to where it stands today.
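To make the "dashboard" idea concrete, here is a minimal sketch of a cue list that supports GO-style advancing, direct jumps in any order, and a per-cue transition. This is a generic illustration only - it is not Millumin's actual implementation or API, and all names in it are invented for the example.

```python
# Generic sketch of a dashboard-style cue list (NOT Millumin's internals):
# cues can be taken in order or jumped to directly, each with a transition.

from dataclasses import dataclass

@dataclass
class Cue:
    name: str          # e.g. a timeline or media item the cue triggers
    transition: str    # "cut" or "dissolve"
    duration: float    # transition duration in seconds

class Dashboard:
    def __init__(self, cues):
        self.cues = cues
        self.current = None  # index of the cue currently on air

    def take(self, index):
        """Jump directly to any cue, in any order."""
        cue = self.cues[index]
        self.current = index
        return f"{cue.name} via {cue.transition} ({cue.duration}s)"

    def advance(self):
        """Advance to the next cue (wrapping), like a GO button."""
        nxt = 0 if self.current is None else (self.current + 1) % len(self.cues)
        return self.take(nxt)

board = Dashboard([
    Cue("Walk-in loop", "cut", 0.0),
    Cue("Speaker PiP", "dissolve", 1.0),
    Cue("Video roll", "dissolve", 0.5),
])
print(board.advance())   # Walk-in loop via cut (0.0s)
print(board.take(2))     # jump straight to the video roll
```

The point of the model is the combination the post praises: random access to cues plus a transition attached to the act of taking the cue, rather than baked into a timeline.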
  3. Dissolve overlapping media items on the same layer (from a live event perspective): Is there a way to dissolve between two overlapping media items that reside on the same layer? For example, I have a media item on the timeline that runs from 0 to 30 sec and would like to dissolve into another media item over 1 sec. If the second media item is placed on the timeline at 29 sec, a red bar appears indicating an overlap. From what I have read and experienced, there is no way to accomplish this without putting the items on different layers and manually creating an opacity tween on each track over the desired timeframe. While I understand that a significant amount of control is gained by doing this manually, it can be time consuming, and time isn't always on our side on a live event site. For this reason, PlayBack Pro and similar programs are a great option for simple video playback; however, I'd like to know if there is a relatively simple workflow for accomplishing this in a hurry. From a user experience perspective, and perhaps as a feature recommendation, it would be excellent to have the option of treating overlapping media on the same layer in the following ways by right-clicking the red line that shows in the GUI today:
     - CUT (default, current behavior)
     - DISSOLVE for the duration of the overlap
     The dissolve could be shown with a visual indicator much like the current one, but in the style of NLEs such as Logic or, dare I say, iMovie. Again, a manual tween is much more flexible, but being able to drop a file and hit "go" has a tremendous advantage in the corporate event market. From my time with WATCHOUT so far, I feel like I must be in the minority of users given how many things like this I have been running across. While corporate event users are probably a small percentage of WO users, perhaps it is because WO is great at so many different things yet doesn't offer a simple way to do simple tasks.
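Until something like the DISSOLVE option exists, the two-layer workaround can at least be made mechanical. The sketch below is a hypothetical helper (not a WATCHOUT API): given the in/out points of two overlapping clips, it computes the opacity keyframes each layer needs so that the dissolve lasts exactly as long as the overlap, matching the 29-30 sec example above.

```python
# Hypothetical helper (not a WATCHOUT API): compute the opacity tween
# keyframes for a crossfade over the overlap of two clips on separate
# layers. Example from the post: clip A runs 0-30 s, clip B starts at
# 29 s, so the dissolve runs from 29 s to 30 s.

def crossfade_keyframes(a_start, a_end, b_start):
    """Return (outgoing, incoming) opacity keyframes as (time, opacity) pairs."""
    if not (a_start <= b_start < a_end):
        raise ValueError("clips do not overlap")
    # The dissolve duration equals the length of the overlap region.
    outgoing = [(b_start, 1.0), (a_end, 0.0)]  # clip A fades out
    incoming = [(b_start, 0.0), (a_end, 1.0)]  # clip B fades in
    return outgoing, incoming

out_kf, in_kf = crossfade_keyframes(0.0, 30.0, 29.0)
print(out_kf)  # [(29.0, 1.0), (30.0, 0.0)]
print(in_kf)   # [(29.0, 0.0), (30.0, 1.0)]
```

This is exactly the pair of tweens the post describes building by hand; a right-click DISSOLVE option would effectively generate them automatically.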
  4. Although it is unusual for most users to be interfacing with a DP destination, with or without loop, the MST hub, if I understand correctly, can still be used with standard DP->HDMI/DVI adapters. With that in mind, MST hubs are still of great utility to most users, and it may well be worth having some number of recommended/supported hubs - even if it were a community-driven list of some sort.
  5. Based on what I am experiencing and reading here, it doesn't seem that this is realistically achievable with any level of speed or flexibility on a live event site, so I'll cross it off the list. However, what does seem reasonable is to have WO handle the "PiP" keyframes and PGM exit/entry while the virtual display that the PiP(s) is based on is always fed the same live input. At that point, an external hardware switcher feeds that live input, which should greatly simplify the task at hand. The first thing that comes to mind is a Blackmagic ATEM - specifically the TVS or the new TVS-HD (supports 3G-SDI and has buttons). Both units are inexpensive and rack mountable, and both accept network commands and have a full SDK available, so let's use them in the example. With WO managing the full composition, the live input is fed by an ATEM. The operator can control what content is on screen directly with the ATEM or, alternatively, by sending commands to the ATEM from WO. The PGM out of the ATEM hits the live input of each display computer, of course. If you want more than one live input on screen at a time, additional ATEMs can feed additional live inputs. In this way, each live input can be thought of as a layer (as on Spyder), and the ATEM can be considered the mixer.
Why bother? My thought in starting this thread is to bring a more cost-effective option to the table for events that have many destinations requiring WO but not the budget for a screen management system between WO and the destinations. Instead, it can be done with the WO rig we know is needed no matter what: rather than putting something like Spyder (and maybe a router) in place to create PiPs on top of WATCHOUT, we can use an inexpensive, simple switcher like the ATEM. If we want to show four different things on screen at once, we just need four live inputs and four ATEMs. Those ATEM inputs could be spread across, say, 32 WO outputs, where in many cases having Spyder make the PiPs would require the same WO system AND 4x Spyder X20-1608. This is why I am interested in doing this. Of course, there are many applications where it is simply better to put a Spyder downstream of WATCHOUT, but I was looking for a way to handle the events that just don't need all that.
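A rough sketch of what driving this setup from one controller might look like. The WATCHOUT side assumes the plain-text production control protocol over TCP (port 3040, commands such as run terminated by a carriage return) - treat the port and syntax as assumptions to verify against the manual. The ATEM side is a pure placeholder: its Ethernet protocol is proprietary, so in practice you would go through Blackmagic's SDK or a third-party library rather than this stub. The host address and timeline name are invented for the example.

```python
# Sketch of one controller driving both halves of the setup above.
# WATCHOUT half: plain-text commands over TCP (assumed port 3040).
# ATEM half: placeholder only - the real protocol needs the SDK/library.
import socket

WATCHOUT_PORT = 3040  # assumed production-computer control port

def wo_run(timeline):
    """Format a WATCHOUT command that starts a named auxiliary timeline."""
    return f'run "{timeline}"\r'

def send_wo(host, command):
    """Send one command string to the WATCHOUT production computer."""
    with socket.create_connection((host, WATCHOUT_PORT), timeout=2) as s:
        s.sendall(command.encode("ascii"))

def atem_set_program(input_number):
    """Placeholder for switching the ATEM program bus (SDK/library call)."""
    print(f"ATEM: PGM -> input {input_number}")

# A 'preset recall' in this scheme: pick the live source on the ATEM,
# then have WATCHOUT animate the PiP that carries that live input.
# (Commented out: needs real hardware on the network.)
# atem_set_program(2)
# send_wo("192.168.0.10", wo_run("PiP In"))
```

The design point is the division of labor from the post: the ATEM decides *what* is in the live input, and WATCHOUT only ever animates *where* that one input appears.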
  6. Working primarily in corporate presentation environments, I am interested in exploring the possibility of using WATCHOUT v6 as a substitute in some applications that would otherwise be suited to a Spyder X20 or Barco E2. Unlike WATCHOUT, the core operating concept of those screen management systems is the use of presets and the ability to recall a given preset at any time and in any order. From research on this forum and other sources, it is my understanding that auxiliary timelines are the best approach to replicating this behavior. However, I am having trouble fully doing so. Most of the trouble I am experiencing (I'm also new to WATCHOUT) is in the management of PiPs and their entrance/exit on screen.
To explore this, I have created a project with the following stage items: (1x) 2-projector blend, (1x) downstage monitor, (1x) virtual display. The virtual display is present on the stage to act as a single location to present the live sources. The virtual display, being treated by WATCHOUT as a media item, is reused on the DSM as well as a PiP on the main screen. In practice, the media will consist primarily of external live inputs such as PlayBack Pro and PowerPoint. For the most part, the only media in WATCHOUT will be background content (looping videos, etc.). For this example project I am using two photos as placeholders for live inputs, representing PowerPoint and PlaybackPro.
I have also created three auxiliary timelines (ATLs):
     - PiPs: manages the virtual display on the stage canvas, specifically on the DSM and wide screen. CUES: @0 sec, 1 sec opacity ramp; @1.2 sec, PAUSE; @1.4 sec, JUMP TO 1 sec.
     - GFX: places the GFX source on the virtual display and triggers the PiPs ATL. Always on top. CUES: @0.2, places content on virtual display; @0.3, runs PiPs ATL; @1.5, PAUSE; @1.6, JUMP TO 0.2.
     - Playback: places the Playback source on the virtual display and triggers the PiPs ATL. Always on top. CUES: @0.2, places content on virtual display; @0.3, runs PiPs ATL; @1.5, PAUSE; @1.6, JUMP TO 0.2.
Behavior: Running the GFX or Playback ATL the first time is no problem. If the operator runs an ATL that is already "on air", there is no change on screen (desired behavior). However, when a different ATL is run, it cuts sources rather than dissolving the new source in (because the PiPs ATL is already past its dissolve).
Desired behavior: Running the GFX or Playback ATL dissolves the source on screen, either from an empty canvas or through the other source if the PiP is already live - like a standard screen management system (X20/E2). I'm sure I could eventually create some spider web of impossible-to-understand nonsense that would accomplish this somehow, but I would like to know if any other users have a clean workflow to handle this scenario.
Screenshot: https://drive.google.com/uc?export=download&id=0ByvQLu5LpvPrS3BhNmNJMlBDSjQ
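A toy model (no WATCHOUT involved, all names invented) of the PiPs ATL's pause/jump loop makes the cut-vs-dissolve symptom easy to see: after the first run, the timeline is parked at 1.2 sec, past its opacity ramp. When a second ATL swaps the source and runs PiPs again, playback resumes from full opacity, so there is no ramp left to play and the swap lands as a hard cut.

```python
# Toy model (not WATCHOUT itself) of the PiPs ATL described above:
# a 0-1 s opacity ramp, PAUSE at 1.2 s, JUMP back inside the ramp's end.

def pip_opacity(t):
    """Opacity of the PiP as a function of PiPs-ATL time (linear 0->1 ramp)."""
    return min(t, 1.0)

class PiPsTimeline:
    def __init__(self):
        self.t = 0.0  # current timeline position in seconds

    def run(self, duration):
        """Play for `duration`; the PAUSE cue stops playback at 1.2 s.
        Returns (opacity when the run starts, opacity when it stops)."""
        start_opacity = pip_opacity(self.t)
        self.t = min(self.t + duration, 1.2)
        return start_opacity, pip_opacity(self.t)

pips = PiPsTimeline()
first = pips.run(1.2)   # (0.0, 1.0): ramps up from transparent -> a dissolve
second = pips.run(1.2)  # (1.0, 1.0): already opaque -> the source swap is a cut
print(first, second)
```

The model suggests why any fix has to either rewind the PiPs ATL before the swap or crossfade between two copies of the PiP, which is essentially what a Spyder/E2 mixer layer does in hardware.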
  7. I failed to ask on our phone call when we discussed this: Are there any recommended hubs at all which are known to work long-term? Fortunately for this project, three of the twelve displays have DP loop-through. However, for future reference, it would be good to know of tried-and-true options for hubs, as it is unusual that we are connecting to devices with DP input at all (never mind with loop). Usually everything is HDMI/DVI.
  8. Understood. Is it possible, with an AMD FirePro W7100 (4 outputs) and a DisplayPort splitter (like the one from the OP), to drive 6 displays on a single machine?
  9. We are looking into purchasing a Watchmax to drive a mosaic of 12 different-size monitors, each with a native resolution of 1920x1080 and a DisplayPort input. This means different pixel densities, and the displays are also arranged in an abstract manner. From a pixel-space perspective I know it is possible to push 12 displays at 1920x1080, since that totals about 24.9 MP - less than the 33.2 MP needed to drive four 4K displays - and, from what I understand, DisplayPort 1.2a can send multiple streams over a single DP connection. I do not yet know if the monitors have DisplayPort loops, but if they do, or by using a DP splitter like this one, is it possible to connect and drive all 12 displays independently from a single Watchmax?
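The pixel-space arithmetic above can be checked directly. The bandwidth half of the check is approximate: it assumes 60 Hz, 24-bit color, and ignores blanking overhead, and it uses the nominal DP 1.2 (HBR2) payload rate of ~17.28 Gbit/s per connector after 8b/10b encoding.

```python
# Quick check of the pixel-space claim: twelve 1920x1080 displays total
# ~24.9 MP, LESS than four 4K displays (~33.2 MP), so the GPU-side pixel
# count is comfortable. Per-stream bandwidth is a rough 60 Hz / 24 bpp
# estimate with no blanking overhead.

fhd = 1920 * 1080
uhd = 3840 * 2160

twelve_fhd_mp = 12 * fhd / 1e6   # ~24.9 MP
four_uhd_mp = 4 * uhd / 1e6      # ~33.2 MP

# DP 1.2 HBR2: 4 lanes x 5.4 Gbit/s, ~17.28 Gbit/s payload after 8b/10b.
# MST shares that budget among all streams on the connector.
hbr2_gbps = 17.28
stream_gbps = fhd * 60 * 24 / 1e9        # ~2.99 Gbit/s per 1080p60 stream
streams_per_connector = int(hbr2_gbps // stream_gbps)

print(round(twelve_fhd_mp, 1), round(four_uhd_mp, 1))  # 24.9 33.2
print(streams_per_connector)  # 5 by raw payload; typical MST hubs expose 3-4 outputs
```

So bandwidth-wise, three or four 1080p60 streams per DP connector is comfortable, which is why splitting 12 displays across a 4-output card hinges mostly on MST hub (or loop-through) support rather than raw pixel count.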