Emulating Spyder/e2 preset recall with only Watchout


Michael Voccola


Working primarily in corporate presentation environments, I am interested in exploring the possibility of using WATCHOUT v6 as a substitute in some applications that would otherwise call for a Spyder X20 or Barco E2. Unlike WATCHOUT, the core operating concept of those screen management systems is the use of presets and the ability to recall a given preset at any time and in any order. From research on this forum and other sources, my understanding is that auxiliary timelines are the best approach to replicating this behavior. However, I am having trouble fully doing so.

 

Most of the trouble I am experiencing (I'm also new to WATCHOUT) is in the management of PiPs and their entrance/exit on screen. To explore this, I have created a project with the following stage items:

  • (1x) 2-projector blend
  • (1x) downstage monitor
  • (1x) virtual display

The virtual display acts as a single location on the stage for presenting the live sources. Because WATCHOUT treats a virtual display as a media item, it is reused on the DSM as well as in a PiP on the main screen.

 

In practice, the media will consist primarily of external live inputs such as PlaybackPro and PowerPoint. For the most part, the only media in WATCHOUT itself will be background content (looping videos, etc.). For this example project I am using two photos as placeholders for the live inputs, representing PowerPoint and PlaybackPro.

 

I have also created three auxiliary timelines (ATLs):

 

  • PiPs
    • Manages the virtual display on the stage canvas, specifically on the DSM and wide screen.
    • CUES:
      • @ 0 sec: 1 sec opacity ramp
      • @ 1.2 sec: PAUSE
      • @ 1.4 sec: JUMP TO 1 sec
  • GFX
    • Places the GFX source on the virtual display and triggers the PiP ATL.
    • Always on top
    • CUES:
      • @ 0.2: places content on the virtual display
      • @ 0.3: runs the PiP ATL
      • @ 1.5: PAUSE
      • @ 1.6: JUMP TO 0.2
  • Playback
    • Places the Playback source on the virtual display and triggers the PiP ATL.
    • Always on top
    • CUES:
      • @ 0.2: places content on the virtual display
      • @ 0.3: runs the PiP ATL
      • @ 1.5: PAUSE
      • @ 1.6: JUMP TO 0.2

 

Behavior

Running the GFX or Playback ATL the first time is no problem. If the operator runs an ATL that is already "on-air", there is no change on screen (the desired behavior). However, when a different ATL is run, it cuts between sources rather than dissolving the new source in, because the PiP timeline is already past its dissolve.

 

Desired Behavior

Running the GFX or Playback ATL dissolves its source onto the screen, either from an empty canvas or through the other source if the PiP is already live - like a standard screen management system (X20/E2).

 

I'm sure I could eventually create some spider web of impossible-to-understand nonsense that would accomplish this somehow, but I would like to know if other users have a clean workflow for this scenario.

 

Screenshot

https://drive.google.com/uc?export=download&id=0ByvQLu5LpvPrS3BhNmNJMlBDSjQ


  • Member

It seems at one point or another we have all experimented with how to do more with WO. In my experience onsite at live events, switchers are much-needed tools for an easy workflow and for providing backup for WO, PowerPoint and possibly more. I agree that WO has some pluses in capability, but it is more of a presentation tool than a switching tool.

 

With that said, I have had much smaller events where I used WO to bring PPT on and off screen. I like having redundant systems on larger shows because failure is not an option, and switchers are a good way to accomplish that.

 

Can you give more detail on why you are attempting to emulate a switcher?


  • Dataton Partner

Hi Michael,

We have already tried to emulate some Spyder features for customers.

I think for the best result you should not rely on tasks alone, because you would then need to track each task's status to know what WATCHOUT needs to do.

What we did was to use inputs for fades, PiP positions, etc., and drive those inputs with control cues that send commands to WATCHOUT itself. It's a bit tricky, but there are two main advantages to doing it this way:

  • All effects are driven by inputs, so you can ask a media item to fade out in 1 second and, if it is already transparent, there is no side effect.
  • All media stay active and you don't jump into aux timelines, so WATCHOUT reacts much faster with no pre-roll. This is particularly useful for live inputs (even if there are other tricks to solve that problem).

For example, here is how to set up a background video with two presets to show and hide the background:

  • Create a generic input called "Bgnd_opacity" (limit 1).
  • Create a task named "Bgnd" and put your background video in a layer.
  • Add an opacity tween but don't add any key points, and map the tween to the input "Bgnd_opacity". Now moving the input changes the opacity.
  • To play the background video when you fade in, add "Bgnd_opacity > 0" as the trigger of the task.
  • To stop the background video, create a new aux timeline "Bgnd_stop" with two control cues: "stop aux timeline named Bgnd" and "stop itself". Add "Bgnd_opacity = 0" as its trigger. Now you just need to control the input to fade, start and stop the video.
  • To control the input from anywhere, create an output "WO" with IP = 127.0.0.1, Port = 3039.
  • Create a new task named "preset 1", add the "WO" output and, in the cue settings, set data to send = setInput "Bgnd_opacity" 1 1000$0D (preset 1 starts the background with a 1-second fade).
  • Create a new task named "preset 2", add the "WO" output and, in the cue settings, set data to send = setInput "Bgnd_opacity" 0 1000$0D (preset 2 stops the background).

By doing this you can call "preset 2" many times with no side effects if the background is already stopped.

And you can mix many inputs for fades, PiP positions, etc. in your presets.
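
If you want to send the same command from an external script instead of a control cue, here is a minimal sketch (Python, just as an illustration) assuming the same target as the "WO" output above (127.0.0.1:3039); the $0D in the cue data is simply a carriage return:

    import socket

    # Same target as the "WO" output described in the steps above
    WO_ADDR = ("127.0.0.1", 3039)

    def set_input(name: str, value: float, fade_ms: int = 0) -> None:
        # Send a setInput command; WATCHOUT commands end with CR ($0D)
        cmd = f'setInput "{name}" {value} {fade_ms}\r'
        with socket.create_connection(WO_ADDR, timeout=2) as sock:
            sock.sendall(cmd.encode("ascii"))

    set_input("Bgnd_opacity", 1, 1000)  # "preset 1": fade the background in over 1 s
    set_input("Bgnd_opacity", 0, 1000)  # "preset 2": fade out; harmless if already at 0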

 

Of course, it’s a bit tricky and there is certainly many others things to do, but it works…

 

Tell me if it’s not clear,

 

Benoit


2 Things:

 

I typically give my aux timelines (which is how I typically organize all my video cues) a 0.5-second buffer at the start, i.e. the whole timeline starts at 0.5 instead of 0, just to allow the automatic pre-roll to take effect.

 

Also, in some situations with live inputs, I've found it necessary to add a black solid above the input and adjust opacity on that to do live fades. If you let that roll for roughly half a second, the live input will have been activated beneath it before the fade occurs. I know this is a 'hack', but I don't believe it is a WATCHOUT problem; I believe it is related to the capture card driver activating the card's input stream. I'm sure one of the guys who knows the inner workings of the software better can elaborate or correct me.


Based on what I am experiencing and reading here, it doesn't seem that this is realistically achievable with any level of speed or flexibility on a live event site, so I'll cross this off the list.

 

However, what does seem reasonable is to have WO handle the "PiP" keyframes and PGM entry/exit while the virtual display the PiP(s) are based on is always fed the same live input. An external hardware switcher then feeds that live input. This should greatly simplify the task at hand.

 

The first thing that comes to mind is a Blackmagic ATEM - specifically the TVS or the new TVS-HD (which supports 3G-SDI and has buttons). Both units are inexpensive and rack-mountable. They also accept network commands and have a full SDK available, so let's use them in the example.

 

With WO managing the full composition, the live input is fed by an ATEM. The operator can control what content is on screen directly from the ATEM or, alternatively, by sending commands to the ATEM from WO. The PGM out of the ATEM feeds the live input of each display computer, of course. If you want more than one live input on screen at a time, more ATEMs can be used to feed additional live inputs. In this way, each live input can be thought of as a layer (as on a Spyder) and the ATEM as its mixer.
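
To make the WO-to-ATEM control path concrete: WO has no native ATEM driver, so in practice you'd run a small bridge script - a WO string output cue sends something like CUT 2$0D to the bridge, which drives the switcher. The sketch below is only an illustration built on the community PyATEMMax library (my assumption, not Blackmagic's official SDK), and the addresses and port number are made up:

    import socketserver

    import PyATEMMax  # community library; an assumption, not the official Blackmagic SDK

    ATEM_IP = "192.168.10.240"  # made-up switcher address

    switcher = PyATEMMax.ATEMMax()
    switcher.connect(ATEM_IP)
    switcher.waitForConnection()

    class WOHandler(socketserver.BaseRequestHandler):
        def handle(self):
            buf = b""
            while chunk := self.request.recv(1024):
                buf += chunk
                # WO terminates its output strings with $0D, so split on CR
                while b"\r" in buf:
                    line, buf = buf.split(b"\r", 1)
                    parts = line.decode("ascii").strip().split()
                    if len(parts) == 2 and parts[0] == "CUT":
                        # Take the requested input on the ME 1 program bus
                        switcher.setProgramInputVideoSource(0, int(parts[1]))

    # Point a WO output at this host, port 9910 (arbitrary choice)
    with socketserver.TCPServer(("", 9910), WOHandler) as server:
        server.serve_forever()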

 

Why bother?

My thought in starting this thread was to bring a more cost-effective option to the table for events that have many destinations requiring WO but not the budget for a screen management system between WO and the destinations. Instead, it can be done with the WO rig we know is needed no matter what; rather than something like a Spyder (and maybe a router) to create PiPs on top of WATCHOUT, we can use an inexpensive, simple switcher like the ATEM. If we want to show four different things on screen at once, we just need four live inputs and four ATEMs. Those ATEM inputs could often be spread across, say, 32 WO outputs, whereas having a Spyder build the same PiPs would require the same WO system AND 4x Spyder X20-1608. This is why I am interested in doing this.

 

Of course, there are many applications where it is simply better to put a Spyder downstream of WATCHOUT, but I was looking for a way to handle the events that just don't need all that.


  • 2 weeks later...

I am posting this here as it is relevant to the thread; however, (once approved) I'll also quote it in the feature requests.

 

Millumin has a great implementation of the "looks" that have been discussed in this thread via its "dashboard", which is more of a cue list than a timeline. The dashboard supports timelines as cues, and there is a way to advance through cues or jump to cues directly in any order. When advancing or popping to cues, the software also allows a variety of transitions, which is exactly what I'm looking for, and the layers in Millumin behave very similarly to those in a screen management system like Spyder or E2. Finally, Millumin also allows for super-easy drop-in of video files for playback without three minutes of fussing with manual fade tweens. Here is a good example

 

For corporate events Millumin is in many ways an easier and more affordable system than WO. It lacks many of WO's more advanced features - especially the scalability and overall flexibility. That being said, I think there are a number of lessons to be learned from the way Millumin handles cues that would make WO superior to where it stands today.


  • Dataton Partner

Michael,

 

Although in many cases our clients would stick with Spyder (not so common over here), E2 or Analog Way Ascender, there are many small projects where we have also tested using WATCHOUT to do the whole trick. Hardware scalers may tend to be a bit more reliable, and their manual interfaces (shot boxes, control desks, ...) give you more control, but there are also ways to achieve almost the same with WATCHOUT.

 

The second point is latency. In a dedicated system like a Spyder, the latency is very low whatever the source frequency might be. WATCHOUT's latency is much better than it used to be if you use the more expensive Datapath capture cards, but it is probably still slightly higher than what the hardware units offer. This is due to the vast amount of DSP power they put into their systems, which will cost you a lot of money. Capture cards and a capable WATCHOUT system are far cheaper but lack that DSP power, of course.

 

A thing to look at might be NDI. This NewTek technology allows you to stream SDI or other signals over the network, and it will be supported in WATCHOUT 6.2; we demonstrated this at ISE in Amsterdam a few weeks ago. NewTek offers free software to mix two signals before streaming them onto the network, and the professional version with more channels is available for around $1,000. You could then set up a small dedicated server with capture cards and stream those signals into the WATCHOUT rig. We are about to test this and measure latencies, but that might still take a month or so.


  • Member

Just to give some idea of what is possible in a high-end setup, and to add another example of using WATCHOUT as a video processing tool:

https://www.film-tv-video.de/productions/2016/06/22/euro-2016-groesser-denn-je/3/   The article is in German, but you might have a look at the photos.

For the ARD/ZDF (the biggest German broadcasters) Euro 2016 studio, WATCHOUT was used not only to play out the content but also to handle all PiPs on the high-res LED wall in a flexible, long-hour live broadcast with millions of viewers on air. We used Blackmagic cards with 8 HD-SDI inputs. One source was a super-wide 4K camera signal, which we brought in using four inputs simultaneously without problems.

 

It's always a question of the workflow you choose. An important fact to consider is that one operator alone can program the show without having to trigger the E2/ASC/Spyder (whether through a TCP action or by a classical cue from the director), which makes for a much faster programming workflow. Especially when latency is not a problem (PPT or other playback sources) this can be the faster workflow. Also, not every show has the budget for an E2 and its operator. Animated overlays like lower thirds on the live camera PiPs can be done more easily with simple alpha-layer files compared to cut-and-fill workflows on external mixers. As Rainer was saying, I guess with NDI the input latency will be even shorter than via capture cards today. Until now, capture cards are a proven workflow that works very efficiently depending on your setup. On the other hand, an E2 is a standard piece of hardware which you can rent around the globe, and with a skilled operator you can do a lot of very nice stuff, especially in combination with WATCHOUT. But I think having the option of doing a lot of video-processor tasks in WATCHOUT will make you consider it for certain projects!

 

Regarding cues in WATCHOUT: for the Euro 2016 studio mentioned above I wrote something in another thread regarding MIDI control of WATCHOUT. Here is a copy of what I wrote:

 

"We are currently developing a control panel which will talk to WATCHOUT. With 9-18 OLED displays and broadcast-style buttons you will be able to control and monitor all the aux timelines you choose. You will see the name and the current playhead timecode for each aux timeline. Of course the buttons will reflect the current state of the timeline with blinking and coloured lights. :D

 

Last year we were working on a TV show for a big football event on television. Over the weeks of the event, over 500 aux timelines were created in the project.

Most of them had to be triggered on the director's command very fast. To deal with this more efficiently in the future, we decided to build a control panel to trigger aux timelines at the click of a button. We talk to WATCHOUT via the standard integrated TCP commands. You will be able to arrange aux timelines as you wish and create different pages.

 

The panel will also be able to trigger Blackmagic ATEM mixer macros and send TCP/UDP/MIDI/OSC messages to whatever device you choose.

More device integration is planned for future software updates. There will also be integration with another media control software to give it the option of physical buttons.

The idea is not to replace touchscreen control apps, but to give users a physical button to trigger events. Of course, you can build your own controller using the TCP commands provided in the documentation here on the forum and on the website; just search for it. I'm a big fan of MIDI controllers as they are cheap and easy to use. Especially in fixed installations you can do a lot with these controllers and different software solutions! :) With the getStatus request you should be able to get the information you want back from WATCHOUT and connect aux timelines to MIDI buttons. There are lots of MIDI software solutions to do this; however, to make it fully automatic you have to write a bit of code.

 

What would you like to see in a control panel? We are planning to build the first prototypes this summer!"
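
As a footnote, the "standard integrated TCP commands" mentioned in the quote above are simple enough for any controller to send. Here is a minimal sketch (Python, for illustration only; the production computer listens on port 3040 and expects an authenticate command first - the IP address and timeline name are placeholders):

    import socket

    PROD = ("192.168.10.10", 3040)  # placeholder production-computer address

    with socket.create_connection(PROD, timeout=2) as sock:
        sock.sendall(b"authenticate 1\r")  # required before any other command
        sock.sendall(b'run "GFX"\r')       # start the aux timeline named "GFX"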


  • Dataton Partner

Another way to control such a system would be to use WATCHNET. Version 1.4 will be able to control the production computer and can run side by side with WATCHMAKER; it just takes one more dongle, of course. You could even consider giving a portable device (iPad, Android, ...) to a speaker on stage or the show caller etc. and have them control certain aspects of the show.

 

We are actually showing a sample of WATCHNET 1.4 in Frankfurt at the PL+S this week.



I always like the idea of making things work in ways their designers never thought of, and I have also experimented with using live capture with a sub-switcher to bring images into a WO-based system. However, my understanding is that you would need a capture card in each machine that will EVER need to see a particular input. Meaning, if you want your 4 ATEMs attached to 32 WO outputs across 6 display machines, at minimum you would need 4 x 6 = 24 capture cards to get all 4 ATEM feeds into all 6 WO display machines. There is no way in WO to program a live input onto a machine that doesn't already have that feed attached to a capture card in that display machine.

 

As such, I think the rental of an E2/Spyder/Ascender costs well below the outlay for all the capture cards in the system you're proposing, never mind that the switchers are generally more flexible for last-minute changes and features, and never mind the latency advantage.

 

I hope I'm making this clear, and not confusing the issue further...

 

Kevin Lawson


  • Dataton Partner


With capture cards you have to add one to each server, plus DA boxes to distribute the signal, etc.

 

A new way to reduce the hardware cost could be NDI. It would take just one dedicated computer to capture the signals through fast capture cards and then stream the content using the NDI protocol. This needs no extra hardware on the display server side and would also work with devices like the new WATCHPAX 4.

