Dataton Forum

matkeane

Member
Website: http://matkeane.com
Location: Paris
  1. Are you looping by jumping back to the start of the aux timeline, or by placing the audio tracks in a composition and setting free-run/looping on that in an aux timeline? I have found that the composition method usually works better, but I haven't actually tried with a seamless audio loop.
  2. Hi Mike, The problem I have encountered with compositions is that (sometimes) when media cues within a composition use a blend mode other than normal, the cues 'pop' on and off instead of fading smoothly once I apply an opacity tween to the whole composition in a task. I initially ran into this with a fairly complex wait loop containing free-running loops and various blend modes. Nesting the whole thing in a composition gave hiccups with opacity tweens and looping, whereas placed in a task everything ran smoothly - but then it was difficult to fade everything out mid-sequence when necessary.
  3. I would find it really useful to have a master opacity and volume control for all cues within an auxiliary timeline, so that I can say something like 'Task 11 - fade out nicely over 2 seconds and then stop!'. Currently I'm doing this by creating generic inputs for my_task_opacity and my_task_volume, but then I need to add them to every media cue in each task timeline, which can be a slow process. For some simpler tasks, like live inputs, I nest a composition in a task with a fade-in/pause/fade-out, which achieves the same thing, but having to create compositions for every task is also time-consuming. If I could do something like 'setInput my_task_01.opacity 0 1000', I'd find that really useful.
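For what it's worth, the 'setInput' command already exists in Watchout's production control protocol (sent as a line of ASCII over TCP; port 3040 for the production computer is an assumption here), so here is a rough Python sketch of what driving such a master fade from outside could look like. The 'my_task_01.opacity' input name is hypothetical - it is the feature being requested, not something Watchout exposes today - and the fade-time argument follows the wished-for syntax above:

```python
import socket

def set_input_cmd(name: str, value: float, fade_ms: int) -> str:
    # Build one line of the line-based ASCII control protocol.
    # The fade-time argument mirrors the wished-for syntax; check the
    # protocol docs for the exact setInput signature in your version.
    return f'setInput "{name}" {value} {fade_ms}\r\n'

def fade_out_task(host: str, task_input: str, fade_ms: int = 2000) -> None:
    # Port 3040 (production computer control) is an assumption here.
    with socket.create_connection((host, 3040), timeout=2) as s:
        s.sendall(set_input_cmd(task_input, 0.0, fade_ms).encode("ascii"))

# Hypothetical usage - 'my_task_01.opacity' does not exist today:
# fade_out_task("192.168.0.10", "my_task_01.opacity")
```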
  4. To piggyback on Cowboyclint's suggestion, it would be nice if it were possible to change the timecode display format for a timeline (or for all timelines based on the Project frame rate) and let Watchout deal with the conversion to milliseconds. Content creators (in my part of the world at least) are more familiar with working at 25fps, and doing timecode calculations in my head while also multiplying by 40ms per frame seems like something a computer would probably do faster and more accurately! Also, the ability to enter relative timecodes would be great - e.g. hit ctrl+J and then type +12.20 to jump forward 12 seconds and 20 frames from the current timeline position.
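Just to illustrate the frame maths involved: a quick Python sketch of the 25fps conversion a computer could do internally. The 'SS.FF' timecode format here mirrors the '+12.20' example above; this is only an illustration of the request, not anything Watchout supports today:

```python
FPS = 25
MS_PER_FRAME = 1000 // FPS  # 40 ms per frame at 25 fps

def tc_to_ms(tc: str) -> int:
    """Convert '[HH:][MM:]SS.FF' timecode (optionally signed) to ms at 25 fps."""
    sign = 1
    if tc.startswith(("+", "-")):
        sign = -1 if tc[0] == "-" else 1
        tc = tc[1:]
    parts = tc.split(":")
    secs, _, frames = parts[-1].partition(".")
    seconds = int(secs)
    frame_count = int(frames) if frames else 0
    minutes = int(parts[-2]) if len(parts) >= 2 else 0
    hours = int(parts[-3]) if len(parts) >= 3 else 0
    total = ((hours * 60 + minutes) * 60 + seconds) * 1000 + frame_count * MS_PER_FRAME
    return sign * total

# Relative jump: 12 seconds and 20 frames forward is 12800 ms at 25 fps.
# new_position_ms = current_position_ms + tc_to_ms("+12.20")
```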
  5. I'm currently working on a Watchout installation at the Paris airshow. In addition to the production and player machines, I have a Watchnet 1.4 server running and a SurfacePro that the client can use to cue various clips on demand. So far the Watchnet setup is working well, but I have a few questions...
     Occasionally, before or after the show opens, I need to make changes in the control booth, so I put the show in Standby (triggered by Production, not Watchnet). Is there a way to show the standby status in a Watchnet panel, so that the user out on the floor can understand why the buttons are unresponsive? I wondered about creating a 'standby' panel and forcing the Watchnet UI to navigate to a holding page, but that would require somehow triggering a 'navigate' command on the remote UI. I can just switch the screen off, of course, but a blank screen tends to worry the client, plus it means a lot of walking back and forth!
     Is it possible to trigger Watchnet commands from Production? Now that I have all my scripts set up in the Watchnet server, it would sometimes be handy to trigger them when I need to launch a specific task from Production, without recreating the same events in my Tasks.
     Is there a way to temporarily disable buttons via a script? Once certain tasks are running, I'd like to disable the other buttons until the current task is finished, to avoid lots of videos being launched at the same time.
     And finally, just a detail: is there a way to add a newline to button text? I was trying to add the clip duration under the title, but my attempts at adding '\n' and '<br/>' didn't get me anywhere, and extra spaces seem to get stripped out. Thanks for any help and suggestions!
  6. To expand on JFK's suggestion - I usually then put the live input and the drop shadow/border layer together in a composition so that I can move and scale the whole thing as required without 2 sets of Tweens to manage. Placing the live input in a composition doesn't seem to affect the latency.
  7. Hi Josef, Yes, and that is a really nice time-saver, but it would be even better if it were possible to change the 'advanced' properties of multiple cues (blend mode, etc) at once - similar to the way it's now possible to edit the specifications of multiple displays at once.
  8. The recent update to the thread about the Photoshop import script reminded me of a feature I think would be a big improvement to Watchout: a scripting API for show creation and modification. When building shows with Watchout, I seem to spend a large part of my time double-clicking and copy-pasting. I've already created scripts and small apps to speed up some things, but the current 'copy-and-paste' API has its limits. A scriptable API for Watchout would make automating repetitive tasks simpler and quicker. The creator of the Photoshop import script has also created a clipboard manager app for storing frequently used Tweens, etc, which is a good example of what could be simplified.
     An example I encounter quite often: I have a folder of client videos, each with a different duration. I create a Task with the first file, add the audio file with a generic input for the volume, a countdown and fade-out for the end, and so on. Then I duplicate the Task and replace the video file, try not to forget the audio file, drag the video cue out to the new duration, move the countdown, slide the opacity tween points to the new end point, rename the Task, and so on and so on with my folder of videos, all the while hoping I don't get distracted in the middle of the process and skip something. If, instead, I could write a little script (Python, Java, Javascript, Lua, whatever...) to loop through all the videos in a folder and build the tasks, with each video set to the correct duration and generic inputs created and assigned, it would be a big time-saver. Imagine if, at conferences with hundreds of participants, I could point a CSV file at the Watchout text tool and build all the names with one click!
     When I've shown Watchout to people in the past - especially those familiar with other media server systems - their reaction is often that the Watchout UI is a bit 'clunky'. A scripting API wouldn't require adding any complexity to the UI, so it wouldn't change using Watchout in the normal way. But when I look at the ecosystem of plugins and scripts that has developed around After Effects, for example, I think Watchout could benefit from the added flexibility. It could make it easier to add certain feature requests using scripts - re-usable tween curves, anyone? Check out the Flow script for After Effects! Need to change the blend mode of 97 video clips without going crazy? Check out this little script... etc, etc. In the meantime, if it were possible to copy the duration of a video clip from the Media window, that would already be an improvement.
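To sketch what I mean - and to be clear, every API name here (show.create_task, add_media_cue, add_opacity_tween) is invented, since no such scripting API exists in Watchout today - a batch task builder could be as simple as:

```python
# Hypothetical sketch: generate the per-video task-building calls I currently
# do by hand. All object and method names are invented for illustration.

def build_task_script(video: str, duration_ms: int, fade_ms: int = 2000) -> list[str]:
    """Emit the (imaginary) API calls to build one task for one video."""
    name = video.rsplit(".", 1)[0]
    return [
        f'task = show.create_task("{name}")',
        f'cue = task.add_media_cue("{video}", start=0, duration={duration_ms})',
        # Fade the task out over the last fade_ms of the clip.
        f"cue.add_opacity_tween(points=[({duration_ms - fade_ms}, 1.0), ({duration_ms}, 0.0)])",
    ]

# Looping a whole folder would then just be:
# for video, duration in durations.items():
#     run(build_task_script(video, duration))
```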
  9. Quite often, yes. There's another thread on this forum about strategies for show backups/spares, but quite often I run shows with main & backup player machines controlled from 1 producer machine (usually with a spare producer laptop nearby). Occasionally, on bigger shows, there are redundant systems for everything - production, players, projectors - but then it all syncs to one timecode source... It usually comes down to the budget for the project.
  10. It is still possible to create data-driven PSD files, although I think the name of the feature has changed in the CC versions of Photoshop - the data & variables functions allow you to create layers or files from a CSV data source. More info here: https://helpx.adobe.com/photoshop/using/creating-data-driven-graphics.html
  11. @Mike Fahl: That makes sense now, thanks! I just did some quick tests and it works as Mike describes - but uses the STOP, not RESET, command. I opened up a recent project in which I had 6 audio jingle tasks, so I added a new composition into which I pasted a control cue, and then duplicated and modified it to create 6 cues to stop each of the audio jingle tasks. I pasted the composition into all of the audio tasks, and now starting any of those tasks kills the other tasks, but not the task from which it was called. Are control cues in compositions actually supported behaviour though? They can't be created directly in compositions, but can be copy-and-pasted into place and then modified. This has turned out to be extremely handy in the past, but I'm never sure whether it's supposed to work, or just a useful bug. And, as Mike says, you still need to manually add a control cue for each task you want to kill, but at least the composition trick makes it reusable, and updatable in one place.
  12. Somewhere way back in the epic Feature Request thread, Mike Fahl suggested that this is possible, but I've never quite worked out what he meant - perhaps he can chime in to enlighten us!
  13. Hi jfk, The source and target cues were identical (except stage coordinates). It was somewhat related to the other thread about large resolution video playback... Long story short - I hit the HAP import limit at 8000px on the production machine I was using, so split content into 4 UHD slices. Each video clip was made up of 4 separate cues on the timeline but, of course, that then meant that any colour correction had to be applied identically to 4 clips, which is why I was trying to copy-paste hue tweens with a shared generic input. I set up (and tested) the tween and formula on the first video cue, and tried to paste it to the 3 other slices, but no joy. It was about 3am at this point, so I didn't really have the energy to go through all the options methodically to work out why it wasn't working. But, as I mentioned earlier in this thread, once I had time to calmly reproduce the problem at the office (with the same media and also some small test images), I found that adding a single keyframe (even if it's ignored by the tween formula) is enough to enable copy-pasting.
  14. @jfk, @Erik - Thanks for that info about the HAP limits. Is the limit imposed by the GPU in the production machine, or is Watchout checking with connected players what their GPU limits are? Just curious, as I often run shows with production laptops which are much less powerful than the player machines - does that mean I might hit the import limit for HAP on production even though a player machine would be capable of playing back the file?
  15. The maximum resolution at which a HAP video file can be imported into Watchout is currently 8000x8000px. I don't know whether that is a software limitation, or whether it is limited by the GPU; either way, HAP media at higher resolutions must be split into pieces for playback. In general, the graphics company will deliver content at full resolution and quality (i.e. uncompressed, if feasible, or using lossless compression), which might, for example, be a ProRes video file or an image sequence (which makes partial content updates/corrections easier). The file size and data rate of that delivery will usually make it unsuitable for playback, so the next step is to use it as your master for slicing up and compressing the content with a codec more suitable for playback in Watchout - HAP, MPEG2, h264, etc, depending on circumstances. The delivery format may also depend on your physical proximity to the graphics company: lossless codecs are more suitable for delivery on a hard drive, whereas if files are being sent over the internet, you may want to compromise with some form of compression to speed up transfer times, or use image sequences, which can be transferred image-by-image rather than as one huge file.
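As a concrete example of the slicing step, here is a rough Python helper that builds ffmpeg command lines to cut a 7680x4320 master into four UHD slices encoded as HAP. It assumes an ffmpeg build that includes the HAP encoder, and the dimensions are just one common 8K layout - adjust them (and the codec options) to your master and delivery needs:

```python
# Sketch: generate ffmpeg commands to split a large master into a grid of
# HAP slices for Watchout playback. Assumes ffmpeg with HAP encoder support.

def slice_commands(src, width=7680, height=4320, cols=2, rows=2):
    """Return one ffmpeg command (as an argv list) per slice."""
    w, h = width // cols, height // rows  # size of each slice
    base = src.rsplit(".", 1)[0]
    cmds = []
    for r in range(rows):
        for c in range(cols):
            crop = f"crop={w}:{h}:{c * w}:{r * h}"  # w:h:x:y from top-left
            cmds.append(["ffmpeg", "-i", src, "-vf", crop,
                         "-c:v", "hap", f"{base}_r{r}c{c}.mov"])
    return cmds

# Then run each command, e.g. with subprocess.run(cmd, check=True).
```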