jfk
Moderator · 1,808 posts

Everything posted by jfk

  1. I can answer one ... No. Original WATCHPAX = active adapter required; dual-output WATCHPAX = passive is OK (quad-core CPU, dual-output, USB 3.0 support).
  2. This is sort of covered in the WATCHOUT academy cookbook: Photo with iPad/iPhone. Possibly you can extrapolate from that and use it as a baseline for your questions, i.e. the sections on web server communication with the Dynamic Image Server might be helpful.
  3. It is reasonable, albeit formidable - not for neophytes. No, not in the WATCHOUT software. There are practical hardware limits on how many can be active concurrently; to some extent you will need to manage that manually. Response time is affected by programming technique, i.e. if you can wake up (pause) a subset of aux timelines prior to the selection, the paused (and therefore cached) aux timelines can respond nearly instantly. The unused paused timelines need to be managed after the selection as well. If you send a play to a stopped aux timeline, it takes time to cache the media before it is available, so adjust accordingly. Yes, and NNinja's live tween suggestion is solid. Also, consider incorporating conditional layer(s). This can be useful in reducing the number of timelines while still tailoring responses to fixed-for-the-run variables like gender, age group, day of week, etc., without creating a unique aux timeline for each one. Yes. This is an advanced technique and requires the WATCHOUT programmer to manage resources so you don't bog it down over the run. You probably would not use MIDI for this type of thing; you can do anything MIDI can (and much more) with IP. If you are going to the trouble of creating the custom "digital glue" to interface such a device, you might as well use IP for easier access to the full feature set.
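To illustrate the IP route, here is a minimal sketch of driving aux timelines over a TCP connection from Python. The host address, port number, show name, and timeline names are my own illustrative assumptions - check the WATCHOUT protocol reference for your version before relying on any of them.

```python
# Minimal sketch: pre-caching and triggering WATCHOUT aux timelines over IP.
# The host/port, show name, and timeline names below are illustrative
# assumptions -- consult the WATCHOUT protocol documentation for specifics.
import socket

DISPLAY_CLUSTER = ("192.168.0.10", 3039)  # assumed display-cluster address/port

def command(verb: str, *args: str) -> bytes:
    """Build one CRLF-terminated protocol line, quoting string arguments."""
    quoted = " ".join(f'"{a}"' for a in args)
    return f"{verb} {quoted}".strip().encode("ascii") + b"\r\n"

def send(sock: socket.socket, *cmds: bytes) -> None:
    for c in cmds:
        sock.sendall(c)

# Usage sketch (not run here): pause a subset of aux timelines ahead of the
# selection so their media is cached, then run only the chosen one.
# with socket.create_connection(DISPLAY_CLUSTER, timeout=2) as s:
#     send(s, command("authenticate", "1"), command("load", "MyShow"))
#     for t in ("OptionA", "OptionB", "OptionC"):
#         send(s, command("halt", t))        # paused => cached, ready to go
#     send(s, command("run", "OptionB"))     # responds near-instantly
#     send(s, command("kill", "OptionA"), command("kill", "OptionC"))
```

The point of the pause-then-run pattern is exactly the caching behavior described above: a halted timeline has its media resident, so the subsequent run is near-instant.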
  4. Also, tell us about the display signal path. Yes, I know, it has all the symptoms of a software / computer issue. Yet when we observed events that would fit your description, it worked out to be !*%& HDMI connections. Our signal path was: graphics card, MDP->HDMI 1.4a active adaptor, 1m 4k-rated HDMI cable, 4k display. Granted, we were driving HDMI at 3840x2160@30p. Just the same, we were at a trade show exhibiting WATCHOUT with a pretty solid brain trust on hand. To everyone it appeared to be a software instability - and make no mistake, watchpoint was locking up and the display never stopped showing an image. Yet in fact this was a mechanical, display-chain hardware triggered event. Thanks to Fredrik Svahnberg for identifying this one. The solution was to strain-relieve and tape all display connections.
  5. Depends a lot on the movie pixel size, quantity and encoding. Suffice it to say a quad-core CPU is very close to, or possibly past, that threshold. Even with a hi-speed SSD, it will likely need to be a best-case scenario to get four concurrent 1080p30 mpeg2 movies to run smoothly on that setup.
  6. Yes, geometry mostly adds to the video card (GPU) load. Hard to quantify; suffice it to say, with the higher-end gaming GPUs, we have never seen it bump the threshold.
  7. Yes I know that, reread Mike's original comment.
  8. Is the GPU capability actually a hard limit on any movie resolution, or is it a recommendation on sizing pre-split movies? In any case, if the 4096 is a graphics card limitation, it might make sense to examine the specs of the graphics card you are using, since it appears that is the source of the resolution limitation, i.e. YMMV. For example, with a four-output AMD FirePro W7000 graphics card ... AMD FirePro™ W7000 Graphics ... Memory Size/Type: 4GB GDDR5; Interface: 256-bit; Bandwidth: 154GB/s; Compute Performance: 2.4 TFLOPs single precision and 152 GFLOPs double precision floating point; Display Outputs - DisplayPort: four standard; Max DisplayPort 1.2 Resolution: 4096x2160; Max DisplayPort 1.1 Resolution: 2560x1600 ...
  9. And yes, WATCHOUT 5 does narrow the possibilities - thanks. If some of your .png files are disappearing, you might want to try setting the file's Transparency explicitly instead of the default Auto Detect (reference page 36, Chapter 3, Media, in the WATCHOUT 5 User Guide). This is done by changing the Transparency setting in Image Specifications: in the media window, select the image file and open the Image dialog. This has been known to fix some issues of disappearing images before.
  10. You are talking about resources provided by Windows, and Windows WDM audio support does not provide for the remapping of outputs. A simple workaround is to remap them in the .wav file headers. This will accomplish what you desire, albeit not as elegantly as you request. Dataton has provided an as-is / free-of-charge utility to accomplish this -> ChannelShifter.air
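For the curious, the idea behind that header remap can be sketched in a few lines of Python: WAVE_FORMAT_EXTENSIBLE .wav files carry a dwChannelMask in the fmt chunk that tells Windows which speaker positions the channels occupy, and patching it changes the routing. This is only an illustration of the mechanism, not a substitute for ChannelShifter; the function name and layout comments are mine.

```python
# Sketch of the idea behind remapping outputs in a .wav header: for
# WAVE_FORMAT_EXTENSIBLE files, the fmt chunk carries a dwChannelMask
# that tells Windows which speaker positions the channels occupy.
# Illustrative only -- not a substitute for Dataton's ChannelShifter.
import struct

WAVE_FORMAT_EXTENSIBLE = 0xFFFE

def set_channel_mask(wav: bytes, mask: int) -> bytes:
    """Return a copy of a RIFF/WAVE byte string with dwChannelMask patched."""
    assert wav[:4] == b"RIFF" and wav[8:12] == b"WAVE", "not a WAVE file"
    pos = 12
    while pos + 8 <= len(wav):
        chunk_id = wav[pos:pos + 4]
        (size,) = struct.unpack("<I", wav[pos + 4:pos + 8])
        if chunk_id == b"fmt ":
            (tag,) = struct.unpack("<H", wav[pos + 8:pos + 10])
            if tag != WAVE_FORMAT_EXTENSIBLE:
                raise ValueError("fmt chunk is not WAVE_FORMAT_EXTENSIBLE")
            # dwChannelMask sits 20 bytes into the fmt chunk data:
            # tag(2) channels(2) rate(4) bytes/sec(4) align(2) bits(2)
            # cbSize(2) validBits(2) -> mask(4)
            off = pos + 8 + 20
            return wav[:off] + struct.pack("<I", mask) + wav[off + 4:]
        pos += 8 + size + (size & 1)  # chunks are word-aligned
    raise ValueError("no fmt chunk found")
```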
  11. Is the 8-simultaneous-captures limit a hardware or software restriction? i.e. if WATCHOUT allows 12, and hardware can be found to do it, will it allow 12 simultaneous captures? I am not asking if you have tested this, I am asking where the restriction lies.
  12. x2. Big difference in the way large stills are handled between v4 (and earlier) and v5. What are the resolutions of the missing images? File type? Do the images have transparency?
  13. Passing this suggestion along from a freelance programmer. With the advent of display computer names, it would be helpful to automatically open the Network Window on startup and on show load, like the Main Timeline, Stage, Media, (Task) windows. The Network Window size and position are remembered, but you have to re-open it from the Windows menu. Having it available without looking for it would provide a gentle and useful reminder of this new capability. ---- along the topic of computer names ---- I would find it useful, when a cluster name is used, for it to appear in the display dialog much the way the IP prefix appears now. It is just too obscure that the show's cluster name only appears in File - Preferences - General. That would eliminate a fair amount of confusion.
  14. Ahh, yes, thanks for the reminder. Win 7 and Win XP interpret channel assignments differently, i.e. even Microsoft cannot agree with themselves on the "standard". The quickest way to test is with a set of mono .wav files, each assigned to a different channel using the Channel Assignment Tool. Assign a mono .wav file to each of the eighteen output channels available in the tool above. You will likely find eight that work.
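If you need to produce such a test set, a short script can generate one mono tone file per channel. The filenames, duration, and frequencies here are arbitrary illustrative choices; a rising pitch per channel makes them easy to tell apart by ear.

```python
# Generate mono test-tone .wav files, one per output channel, for
# probing which channel assignments a driver actually honors.
# Filenames, duration, and frequencies are arbitrary illustrative choices.
import math
import struct
import wave

def write_test_tone(path: str, freq_hz: float, seconds: float = 1.0,
                    rate: int = 48000) -> None:
    """Write a mono 16-bit PCM sine tone to `path`."""
    n = int(rate * seconds)
    samples = (int(12000 * math.sin(2 * math.pi * freq_hz * t / rate))
               for t in range(n))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)          # mono
        w.setsampwidth(2)          # 16-bit
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", s) for s in samples))

# One file per channel, e.g. for the eighteen channels in the tool above:
# for ch in range(1, 19):
#     write_test_tone(f"channel_{ch:02d}.wav", freq_hz=220.0 * ch ** 0.5)
```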
  15. Out of curiosity, did you attempt to remap channels 7 and 8 to higher channel numbers, i.e. 9 and 10, etc.? I have seen situations where the last two channels map differently, and this has been a solution. The hardware is still limited to eight output channels; it is just that some drivers interpret the higher channel numbers differently.
  16. Yes, if you wish to suppress the logo, you must still use the command-line -NoLogo option. Note that switch is case-sensitive. Default behavior is (and always has been) to display the logo screen whenever a show is not loaded and whenever an update is performed while not in standby.
  17. x2. Try making your videos 1920 x 1080 (instead of 1080 x 1920) and then rotate them in WATCHOUT.
  18. It could work; it simply transfers the driver issue to an unknown third party. i.e. if you can find a WDM driver that takes the WATCHOUT sound playback and integrates the audio into the computer's HDMI video output, then I suppose that box would take you the rest of the way. Finding a stable driver to do that is still the key.
  19. WATCHOUT is multi-threaded. And more. Even though WATCHOUT is multi-threaded, the heavy lifting is done by Microsoft DirectX and its child processes. Multi-core CPUs come into play primarily when decoding movies. The movie codec will impact how well those cores are utilized: .wmv and the codecs installed by WATCHOUT are also multi-threaded (mpeg2, mpeg4, animation codec 32+ .mov). Other QT .mov codecs on a PC, not so much. The limits are pretty much defined by Microsoft DirectX, and they are pretty impressive. Microsoft does a good job of keeping their software up to the levels the hardware can provide. I would think so. We are scheduling testing along those lines - a dual-CPU system with 12 cores (24 threads) - and we discussed the AMD CPUs as well. We can currently get three 4k-24p mpeg4s to run smoothly and output 4k-30p on an i7 six-core Extreme Edition / quad-channel memory / single screaming SSD platform. Throughput is also a concern in that stratosphere. SSDs, PCIe speed / throughput to the motherboard, memory and memory channels all come into play for that gargantuan task. Interested in seeing how far that can be taken with a lot more CPU resources as well as a PCIe x8 RAID controller handling four screaming SSDs in RAID 0. Depends on the market; WATCHOUT is very versatile and serves a wide variety of markets. 4k is arriving and we are exploring WATCHOUT playout @ 3840x2160 - 30p. Installation cost per channel is currently hovering around USD $3k (not including production). That is a lower cost per channel than 1080p playout in version 4 (around 4 years ago). So it may be practical in some applications. Show Sage has been building WATCHOUT computers for 12 years; so far, that has always been true in our testing. Depends on the demands of the show content. When the tasks required (content make-up) do not stress the CPU, then more is just more.
For example, most of what the tween track functions provide is carried out by the GPU, so the graphics subsystem's speed and the GPU's multiple cores come into play for those functions. Tween functions place little load on the CPU. So if your show is made up simply of hi-resolution RGB uncompressed stills animated in WATCHOUT at progressive full frame rates, but with no movies or only modest movie loads, then the CPU is not critical to success, but the GPU is. That said, it is a very rare show that will need the extra power of dual graphics cards (Crossfire/SLI), and since the second card only adds to the graphics system's overall performance, you do not pick up any more outputs with the second card. GPU is where most laptops and minis fall short. Keying and masking stress the GPU the most. However, if you are only playing movies and lightly using tweens, a laptop or motherboard-integrated GPU may suffice.
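To put rough numbers on the throughput concern above, here is a back-of-the-envelope calculation of raw frame bandwidth. These figures are my own, assuming 32-bit RGBA frames; compressed codecs need far less disk bandwidth but more CPU to decode.

```python
# Back-of-the-envelope bandwidth figures for uncompressed frames; these
# illustrate why SSD / PCIe / memory throughput matters at 4k. Assumes
# 32-bit RGBA (4 bytes per pixel) -- my own illustrative assumption.
def uncompressed_mb_per_sec(width: int, height: int, fps: float,
                            bytes_per_pixel: int = 4) -> float:
    """Raw frame bandwidth in MB/s for an uncompressed stream."""
    return width * height * bytes_per_pixel * fps / 1e6

# One 1080p30 stream vs one 4k30 stream, uncompressed:
hd = uncompressed_mb_per_sec(1920, 1080, 30)    # ~249 MB/s
uhd = uncompressed_mb_per_sec(3840, 2160, 30)   # ~995 MB/s, 4x the HD figure
```

Three concurrent 4k streams at those rates are approaching the sequential-read ceiling of a single SATA SSD of that era, which is why RAID 0 over multiple SSDs on a fast PCIe controller comes up at all.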
  20. As Rainer indicated, it is more likely a tweaking issue, like a firewall. We have also seen issues like this cured when the fixed IP in the display dialog is changed to a computer name instead.
  21. Some comments that may provide an equivalent function for now. If this is important, you could make the aux timelines all conditional layers and control it that way - kind of the concept shows and clusters accomplish. Using the new computer name / cluster addressing handily arranges the groups, but it is designed to accommodate one show file per group (cluster). Cluster allows you to use the same display names within multiple groups, aiding in moving shows from one group to another. Having a little trouble following that; there are three kinds of MIDI command supported by WATCHOUT: note and controller input types, or MIDI Show Control. (I doubt you are talking about MSC, but to be complete ...) Notes are often used to trigger tasks, but can have other uses. Controller is often used to control live tweens. Inputs used in live tween formulas are linked to the tween, not the layer. I do use the same controller input over and over again in tweens, so I am not certain I follow what you are asking. You may be able to accomplish something more specific with a little planning. For example, create a MIDI note input, or a generic input with a way to set its value to either 0 or 1, and name it, say, ConditionA. Then use it in conjunction with your live tween, i.e. the tween formula might look something like myLiveInput * ConditionA
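Modeled outside WATCHOUT, the gating idea in that formula is just multiplication by a 0/1 condition. This tiny sketch (the names come from the example above) shows the behavior:

```python
# Model of the live-tween gating formula myLiveInput * ConditionA:
# a 0/1 condition input switches the effect of a shared controller
# input on or off per tween, without duplicating timelines.
def gated_tween_value(my_live_input: float, condition_a: int) -> float:
    """ConditionA = 0 disables the tween; 1 passes the live input through."""
    return my_live_input * condition_a

# The same controller input can feed many tweens; each tween's own
# condition decides whether it reacts.
```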
  22. No surprises there - the GPUs we use are extremely sensitive to EDID anomalies, as you attest.
  23. Very good catch; yes, that is the one exception. On any regular PC that is not the case. I'll probably get in hot water for saying this, but the WATCHPAX CPU is so wimpy that, if it did not use hardware acceleration for movie decoding, it could not play back HD video at all, so they did not have much choice. Just the same, Dataton has full control over the WATCHPAX hardware, so they could use the hardware acceleration in that case. Also, the WATCHPAX is restricted to one output, which simplifies things a bit. Add accelerated movie decoding and multiple movies for multiple outputs, and things get a bit different. With generic hardware, the variables are too significant to attempt the same; it would be a compatibility / support nightmare. watchpoint is the same inside the WATCHPAX; it is version-updated with the standard WATCHOUT download. Clearly they have a way of recognizing their own hardware to allow the accelerated movie decoding.
  24. By default, both Windows Media Player and QuickTime will use the graphics card's hardware accelerator for assistance decoding movies. WATCHOUT, on the other hand, will not use the GPU assistance. To adjust WMP or QT to more closely approximate WATCHOUT behavior, you can change WMP or QT settings to disable the hardware assistance. To disable WMP's use of hardware-accelerated decoding … Open Windows Media Player and right-click on the video window to open the menu … Select More options … Select the Performance tab of the options window … Under DVD and video playback, uncheck Turn on DirectX Video Acceleration for WMV files. Disable the similar setting in Windows QuickTime via QT Player (screenshot examples from Windows QuickTime Player 7.7.4) … Open Windows QuickTime Player and go to Edit – Preferences – QuickTime Preferences … In the QuickTime Preferences window, select the Advanced tab and, under Video – DirectX, turn off / uncheck Enable Direct3D video acceleration. These changes have no impact on WATCHOUT itself. The purpose is to obtain results in the viewers that are more closely related to the results you will achieve in WATCHOUT.
  25. That is correct. And they are decoded in software on the CPU; GPU hardware decoding is not used. (Windows Media Player and QuickTime Player will use GPU accelerators by default; GPU hardware decoding can be turned off in WMP and QT to better approximate WO results.) It is a bit more complex than that, but to answer your question, each still image original is individually rendered (into a set of objects). Native-resolution and multiple pre-scaled versions are prepared. This is to manage GPU load with large bitmaps scaled way down (Dynamic Image Scaling, introduced in v5). Furthermore, also to manage GPU load, each of those cached images is broken down into subsections or 'tiles' which assemble the full image (since about v1.1, I think). When panning, zooming, etc., visible areas are loaded and discarded as needed. In the old days, when scrubbing or jumping around the timeline, you would sometimes see the image objects paint up in tiles, but with modern SSDs and hyper-fast systems, not so much anymore.
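The tiling idea described above can be sketched as follows. The tile size and function names are my own; WATCHOUT's actual internals are not public. The point is simply that only tiles intersecting the visible rectangle need to be resident, so panning loads and discards tiles incrementally.

```python
# Sketch of the tile-management idea: break a large still into fixed-size
# tiles and compute which tiles intersect the currently visible rectangle.
# Tile size and names are illustrative; WATCHOUT's internals are not public.
TILE = 512  # assumed tile edge in pixels

def visible_tiles(img_w: int, img_h: int,
                  view_x: int, view_y: int, view_w: int, view_h: int,
                  tile: int = TILE) -> list[tuple[int, int]]:
    """Return (col, row) indices of tiles overlapping the view rectangle."""
    # Clamp the view to the image bounds.
    x0 = max(0, view_x)
    y0 = max(0, view_y)
    x1 = min(img_w, view_x + view_w)
    y1 = min(img_h, view_y + view_h)
    if x0 >= x1 or y0 >= y1:
        return []  # view lies entirely outside the image
    return [(c, r)
            for r in range(y0 // tile, (y1 - 1) // tile + 1)
            for c in range(x0 // tile, (x1 - 1) // tile + 1)]

# Panning just changes the view rectangle: tiles leaving the list can be
# discarded, newly listed tiles loaded.
```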