JG · Posted January 17, 2014

Am I correct in my understanding that when the display computer receives a file from the production computer, it transcodes it into a Watchout-proprietary format of raw RGB values, and that this internal file is what's played back when the show is run?

Jim
Moderator jfk · Posted January 18, 2014

A little bit more than that, but essentially correct for still image files. And that is why the still image file format has no impact on playback performance: all files are converted to a common uncompressed form prior to playback.
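A rough way to see why the source format stops mattering: once decoded to raw pixels, a still's memory footprint depends only on its dimensions, not on how well the file on disk was compressed. A small illustrative sketch (the 4-bytes-per-pixel RGBA figure is an assumption for the example; the exact internal pixel format is not documented in this thread):

```python
def uncompressed_size_bytes(width, height, bytes_per_pixel=4):
    """Size of a still once decoded to raw pixels, regardless of source format.

    A 1 MB JPEG and a 40 MB TIFF of the same dimensions occupy the same
    amount of memory after conversion (assuming 8 bits per channel, RGBA).
    """
    return width * height * bytes_per_pixel

# A 1920x1080 still decodes to the same ~7.9 MiB whether it began as JPEG or PNG.
full_hd = uncompressed_size_bytes(1920, 1080)
print(full_hd)                    # 8294400 bytes
print(round(full_hd / 2**20, 1))  # 7.9 (MiB)
```

This is why the choice of still format only affects transfer and load time, never playback performance.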
JG · Posted January 18, 2014

Okay, so two follow-ups:

"A little bit more than that, but essentially correct for still image files."

So video files are decoded during playback?

When still images are rendered to (let's call it) Watchout format, does Watchout render each object separately, or does it render all the images into one frame of playback?

Jim
Moderator jfk · Posted January 18, 2014

"So video files are decoded during playback?"

That is correct. And they are decoded in software on the CPU; GPU hardware decoding is not used. (Windows Media Player and QuickTime Player use GPU acceleration by default; GPU hardware decoding can be turned off in WMP and QT to better approximate WATCHOUT results.)

"When still images are rendered to (let's call it) Watchout format, does Watchout render each object separately, or does it render all the images into one frame of playback?"

It is a bit more complex than that, but to answer your question: each still image original is individually rendered into a set of objects. The native resolution and multiple pre-scaled versions are prepared. This is to manage GPU load when large bitmaps are scaled way down (Dynamic Image Scaling, introduced in v5). Furthermore, also to manage GPU load, each of those cached images is broken down into subsections, or "tiles", which assemble the full image (since about v1.1, I think). When panning, zooming, etc., visible areas are loaded and discarded as needed. In the old days, when scrubbing or jumping around the timeline you would sometimes see the image objects paint up in tiles, but with modern SSDs and hyper-fast systems, not so much anymore.
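The pre-scaled-versions-plus-tiles scheme described above can be sketched with a little arithmetic. The specific numbers here (power-of-two downscaling, a 512-pixel tile, a 256-pixel minimum side) are illustrative assumptions, not documented WATCHOUT values; the point is only the shape of the mechanism:

```python
def prescaled_levels(width, height, min_side=256):
    """Successive half-resolution versions of an image, down to a minimum
    side length. (Illustrative: the actual cached sizes are undocumented.)"""
    levels = [(width, height)]
    while min(width, height) // 2 >= min_side:
        width, height = width // 2, height // 2
        levels.append((width, height))
    return levels

def tile_rects(width, height, tile=512):
    """Break one cached image into tile rectangles (x, y, w, h) so only the
    visible tiles need to be loaded when panning or zooming."""
    return [(x, y, min(tile, width - x), min(tile, height - y))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

# An 8192x4096 original: five cached sizes; the full-resolution version
# splits into a 16x8 grid of tiles.
print(prescaled_levels(8192, 4096))  # [(8192, 4096), (4096, 2048), ...]
print(len(tile_rects(8192, 4096)))   # 128
```

Dynamic Image Scaling then simply picks the smallest cached level that still covers the on-screen size, so a huge bitmap scaled way down costs the GPU no more than a small one.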
Member Alex Ramos · Posted January 19, 2014

Is the same happening in the WATCHPAX? Its product description says "Hardware accelerated video playback (H.264)". How is the WATCHOUT on the Pax different from the downloadable version?

Alex
Moderator jfk · Posted January 19, 2014

"Is the same happening in the WATCHPAX? Its product description says 'Hardware accelerated video playback (H.264)'."

Very good catch; yes, that is the one exception. On any regular PC that is not the case. I'll probably get in hot water for saying this, but the WATCHPAX CPU is so wimpy that if it did not use hardware acceleration for movie decoding, it could not play back HD video at all, so they did not have much choice. Just the same, Dataton has full control over the WATCHPAX hardware, so they could use hardware acceleration in that case. Also, the WATCHPAX is restricted to one output, which simplifies things a bit. Add accelerated movie decoding and multiple movies for multiple outputs, and things get a bit different. With generic hardware the variables are too significant to attempt the same; it would be a compatibility / support nightmare.

"How is the WATCHOUT of the Pax different from the downloadable version?"

watchpoint is the same inside the WATCHPAX; it is version-updated with the standard WATCHOUT download. Clearly they have a way of recognizing their own hardware to allow the accelerated movie decoding.
JG · Posted January 22, 2014

Excellent. And what about parallelism? If a display computer has a six-core CPU, can watchpoint process, say, four tasks simultaneously? Is there even any need for that level of computing with the way the program handles playback? Lastly, assuming the program can run multiple threads in parallel, is there a theoretical limit? Could I build a machine with four twelve-core Opterons, for example? I freely admit this specific example would never be financially practical; I'm more curious about the upper bounds of the software. When does more computer equate to better performance, and when does it become just... more?

Jim
Moderator jfk · Posted January 22, 2014

"Excellent. And what about parallelism?"

WATCHOUT is multi-threaded.

"If a display computer has a six-core CPU, can watchpoint process, say, four tasks simultaneously?"

And more. Even though WATCHOUT is multi-threaded, the heavy lifting is done by Microsoft DirectX and its child processes.

"Is there even any need for that level of computing with the way the program handles playback?"

Multi-core CPUs come into play primarily when decoding movies. The movie codec will impact how well those cores are utilized. .wmv, and the codecs installed by WATCHOUT, are also multi-threaded (MPEG-2, MPEG-4, Animation codec 32+ .mov). Other QuickTime .mov codecs on a PC, not so much.

"Lastly, assuming the program can run multiple threads in parallel, is there a theoretical limit?"

The limits are pretty much defined by Microsoft DirectX, and they are pretty impressive. Microsoft does a good job of keeping their software up to the levels the hardware can provide.

"Could I build a machine with four twelve-core Opterons, for example?"

I would think so. We are scheduling testing along those lines: a dual-CPU system with 12 cores (24 threads), and we discussed the AMD CPUs as well. We can currently get three 4K-24p MPEG-4s to run smoothly and output 4K-30p on an i7 six-core Extreme Edition / quad-channel memory / single screaming SSD platform. Throughput is also a concern in that stratosphere: SSDs, PCIe speed / throughput to the motherboard, and memory and memory channels all come into play for that gargantuan task. We are interested in seeing how far that can be taken with a lot more CPU resources, as well as a PCIe x8 RAID controller handling four screaming SSDs in RAID 0.

"I freely admit this specific example would never be financially practical."

Depends on the market; WATCHOUT is very versatile and serves a wide variety of markets. 4K is arriving and we are exploring WATCHOUT playout at 3840x2160 30p. Installation costs per channel are currently hovering around USD $3k (not including production). That is a lower cost per channel than 1080p playout in version 4 (around four years ago). So it may be practical in some applications.

"I'm more curious about the upper bounds of the software. When does more computer equate to better performance?"

Show Sage has been building WATCHOUT computers for 12 years; so far, that has always been true in our testing.

"And when does it become just... more?"

Depends on the demands of the show content. When the tasks required (the content make-up) do not stress the CPU, then more is just more. For example, most of what the tween track functions provide is carried out by the GPU, so the graphics subsystem's speed and the GPU's multiple cores come into play for those functions; tween functions place little load on the CPU. So if your show is made up simply of high-resolution uncompressed RGB stills animated in WATCHOUT at progressive full frame rates, with no movies or only a modest movie load, then the CPU is not critical to success, but the GPU is. That said, it is a very rare show that will need the extra power of dual graphics cards (CrossFire/SLI), and since the second card only adds to the graphics system's overall performance, you do not pick up any more outputs with the second card. GPU is where most laptops and minis fall short; keying and masking stress the GPU the most. However, if you are only playing movies and lightly using tweens, a laptop or motherboard-integrated GPU may suffice.
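The throughput concern above is easy to quantify: once decoded, every frame must move through memory to the GPU at the raw pixel rate, however small the compressed file was on disk. A back-of-the-envelope sketch (assuming 4 bytes per pixel RGBA, a simplification; the actual internal format is not stated in this thread):

```python
def playback_bandwidth_mb_s(width, height, fps, bytes_per_pixel=4, streams=1):
    """Raw decoded data rate in MB/s that a display computer must sustain.

    Compressed reads from disk are far smaller, but the decoded frames
    still flow through system memory and over PCIe to the GPU at this rate.
    """
    return width * height * bytes_per_pixel * fps * streams / 1e6

# One 4K (3840x2160) stream at 30 fps, decoded to RGBA:
print(round(playback_bandwidth_mb_s(3840, 2160, 30)))             # 995 MB/s

# Three such streams, as in the multi-movie test case described above:
print(round(playback_bandwidth_mb_s(3840, 2160, 30, streams=3)))  # 2986 MB/s
```

At roughly 1 GB/s of decoded pixels per 4K-30p stream, it becomes clear why memory channels, PCIe lanes, and striped SSDs enter the picture long before the CPU core count is exhausted.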
JG · Posted January 23, 2014

Diving a little deeper into codecs, do I have the sources of these codecs correct?

Implemented by Dataton:
- MPEG-2
- H.264
- Animation
- WMV

Implemented through QuickTime:
- Image sequences (Photoshop, JPEG, PNG, etc.)
- FLV
- Others?

MPEG-2 is still the preferred format, Animation is still the only option with an alpha channel, and image sequences are specifically discouraged. H.264 and WMV allow HD playback with smaller file sizes, but are much more processor-intensive than MPEG-2. Jim mentioned MPEG-4; is this H.264, or MPEG-4 Part 2, or something else? Are any decoders implemented through Windows Media? Anything worth mentioning that I missed?

Jim