Dataton Forum

WatchDog

Member
  • Content Count: 13
  • Joined

  • Last visited


  1. It has been a few months now since I went through a significant number of render tests, which included side-by-side playback in WatchOut. I used FFmpeg natively and in conjunction with Render Garden, and also tested AE's own Hap encoding; I may have thrown in AVF's batch encoding as well. I'd have to revisit my findings for definitive results, but I concluded that tweaking some parameters in the command-line entries within Render Garden's GUI produced acceptable Hap files. I had to abandon finalizing my results so I could focus on existing projects.

     I do note your mention of QT/VLC playback not using the GPU for decoding. I only bring it up because AE's native Hap encodes (via QT) play back with very few dropped frames in QT/VLC, while the same video encoded via FFmpeg through Render Garden does not. It is so bad that it looks like stop-motion photography. Something very different is going on under the hood compared to what we have been accustomed to with AE Hap files, at least for a quick QC pass through QT or VLC. Initially I thought FFmpeg was producing junk encodes, but I was quickly proven wrong when the same file played back flawlessly in WO.

     For me, the problem with the Disguise solution is that it farms renders out to AME, which makes a multi-render workflow far less efficient and effectively terminates the use of output modules. In short, it adds unnecessary steps that knock efficiency back five years. AME also crashed multiple times during initial tests, although I have not tested with the latest Adobe CC version, nor do I plan to migrate to any new version until this debacle is resolved to my satisfaction.

     Feel free to test out Render Garden; they have a 7-day trial. Beyond what I noted above, I also saw a slight reduction in file size. The only reason I am not currently using Render Garden is that I'd have to purchase a license for every workstation, while simply reverting to a pre-Hap-support-drop version of AE works seamlessly for free. I'm just waiting to see what happens at this point, since this is a new issue for our industry and great minds are on it.
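     For anyone who wants to experiment with those command-line tweaks outside Render Garden's GUI, FFmpeg's Hap encoder exposes a couple of parameters worth playing with. Here is a minimal sketch of building such a command; the format and chunk values are illustrative starting points of my own, not Render Garden's actual defaults:

```python
def hap_encode_cmd(src: str, dst: str, fmt: str = "hap_q", chunks: int = 8) -> list[str]:
    """Build an ffmpeg argument list for a Hap encode.

    fmt:    "hap" (DXT1), "hap_alpha" (DXT5 with alpha), or "hap_q" (higher quality)
    chunks: splitting each frame into chunks lets players decode on multiple
            threads, which matters for smooth high-resolution playback
    """
    return [
        "ffmpeg", "-y",          # overwrite output without asking
        "-i", src,               # input render (e.g. an image sequence or ProRes)
        "-c:v", "hap",           # FFmpeg's Hap encoder
        "-format", fmt,          # Hap flavor
        "-chunks", str(chunks),  # per-frame chunk count for threaded decode
        dst,
    ]

# Run it with e.g.: subprocess.run(hap_encode_cmd("comp.mov", "comp_hapq.mov"), check=True)
print(" ".join(hap_encode_cmd("comp.mov", "comp_hapq.mov")))
```

     The chunk count in particular is worth experimenting with, since it changes how well a player can parallelize decoding.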
  2. Hello Keith, I have used the following software to export Hap out of AE: https://www.mekajiki.com/rendergarden/

     It is similar to BG Renderer in that you can have multiple comps rendering at once, or even segments of the same comp rendering at once. It also touts network rendering across multiple computers, hence the "Garden" in the name; it is very similar to a render farm. It relies on FFmpeg for Hap encoding, which runs automatically after the initial render is done. I put the software through the wringer and it rendered at least 25% faster than AE's native renderer, including the post-render encode to Hap. So I have rendered and encoded with this AE plugin faster than AE alone could do Hap natively. The processing efficiency comes from its use of multiple cores for rendering. I see this product as the only currently viable alternative to Adobe's drop of Hap support.

     On your concern about After Codecs' poor-quality FFmpeg Hap encodes: I experienced the same quality loss with After Codecs, but not with Render Garden. Perhaps its default settings produce a better Hap encode? I know I was able to customize the encode to an extent via FFmpeg's command line within the GUI, so perhaps that was it. I did not notice a significant quality difference between Hap encoded natively out of AE and Render Garden's FFmpeg encode, apart from a slight gamma shift. I did notice that any attempt to play back the resulting FFmpeg-based Hap file through QT/VLC on a Mac/PC resulted in consistent frame loss during playback; my 30 fps file could only play back at 15 fps. In contrast, those same Hap files played back fine in WO. I did side-by-side comparisons of AE's native Hap encoding vs Render Garden's to scrutinize quality and playback and did not notice a difference.

     Not sure what is going on under the hood with the FFmpeg-based Hap files producing choppy playback in QT and other media players, but it is a non-issue since they play back fine in WatchOut, whose servers do not rely on QT for decoding. My attempt to use Disguise's alpha Hap encoder was not very successful in actual use with AE, so it is not a viable alternative just yet. Adobe used Disguise's progress as grounds to consider the "support Hap" case closed. The only thing Adobe did was add Hap decoding, so you can import Hap into AE; encoding was left to the third-party effort from Disguise. I do appreciate Disguise's timely effort, but it is still "alpha" at best. Render Garden seems to be the best alternative to date. Like you, however, I'm sticking with a version of AE that still supports encoding Hap until I am forced to pay for third-party solutions. I do not expect any further development or support from Adobe, so the clock is ticking.
  3. That release says it can "decode" Hap, i.e. import it into AE. The export/render solution comes from Disguise, and it is a work in progress at best. Adobe now considers the matter "complete", touting Disguise's work as the permanent solution to all our Hap problems. Although Disguise should be credited for their effort, you can abandon all hope that Adobe will listen to our market. Instead, try this FFmpeg-based Hap render beast that works brilliantly with AE. They are very responsive to requests and a joy to work with, unlike Adobe. I have tried and tested it; it is far superior to Adobe's standalone AE render core in terms of time, quality, and Hap format selection. There are a handful of others, such as After Codecs, but Render Garden is the only one I have found viable at this point in time, aside from using an older version of AE. Render Garden: https://www.mekajiki.com/rendergarden/
  4. I remember avoiding the use of cluster names over IP addresses when I found bugs related to proxy and audio issues while using cluster names. A few years have passed so I'm hopeful I do not encounter those bugs now that I am forced to use cluster names to maintain sync between displays. Fingers crossed!
  5. Official active list of keyboard shortcuts with each release. Not sure why this does not exist already...
  6. Thank you Walter! Those of us who jump from Adobe products on Macs to WO on PC benefit greatly from this script. The more we can tailor software to our needs and avoid subscribing to an engineer's ideal arrangement, the more efficient we become. Now if only Dataton could actually compile a keyboard shortcut list with each release. It is asking a lot, I know. Probably more fun for the end user to invest time scouring the internet for such a simple piece of info. I prefer software that actively enables a user to succeed in its environment without forking over hundreds of dollars for a training class.
  7. Thanks for the info! We do use an onboard RAID config, but have had a well-tuned recipe for quite some time now. I have successfully cloned OS disks and distributed them to display machines without issue before, so this one has me perplexed. As soon as the gear is back, I'm going to do some testing and utilize all of the above-mentioned advice. I will report findings here.
  8. Hello Jim! Thanks for the prompt reply! I will certainly remember to use that command-line option moving forward and very much appreciate the insight. Regarding the GFX driver, we are still on AMD driver v14. That being said, I'm not sure if the driver/registry issue applies here, but great info nonetheless. I'm still a bit confused about what is actually being written where. Are we talking about the registry of the OS or the W7100? What would be a good course correction for reviving a machine once that occurs? Thanks for the information Thomas. I did use the boot manager, via F11, to select the proper OS disk. Are you saying it is strictly a BIOS-level setting outside of this approach? I was under the impression that the F11 boot manager is what ultimately directs the selection of the system disk. The clone may indeed have been based on a UEFI install. I will certainly use your advice as I put this machine through the wringer. Thank you!
  9. Hello fellow WatchOut ops, On a recent show, I had subscribed to an E2 EDID of 2880 x 720 using an AMD FirePro W7100. This particular EDID, combined with how WO digs into the GPUs, has potentially caused irreparable damage to the graphics card.

     I found that a display set to that format would begin to cache and receive its files/data on a WO update, but would then just freeze in a state where it displays its display name. Once in that state, a forced exit of Watchpoint was the minimum needed to resolve it. In one instance, I had to power the hardware off, only to find that it never booted into Windows again. From that point on, that machine would boot to an error screen pertaining to the Windows installation.

     But it gets better! Any time I imaged a new OS from a different computer, the resulting cloned disk would produce the same result in this particular machine. The newly cloned OS would boot to Windows in another working computer, but once inserted into the original machine, it would incur the Windows error, and from then on that SSD would incur the same failure in any other working machine. Further testing needs to be done to discover exactly which components may have played a role, but I hope this info helps prevent problems for others. I will post official findings when thorough testing has been completed.
  10. Pause cue

     ZaakQC does have a good point. We all work under the gun with very rigid deadlines. It would be far more effective if WatchOut just built the show for us so we would not have to trouble ourselves with daily annoyances like efficient workflows and software/show cue comprehension. This would also mitigate the risk of operator error and transfer show liability directly to the software environment. In the age of disappearing end-user accountability, I see this as the most logical next step in cue evolution. (Please note the thick sarcasm.)

     For those of us still stuck in the mundane world of analog show building, I find that placing media on whole seconds, with the preceding pause cue placed 0.1 second earlier, is an effective way to go. If every cue is created following this recipe, hasty edits in the show can be easily discovered by stepping through each cue and noting the time. Until Dataton can focus more of its energy on algorithms that build a show for you, I guess we are stuck with the burden of understanding the software and cue environment and forming sensible workflow habits... Annoying! Thick sarcasm done.
  11. We have multiple NICs on displays as well, but the secondary ports are disabled on all display machines. Although not apples to apples, we both have 2 NIC ports on production computers with shows that contain VDs. Not to mention cueing aux's from the main timeline. Thanks for the info. Let me know if you find anything.
  12. I have experienced this as well and am working to discover the culprit. If anyone has any info related to this "bug", please chime in. This post is over a year old, so I hope someone has been able to shed more light on the problem. Likewise, I will share anything I find. Cheers!
  13. WO version: 6.1.6
     Production computer utilizing 2 NIC ports with separate IP ranges
     1 switch in the network

     Issue: I have experienced cases where a display machine, or a group of display machines in a cluster, reacts to a UDP command with a consistent latency of anywhere from tenths of a second to 2 seconds. Jumping to a marker or scrubbing the timeline does not induce the latency; it happens only when running a play command from the production machine. That alone suggests it is something inherent in WO's use of UDP commands.

     My theory is that there is timing information built into the UDP packet, and either that timing information is incorrect within the packet, or, on the Windows end, the UDP packet is delivered late, so the timing information in the packet is offset relative to what the display believes the system time to be. Either way, it is a disastrous bug. I have had it occur during heavy-workload moments (transferring lots of data over the file share, making a high volume of changes to the timeline, etc.). On this last show, I didn't remember to disable the file-share port until I got into walk-in, and the bug didn't occur until we were about 80% into the show. But this is the first time it occurred seemingly out of nowhere: I was not in a heavy production-workload state when it happened, just cueing through my show.

     Although I am not 100% convinced it has anything to do with using a 2nd NIC, I still feel the 2nd NIC being a root cause is a strong theory. I have also only encountered this when using virtual displays for video that is then re-entered onto the stage for distribution. I am in the process of testing whether the active 2nd NIC port or the use of virtual displays to map content/standard video playback to displays is responsible. In the meantime, has anyone else experienced this issue when using a control cue to execute an auxiliary timeline play command?
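     To make the "late-delivered packet carrying timing info" theory concrete, here is a small Python sketch of my own. The packet layout is purely hypothetical (WO's actual UDP format is not public as far as I know); it only illustrates how a packet that embeds the sender's clock reads as a timing offset when delivery is delayed:

```python
import socket
import struct
import time

def make_play_packet(timeline_time: float) -> bytes:
    # Hypothetical payload: sender's wall-clock time plus the target timeline
    # position. This is an illustration, not WATCHOUT's real packet format.
    return struct.pack("!dd", time.time(), timeline_time)

def apparent_offset(packet: bytes) -> float:
    # The receiver compares the embedded send time to its own clock. Any
    # network or OS scheduling delay shows up as a positive offset that a
    # display could misread as "start this far behind".
    sent_at, _timeline_time = struct.unpack("!dd", packet)
    return time.time() - sent_at

# Localhost round trip: the offset should be tiny here, but the same math
# explains a 0.1 s to 2 s latency if the packet sits in a queue first.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(make_play_packet(80.0), rx.getsockname())
data, _addr = rx.recvfrom(64)
print(f"delivery offset: {apparent_offset(data) * 1000:.2f} ms")
rx.close()
tx.close()
```

     If something like this is happening, a packet stuck behind a file-share burst on the 2nd NIC would arrive with a large positive offset, which matches the play-command-only symptom above.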