Posts posted by Mike Fahl

  1. Just a clarification to Jim's excellent advice. If you religiously copy all media files into the directory containing the show's .watch file (or a subdirectory thereof), all media file references will be relative. Such references do not begin with a slash in their file path. In that case, you can copy the folder containing the whole shebang as is. But if you're unsure whether some media files may be stored in other locations, Jim's advice is your best option, since it will copy such files in a way that makes them relative (by copying them into a subfolder of the consolidation target location, and adjusting their paths accordingly).
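
    As a made-up illustration of that distinction (file names are hypothetical), a relative reference stored in the show looks like the first line below, while an absolute reference to a file outside the show folder looks like the second:

    Media/Intro.mov       (relative – no leading slash, lives next to the .watch file, safe to copy as is)
    /Footage/Intro.mov    (absolute – begins with a slash, lives elsewhere, needs consolidation)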

    Mike

  2. One tricky aspect of the PJLink protocol is that you may NOT send any command to the projector until after its initial handshake. If there's no password set, this is the "PJLINK 0" string received from the projector right after connecting. There's no easy way to make WATCHOUT wait for this initial handshake before sending its command, which is probably why the first attempt after a disconnect sometimes fails while a second command may succeed (because by then the handshake has been sent by the projector). As a side note, PIXILAB's recently introduced control system, named BLOCKS, provides "intelligent" drivers for projectors and other devices. Our PJLink driver handles this initial handshake before it starts sending commands to the projector, which makes for a more robust implementation than just blindly banging out the commands.
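
    To make that concrete, here's a minimal sketch in Python (my own illustration, not anything that ships with WATCHOUT or BLOCKS) of what "waiting for the handshake" means. It assumes no password is set, so the projector greets with "PJLINK 0"; the address is a placeholder and the power-on command is just an example:

    import socket

    PROJECTOR_IP = "192.168.0.10"   # placeholder: use your projector's address
    PJLINK_PORT = 4352              # standard PJLink TCP port

    with socket.create_connection((PROJECTOR_IP, PJLINK_PORT), timeout=5) as sock:
        # Wait for the projector's initial handshake before sending anything.
        greeting = sock.recv(128).decode("ascii", "ignore")
        if not greeting.startswith("PJLINK 0"):
            # "PJLINK 1 <seed>" means a password is set; handle that or bail out.
            raise RuntimeError("Unexpected handshake: " + greeting.strip())

        # Only now is it safe to send a command, e.g. power on (PJLink class 1).
        sock.sendall(b"%1POWR 1\r")
        print(sock.recv(128).decode("ascii", "ignore").strip())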

    Mike

  3. No, the upsampling will be to the frame rate actually used by WATCHOUT, which is often 60 fps (not 30). If the source is 24 fps, there will always be a discrepancy when upsampling to something that isn't an even multiple. Frame blending and similar techniques can be used to improve this (whether done in WATCHOUT or ahead of time in, e.g., After Effects). But upsampling from 24 to 30 or 60 is also going to give you larger/heavier files - especially when using codecs such as HAP - which may negate some of the advantage of upsampling ahead of time.

  4. Whoops, seems I made a mistake above. You may be able to get away with sending just this:

     

    GET /cgi-bin/proj_ctl.cgi?key=shutter_on&lang=e&osd=on$0D$0A$0D$0A

     

    Note that there are two end-of-line sequences at the end of that (last) line. The HTTP specification says header lines should be terminated by CR/LF pairs, not just the $0D used in my example above. As for the remaining lines in my earlier example, my guess is that the projector won't really need any of them. But if you do include them, each line should be terminated by $0D$0A, with an extra $0D$0A at the very end of the whole shebang. You write the whole thing as a single line in WATCHOUT – my example above is split across multiple lines just to make it more readable.
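
    If you want to sanity-check the request outside WATCHOUT first, here's a minimal Python sketch of sending that exact line, with each $0D$0A written out as "\r\n" (the IP address is a placeholder):

    import socket

    PROJECTOR_IP = "192.168.0.20"   # placeholder: your projector's address

    # The same request as above, with each $0D$0A spelled out as "\r\n".
    request = "GET /cgi-bin/proj_ctl.cgi?key=shutter_on&lang=e&osd=on\r\n\r\n"

    with socket.create_connection((PROJECTOR_IP, 80), timeout=5) as sock:
        sock.sendall(request.encode("ascii"))
        print(sock.recv(1024).decode("ascii", "ignore"))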

     

    Hope this helps!

  5. I doubt you'll be able to simulate a browser request using the network output of WATCHOUT, especially one that requires some form of authentication.  If it's possible to turn off the authentication, it may work. Note, however, that you need to send a *complete* GET request, which is a multi-line thing that looks something like this:

     

     

    GET /cgi-bin/proj_ctl.cgi?key=shutter_on&lang=e&osd=on$0D
    cache-control: no-cache
    user-agent: PostmanRuntime/7.1.1
    accept: */*
    host: 123.123.123.123:80
     

     

    The last line specifies the IP address of the projector. Each line is terminated by a carriage return. There need to be TWO carriage returns at the end of the whole shebang. Using something like Wireshark to snoop the line while sending a working command may help in getting things right from WATCHOUT.

     

    Even though WATCHOUT's network port disconnects after some time of inactivity, it automatically re-establishes the communication once a new command is sent. However, PJLink may be a bit tricky to deal with due to its initial handshake. Assuming you can make it connect reliably once, your best bet may be to have an auxiliary timeline that sends some harmless command to the projector every 20 seconds or so to keep the port open; then your command should work without the need to re-open the port.

     

    I also believe WATCHNET now has PJLink support, so throwing that into the mix could be an option, although WATCHNET requires a separate license key and adds some complexity to the system.

     

    Mike

  6. The most common "work-around" is probably to use something like this in between:

     

       https://www.tweaking4all.com/home-theatre/remove-hdcp-hdmi-signal/

       https://www.alibaba.com/product-detail/HDMI-Splitter-1x4-Ultra-High-Definition_60711570924.html

     

    While such a device defeats the whole purpose of HDCP, I'd guess that as long as you're not using it to make illegal copies of copyrighted material, you're fine (but, as they say, IANAL).

  7. Yet another option, especially if you're not happy with the green-screen solution, could be the VMIX browser renderer:

     

        https://www.vmix.com/help20/index.htm?WebBrowser.html

     

    This can output NDI, including an alpha channel, from what they say. I haven't personally tested it, but from their description, and what I understand of how NDI works in WATCHOUT, it should work. Presumably, the VMIX renderer also supports HTML5 video. If you (or someone else) have tried this, please post your findings here!

     

    Mike

  8. You're not saying whether the tearing is within a single display or across displays. Here I'm going to assume within a display.

     

    The only reason I can think of for tearing in stills within a single display is that vsync is disabled in the graphics card driver, or that the display device does something stupid. 

     

    I've tried different file formats

     

    File formats shouldn't make a difference, since all images are decoded before you run the show, and cached in an internal format, which is independent of the original image file format.

     

    So the first part of the question is, what part of the computer does the heavy lifting of fading a still? Is that the GPU or CPU?

     

    GPU.

     

    And would additional system RAM or video ram help that?

     

    Probably not.

     

    We have the appropriate Sync I and Sync II modules.

     

    This could indicate you're talking about tearing across display outputs, in which case it comes down more to the behavior of those sync cards, which can sometimes be rather finicky.

     

    I still intermittently get tearing between machines

     

    OK, so you're likely talking about tearing between computers. Some of my answers above may still help, though.

     

    And finally, can someone elaborate on what the "sync chain master" checkbox does under the hood? 

     

    You should set up one of the cards (the first in the sync chain) to act as the master. The remaining ones are slaved to this master. The master may get external sync if desired, but doesn't have to. Others should not need external sync, but will be synced from the master. Check the "sync chain master" checkbox for the display that acts as the sync master in the chain. I believe this setup is rather important for successful operation.

     

    Hope this helps. It's been a while for me, but I believe my recollection here is correct.

     

    Mike

  9. I don't think there is a command in WATCHNET to enable timecode chase. Your best solution is probably to put this into a startup script (assuming you're feeding the timecode to the display PC, and not the production PC). See "timecodeMode" and "File-based Control" in the WATCHOUT 6 manual for details on how to do this.
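
    Just as a rough illustration from memory (the exact mode numbers and the file location are in the manual, so double-check there), such a command file would contain a line along these lines, where the parameter selects the desired timecode mode:

    timecodeMode 1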

     

    Mike

  10. In WATCHOUT 6, the production PC command set is fairly similar to the display PC protocol (as indicated by the two protocols being described together in the WO6 manual). You may be able to use the same library to control the production PC as well, by just changing the port to 3040. Don't forget to enable TCP control in the Preferences of the show you want to control. See under “CONTROLLING THE PRODUCTION SOFTWARE” in the WO6 manual for more details.
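
    As a minimal sketch of what that looks like (Python purely for illustration; the IP address is a placeholder, and you should verify command names and line termination against the protocol chapter of the manual):

    import socket

    PRODUCTION_PC_IP = "192.168.0.30"   # placeholder: the production computer's address
    PORT = 3040                         # production software control port (see above)

    with socket.create_connection((PRODUCTION_PC_IP, PORT), timeout=5) as sock:
        # Commands are plain text lines, terminated by a carriage return.
        sock.sendall(b"authenticate 1\r")   # accepted here too, though not strictly required
        sock.sendall(b"run\r")              # e.g. start the main timeline
        print(sock.recv(256).decode("ascii", "ignore").strip())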

     

    Mike

  11. You're correct, Jim (as usual). This change was made in the WO6 production software to make it behave like the display software here, thus making protocol clients that target both programs easier to maintain (since the same authentication sequence can be used with both, with a similar response, rather than an error in one case). But, as you say, authentication isn't really required for the production software (all that's needed is the checkbox in Preferences).

     

    As a side note, the whole "authenticate" dance was originally devised to allow for some kind of password protection of display computers. In such a case, the response would be different, and the controller would have to "authenticate" itself by means of a password (it should really have been called "authorize", but I wasn't very clear on the terminology when I wrote that part of the protocol way back).

     

    Mike

  12. As Jim alludes, it indicates the application is in a modal dialog or similar modal state (e.g., in the midst of a drag-and-drop operation). This can only happen with the production software, and it will happen if someone is actively operating the production software while you're sending commands to it from outside.

     

    This is not an indication of a "bug" per se. It's an indication that the command could not be performed due to some application mode initiated by the local user. This "blocking" is done to avoid the jarring user experience that would otherwise ensue if, say, an external command caused a timeline to jump while the local user was dragging media to drop onto a cue on that same timeline.

     

    The only way to not get this error occasionally is to not operate the production PC at the same time.

     

    Mike
