
Mike Fahl

Member
  • Posts

    720
  • Joined

  • Last visited

Everything posted by Mike Fahl

  1. Any "PC remote control" that can fire of arbitrary keypresses should do (or combined with some software that can map keys as desired). You may prefer to send the numpad Enter key instead, since that will always start playback of the timeline, while the spacebar is a start/stop toggle (unless that's what you desire). Mike
  2. If you attempt Eddy's suggestion, I believe you should terminate with $0D$0A$0D$0A (i.e., CR, LF, CR, LF): https://stackoverflow.com/questions/50447483/end-of-http-header#:~:text=Each header ends with CRLF,for the HTTP protocol spec. This may work for single commands spaced "far apart", since WO holds the connection open for a bit after sending the command. It will then send any following commands on that already open TCP connection, which I doubt the other end will like in this case. Also, the action may be deferred until WO closes the connection (depending a bit on how the device implements the HTTP protocol). I don't recall how the other header lines are terminated when you paste in multiple lines like this. Each of those would also need to be terminated by a CR/LF pair. But perhaps that's already the case, or the device may not be too picky here. Mike
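     To illustrate the framing described above, here is a minimal Python sketch of a correctly terminated HTTP request sent over a plain TCP socket, with the connection closed right away. The host, port and path are placeholders for whatever the device actually expects.

     ```python
     import socket

     # Hypothetical device address and path -- replace with the real ones.
     HOST, PORT = "192.168.0.50", 80
     PATH = "/cgi-bin/do_something"

     # Every header line ends with CRLF; a blank CRLF line ends the request.
     request = (
         f"GET {PATH} HTTP/1.1\r\n"
         f"Host: {HOST}\r\n"
         "Connection: close\r\n"   # ask the device to close after one command
         "\r\n"
     )

     with socket.create_connection((HOST, PORT), timeout=5) as sock:
         sock.sendall(request.encode("ascii"))
         print(sock.recv(4096).decode("ascii", errors="replace"))
     ```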
  3. I doubt you'll succeed in sending HTTP commands from WATCHOUT. It just has basic TCP and UDP support, and HTTP is a more complex protocol. The camera also seems to have a serial control protocol: https://eww.pass.panasonic.co.jp/pro-av/support/content/guide/DEF/HE50_120_SERIAL/ConvertibleProtocol.pdf As far as I can recall (it's been a while), WO can do serial. Alternatively, use a TCP-to-serial bridge, such as a MOXA nPort. If you really need HTTP, you probably need something else to talk to the camera, such as PIXILAB Blocks or some other control solution. Mike
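     If you go the TCP-to-serial bridge route, the idea is simply to open a TCP connection to the bridge and write the serial command bytes to it. A minimal Python sketch, assuming the nPort is in "TCP Server" mode on its data port (commonly 4001), with the command itself as a placeholder to be taken from the Panasonic protocol document:

     ```python
     import socket

     # Assumptions: bridge IP, data port, and a placeholder command string.
     BRIDGE_HOST, BRIDGE_PORT = "192.168.0.60", 4001
     COMMAND = b"#R00\r"   # placeholder; use the real command from the protocol PDF

     with socket.create_connection((BRIDGE_HOST, BRIDGE_PORT), timeout=5) as sock:
         sock.sendall(COMMAND)   # bytes are forwarded out of the bridge's serial port
         print(sock.recv(64))    # any serial reply comes back the same way
     ```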
  4. I guess you could make a timeline that's always active, setting an input depending on the enabled layer condition. I believe you can get the input value from WO. That's more of a work-around for obtaining this, and would really only work for a single layer condition being enabled at a time, but it could get you out of a pinch if need be. Mike
  5. For the record, an MP4 file is actually already a MOV file as far as the container format is concerned, since MP4 is a proper subset of MOV. Apple donated the MOV file format, and MP4 ended up using a limited subset of what may go into a MOV file. So while every MP4 file is a valid MOV file, a MOV file isn't necessarily a valid MP4 file (since it may contain stuff that isn't allowed in an MP4 file). When it comes to the actual video encoding (the "codec" required to play the file), that's a separate story, as pointed out above. Both MOV and MP4 files can contain a variety of video and/or audio formats. Use a program such as mediainfo or ffprobe (part of the ffmpeg program suite) to see what's inside a MOV or MP4 file. Mike
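     If you want to script that check rather than read the output by hand, here is a small Python sketch that calls ffprobe and prints the container format and the codec of each stream (the file name is just an example):

     ```python
     import json
     import subprocess

     def inspect(path: str) -> None:
         """Print the container format and per-stream codecs of a media file."""
         out = subprocess.run(
             ["ffprobe", "-v", "quiet", "-print_format", "json",
              "-show_format", "-show_streams", path],
             capture_output=True, text=True, check=True,
         ).stdout
         info = json.loads(out)
         print("container:", info["format"]["format_name"])
         for stream in info["streams"]:
             print(stream["codec_type"], "->", stream.get("codec_name"))

     inspect("clip.mov")   # example file name
     ```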
  6. That's done on the "Output channel assignments" tab, as suggested by Jim above.
  7. The geometry correction available for the regular display can have any number of adjustment points. Should let you warp things to match, assuming this is a static warp. Mike
  8. Do you really mean "VPN", as in a remote Virtual Private Network connection between production and display computer(s) over the internet? I get the feeling what you're referring to is VLAN, which would make sense "to separate different equipment and control traffic". Running WATCHOUT over a VLAN should not be a problem, assuming the switches are configured to route all packets properly. The VLAN "bundling" and "unbundling" (applying the VLAN tags) can be done by most professional-grade managed switches, and once the packets leave the switch that "unbundles" the tagged VLAN packets, they should work the same as on a non-VLAN subnet. Mike
  9. Discovery uses UDP broadcast. My guess is that you have something on your network (such as a managed switch) or a computer that prevents such data communication from getting through. Mike
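     A quick way to check whether broadcast packets make it across the network at all is to run a small listener on the receiving side. Here is a minimal Python sketch; note that the port number below is only a placeholder, so substitute the UDP port your WATCHOUT version actually uses for discovery:

     ```python
     import socket

     DISCOVERY_PORT = 3011   # placeholder; substitute the real discovery port

     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
     sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
     sock.bind(("", DISCOVERY_PORT))   # listen on all interfaces

     print(f"Listening for broadcasts on UDP {DISCOVERY_PORT}...")
     while True:
         data, (addr, port) = sock.recvfrom(2048)
         print(f"{addr}:{port} sent {len(data)} bytes")
     ```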
  10. We have a basic but very useful Artnet monitor available here: https://github.com/pixilab/artnet-monitor The file to run using the instructions on that page (ArtNet-Monitor.jar) is located in the dist directory. This will let you verify that WATCHOUT is outputting the expected Artnet data on the expected universe and channel. Artnet addressing can be a bit confusing – especially the universe numbering scheme, which is sometimes 1-based and sometimes 0-based. So trying one universe up/down can help figure this out. Mike
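     If you'd rather sanity-check the raw traffic yourself, the sketch below (Python, for illustration only; it is not the monitor linked above) listens on the standard Art-Net UDP port and prints the 0-based universe and the first few channel values of each ArtDmx packet:

     ```python
     import socket

     ARTNET_PORT = 6454   # standard Art-Net UDP port

     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
     sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
     sock.bind(("", ARTNET_PORT))

     while True:
         pkt, (src, _) = sock.recvfrom(1024)
         # ArtDmx packets start with the "Art-Net\0" ID and carry OpCode 0x5000.
         if pkt[:8] != b"Art-Net\x00" or int.from_bytes(pkt[8:10], "little") != 0x5000:
             continue
         universe = pkt[14] | (pkt[15] << 8)   # SubUni + Net, 0-based
         length = (pkt[16] << 8) | pkt[17]
         dmx = pkt[18:18 + length]
         print(f"{src}: universe {universe}, first channels {list(dmx[:8])}")
     ```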
  11. For your first question above, this should happen by just using the MIDI Note input as a trigger for the auxiliary timeline. When this is pressed, it makes the timeline Run. For your second question, you would have to program an explicit Pause control cue where you want the timeline to stop. Hence, this does not rely on the currently selected layer. Mike
  12. That window is resizable as far as I recall. Make it as large as you need. Perhaps you missed that. Mike
  13. By using a MIDI note as a trigger for the timeline, you'll trigger the "next cue" by pressing that note, rather than a specific cue. The timeline will then run up to the next deliberate "pause" control cue. Mike
  14. I believe right/left arrow keys should jump to next/previous cue on the currently selected layer only. Set any desired MIDI note as the trigger for the timeline. Then use a Control cue set to Pause to stop where desired. Mike
  15. As Jim says, you "just" need to send some text commands using a TCP connection. The commands are trivial. Doing this from Unity is unfortunately not trivial. Some pointers here: https://stackoverflow.com/questions/70328315/unity-tcp-client-connection-to-a-server And here's a complete "TCPTestClient" that may be of help: https://gist.github.com/danielbierwirth/0636650b005834204cb19ef5ae6ccedb And here's a video, in case you prefer that: https://www.youtube.com/watch?v=uh8XaC0Y5MA&list=PLXkn83W0QkfnqsK8I0RAz5AbUxfg3bOQ5 Mike
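     The linked material is C#/Unity-oriented; just to show the shape of the exchange itself, here is a Python sketch. Treat the IP address, port and command strings as assumptions to verify against the control protocol chapter of the WATCHOUT manual:

     ```python
     import socket

     # Assumed production computer address and control port -- check the manual.
     HOST, PORT = "192.168.0.10", 3040

     def send(sock: socket.socket, command: str) -> None:
         sock.sendall((command + "\r").encode("ascii"))   # commands end with CR

     with socket.create_connection((HOST, PORT), timeout=5) as sock:
         send(sock, "authenticate 1")   # assumed to be required before other commands
         send(sock, "run")              # start the main timeline
         print(sock.recv(1024).decode("ascii", errors="replace"))
     ```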
  16. I found this rather good description of the E2 protocol: https://barcoprdwebsitefs.azureedge.net/barcoprdfs/Data/secure/downloads/tde/Active/ProductFiles/TechnicalNotes/TDE11446_00_Note.pdf?ZgdW9xEUnXbBaoX2XvdFFaTmdq9uJZj-PbbCrUZKZJkf1uCfmewLNAQt_6fm93cMIz6S8F9uOgeLCKt6RC7gdtXhP6bu The E2 expects an HTTP POST request with JSON data. WO can only do a plain TCP command. So you would have to fashion a TCP package that mimics an HTTP POST (likely including the Content-Length header). I'm still not sure that would actually work, but it may be worth trying. Another option is to stick something in between WO and E2 that's comfortable dealing with both, such as PIXILAB Blocks. If this is really all you need to accomplish in terms of "control", you could perhaps have something whipped up in python, node.js, or a similar lightweight tool to act as an intermediary. Mike
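     To make the "mimic an HTTP POST" idea concrete, here is a Python sketch that builds the complete byte sequence such a raw TCP sender would have to emit, Content-Length included. The port and the JSON-RPC body are assumptions to be checked against the protocol note linked above:

     ```python
     import json
     import socket

     # Assumed E2 address, port and JSON-RPC command -- verify against the note.
     HOST, PORT = "192.168.0.20", 9999
     body = json.dumps({"jsonrpc": "2.0", "method": "allTrans", "params": {}, "id": 1})

     request = (
         f"POST / HTTP/1.1\r\n"
         f"Host: {HOST}\r\n"
         "Content-Type: application/json\r\n"
         f"Content-Length: {len(body)}\r\n"
         "Connection: close\r\n"
         "\r\n"
         f"{body}"
     )

     with socket.create_connection((HOST, PORT), timeout=5) as sock:
         sock.sendall(request.encode("ascii"))
         print(sock.recv(4096).decode("ascii", errors="replace"))
     ```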
  17. Well, that wasn't the case when I did the free-running and looping features. Although video is free-running in relation to the cue, it's still locked to the internal timebase of WATCHOUT. All computers in a cluster are locked to that same timebase, controlled by the cluster master. So they should not drift apart over time. At least that was the way it worked back then. Mike
  18. OK, it seems from Miro's description that it should be able to do 8k width, as long as the total pixel count stays within limits (e.g., the 8192 x 1024 x 60 limit mentioned by Miro). Anyway, the playback device being used in this particular case was a WATCHPAX 4, which may not allow even that, as it wasn't in your list of devices above. Mike
  19. OK, so they all can play video up to 8k width then. We recently had to downscale such a wide H.264 video to below 4k to get it to work. Before doing that, the video output was just black, with no error or warning message being displayed. Mike
  20. Which WATCHPAX/WATCHMAX models support 8k video, at least in terms of width? Mike
  21. You don't "calibrate the OBJ files" using the built-in calibration function, you calibrate the projector's actual position in 3D space. So, the simple answer is NO. Mike
  22. If memory serves me correctly, the WOB file contains additional metadata that isn't easily created manually. So I think you're out of luck attempting to synthesize such a file. Mike
  23. No. Besides, WATCHNET is no longer available as far as I can tell. Mike
  24. There's nothing built into WATCHOUT (WO) to do this. In most typical cases, WO plays video that has been pre-loaded using the WO production software. It sounds, from your request, as though videos will be provided in a more "dynamic" manner – not using the WO production software. E.g., you want to upload them through a web page and then have them appear through WO. WO could then pull such a video using a URL Video Proxy pointing to the server holding the video file. However, such videos would need to have the same aspect ratio (and preferably the same resolution), since WO won't adapt dynamically to the actual resolution of each video. You would also need an external control system to manage such uploading of video content, as well as possibly to control WO to then play the video. PIXILAB Blocks could provide such control system capabilities, allowing you to upload video as well as controlling WO for playback. Mike
  25. I don't think WATCHNET can do this, since there are no incoming API endpoints of this kind there AFAIK. I do know that PIXILAB Blocks can handle this, though, and it has full control capabilities for WATCHOUT. Blocks could even act as the web server providing those six buttons over the internet, if so desired. Or use an external web server, connecting to an API endpoint in Blocks. Another option, if Blocks is too much for what you need here, would be to have a VPN connection between the web server and the network where WATCHOUT lives, and then have the web server send the regular WATCHOUT control protocol commands through that VPN tunnel. How to do so depends on the web server being used and how the VPN is configured. Mike