Benoit

Dataton Partner

Posts posted by Benoit

  1. Hi,

    I think the easiest would be to create the displays of the big room in one area of the Watchout stage with a traditional soft edge.

    In two other areas of the stage, you can create the displays for the two smaller venues: three big virtual displays for the duplicated parts, plus the additional projectors. The virtual displays show the content of the real displays in the full-room stage area.

    I think what you want to achieve is for most of the wall to use the default soft edge you set for the full room, with only a small part displaying on top of the edge blend. To do this you could use two timelines: one that always displays on top, the other that displays above the soft edge, and crop your virtual display media to define the limit where you want the soft edge or not…

    I know it's not very clear, but feel free to call me if you want to discuss it.

    Best,

    Benoit

  2. Hello Kevin,

    You can use a 3D .obj file with a specific UV map to warp your source image as you want. I've never tried it with a curved LED screen, but it works to transform equirectangular to fisheye, or a cylindrical image to fisheye, for dome projections.

    While standard OBJ assets for traditional transformations can be found online, I'm afraid that for such a specific one you will have to create your own .obj file.
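
    If you go that route, here is a minimal Python sketch of the idea (the grid resolution and the identity warp() are placeholder assumptions, not a ready-made mapping for your screen): it writes a flat grid mesh as an .obj where each vertex's UV coordinate is run through a warp function.

    # Sketch: generate a flat grid .obj whose UVs apply a custom warp.
    # warp() is a placeholder identity; replace it with the remapping
    # your curved LED screen needs (these values are assumptions).

    def warp(u, v):
        return u, v  # identity for illustration

    def write_warp_obj(path, cols=64, rows=64):
        with open(path, "w") as f:
            # Vertices: a flat plane spanning -1..1 in X/Y at Z=0.
            for j in range(rows + 1):
                for i in range(cols + 1):
                    x = 2.0 * i / cols - 1.0
                    y = 2.0 * j / rows - 1.0
                    f.write(f"v {x:.6f} {y:.6f} 0.0\n")
            # Texture coordinates: one per vertex, run through the warp.
            for j in range(rows + 1):
                for i in range(cols + 1):
                    u, v = warp(i / cols, j / rows)
                    f.write(f"vt {u:.6f} {v:.6f}\n")
            # Faces: two triangles per grid cell (OBJ indices are 1-based).
            for j in range(rows):
                for i in range(cols):
                    a = j * (cols + 1) + i + 1      # bottom-left
                    b = a + 1                       # bottom-right
                    c = a + cols + 1                # top-left
                    d = c + 1                       # top-right
                    f.write(f"f {a}/{a} {b}/{b} {d}/{d}\n")
                    f.write(f"f {a}/{a} {d}/{d} {c}/{c}\n")

    write_warp_obj("warp_grid.obj")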

    Benoit

  3. Hi Alex,

    As far as I know, chunks matter for playback, not encoding. The FFmpeg doc states you should not use more chunks than you have cores; I have never had any issues with more, but respecting that is probably best for performance. My experience is that when HAP doesn't play smoothly, you add more chunks (say, up to 8) and your problem is solved.
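
    As a small illustration of that rule of thumb, here is a Python sketch (the file names and the cap of 8 are assumptions) that caps the chunk count at the machine's core count before encoding:

    # Sketch: pick a HAP chunk count capped at the core count, then encode.
    import multiprocessing
    import subprocess

    cores = multiprocessing.cpu_count()
    chunks = min(8, cores)  # FFmpeg advises no more chunks than cores

    subprocess.run([
        "ffmpeg", "-i", "input.mov",
        "-an",                   # drop audio
        "-c:v", "hap",           # HAP codec
        "-format", "hap",        # HAP variant (hap / hap_alpha / hap_q)
        "-chunks", str(chunks),  # split each frame for multi-core decode
        "input_HAP.mov",
    ], check=True)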

    Wish you all a happy new year.

    Best,

    Benoit

  4. Hi Alex,

    For HAP, depending on the resolution, I suggest adding chunks:

    ffmpeg -i "%file%" -an -c:v hap -format hap -chunks 4 "%n_name%_HAP.mov"

    Chunks allow the CPU to use multiple cores for a single video stream.

    My personal experience is 4 chunks for 4K and above.

    Can you explain what the "-dn" option is for? I've never used it before... and I'm not sure I understand the manual.

    Thanks,

    Benoit

    Strange, on what hardware do you have this issue? Watchout writes the settings somewhere and is not supposed to lose them…

    Yes, increasing the buffer size might help reduce audio pops. Basically, there is a buffer of data between the application and the real audio output.

    The smaller it is, the less delay there is between the app and the real audio output. That is good in a DAW, where you're a musician and the system's delay might conflict with real instruments or other sound processors.

    For audio playback, delay doesn't matter: a "big" buffer is something like 4096 samples, while there are typically 48,000 samples per second. I'll let you do the math below, but you may delay the audio by 1 or 2 video frames… Not noticeable.
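
    Spelling that math out (the 25 fps frame rate is just an assumed example):

    # Worked example: latency added by the audio buffer.
    buffer_samples = 4096
    sample_rate = 48_000   # samples per second
    fps = 25               # assumed video frame rate

    latency = buffer_samples / sample_rate   # ~0.085 s
    frames = latency * fps                   # ~2.1 video frames
    print(f"{latency * 1000:.1f} ms ≈ {frames:.1f} frames")  # 85.3 ms ≈ 2.1 frames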

    On the other hand, a small buffer requires the app to refill it more often, and when you have other highly CPU-intensive things to do, your system might not manage to do it as often as required. It's much easier for a computer to refill a bigger buffer less often.

    Short answer: for Watchout, set the audio buffer to the maximum possible value; there is no visible downside.

    Hope this helps.

    Benoit

  6. Hi Matthew,

    Nice trick!

    I've already tried modifying some options directly in the .watch file to enable unavailable features, with more or less success, but your trick is way easier.

    Regarding controlling the composition position from an external input, from what I remember of a discussion with the dev team (a long time ago), the problem is that there is no way to be sure all elements of the composition move in sync together; I don't remember if that's on the same display or across several displays…

    Now for your application it’s probably not a real problem.

    Best,

    Benoit

     

  7. You can always use the control protocol.

    In the Preferences menu, Control tab, check "Production computer control UDP".

    Create a string output with the following settings: Address: 127.0.0.1, Port number: 3040, Protocol: UDP.

    Then you can drop this output in a timeline and write "enableLayerCond 1$0D" in the data to send.

    So when you play the timeline, the cue makes the production software talk to itself and enable condition 1.
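
    For reference, here is a minimal Python sketch of the same command sent over UDP; 127.0.0.1 assumes it runs on the production computer itself, and the port and command string are the ones from the settings above ($0D is a carriage return):

    # Minimal sketch: send a Watchout production-computer command over UDP.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"enableLayerCond 1\r", ("127.0.0.1", 3040))  # \r == $0D
    sock.close()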

    Hope this helps.

    Benoit

  8. 21 hours ago, diskin said:

    According to your explanation, this is the only solution to the problem.
    Can you recommend projectors you work with and what are the exact commands for dimming?

     

    Most of the Panasonic and Epson projectors can be controlled via Art-Net. I have personally tested the Panasonic PT-FRQ50, on which the light source can be dimmed very smoothly and reacts incredibly fast, and the Epson EB-PU1008, which also reacts very fast but can't be switched off completely.

    Other brands and models can work too.

  9. Hi diskin,

    Some laser projector models can have their laser power dimmed over Art-Net; it reacts quickly, letting you adjust the projector's luminosity to your content, even to replace fade-ins and fade-outs.

    We had some pretty good results in dark setups with content mixing both ultra-bright and dark scenes.
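
    If you want to drive this without a lighting desk, here is a minimal Python sketch of a single ArtDmx packet (the IP address, universe, and the channel used for laser power are assumptions; check your projector's Art-Net channel map):

    # Minimal sketch: send one ArtDmx packet to set a projector's laser power.
    import socket

    def send_dmx(ip, universe, channels):
        data = bytes(channels)
        if len(data) % 2:                     # ArtDmx data length must be even
            data += b"\x00"
        packet = (
            b"Art-Net\x00"                    # packet ID
            + (0x5000).to_bytes(2, "little")  # OpCode: ArtDmx
            + (14).to_bytes(2, "big")         # protocol version
            + bytes([0, 0])                   # sequence (0 = off), physical
            + universe.to_bytes(2, "little")  # 15-bit port address
            + len(data).to_bytes(2, "big")    # DMX data length
            + data
        )
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(packet, (ip, 6454))       # 6454 = standard Art-Net port
        sock.close()

    # Example: channel 1 on universe 0 at 50% (values are placeholders).
    send_dmx("192.168.1.50", universe=0, channels=[128])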

    Benoit

  10. Hi Rainer,

    Strange issue...

    I've already seen NDI streams stuttering when using the UYVY pixel format; switching to BGRA solved the issue in my case. I think the NDI sender was configured for BGRA, and asking it for UYVY was too much work for it.

    But it's strange that you also see the issue with the live input.

    Wishing you good luck.

    Benoit
