Everything posted by Mike Fahl

  1. Have you accidentally applied a blend mode, such as Add or Screen, to the video? Mike
  2. In order for WATCHOUT production software to be able to wake up a display computer, it must have been used to remotely shut down said display computer at least once. Doing so makes the production PC remember the MAC address of the display computer, which is needed to wake it up again at some future moment. Mike
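For the curious, the wake-up itself relies on the standard Wake-on-LAN mechanism: a "magic packet" of six 0xFF bytes followed by the target's 6-byte MAC address repeated 16 times, broadcast over UDP. A minimal sketch in Python of the generic technique (the MAC address is a made-up placeholder, and this is not WATCHOUT's own code):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Standard Wake-on-LAN magic packet: 6 x 0xFF followed by
    the 6-byte MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet; the target NIC must have WoL enabled."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))

# Usage (hypothetical MAC of the display computer):
# wake("00:11:22:33:44:55")
```

This also explains why the production PC must have seen the display computer at least once: without the stored MAC address there is nothing to put in the packet.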
  3. Did you try from a simple telnet client, on a Mac or a PC? What was the outcome? Mike
  4. The message 'failed to deliver command' indicates that no one is listening for TCP traffic on the specified port and IP address. Try with a telnet client first, just to make sure you can connect. Also, are you sure it's TCP, and not UDP? Mike
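If you don't have a telnet client handy, the same reachability check can be sketched in a few lines of Python (the host and port in the usage comment are hypothetical examples, not confirmed WATCHOUT values):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if something accepts a TCP connection at host:port,
    mimicking a quick telnet reachability check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholder address and port):
# print(can_connect("192.168.0.10", 3040))
```

If this returns False, no command protocol will work either, so fix the listener or address first.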
  5. You can send MIDI Controller and MIDI Note data to WATCHOUT through the operating system's MIDI port. WATCHOUT also supports MIDI Show Control. Whether such data comes from a program or an external device makes no difference. Mike
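For reference, MIDI Controller and Note messages have a simple wire format defined by the MIDI specification. A sketch of building the raw bytes (actually sending them requires a MIDI library or device, which is omitted here):

```python
def control_change(channel: int, controller: int, value: int) -> bytes:
    """MIDI Control Change: status byte 0xB0 | channel,
    then controller number and value (each 0-127)."""
    assert 0 <= channel < 16 and 0 <= controller < 128 and 0 <= value < 128
    return bytes([0xB0 | channel, controller, value])

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """MIDI Note On: status byte 0x90 | channel,
    then note number and velocity (each 0-127)."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])
```

Whether these three bytes come from a hardware controller or a program, the receiving end sees the same thing, which is why the source makes no difference.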
  6. A nifty method is to use Photoshop on a second monitor (full screen). You can set up Photoshop to render the image being edited full screen on a secondary monitor. Then bring that secondary monitor into WATCHOUT either using a capture card (full framerate, but expensive) or using "Remote Computer" (not full framerate, but cheap). You can then "paint" a mask more or less in real time on that monitor, while seeing the result projected through WATCHOUT. Once you have the mask, you can use it in WATCHOUT through the "masked by layer above" feature. If you want to mask just a video, this is straightforward. If you want to mask a more complex timeline, build the timeline as a composition, then mask the composition. This video discusses using a Cintiq pen display, but the same technique should work regardless of monitor type: Mike
  7. I believe that problem was fixed in WO6 (6.0, as far as I can recall). WATCHOUT should now use the resolution as specified in the Live Video media item, rather than relying on the source feed having the proper resolution (which it may not have if the feed isn't there from the outset). So you may want to double-check that the "Dimensions" setting matches your feed, as described under "Live Video Settings" here: http://academy.dataton.com/wo6/Menus.xhtml#Anchor Mike
  8. Note that use of embedded audio in the video file isn't supported when playing at speeds other than 100%. Mike
  9. While Miro's solution is very good, it is more taxing for the hardware, so make sure you re-test before going on site, to make sure everything still plays as expected. Another trick to improve this a bit, if you're rendering through a virtual display, is to make the virtual display two pixels wider and taller than the content you're playing on the surface. This trick assumes the content is just played as is, and doesn't move about or extend outside the virtual display. Place the content centered on the virtual display, leaving a 1 pixel "transparent" border of pixels all around. This should take off the jaggies, at the expense of the image not extending all the way to the edge of the 3D geometry (due to this "one pixel off" trick). Since the edge pixels are transparent, you'll get antialiasing between those and the edge pixels of the content. Mike
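The one-pixel-border trick amounts to padding the content with a transparent frame. A minimal illustration in Python, using a plain list-of-rows RGBA representation (in practice you'd add the border in your compositing tool; this just shows the arithmetic):

```python
TRANSPARENT = (0, 0, 0, 0)  # fully transparent RGBA pixel

def pad_transparent(pixels):
    """pixels: rectangular list of rows of RGBA tuples. Returns the same
    image with a 1-pixel transparent border, so a w x h image becomes
    (w+2) x (h+2) - matching the oversized virtual display described above."""
    width = len(pixels[0])
    blank_row = [TRANSPARENT] * (width + 2)
    padded = [blank_row[:]]
    for row in pixels:
        padded.append([TRANSPARENT] + list(row) + [TRANSPARENT])
    padded.append(blank_row[:])
    return padded
```

The renderer then interpolates between the transparent border and the outermost content pixels, which is where the antialiasing comes from.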
  10. I believe it does still support "animations", i.e., images that change over time. I once did a live PONG game that played quite well through the dynamic image server, although the framerate will often be lower than what you usually get from Flash played in a browser. This is particularly the case if the image is "complex" in terms of its content, colors, gradients, etc. (that PONG game was all black and white, for this very reason). Mike
  11. Note that since WO6 you can assign entire timelines and compositions to tiers; not just layers. This may help with keeping such situations manageable. Mike
  12. Take a screenshot of what your screen looks like just before you "add the point" that makes the production software crash. Submit that screenshot, along with the above-mentioned dmp file and a copy of the show file saved JUST BEFORE you add that point. This should allow the problem, if any, to be reproduced and investigated. Send to support@dataton.com Mike
  13. Are you referring mainly to the edge of the image? Do you apply the image directly to the 3D geometry, or through a virtual display? Mike
  14. X-Keys seems to be able to run a program when a key is pressed. Assuming you can pass command line parameters to such a program, you can make it run something like netcat or (perhaps even more convenient) the udpsend sub-command of swiss-file-knife. The udpsend sub-command can send a string with both ascii characters as well as other data specified as hex bytes. Seems quite useful for this kind of purpose. More details here: http://stahlworks.com/dev/index.php?tool=udpsend
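What udpsend does can also be sketched in Python: build a payload mixing ASCII text and raw hex bytes, and fire it off as a single UDP datagram (the host and port in the usage comment are made-up examples):

```python
import socket

def mixed_payload(text: str, hex_bytes: str) -> bytes:
    """Concatenate an ASCII string and raw bytes given as hex,
    e.g. a command string followed by CR LF ('0d0a')."""
    return text.encode("ascii") + bytes.fromhex(hex_bytes)

def udp_send(payload: bytes, host: str, port: int) -> None:
    """Send the payload as one UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# Usage (placeholder address, port, and command string):
# udp_send(mixed_payload("run mytimeline", "0d0a"), "192.168.0.10", 3040)
```

A key-triggered program that does nothing but this is all X-Keys would need to launch.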
  15. I don't understand your question "I heard that maybe z-depth would allow correction for this and manually convergence the imagery in WO?". Please clarify. Also, I don't understand what you mean by "US wall and the DS wall", and how the walls are arranged and projected onto. A drawing of the room, with walls and projectors, would help.
  16. If it's a flat wall, then, yes, it may be simpler to just use the perspective and/or geometry correction with a 2D projector.
  17. The official name is -ShowsPath, as stated in the docs. I believe -ShowPath is still also supported for historical reasons, but it should be considered deprecated.
  18. This command line option, and how to use it, is documented in the WATCHOUT 6 User's Guide. http://academy.dataton.com/wo6/Command%20Line%20Options.xhtml#toc_marker-15-1 Mike
  19. You can put both models in the same 3D file. They will then maintain their relationship when brought into WATCHOUT, while still being texturable separately. Alternatively, export them as two separate models after positioning them as desired in the 3D program, use each model's origin as its anchor point in WATCHOUT, and place those anchor points at the same stage position. This essentially does the same thing, but with separate files.
  20. MSC can be used to trigger Aux timelines, where you can use different cue lists to address different timelines. By putting control cues on those aux timelines, you can make them do pretty much anything you like.
  21. While I'm out of the loop these days, I do have some insights into how compositions work in general (as I wrote the code in the first place). In essence, a composition works exactly the same as if you took all the cues inside the composition and pasted them directly onto the enclosing timeline, replacing the cue that plays the composition. There should be no rendering overhead associated with playing a composition. That said, a composition does give you some additional capabilities:

      1. You can loop and/or free-run a composition (similar to how a video can be looped or free-run).
      2. You can transform the entire composition (position, rotate, scale) as a whole.

      Neither of these should have any noticeable impact on performance, since they have pretty much zero runtime overhead. I'll now try to answer some specific questions from above.

      > When I play files inside a composition the computer has a harder time.

      This sounds like a bug. It should make no difference whether the cues are in a composition or out on the enclosing main/aux timeline. Please provide a reproducible test case showing this to support.

      > I've also noticed that it's not possible to pre-load a composition

      Content inside compositions has automatic preloading, just like everything else in WATCHOUT. There's no manual preroll override on compositions (as there is on video cues), if that's what you mean.

      > Do compositions bring a performance hit?

      No. They should not.

      > Would they work better with a different codec?

      Which codecs you use is orthogonal to whether you use compositions or not.

      > Is it possible to hold on the last frame of a composition?

      No. A composition is not a video. It is a collection of cues bundled together.

      > Do the "Free Running" and "Loop" checkboxes of video files inside Comps actually do anything?

      I believe they work the same as for cues on the main/aux timeline, but I may be overlooking some detail here. It's been a while...

      > I assume that Compositions add complexity to the rendering pipeline.

      They do not.

      > I don't know whether Watchout pre-renders the elements

      It does not.

      > I don't know whether the dimensions specified for a Composition have any effect

      They do not. This is called the "Reference Frame", and it is just that: a frame of reference, shown as the rectangle in the Stage window that lets you select and move the composition on stage. This frame doesn't specify any pixel boundary; images in the composition can render outside it just as well.

      > I don't know whether unnecessarily large dimensions mean a performance hit

      They do not.

      > looping & free-run seems to work fine with one level of Compositions but not when they're nested

      That's correct. There are some limitations related to looping and free-running in nested compositions. I believe those options only affect the topmost composition level. Trying to wrap your mind around all the possible permutations here will make your head hurt. Don't even try.

      > As for pre-loading, ... the media still has to be pre-loaded.

      That's correct, as I mentioned above about automatic pre-loading.

      > I assume that Watchout is scanning ahead in the timeline pre-loading media in Compositions about to be displayed

      That's the same regardless of whether cues are in a composition or not. Automatic pre-loading starts about 3 seconds before the beginning of the cue unless overridden manually (available in video cues as the Preroll option).

      > is there any advantage in placing media in the Main Timeline as opposed to running from aux timelines

      No. Main and Aux timelines are rendered the same way.

      > Let's say I have a dozen compositions which run sequentially, is there any gain in performance in placing these on the Main Timeline, as opposed to placing them in an Aux Timeline for more flexibility?

      No. Mike
  22. I agree with Jim. Not seeing the IP address on the display computer screen means it is not seeing the network or getting an IP address. What does the "ipconfig" terminal command show? Open a "cmd" shell, type ipconfig, then copy the result and paste it here. This will tell us what your PC thinks about its network environment, which may give a clue.
  23. Sounds like a bug to me. Send a simple, reproducible test case demoing this problem to support@dataton.com, to be looked at and fixed. Mike
  24. If the problem persists even after quitting and re-starting WATCHOUT on that computer, and isn't resolved unless you reboot the computer, the problem most likely lies in the capture driver. Trying a different driver version may help.
  25. Not really. If possible, try to provide a pared-down minimum example show file with media that exhibits this problem to support for investigation. Blend modes shouldn't adversely affect performance, as they're all fairly simple.