Dataton Forum

Mike Fahl

Member
  • Content Count
    647
  • Joined
  • Last visited

1 Follower

About Mike Fahl

  • Rank
     CTO, PIXILAB AB

Contact Methods

  • Website URL
    http://pixilab.se/

Profile Information

  • Gender
    Male
  • Location
    Linköping, Sweden

Recent Profile Visitors

1,208 profile views
  1. I believe WO is still 32-bit only, so throwing more than 16 GB of RAM at it shouldn't make much difference. Just make sure the RAM is installed in an optimal way for the motherboard (some mobos benefit from parallel channels when modules are installed across all memory slots). Mike
  2. Sounds like you have some video content that causes WO to lock up. To figure out which one, remove the videos one by one, doing an "update" between each, until you find the offending item; then forward that file to Dataton. Such problems are sometimes caused by rogue codecs, but since you've just installed the machine according to the guidelines, that shouldn't be the case here.
  3. The Instagram API I used back then is no longer available, so this function wouldn't work now anyway. PIXILAB Blocks has built-in support for Instagram feeds, but even that has been quite constrained by the latest round of Instagram API restrictions. Mike
  4. As far as I recall, this can't be done in a WATCHNET script. In case you're not tied to WATCHNET for your application, PIXILAB Blocks provides similar control capabilities for WATCHOUT (plus a whole lot more), with more flexible programming that allows for the kind of functionality you're asking for here. In Blocks, scripts are called "tasks". Tasks are organized into groups, and tasks in a group can be set to be mutually exclusive, in which case starting another task will automatically terminate any previous task in that group – which sounds like exactly what you're asking for (see the sketch below). More about Blocks here (with a link to the manual at the bottom of the page): http://pixilab.se/blocks Mike
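     To illustrate the mutual-exclusion semantics described in the post above, here is a minimal TypeScript sketch. This is just a conceptual illustration, not the actual Blocks task API; all names here are made up.

        class ExclusiveTaskGroup {
            // The task currently running in this group, if any.
            private current?: AbortController;

            // Starting a new task automatically terminates any task
            // still running in this group.
            start(run: (signal: AbortSignal) => Promise<void>): void {
                this.current?.abort();
                const controller = new AbortController();
                this.current = controller;
                run(controller.signal).finally(() => {
                    if (this.current === controller)
                        this.current = undefined;  // finished on its own
                });
            }
        }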
  5. The reason you can't create them in comps is that they can't control the comp itself (since the comp is entirely governed by its enclosing timeline). Hence, to protect you from this potentially confusing situation, I decided not to allow control cues to be created in comps. However, as you noticed, you can "cheat" by pasting control cues into the comp. And if you're controlling another timeline (by name), this can actually be useful, as in this special case where you're telling timelines to STOP. It then also makes sense that such a control cue would NOT be applied to its enclosing timeline (this is the very reason control cues in comps weren't allowed in the first place). So a control cue in a comp that targets its governing timeline is therefore simply ignored, which, as you found, can be put to good use. If the behavior is still there, I doubt it will be removed (even though it is undocumented). And I did make it that way for a reason ;-). Mike
  6. I believe what I recalled was that you could make an aux timeline (or composition) with a bunch of control cues to kill timelines. This "bunch of control cues" can include the aux timeline that's firing the bunch (allowing you to reuse the same comp from all those timelines). If it DOES include the aux timeline that's firing the bunch, that cue will be ignored in this context. However, it was quite some years since I put that in, and I doubt anyone has ever used it. I have no idea whether it still works that way, but that was my idea at the time. Note that this is NOT a "shotgun" kill-all-but-me method. You still have to add an individual control cue for each target timeline in the bunch of timelines you need to deal with. This mechanism just saves you from having to create a separate set of control cues for each timeline including all EXCEPT the firing timeline, which would make the amount of duplication far worse. Mike
  7. I made some tests with bumping the priority of WP.EXE way back, but came to the conclusion at the time that increasing the priority of WP.EXE actually made things WORSE. There are a lot of things going on under the hood, which all need to share the same CPU. Bumping the priority of some tasks usually has a detrimental effect on others, resulting in an "unbalanced" system. Also, keep in mind that there are usually TWO processes related to the display computer: WATCHPOINT.EXE is really just the "watchdog" process (restarting the display software if it crashes), while WP.EXE is the actual player process. At least that was the case last I looked. Mike – http://pixilab.se
  8. Quim, I doubt that will give good performance. The whole idea with HAP is to use the GPU to do the final decoding step. I don't think WO5 exposes what's needed for a DS or QT codec to do its job efficiently. Mike – http://pixilab.se
  9. I don't believe "brain surgery" at that granularity is possible with WATCHNET (at least it wasn't when I wrote it). It is, however, possible in PIXILAB Blocks, which does what WATCHNET does and a whole lot more. Read more about Blocks on Dataton's website: https://www.dataton.com/press/dataton-appoints-pixilab-as-new-solution-provider or straight from the "horse's mouth" at http://pixilab.se/blocks Mike
  10. That seems correct. There are only 30 conditions, internally mapped to bits in a 32-bit word if I recall correctly. 1073741824 decimal corresponds to 0x40000000 hex, which leaves all 30 low bits zero. Nifty trick to turn all conditions OFF, rather than reverting to the settings in Preferences (which a zero here would do). Another, somewhat less cryptic, option would of course be to leave the Preference settings all turned OFF, in which case setting the value to 0 would indeed turn them all off. But if you want to keep both options, your "hack" seems valid. Mike – http://pixilab.se
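     A quick sketch of the arithmetic in TypeScript (assuming, as described above, that conditions 1-30 map to the low 30 bits of the value; the helper name is made up):

        // Bit for condition n (1-30) is bit n-1.
        const conditionBit = (n: number) => 1 << (n - 1);

        const cond1and3 = conditionBit(1) | conditionBit(3);  // 0b101 = 5: conditions 1 and 3 ON
        const allOff = 1 << 30;                               // 1073741824 = 0x40000000
        console.log(allOff === 0x40000000);                   // true: bit 30 set, all 30 condition bits zero
        console.log(allOff.toString(2));                      // "1000000000000000000000000000000"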
  11. Kinect is no longer available AFAIK. When you say "motion sensor", perhaps you're just talking about "someone moving in front of a sensor" in general, rather than tracking motion? If so, you may want to look at standard PIR sensors (infrared motion detectors). Although you can't connect such a sensor directly to WATCHOUT, you can do so through some kind of control system, such as PIXILAB Blocks. Mike
  12. Wouldn't the virtual display rendering delay depend on the order in which render targets are processed? I.e., whether a particular virtual display is rendered before or after another one? If virtual display A comes before virtual display B in the rendering sequence, A will have been updated before its content is potentially rendered into B, resulting in no delay between the two. However, if A comes after B in the rendering sequence, there will be a 1 WO frame delay between the two. Or am I missing something here? Assuming things work as I think they do, what's needed would be some way to control the rendering order, making this predictable. Perhaps Justin's idea of sorting them top/bottom and left/right would be a good starting point. Then one could place virtual displays at negative Y coordinates, in the order one prefers them to render. My 2c anyway... Mike - http://pixilab.se
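     A small TypeScript sketch of the sorting idea above (top/bottom, then left/right). This is just an illustration of the proposed ordering, not an existing WATCHOUT feature; the types and names are made up.

        interface VirtualDisplay { name: string; x: number; y: number; }

        // Sort render targets top-to-bottom, then left-to-right. Placing a virtual
        // display at a more negative Y makes it render earlier, so a display that
        // feeds another one avoids the extra one-frame delay.
        function renderOrder(displays: VirtualDisplay[]): VirtualDisplay[] {
            return [...displays].sort((a, b) => (a.y - b.y) || (a.x - b.x));
        }

        const order = renderOrder([
            { name: "B (shows A's content)", x: 0, y: 0 },
            { name: "A (source)", x: 0, y: -100 },  // negative Y => rendered first
        ]);
        console.log(order.map(d => d.name));  // ["A (source)", "B (shows A's content)"]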
  13. Stutter may come from many sources, for example:
     • Inadequate hardware to play the content (either in isolation or along with other content).
     • Mismatch in frame rate (e.g., 25 fps in; 60 fps out).
     The first point can only be fixed by having adequate hardware for the content at hand. The second point seems to be most of what you're concerned with here. Historically, this has always been a concern in WATCHOUT, and having the output (graphics card) frame rate be an even multiple of the source frame rate is always advantageous. For example, if WATCHOUT outputs 60 fps, using video that plays at 60 or 30 fps is optimal. If you play 25 fps video in this case, there will always be some "temporal aliasing" going on, which can be seen as stutter. The introduction of frame blending in WATCHOUT alleviates this to some extent by blending adjacent source video frames together when frame rates don't match, making the resulting video frame rate match the output rate. This often results in smoother perceived playback, but may also introduce some blurriness due to the frame blending itself.
     Finally, I see no real benefit in upsampling 30 fps to 60 fps when making the video files (regardless of codec). If the source material is 30 fps, you won't really gain anything by outputting two identical frames (at 60 fps) for every input frame; you're just wasting resources by playing back twice the amount of data without any advantage. If there's some processing involved, though (such as After Effects' vector-based frame blending), that may in some cases result in smoother playback, since you then synthesize the missing in-between frames. For some content this can give dramatic improvements in smoothness, while for other content it produces strange artifacts that just make things look worse. Of course, the smoothest results are achieved when playing back content shot at 60p with an output rate of 60 fps (or 50p at 50 fps if you're in PAL land). Mike - http://pixilab.se/
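     To see why 25 fps into 60 fps stutters, here is a small worked example in TypeScript (my own illustration of the "temporal aliasing" mentioned above): each 25 fps source frame ends up being shown for either 2 or 3 output frames, so motion advances unevenly.

        // How many output frames each source frame occupies when frame rates don't match.
        function cadence(sourceFps: number, outputFps: number, frames: number): number[] {
            const counts: number[] = [];
            for (let i = 0; i < frames; i++) {
                const start = Math.floor(i * outputFps / sourceFps);
                const end = Math.floor((i + 1) * outputFps / sourceFps);
                counts.push(end - start);
            }
            return counts;
        }

        console.log(cadence(25, 60, 5));  // [2, 2, 3, 2, 3] – uneven cadence, seen as stutter
        console.log(cadence(30, 60, 5));  // [2, 2, 2, 2, 2] – even cadence, smooth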
  14. Are you setting the proper resolution in Windows or AMD's control panel before going online? If not, WATCHOUT will try to switch the resolution (along with the frequency). When I did this part of WATCHOUT way back, I made sure that if the resolution was set properly before going online, WATCHOUT would not attempt to make any changes at all to the display config. That may have changed since, though. Mike - http://pixilab.se/