Dataton Forum

Walter

Member
Content Count: 224

Everything posted by Walter

  1. To me it seems the issue is exactly as Morgan pointed out. As the files are free running and thus theoretically not starting at exactly the same time, the software is reading the exact same sectors on your hard disks, and I can imagine you're hitting a bottleneck right there, since this is the only factor that differs from running the same cues with different videos in them. If you still need to play these exact same videos, the solution is probably extremely easy: just make some copies of the original file and play those, so you're not reading the exact same location on your drive so many times. Could well be, right?
  2. Steve, so using virtual displays didn't help you here? I'd say it is the way to solve this for you...
  3. Steve, if I understand you correctly, you have media at a certain position on your projection surface, and you need to adjust that media plus any other media placed at the same position? Well, if that's the case, use a virtual display. You position all that media inside the virtual display and place the single virtual display cue in the correct position. Then all you have to do is adjust the corners of the virtual display and all your cues follow. Good luck!
  4. Your command is wrong, I'm afraid. I don't know it by heart, but I have it somewhere in my documents. If you search the forum you will find a previous post where the correct command is stated. Good luck! Walter
  5. Using a green key instead of alpha is one way, but for quality reasons I tend to render out a separate alpha mask version and use that as a mask above the video. (No chance of green artifacts, and also much better performance than using alpha in the video stream.)
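     If the delivered file already has an embedded alpha channel, one possible way to split it out into a separate mask file is ffmpeg's alphaextract filter; this is just a sketch, and the file names, codec and quality settings are placeholders to adapt to your own workflow:
         ffmpeg -i logo_with_alpha.mov -vf alphaextract -c:v mpeg2video -q:v 2 -an logo_mask.m2v
     That gives you a greyscale video of the alpha plane, which you can then place above the fill video as the mask.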
  6. Hey, I'm just thinking outside of the box for the OP here, no need to judge his chosen solution. If he somehow needs this particular workflow, then this is an option for his case. We're all techies here, we're born to solve issues. Sometimes you just gotta make it work, and hey, that's what just happened (a better option than "no, you can't").
  7. Question @JFK: would a running aux timeline also suffer from a main timeline jump? If not, a solution would be to start an aux timeline from the main timeline and keep it running until after the jump, thus covering the main during the jump?
  8. I suppose the above reactions are all correct, but what is your exact question? You can have as many displays "perform" as you like by disabling or not assigning the unused outputs of the display machine, BUT I don't think it's possible to then use those outputs for another application. If you start the display software it will open on all outputs, whether you use them within the production software or not. Unused displays will just remain black.
  9. I suppose that if the projector or display performs the alternating, you could just create a top-bottom / side-by-side signal in Watchout... If I stream a side-by-side signal to my TV, the signal sent to the TV simply consists of two squeezed images and isn't alternated; all of that is done by the TV. But of course this way the signals have to be rendered out for stereoscopic use beforehand and you can't use Watchout's stereoscopic features. Unless... you output regular passive 3D with all the features, take those two signals back into Watchout, squeeze them side by side, output a single "active" signal and let the display do the rest...
  10. Please note that you should have turned off the display server via production at least once in order to be able to start the server from production. At least, that's my experience.
  11. Just reprogram the EDID in the fiber transmitter...
  12. Miro, dare to share an E.T.A. on 6.1.1? Anytime before June 6th would be highly appreciated ;-)
  13. Rick, please note that 1920x1200 is perfectly possible as MPEG-2. Perhaps not directly from the editing program or AME or such, but using ffmpeg you should encounter no issues. Good luck, looking forward to your experience with the suggested solution(s). Walter
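      As a rough sketch of what such an encode could look like (the bitrate and file names below are just placeholders to adapt to your material, not a tested recipe):
          ffmpeg -i source_1920x1200.mov -c:v mpeg2video -b:v 60M -maxrate 60M -bufsize 30M -an output_1920x1200.m2v
      ffmpeg's mpeg2video encoder doesn't insist on broadcast raster sizes, which is exactly why it works where some editing tools refuse non-standard resolutions.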
  14. Tim, for such a situation and the required flexibility, why don't you use the live update mode? You can adjust a logo on a screen which is out of shot at that moment and it will adjust the position live without the need for performing a general update. I usually only use live update mode during programming because once the show starts I'd rather not perform any editing anymore. But for such a broadcast situation... Why not.
  15. Why don't you use the pre split feature?
  16. Well, let's start by you telling us more about your system's hardware configuration and the number of simultaneous feeds/streams used on your output. I do assume your display server got stuck? And why WMV?
  17. Hi there, quick question. Got files delivered in 5240x1200 resolution in HAP. Now, a 16-second fragment is 1.3 GB. If I convert it to MPEG-2 using ffmpeg at 100 Mbit (which should be sufficient for this resolution), the result is a 200 MB file. Both seem to play back with an equally good graphic result. I do expect multiple simultaneous streams. Tested this earlier with only HAP files with success (on a 4-SSD configuration, so bandwidth isn't an issue). Now, in terms of playback stability, which file format should I go for?
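      (The numbers add up, by the way: 100 Mbit/s x 16 s = 1600 Mbit ≈ 200 MB, whereas the HAP original works out to roughly 1.3 GB x 8 / 16 s ≈ 650 Mbit/s.)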
  18. Nope. Not possible. Only playback speed adjustable, no time-remapping.
  19. Simple: just create a white (or 15-20% grey) square with a 2-3 pixel edge feather that matches the size of the non-blended area. Add it to the timeline and position it exactly there. Adjust its opacity until the brightness (when projecting only black, of course) matches the blended area. Then project white and several primary colors and apply color correction if needed.
  20. Indeed, as jfk states. To be more specific, back then we finally discovered that something called jumbo packets had to be turned on or off to resolve this issue (a setting somewhere deep in the network card's menu). It might be that these settings somehow don't match the settings on the Mac? Try fiddling around at that level. Grtz Walter
  21. Hey guys, I need to add a lot of DMX channels as input control for an upcoming job. Creating each channel by hand (I need at least 160 channels) is tedious. Anyone care to share a copy-paste of an input list where such a number of inputs was used? As an added note, a feature request: it would be awesome if we could automate the creation of a certain number of DMX channels ;-) Thanks! Walter
  22. Hi there, I fail to agree with the recurring remark that sending DVI over a long distance and using emulators is an expensive choice and/or hard to install and unreliable. Compared to the projectors or a proper multi-screen processor, these costs are peanuts, which leaves me wondering what type of market you and the other pro-SDI techies are in? I have never experienced issues with fiber connections, or at least no more than with copper connections. The most important reason for not wanting to use SDI, apart from the fact that a computer doesn't natively support it (which isn't really an issue in most productions, as Watchout is fed into a background layer somewhere), is the fact that it only supports SMPTE resolutions. Going for native WUXGA, for instance, would be impossible. Just my two cents.
  23. Hi there. Apparently nobody active here is using it that way. May I ask why you would like to achieve this as opposed to using wp2, for instance? With regard to the capture card: for a PowerPoint presentation or such, yes, capturing via USB3 (or preferably Thunderbolt; use a TB hub if you need the outputs) will work. A live camera will lead to too much delay, to my knowledge.
  24. Hi there, I'm going to pass on graphics card advice for now, but I suppose your media drive will be 240 GB and not 120? Or are you using two 60 GB drives? One piece of personal advice: I recently found out that 240 GB is too small for 4K HAP projects ;-). Might wanna go 2 x 240 if you expect such projects.
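      To give a ballpark idea of why (rough figures, assuming standard HAP at about 4 bits per pixel and ignoring whatever its Snappy stage shaves off): 3840 x 2160 x 0.5 byte x 30 fps is roughly 120 MB/s, so around 7 GB per minute of 4K content. A few minutes of that, plus the rest of the show material, and a 240 GB drive is full before you know it.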
  25. Nice work Floris. Thanks for this update! Keep up the good work!