Can I control the processing order of linked tasks?

I’ve been tweaking my setup lately and ended up splitting/combining everything into 3 tasks. I have Task 1 linked to the HEVC MP4 profile; it does my cuts/conversions but does no renaming/metadata handling.
Task 1 dumps the result into another folder where I’ve got one task monitoring for TV shows and another task that monitors for movies. Both of these tasks do custom renaming and metadata lookup, and no conversion.

To simplify the question, let’s just say Task 1 processes the file first and then Task 2 picks it up and finishes with it.
I’ve set Task 2 to insert at the top of the queue, but when I dump in a bunch of items, I’d like them processed as:
Task 1
Task 2
Task 1
Task 2, etc.

Instead, because all the items are pulled in at the start, it goes:
Task 1 - File 1
Task 1 - File 2
Task 2 - File 1
Task 1 - File 3
Task 2 - File 2

I’ve been trying to figure out some way to put a delay before Task 1 starts, to give Task 2 a chance to see the file inserted at the top.
Hopefully that makes sense, but the point is I don’t want to wait for file 2 to complete the long conversion before file 1 completes.
I figured if I could introduce a delay of some sort after each task, it would give the scan time to pick up the new file, but the only thing I could think of was a delay in the post-processing task, and that does not seem to work.

You can just roll the Task 1 action (transcoding) into both of the renaming tasks. That’s what I do.

I have separate tasks for Movies, Shows, Sports, and Specials (the 4 types MCEBuddy can detect), each with different rename/destination folders.

Tip: Check Specials, as shows that don’t match a movie or show will end up there. They tend to be cases where the guide data is vague or just the show title. Some of the sub-channels that air old reruns do this, as do PBS shows, which usually aren’t in TheTVDB or MovieDB.

You need to make sure that only one task will ever fire for the input file.

In addition to the type selection, I have Channel filters for SD and HD channel sources that have different compression/quality values.

I’ve debated that, but I keep the renaming actions separate (with a different input folder) so that I can occasionally dump files directly there and have them only renamed, not re-encoded.
I could keep a separate copy of it, I suppose, but then it requires keeping two pairs up to date: two tasks for TV (one with transcoding) that need the same selection filters and the same metadata correction rules.
I also have post-processing triggers I’d need to maintain in multiple places the next time I upgrade, since those are not carried over. Post-processing in my case involves custom renaming with data that is available to the post-processing step, but not to the naming rules.

I don’t think there is a way to do what you want with your setup. The directory watchers run in parallel and just add tasks to the queues for each action task. The tasks also all run in parallel, in the sense that one task has no information about any other task. So the only way to “chain” things is exactly how you’ve done it - a watcher on Folder1 triggers Task1 and a watcher on Folder2 triggers Task2, and the last thing Task1 does is output the file into Folder2.

Either you will need to drop only one file at a time into the starting folder and hold back any more files until the final task finishes with it, or you could set up separate folders for each “flow”, with separate watchers and copies of the tasks for each watcher. That sounds like a hot mess to keep track of. So my suggestion is a batch “scheduler” that itself watches your “real” input media drop folder, moves files one at a time into the MCE “starting” folder, and blocks until the file is processed by the tasks. That will get tricky if there is ever a failure in the tasks, because you then need to 1) detect the failure at any point in the processing, and 2) figure out the recovery and restart process for that file.
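
As a rough sketch of that batch scheduler idea (the folder paths are placeholders, and it assumes the final task writes its output with the same base name as the input, which you’d need to verify against your renaming rules):

```python
import shutil
import time
from pathlib import Path

POLL_SECONDS = 30  # how often to re-check the folders

def wait_for_output(done_dir: Path, stem: str, poll: float = POLL_SECONDS) -> None:
    """Block until a file whose name starts with `stem` appears in done_dir."""
    while not any(p.stem.startswith(stem) for p in done_dir.iterdir()):
        time.sleep(poll)

def feed_one(drop_dir: Path, start_dir: Path, done_dir: Path,
             poll: float = POLL_SECONDS) -> bool:
    """Move the oldest file from drop_dir into the MCE starting folder, then
    block until the finished result lands in done_dir.
    Returns False if drop_dir was empty."""
    files = sorted(drop_dir.glob("*"))
    if not files:
        return False
    src = files[0]
    shutil.move(str(src), str(start_dir / src.name))  # hand one file to MCEBuddy
    wait_for_output(done_dir, src.stem, poll)         # wait for Task 2's output
    return True

# Intended usage (runs forever, so not executed here):
#   while True:
#       if not feed_one(Path("MediaDrop"), Path("MCE_Start"), Path("Finished")):
#           time.sleep(POLL_SECONDS)
```

Note this has no failure handling at all: if a conversion fails, `wait_for_output` waits forever, which is exactly the detect-and-recover problem described above.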

MCE Buddy just isn’t designed for a single linear workflow - especially with multi-core CPUs, GPUs, and multiple MCE Buddy engines. So that linear workflow will have to be gated and managed by you outside of MCE Buddy, I think. It’s certainly doable with marker files: your scheduler watches the same folders MCE Buddy does, and your post-processing drops marker files so the outside scheduler “knows” where a file is and what state MCE Buddy is in.

It might be something you can hire a freelance programmer/coder or computer science student to take on as a project. Maybe something for a high school or college student who thinks like an engineer and understands state machines. Essentially you are using empty files (they exist or they don’t) as markers to indicate and save state, using the filesystem as your state machine’s “memory”. That’s probably beyond a typical high schooler taking a “computers and/or programming” class, but you never know until you ask around. For a freelance programmer, there are “gig” sites that connect programmers with people who need something written to automate a task.
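
A minimal sketch of that marker-file idea (the `.taskN.done` naming convention here is made up for illustration; `mark` would be called from each task’s post-processing command, and the scheduler would call `state_of` to decide what to do next):

```python
from pathlib import Path

def mark(marker_dir: Path, media_name: str, stage: str) -> None:
    """Called from a task's post-processing step: drop an empty marker file,
    e.g. 'Show.ts.task1.done', to record that this stage finished."""
    (marker_dir / f"{media_name}.{stage}.done").touch()

def state_of(marker_dir: Path, media_name: str,
             stages=("task1", "task2")) -> str:
    """Return the last completed stage for a file, or 'pending' if none.
    The markers themselves are the state machine's memory."""
    done = "pending"
    for stage in stages:
        if (marker_dir / f"{media_name}.{stage}.done").exists():
            done = stage
    return done
```

The empty files are the whole database: if `Show.ts.task1.done` exists but `Show.ts.task2.done` doesn’t, the scheduler knows the file is somewhere between the two tasks (or stuck there, if it’s been too long).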