Description
Currently we handle the merging of raw and analyzed data in an independent step.
We could instead make this merge the duty of the principal node.
The idea would be to give ToEventStream nodes caches for the incoming documents.
When it came time for a particular document to be issued, it would be combined with the cached document.
This would guarantee that the cache was written before the data was issued from the FromEventStream node, preventing any nasty race conditions.
The event cache would be a single-value cache, so if data were skipped (filtered out), the next unfiltered data would still be correctly matched with its incoming event (see the sketch below).
This would not prevent additional data from being put into the system, since we could still use the existing combining nodes.
Overall this would simplify the pipeline creation process, since we wouldn't need any of the muxing code, and in most cases it would provide the correct answer.
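
A minimal sketch of the proposed behavior, assuming dict-based documents; the class name, methods, and merge logic here are hypothetical illustrations of the idea, not the existing ToEventStream/FromEventStream API:

```python
class ToEventStreamWithCache:
    """Hypothetical node that merges analyzed documents with a cached raw document.

    Holds a single-value cache: each incoming raw document overwrites the
    previous one, so if an analyzed result is filtered out of the pipeline,
    the next unfiltered result is still paired with the matching raw document.
    """

    def __init__(self):
        self._cached_raw = None  # single-value cache

    def cache_raw(self, raw_doc):
        # Called *before* the FromEventStream node issues the data into the
        # pipeline, so the cache is guaranteed to be populated by the time
        # the analyzed document arrives here (no race condition).
        self._cached_raw = raw_doc

    def emit(self, analyzed_doc):
        # Combine the cached raw document with the outgoing analyzed one.
        return {**(self._cached_raw or {}), **analyzed_doc}


node = ToEventStreamWithCache()

# Normal case: raw document cached, then merged with the analyzed result.
node.cache_raw({"uid": "raw-1", "motor": 1.0})
print(node.emit({"intensity": 42}))
# {'uid': 'raw-1', 'motor': 1.0, 'intensity': 42}

# Filtered case: raw-2's analysis is skipped, so raw-3 simply overwrites
# the cache and the next emitted result is matched with raw-3.
node.cache_raw({"uid": "raw-2", "motor": 2.0})  # analysis filtered out
node.cache_raw({"uid": "raw-3", "motor": 3.0})
print(node.emit({"intensity": 7}))
# {'uid': 'raw-3', 'motor': 3.0, 'intensity': 7}
```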