I hope to also have a diagram soon! -fenn
NOTE: in this document, "module" doesn't mean a kernel module, just a hunk of dynamically linked code with a well-defined, standard interface. You may say "why do I need to make xxx dynamically linkable if it will always be there?" Well, that's the point of a modular system, silly! All the modules behave the same way, so you should only have to write the code once for all of them.
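To make the "write the code once" idea concrete, here is a minimal sketch of what a uniform module interface could look like. All names here (Module, Recorder, handle, emit) are hypothetical, just to show that the framework never needs to care which module it is talking to:

```python
class Module:
    """Base class: the one standard interface every module implements."""
    def __init__(self, name):
        self.name = name
        self.downstream = None      # next module in the chain, if any

    def handle(self, cmd):
        """Process one incoming command; default behavior is pass-through."""
        self.emit(cmd)

    def emit(self, cmd):
        """Hand a command to the next module down the chain."""
        if self.downstream is not None:
            self.downstream.handle(cmd)


class Recorder(Module):
    """Example module that just records whatever flows past it."""
    def __init__(self, name):
        super().__init__(name)
        self.seen = []

    def handle(self, cmd):
        self.seen.append(cmd)
        self.emit(cmd)
```

Because every module looks the same from the outside, inserting or removing one is just rewiring a `downstream` pointer.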
Keep in mind the possible modes a machine may be operating in:
I would love to have a single config file/process to set the configuration of how all these modules are linked together, what constant parameters get set, etc. Ideally this would all be a mirror image of halconf, but in user space. Instead of pins and signals, you'd be routing input and output messages.
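As a sketch of what that single config file might look like, here is a hypothetical user-space format that mirrors halconf's "net" lines, but routes messages between modules instead of pins and signals. The syntax (loadmodule/net/setparam) is invented for illustration:

```python
# Hypothetical user-space config, mirroring halconf but routing
# input/output messages between modules instead of pins and signals.
CONFIG = """
loadmodule interp
loadmodule motion
net gcode-stream interp.out => motion.in
setparam motion.max-accel 50.0
"""

def parse_config(text):
    """Collect modules to load, message routes, and constant parameters."""
    modules, routes, params = [], [], {}
    for line in text.strip().splitlines():
        words = line.split()
        if words[0] == "loadmodule":
            modules.append(words[1])
        elif words[0] == "net":
            # net <signal-name> <source> => <sink>
            routes.append((words[1], words[2], words[4]))
        elif words[0] == "setparam":
            params[words[1]] = float(words[2])
    return modules, routes, params
```

The point is that one declarative file both loads the modules and wires their message streams together, just like halconf does for HAL pins.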
Here are some different ways of dealing with input and output data for each module. Keep in mind that this is for commands, like Gcodes, not for stuff like "abort" or "buffer low".
option a) "pull": all modules have a static FIFO buffer as the input queue, and no buffer for output. When the buffer drops below a certain value (v=size of the buffer minus the largest expected command chunk), it requests the next higher-up module to squirt some more commands into the buffer. result: the buffer is usually slightly more full than v.
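A minimal sketch of option (a), with the refill callback standing in for the next higher-up module (all names hypothetical):

```python
from collections import deque

class PullQueue:
    """Option (a): fixed-size input FIFO that asks upstream for more
    commands when it drops below the low-water mark
    v = size - largest_expected_chunk."""
    def __init__(self, size, largest_chunk, refill):
        self.size = size
        self.v = size - largest_chunk   # low-water mark
        self.q = deque()
        self.refill = refill            # upstream callback: squirts in more

    def free_slots(self):
        return self.size - len(self.q)

    def get(self):
        cmd = self.q.popleft()
        if len(self.q) < self.v:
            self.refill(self)           # buffer low: request more commands
        return cmd
```

After a refill the queue is full; it then drains down to just below v before the next refill fires, which is why the buffer is usually slightly more full than v.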
option b) "push": all modules have a static FIFO buffer as the input queue, and no buffer for output. The module calculates how many lower-level commands will be generated by the current input command. If the number of generated commands exceeds the capacity of the next module's input buffer, it waits until the next module's buffer has enough empty slots to accept the expanded command. result: increased overhead, but more efficient memory usage since the buffer is always almost full.
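Option (b) can be sketched like this; the expansion rule (one command becoming four segments) is a made-up stand-in:

```python
from collections import deque

class InputBuffer:
    """A module's static FIFO input queue (option b sketch)."""
    def __init__(self, size):
        self.size = size
        self.q = deque()

    def free_slots(self):
        return self.size - len(self.q)


def expand(cmd):
    # Stand-in expansion rule: one arc command becomes four line segments.
    return [f"{cmd}/seg{i}" for i in range(4)]


def try_push(cmd, downstream):
    """Forward cmd's full expansion only if it fits in the downstream
    buffer; otherwise signal the caller to wait until slots free up."""
    out = expand(cmd)
    if len(out) > downstream.free_slots():
        return False                    # would overflow: wait and retry
    downstream.q.extend(out)
    return True
```

The counting-before-pushing step is the extra overhead; the payoff is that the downstream buffer can sit almost full without ever overflowing.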
option c) daemon: all modules use dynamically allocated input queues. Total memory usage is monitored by a separate control process. When memory is near full, the highest-level module is instructed to wait until some memory is freed. Pros: memory is efficiently used, at a low processing overhead. Cons: requires a separate process to monitor memory usage, must determine which module is highest-level, and the code is more complex.
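The daemon idea reduces to a watermark check; here is a sketch where a plain object stands in for the separate control process (the 'limit' and the pause flag are hypothetical details):

```python
class MemoryMonitor:
    """Option (c) sketch: tracks total queue memory across all modules;
    near the limit, it tells the highest-level module to pause until
    some memory is freed again."""
    def __init__(self, limit):
        self.limit = limit
        self.used = 0
        self.top_paused = False         # throttle flag for top module

    def alloc(self, nbytes):
        """Called when any module grows its input queue."""
        self.used += nbytes
        if self.used >= self.limit:
            self.top_paused = True      # stop feeding the chain

    def free(self, nbytes):
        """Called when any module drains its input queue."""
        self.used -= nbytes
        if self.used < self.limit:
            self.top_paused = False     # resume the top module
```

Only the top of the chain is throttled because everything below it drains naturally once new input stops.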
All modules plug into the messaging layer "backplane" for communication (errors, buffer low, config during initialization), but the bulk data is exchanged directly between them. <- is this method better than running everything through the messaging layer? You will have to use message layer wrappers anyway to get data across a network, but is it significantly more overhead to use the messaging layer for each step in the chain instead of accessing the next module's input queue/output function directly?
(Finally) The command chain. This is the path data takes, from top to bottom. Higher level commands get turned into several lower level commands at each step of the chain, until eventually becoming canonical commands. (And I mean really really canonical. They will do one function call max.) Commands are passed forward unmodified if the module has no rules defined for what to do with them. Modules can be inserted into or removed from the chain to suit the configuration.
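The expand-or-pass-through behavior of a link in the chain can be sketched as follows; the command names and expansion rule are made up for illustration:

```python
class ChainModule:
    """One link in the command chain: expands commands it has rules for,
    passes everything else through unmodified."""
    def __init__(self, rules, downstream=None):
        self.rules = rules              # command name -> expansion function
        self.downstream = downstream

    def handle(self, cmd):
        name = cmd.split()[0]
        if name in self.rules:
            for lower in self.rules[name](cmd):
                self.forward(lower)     # one command became several
        else:
            self.forward(cmd)           # no rule: pass through unmodified

    def forward(self, cmd):
        if self.downstream is not None:
            self.downstream.handle(cmd)


class Canonical:
    """Bottom of the chain: collects the fully canonical commands."""
    def __init__(self):
        self.out = []

    def handle(self, cmd):
        self.out.append(cmd)
```

Inserting or removing a link is just re-pointing `downstream`, which is what lets the chain be reshuffled to suit the configuration.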
Links in the chain:
Questions: painfully obvious q/a with myself
A5: (stolen shamelessly) Many programs use several processes; it's often a necessary design if you want to avoid threads. The way to fix the problem is not to avoid using several processes, but to create a higher-level mechanism that knows which processes belong together and shows them in a user-friendly way.