CommandChain

LinuxCNCKnowledgeBase | RecentChanges | PageIndex | Preferences | LinuxCNC.org


Ray Henry made a diagram of what he was thinking at: http://www.linuxcnc.org/dropbox/tasker.pdf

I hope to also have a diagram soon! -fenn

NOTE: in this document, "module" doesn't mean a kernel module, just a hunk of dynamically linked code with a well-defined, standard interface. You may ask "why do I need to make xxx dynamically linkable if it will always be there?" Well, that's the point of a modular system, silly! All the modules behave the same way, so you only have to write the interface code once for all of them.

keep in mind - possible modes a machine may be operating in:

1) machining a part
2) homing
3) changing tools
4) teach-in (welding bot)
5) probing

I would love to have a single config file/process to set the configuration of how all these modules are linked together, what constant parameters get set, etc. Ideally this would all be a mirror image of halconf, but in user space. Instead of pins and signals, you'd be routing input and output messages.

Here are some different ways of dealing with input and output data for each module. Keep in mind that this is for commands, like G-codes, not for out-of-band stuff like "abort" or "buffer low".

option a) "pull": each module has a static FIFO buffer as its input queue, and no buffer for output. When the buffer's fill level drops below a threshold v (v = size of the buffer minus the largest expected command chunk), the module asks the next higher-up module to squirt some more commands into the buffer. Result: the buffer usually sits slightly above v.
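A rough sketch of the pull scheme in Python (the class and method names are hypothetical, invented for illustration; this is not actual LinuxCNC code):

```python
from collections import deque

class PullModule:
    """Sketch of option (a): refill the input FIFO from the module above
    whenever its fill level drops below the threshold v."""

    def __init__(self, capacity, max_chunk, upstream=None):
        self.queue = deque()
        self.capacity = capacity
        self.threshold = capacity - max_chunk  # "v" in the text
        self.upstream = upstream               # module we pull from

    def refill(self):
        # Ask the next higher-up module for more commands until the
        # buffer is back at or above the threshold.
        while self.upstream and len(self.queue) < self.threshold:
            cmd = self.upstream.produce()
            if cmd is None:
                break
            self.queue.append(cmd)

    def pop(self):
        cmd = self.queue.popleft() if self.queue else None
        if len(self.queue) < self.threshold:
            self.refill()
        return cmd
```

Each pop that drops the queue below v triggers a refill, which is why the buffer hovers slightly above v in steady state.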

option b) "push": each module has a static FIFO buffer as its input queue, and no buffer for output. The module calculates how many lower-level commands the current input command will generate. If that number exceeds the free space in the next module's input buffer, it waits until the next module's buffer has enough empty slots to accept the expanded command. Result: increased overhead, but more efficient memory usage, since the buffers stay nearly full.
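The push scheme could look something like this hypothetical sketch (again, all names are made up for illustration):

```python
from collections import deque

class PushModule:
    """Sketch of option (b): a downstream module with a fixed-capacity
    input FIFO that only accepts a whole expansion at once."""

    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity

    def free_slots(self):
        return self.capacity - len(self.queue)

    def accept(self, cmds):
        self.queue.extend(cmds)

def push_expanded(cmd, expand, downstream):
    # expand() produces the lower-level commands for cmd; forward them
    # only if the downstream buffer can take the whole batch.
    out = expand(cmd)
    if len(out) > downstream.free_slots():
        return False        # caller must wait and retry later
    downstream.accept(out)
    return True
```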

option c) daemon: all modules use dynamically allocated input queues. Total memory usage is monitored by a separate control process. When memory is nearly full, the highest-level module is instructed to wait until some memory is freed. Pros: memory is used efficiently, at low processing overhead. Cons: requires a separate process to monitor memory usage, must determine which module is highest-level, and the code is more complex.
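A minimal sketch of the daemon's bookkeeping (hypothetical class; the 90% high-water mark is an arbitrary choice for illustration):

```python
class MemoryMonitor:
    """Sketch of option (c): modules charge their queue allocations to a
    shared budget, and the highest-level module is throttled when the
    budget is nearly exhausted."""

    def __init__(self, budget, high_water=0.9):
        self.budget = budget
        self.used = 0
        self.high_water = high_water

    def alloc(self, n):
        self.used += n

    def free(self, n):
        self.used -= n

    def top_module_may_run(self):
        # The highest-level module waits while memory is near full.
        return self.used < self.budget * self.high_water
```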

All modules plug into the messaging layer "backplane" for communication (errors, buffer-low notifications, config during initialization), but the bulk data is exchanged directly between them. <- is this method better than running everything through the messaging layer? You will have to use message layer wrappers anyway to get data across a network, but is it significantly more overhead to use the messaging layer instead of accessing each module's input queue/output function directly at each step in the chain?

(Finally) The command chain. This is the path data takes, from top to bottom. At each step of the chain, higher-level commands get turned into several lower-level commands, until they eventually become canonical commands. (And I mean really, really canonical: they will do one function call, max.) Commands are passed forward unmodified if the module has no rules defined for what to do with them. Modules can be inserted into or removed from the chain to suit the configuration.
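The expand-or-pass-through rule could be sketched like this (the rule tables and command tuples are invented for illustration, not real LinuxCNC commands):

```python
def run_chain(modules, commands):
    """Sketch of the command chain: each module is a dict of rules keyed
    by command type; commands with no matching rule pass through
    unmodified to the next link."""
    for rules in modules:            # one rules dict per module, top to bottom
        expanded = []
        for cmd in commands:
            handler = rules.get(cmd[0])
            if handler:
                expanded.extend(handler(cmd))  # one command -> several lower-level ones
            else:
                expanded.append(cmd)           # no rule: pass through untouched
        commands = expanded
    return commands                  # canonical commands at the bottom
```

Inserting or removing a module is just editing the list, which matches the idea that the chain is assembled per-configuration.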

Links in the chain:

Workflow/factory control level
Gui conversational machining level (not yet implemented)
Gui high level ( select mode MDI, auto, manual/teleop etc)
File reader
canned cycles
Gui med level (feed override, offsets, teleop commands)
work offsets
tool change
gear change
spindle
linear -> spline
helical -> spline
circular -> spline
spline blending - "native" interpolation type at the bottom
tool offsets
inverse kinematics
Gui low level (axis jog, homing)

realtime boundary
homing
teach-in
probing
spline interpolation
trajectory planning
HAL

Questions: painfully obvious q/a with myself

  1. where does the realtime boundary fall? I've made an educated guess where I think it should go... perhaps you disagree. Please correct my ignorance by thoroughly explaining why your placement is better. Obviously, for example, spline interpolation must be located on the same computer as the axis it is controlling, or it wouldn't be realtime. But do homing, teach-in, and probing need to be realtime?
  2. can you (and should you) have multiple instances of the same kind of module on separate machines? I think yes, you can, as long as the same hierarchy order is maintained on both machines, so that no command gets skipped over by bouncing back and forth between machines. And yes, you should have multiple instances, in order to distribute cpu load and provide multiple sources of control.
  3. what should be the native interpolation format? I vote for nurbs, since they can represent all of the other formats (helical, circular, linear) and are the most compact data format.
  4. will compressing/globbing the data help to speed up transfers between user/realtime memory?
  5. won't there be a whole bunch of crap running?
A1: in the same way as you have put geometry planning (line->spline, circle->spline,helix->spline) on the non-realtime side and a geometry interpolator (spline-interpolator) on the realtime side, I think that the trajectory planning can be done on the nonrealtime side and a separate trajectory interpolator runs on the realtime side. afaics nurbs are ok for geometry, and 3rd or 4th order (jerk limited or double jerk limited) trajectories would be the way to go. An interesting question is also where the realtime/nonrealtime boundary goes wrt. kinematics i.e. axes/joints ?? (just some thoughts by AW)
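One constant-jerk phase of the 3rd-order (jerk-limited) trajectories A1 mentions integrates analytically; a minimal sketch (hypothetical function, not LinuxCNC code):

```python
def jerk_phase(j, t, a0=0.0, v0=0.0, p0=0.0):
    """State after time t under constant jerk j, starting from
    acceleration a0, velocity v0, position p0.  Integrating jerk once
    gives acceleration, twice velocity, three times position."""
    a = a0 + j * t
    v = v0 + a0 * t + 0.5 * j * t * t
    p = p0 + v0 * t + 0.5 * a0 * t * t + j * t ** 3 / 6.0
    return a, v, p
```

A full jerk-limited profile would stitch several such phases together (jerk on, constant accel, jerk off, cruise, and the mirror-image deceleration), which is the kind of work that could live in a non-realtime trajectory planner feeding a realtime interpolator.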

A5: (stolen shamelessly) Many programs use several processes, it's often a necessary design if you want to avoid threads. The way to fix the problem is not to avoid using several processes, but to create a higher level mechanism that knows which processes belong together and shows them in a user-friendly way.
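A tiny illustration of the claim in Q3 that NURBS subsume the other formats: a degree-1 curve with unit weights is exactly a line segment, so "linear -> spline" loses nothing. (The helper below is hypothetical, written for this example, not taken from any NURBS library; circles and helices need degree-2 rational curves with non-unit weights, which this sketch doesn't cover.)

```python
def nurbs_deg1(p0, p1, w0, w1, u):
    """Evaluate a degree-1 NURBS at parameter u in [0, 1]: a weighted
    rational blend of the two control points p0 and p1."""
    num = [w0 * (1 - u) * c0 + w1 * u * c1 for c0, c1 in zip(p0, p1)]
    den = w0 * (1 - u) + w1 * u
    return [c / den for c in num]
```

With w0 == w1 the weights cancel and the result is plain linear interpolation between the control points.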


Last edited April 2, 2008 11:21 pm by SWpadnos (diff)
Published under a Creative Commons License