interpreter API meeting notes 20140415
Alex Rössler, Michael Haberler
related documents and issues:
The 'remotification' of HAL has made good progress and can be introduced piecemeal, since there is no direct intrusion into the task/interpreter/UI complex (gladevcp already works remotely, without impact on local operations).
This is more intricate with the task/interpreter/motion/UI complex. Due to the lack of clearly defined APIs and the intimate mingling of NML, it is hard to revamp this whole complex in one go; chances are high that we would be left with a nonworking build for weeks if not months, which is undesirable. We were therefore looking for a more piecemeal approach where each milestone shows a step accomplished and can be verified for functionality and performance.
We started out from the Interpreter API requirements thread, the goal being to define a next step towards restructuring the task/interpreter/motion/UI complex, breaking the overall process down into more manageable steps with tangible results at each stage.
Our overall approach to this now looks like so:
- focus on the Interpreter API and preview/progress display first: make the canon layer support several output formats (NML, protobuf, "SAI canon" debug text, maybe JSON as needed) as well as several output methods (interp_list, zeromq, text output, maybe WebSockets to transport JSON); much of this is done already.
- Preview: chosen 'to go first' because:
  - it enables parallel work on the UI support for it (Qt5, maybe WebGL)
  - it is much more performance-critical than the actual machining execution path
  - once we work out the basics of the interfacing/API, work can then progress in parallel
  - we have something to show beyond vaporware
  - it is a good testbed to work out the flow-control problem: consumer→producer queue backpressure will be needed, and a failure is immediately visible
- milestone #1: the standalone interpreter is fed an NGC file, and a preview display appears. mhaberler does the interpreter and canon rework; alex comes up with a preview method which plugs into the API defined 'between us' (either zmq/protobuf or JSON/websockets, depending on the UI preview technology).
  - this verifies that the new canon layer works. From here, performance optimizations for preview become possible (see below; a rough sketch of the interpreter-side plumbing follows).
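As a rough sketch of the interpreter-side plumbing for milestone #1, assuming the zmq/protobuf variant is chosen (the socket type, endpoint and message payload here are illustrative assumptions, not a defined API):

```cpp
// Sketch only: standalone interpreter publishing preview output over
// zeromq. Endpoint, socket type and message layout are assumptions.
#include <zmq.h>
#include <string>
#include <cassert>

int main()
{
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);
    int rc = zmq_bind(pub, "tcp://127.0.0.1:5500");  // illustrative endpoint
    assert(rc == 0);

    // In the real flow this would be a protobuf-serialized canon/preview
    // message (e.g. msg.SerializeToString(&wire)); a plain string stands
    // in for it here.
    std::string wire = "STRAIGHT_FEED 10.0 0.0 0.0";
    zmq_send(pub, wire.data(), wire.size(), 0);

    zmq_close(pub);
    zmq_ctx_term(ctx);
    return 0;
}
```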
- next step: hook the interpreter, with the protobuf/zeromq 'driver', directly into messagebus (task is left out for the moment). Motion needs to be adapted to hook into messagebus, and Pavel's multiframe code for RT messaging must be ready and integrated by then.
  - this step 'does move motors', but not in the fashion one is used to, with machine modes, states, estop and all the other features task brings in
  - progress display can now be tacked on relatively easily by using haltalk and a HAL group reporting current position and related attributes
  - it validates the decision to go protobuf end-to-end, as well as the messagebus concept
  - this step also requires work on back-reporting from RT, in particular completion reporting of motion commands and their associated IDs
- step #3: re-insert task between the interpreter output and messagebus.
  - this will likely result in a significant simplification of task, since interpreter control issues, as well as the task/motion interface transcoding support (translating NML messages into something an RT component can digest), can go
  - this step will include reworking the task API (mostly along the lines of src/emc/usr_intf/shcom.hh) into a remote-capable zeromq/protobuf API; a rough request/reply sketch follows
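A hedged sketch of what such a remote task API could look like on the wire: a zeromq request/reply pair where the payload would eventually be a protobuf-encoded command. The endpoint and the payload strings are placeholders, not a defined protocol:

```cpp
// Sketch: client side of a zeromq task API, shcom.hh-style calls
// becoming request/reply messages. All names are illustrative.
#include <zmq.h>
#include <string>
#include <cstdio>

int main()
{
    void *ctx = zmq_ctx_new();
    void *req = zmq_socket(ctx, ZMQ_REQ);
    zmq_connect(req, "tcp://127.0.0.1:5600");  // assumed task endpoint

    // Stand-in for a protobuf-encoded task command such as
    // "set mode MDI" or "execute <mdi line>".
    std::string cmd = "EMC_TASK_SET_MODE MDI";
    zmq_send(req, cmd.data(), cmd.size(), 0);

    // Task would reply with a protobuf-encoded status/ack; read it back.
    char reply[256];
    int n = zmq_recv(req, reply, sizeof(reply) - 1, 0);
    if (n >= 0) {
        reply[n] = '\0';
        std::printf("task replied: %s\n", reply);
    }

    zmq_close(req);
    zmq_ctx_term(ctx);
    return 0;
}
```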
- step #4: revamp selected UIs to use the new APIs and the preview/progress display interface. This might include making gremlin remote-capable, i.e. separating it from its intertwining with the UI-local interpreter.
- motion and context IDs: already noted in https://github.com/machinekit/machinekit/issues/106#issuecomment-40447313; the natural place for these is to tack the IDs onto protobuf messages, hence the point in time to do this is once we progress towards milestone #2. The idea is sketched below.
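The idea, sketched with a hypothetical message type (the real field names and message definitions would live in the protobuf descriptors, not here):

```cpp
// Illustrative only: how motion/context IDs could ride along on each
// protobuf command message. The type and fields are invented for the
// sketch; the actual definitions belong in the .proto files.
struct PreviewCommand {          // stand-in for a generated protobuf class
    int motion_id;               // unique per emitted motion command
    int context_id;              // identifies the originating context
                                 // (program line, MDI input, remap, ...)
    // ... geometry payload ...
};

static int next_motion_id;

PreviewCommand make_command(int context_id)
{
    PreviewCommand cmd;
    cmd.motion_id = ++next_motion_id;  // RT back-reports completion of
    cmd.context_id = context_id;       // this ID (see the step #2 notes)
    return cmd;
}
```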
- canon rewrite: the current canon layers are all free functions, not C++ classes, and each is tied to a particular output method (generate NML; generate Python callbacks for preview). The goal is to have identical canon functionality for all output formats and purposes (if only for regression testing, which currently isn't possible). The canon layer should also support runtime-switchable output formats and channels (protobuf/zeromq, maybe with a simplified output option for preview; SAI; maybe NML; JSON/websockets). It is unclear how the Python callback will fit in here; this requires more thought.
- strategy: make canon a proper C++ abstract base class, base the semantic (i.e. non-output) layer for canon on emccanon.cc, and add class parameters and switching methods to funnel canon output to different outputs as needed, as described above. A minimal sketch of such a class follows.
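A minimal sketch of what such an abstract canon output class could look like; the class and method names here are illustrative, not the actual canon call signatures:

```cpp
// Hypothetical sketch of a canon output abstraction; names and
// signatures are illustrative, not the actual canon API.
#include <cstdio>

// The semantic canon layer (based on emccanon.cc) calls into this
// interface; one subclass per output format/channel (NML, protobuf/
// zeromq, SAI text, JSON/websockets, ...).
class CanonOutput {
public:
    virtual ~CanonOutput() {}
    virtual void straight_feed(int line_no,
                               double x, double y, double z) = 0;
    virtual void straight_traverse(int line_no,
                                   double x, double y, double z) = 0;
    // ... arc_feed(), dwell(), set_feed_rate(), etc.
};

// SAI-style debug text output, useful for regression testing.
class SaiTextOutput : public CanonOutput {
public:
    void straight_feed(int n, double x, double y, double z) override {
        std::printf("%d STRAIGHT_FEED(%.4f, %.4f, %.4f)\n", n, x, y, z);
    }
    void straight_traverse(int n, double x, double y, double z) override {
        std::printf("%d STRAIGHT_TRAVERSE(%.4f, %.4f, %.4f)\n", n, x, y, z);
    }
};

// The semantic layer would hold a CanonOutput pointer and forward to
// it, making the output target runtime-switchable:
//   CanonOutput *out = new SaiTextOutput;
//   out->straight_feed(12, 10.0, 0.0, 0.0);
```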
- hook Alex's UI work into this, and benchmark on huge NGC files such as Slic3r output. Consider optimizations once the basic path works.
- work out flow control here: once the UI stops (e.g. the pause function, or an unresponsive UI), the interpreter must stop producing output, otherwise it would overrun the output queue; one possible mechanism is sketched below.
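One option (an assumption, not a decision from the meeting) is to let zeromq's high-water mark provide the backpressure: with a bounded send queue and a blocking send, a stalled consumer eventually blocks the producing interpreter.

```cpp
// Sketch: producer-side backpressure via the zeromq high-water mark.
// Socket type and endpoint are illustrative.
#include <zmq.h>
#include <string>

int main()
{
    void *ctx = zmq_ctx_new();
    void *push = zmq_socket(ctx, ZMQ_PUSH);

    int hwm = 1000;  // bound the outgoing queue to 1000 messages
    zmq_setsockopt(push, ZMQ_SNDHWM, &hwm, sizeof hwm);
    zmq_bind(push, "tcp://127.0.0.1:5700");

    // Blocking send: once the consumer stops pulling and the queue
    // fills to the HWM, zmq_send() blocks, which in turn stalls the
    // interpreter's canon output loop: exactly the consumer->producer
    // backpressure described above. (PUSH blocks at HWM; PUB would
    // silently drop, so the socket type matters here.)
    std::string wire = "preview primitive";
    zmq_send(push, wire.data(), wire.size(), 0);

    zmq_close(push);
    zmq_ctx_term(ctx);
    return 0;
}
```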
We considered various approaches to speeding up the canon/preview display:
- "CAM-aware" optimizations:
  - example: Frank Tkalcevic's LinesToArcs
  - example: Charles's suggestion to consider a CAM-specific form of preview
  - Alex's idea: preview-display only every 10th or 20th layer when 3D printing
- G-code/CAM-unaware methods (backend/canon side): for instance, the Naive CAM Detector in emccanon.cc could be brought to bear for preview simplification/speedup.
- batching: protobuf repeated submessages for preview primitives; this will cut down on per-message overhead (sketched below).
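A hedged sketch of the batching idea, assuming a hypothetical PreviewBatch message with a repeated Primitive submessage field; preview.pb.h, the type names and the field are invented for illustration, while the add_*()/*_size()/SerializeToString() calls follow the standard generated protobuf C++ pattern:

```cpp
// Sketch only: PreviewBatch/Primitive are hypothetical message types;
// the generated-API calls shown are the standard protobuf C++ pattern.
#include <string>
#include "preview.pb.h"   // hypothetical generated header

const int BATCH_SIZE = 64;  // flush after this many primitives

// Append one line primitive; serialize and send once the batch is full.
void emit_line(PreviewBatch &batch, double x, double y, double z)
{
    Primitive *p = batch.add_primitive();   // repeated submessage field
    p->set_x(x);
    p->set_y(y);
    p->set_z(z);
    if (batch.primitive_size() >= BATCH_SIZE) {
        std::string wire;
        batch.SerializeToString(&wire);     // one wire message per batch
        // zmq_send(sock, wire.data(), wire.size(), 0);  // one send, not 64
        batch.Clear();
    }
}
```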
- a choice between speed and quality would be desirable.
- option 1: use the Douglas-Peucker algorithm in canon if the Naive CAM Detector proves unusable for this purpose (a minimal sketch follows the next item).
- option 2: investigate whether a parametric representation and NURBS drawing can help reduce output (Ernesto?)
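As a concrete illustration of option 1, a minimal 2D Douglas-Peucker sketch (the real canon path is 3D, and the tolerance parameter is where the speed/quality trade-off above would surface):

```cpp
// Minimal Douglas-Peucker polyline simplification, as it could be
// applied to preview output in canon. 2D for brevity.
#include <cmath>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance of p from the line through a and b.
static double perp_dist(const Pt &p, const Pt &a, const Pt &b)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::hypot(dx, dy);
    if (len == 0.0)
        return std::hypot(p.x - a.x, p.y - a.y);
    return std::fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Recursively keep only points deviating more than tol from the chord.
static void simplify(const std::vector<Pt> &in, size_t lo, size_t hi,
                     double tol, std::vector<Pt> &out)
{
    double dmax = 0.0;
    size_t imax = lo;
    for (size_t i = lo + 1; i < hi; i++) {
        double d = perp_dist(in[i], in[lo], in[hi]);
        if (d > dmax) { dmax = d; imax = i; }
    }
    if (dmax > tol) {            // keep the farthest point, recurse
        simplify(in, lo, imax, tol, out);
        out.push_back(in[imax]);
        simplify(in, imax, hi, tol, out);
    }                            // else: drop all interior points
}

std::vector<Pt> douglas_peucker(const std::vector<Pt> &in, double tol)
{
    if (in.size() < 2) return in;
    std::vector<Pt> out;
    out.push_back(in.front());
    simplify(in, 0, in.size() - 1, tol, out);
    out.push_back(in.back());
    return out;
}

int main()
{
    std::vector<Pt> path = {{0,0}, {1,0.05}, {2,-0.04}, {3,0.02}, {4,0}};
    std::vector<Pt> slim = douglas_peucker(path, 0.1);
    std::printf("%zu -> %zu points\n", path.size(), slim.size());
    return 0;
}
```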
Right now we have gremlin.py, which uses OpenGL to paint on a GTK canvas. Downstream we'd like to keep the route open for web-based UIs, suggesting WebGL as the medium.
In a remote UI scenario, we found basically two ways of doing preview:
- replicate the OpenGL drawing locally in the UI application, and have it driven by a stream of preview canon commands, encoded as protobufs
- use a webkit-based UI component and run a Javascript-based preview app in there, for instance using three.js; in this case the natural choice would be a websockets connection with JSON to describe the preview wireframe and any later progress updates
We’re still unclear what the best route is here. All we have right now is a set of starting points:
- Joseph's OpenSCAM: Qt-based, uses OpenGL; local operation
- the OpenSCAD viewer, also based on Qt
- the Rockhopper-based viewer: uses Javascript/JSON and http://www.x3dom.org/
- the Nicolas Raynaud webgcode viewer, example output
- Dan Falck played with the latter and came up with this: https://github.com/danielfalck/webgcode/tree/dan
- a 3D printer software component called Cura
- the KDE project is migrating towards Qt5; some graphics programs like Marble might be a starting point