Overview
If you have a robotic system with a somewhat simple control setup, ros_control is great, as it realizes the idea that "simple tasks should be easy". In this context, a simple setup is characterized by:
- Sensors and actuators update at the same periodic frequency.
- Controllers are implemented as a single module (single plugin).
- ...(more?)
This document is aimed at identifying shortcomings of the current implementation for more complex use cases, and at setting the stage for discussing how to overcome them without impacting the ease of use of simple setups.
Current shortcomings
Realtime-friendly dataflow interface
One major feature missing from ros_control is a proper realtime-friendly dataflow interface. The ROS dataflow interface (i.e., topics) is extensively used for controller frontends, but cannot be used inside a realtime control loop. This lack of a dataflow interface limits the modularity and composability of the control setups that can be expressed with ros_control.
Control pipelines
Example 1: Consider a realtime joint trajectory interpolator. Its natural output is a stream of desired joint positions, so it can be directly interfaced with position-controlled hardware, like so:
ROS API -> Joint interpolator <-> Position-controlled robot
Now, if the hardware platform is effort-controlled, one would like to add a PID controller to convert position commands to effort commands.
ROS API -> Joint interpolator -> PID <-> Effort-controlled robot
This scenario is the simplest possible controller pipeline, and it can be hacked together in a more or less satisfactory way. Currently, ros_control controllers can only communicate with the controller manager, so everything upstream of the robot must be bundled into a single controller. Anything more complicated than the above cannot be represented in a clean, modular way, as the following example shows.
Example 2: Cartesian interpolator.
ROS API -> Cartesian interpolator -> IK -> PID <-> Effort-controlled robot
To implement the above setup in ros_control, the Cartesian interpolator, IK, and PID modules must be bundled into a single controller.
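For illustration, here is a minimal sketch of what such a bundled controller looks like today. The CartesianInterpolator, Ik and Pid helper classes and the joint name "joint1" are hypothetical placeholders; the point is that the chaining is hard-wired inside a single plugin rather than expressed as a dataflow graph:

```cpp
#include <controller_interface/controller.h>
#include <hardware_interface/joint_command_interface.h>

// Hypothetical in-controller modules. Without a realtime dataflow interface
// they cannot be separate, reusable controllers, so they are chained by hand.
struct CartesianInterpolator { double desiredPose(const ros::Time&) { /* ... */ return 0.0; } };
struct Ik                    { double jointPosition(double pose)    { /* ... */ return pose; } };
struct Pid                   { double effort(double error)          { return 10.0 * error; } };

class BundledCartesianController
  : public controller_interface::Controller<hardware_interface::EffortJointInterface>
{
public:
  bool init(hardware_interface::EffortJointInterface* hw, ros::NodeHandle& /*nh*/)
  {
    joint_ = hw->getHandle("joint1"); // assumption: a single joint named "joint1"
    return true;
  }

  void update(const ros::Time& time, const ros::Duration& /*period*/)
  {
    // Hard-wired pipeline: interpolator -> IK -> PID -> effort command.
    const double pose_des  = interp_.desiredPose(time);
    const double joint_des = ik_.jointPosition(pose_des);
    joint_.setCommand(pid_.effort(joint_des - joint_.getPosition()));
  }

private:
  hardware_interface::JointHandle joint_;
  CartesianInterpolator interp_;
  Ik ik_;
  Pid pid_;
};
```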
Perception pipelines
TODO
Proposed solutions
- RobotHW class breakup: Split resource management and hardware interface management. The hardware interface management part could then be used as the interface for chaining controllers; for example, a PID module would have (different) input and output hardware interfaces (see the sketch after this list).
- Orocos RTT:
  - To what extent can the current API be used to implement single-threaded computation graphs with zero-copy dataflow?
  - How much overhead would this approach impose?
- Ecto: Can it be leveraged to encapsulate processing pipelines inside controllers? Can it be used in realtime contexts?
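As a rough illustration of the RobotHW breakup idea, a chainable PID module might expose a standard position interface as its input while writing to the robot's effort interface as its output. This is a sketch of the concept only; PidModule and its wiring are hypothetical, not current ros_control API:

```cpp
#include <hardware_interface/joint_command_interface.h>
#include <hardware_interface/joint_state_interface.h>

// Hypothetical chainable PID module: upstream controllers command it through
// a standard position interface (its input); it commands the robot through
// an effort joint handle (its output).
class PidModule
{
public:
  PidModule(const hardware_interface::JointStateHandle& state,
            const hardware_interface::JointHandle& effort_out)
    : state_(state), effort_out_(effort_out), pos_cmd_(0.0), kp_(10.0)
  {
    // Input side: expose a position-commanded view of this module, backed
    // by a command buffer that the module itself owns.
    input_.registerHandle(hardware_interface::JointHandle(state_, &pos_cmd_));
  }

  // An upstream controller would claim resources from this interface exactly
  // as it would from position-controlled hardware.
  hardware_interface::PositionJointInterface* input() { return &input_; }

  void update()
  {
    // Output side: convert the buffered position command into an effort
    // command on the real robot (P-only control for brevity).
    effort_out_.setCommand(kp_ * (pos_cmd_ - state_.getPosition()));
  }

private:
  hardware_interface::JointStateHandle state_;
  hardware_interface::JointHandle effort_out_;
  hardware_interface::PositionJointInterface input_;
  double pos_cmd_;
  double kp_;
};
```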
Question: Once control and perception pipelines are in place, what will become of the (un)load and switch controller services? Would one load and start parts of a pipeline, and how would those parts be represented (scripts?)? This will surely depend on the type of solution we implement, but it's good to keep the user-facing interface in mind.
Actuator/sensor data with different update rates
Not all sensors and actuators in a robot necessarily have the same update rate:
Example 1: Velocity-controlled wheels of a mobile base need much lower control frequencies than the effort-controlled joints of a compliant arm.
Example 2: Apart from joint-level sensors (position, velocity, effort, etc.), IMUs, force-torque sensors or cameras may also be used inside a control loop. The refresh rates of these sensors can differ greatly, from tens to thousands of Hz.
Proposed solutions
- Hardware interfaces: Expose a timestamp of the last valid update.
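A minimal sketch of what this could look like, as a hypothetical extension of the existing joint state handle (StampedJointStateHandle and lastUpdate() are not current API):

```cpp
#include <ros/time.h>
#include <hardware_interface/joint_state_interface.h>

// Hypothetical handle that augments joint state with the timestamp of the
// last valid hardware update, written by the RobotHW read step.
class StampedJointStateHandle : public hardware_interface::JointStateHandle
{
public:
  StampedJointStateHandle(const hardware_interface::JointStateHandle& handle,
                          const ros::Time* stamp)
    : hardware_interface::JointStateHandle(handle), stamp_(stamp) {}

  // Controllers can compare this against the current time to detect staleness.
  ros::Time lastUpdate() const { return *stamp_; }

private:
  const ros::Time* stamp_;
};
```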
Controllers data with different update rates
Apart from sensors and actuators, controllers might have different update rates as well. Currently, controllers that update at a slower rate than the controller manager must implement custom logic so that they only do work every n > 1 cycles.
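A minimal sketch of this hand-rolled decimation pattern (the controller name and the factor of 10 are arbitrary examples):

```cpp
#include <ros/time.h>

// Illustrative fragment of a controller that only does real work on every
// 10th cycle of the controller manager.
class SlowController
{
public:
  SlowController() : cycle_(0) {}

  void update(const ros::Time& time, const ros::Duration& /*period*/)
  {
    // Hand-rolled decimation: skip all but every 10th manager cycle.
    if (++cycle_ % 10 != 0)
      return;
    // ... real work happens here, at 1/10th of the manager's rate ...
  }

private:
  unsigned int cycle_;
};
```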
This point opens the question of whether a non-periodic controller manager makes sense.
Proposed solutions
- Allow data sources and sinks to specify their update policies, and service them through Earliest Deadline First (EDF) scheduling.
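A minimal, non-realtime sketch of the EDF idea (all names here are hypothetical): each source or sink registers a period, and every cycle the scheduler services overdue tasks in order of earliest deadline:

```cpp
#include <functional>
#include <queue>
#include <vector>
#include <ros/time.h>

// A schedulable source/sink with its next deadline and service period.
struct Task
{
  ros::Time deadline;
  ros::Duration period;
  std::function<void()> service;
  bool operator>(const Task& other) const { return deadline > other.deadline; }
};

class EdfScheduler
{
public:
  void add(const ros::Time& now, const ros::Duration& period,
           std::function<void()> fn)
  {
    queue_.push(Task{now + period, period, fn});
  }

  // Called once per control cycle: service all overdue tasks, earliest
  // deadline first, then re-arm each with its next deadline.
  void spinOnce(const ros::Time& now)
  {
    while (!queue_.empty() && queue_.top().deadline <= now)
    {
      Task t = queue_.top();
      queue_.pop();
      t.service();
      t.deadline += t.period;
      queue_.push(t);
    }
  }

private:
  std::priority_queue<Task, std::vector<Task>, std::greater<Task> > queue_;
};
```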
Error handling
There is currently no way to inform controllers about hardware failures and error states, so controllers have no way to make informed decisions or take action when such events occur. Furthermore, different controllers might have different criteria for what is considered an error state.
Example 1: Joint foo can no longer accept position commands, but its state can still be read. A joint position controller might consider this an error and shut itself down, while a read-only joint state publisher carries on, "business as usual".
Example 2: Controller bar optionally uses IMU measurements to improve its performance, but can work without them, or with intermittent sensor updates (at a performance cost). Apart from logging a warning, no error state is entered.
Currently, error handling must be performed at the application level, that is, in the RobotHW instance or whatever wraps it.
Proposed solutions
- Hardware interfaces: Expose a timestamp of the last valid update and let controllers exploit this information locally.
- Controller interface: Allow controllers to stop themselves.
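Combining the two proposals above, a controller could detect stale data and shut itself down from within its own update. A minimal fragment, where lastUpdate() and requestStop() are hypothetical names for the proposed additions:

```cpp
// Fragment of a hypothetical controller's realtime update loop.
void JointPositionController::update(const ros::Time& time,
                                     const ros::Duration& period)
{
  // Proposed: the handle exposes the timestamp of its last valid update.
  if (time - joint_.lastUpdate() > staleness_threshold_)
  {
    ROS_ERROR("Joint data is stale, stopping controller.");
    requestStop(); // proposed: allow a controller to stop itself
    return;
  }
  // ... nominal control law ...
}
```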
Question: Is there any other relevant piece of information that controllers should know about in order to properly handle non-nominal operation modes?
Composite Controllers