Planet ROS
Planet ROS - http://planet.ros.org
ROS Discourse General: Toio meets navigation2
I published a ROS 2 package for navigation2 with toio.
So, users can study navigation2 using toio.
You can watch the demo video.
- Code: GitHub - atinfinity/toio_navigation: toio_navigation is ROS 2 package for navigation2 using toio
- Related post: Toio ROS 2 wrapper
1 post - 1 participant
ROS Discourse General: Space ROS Jazzy 2026.01.0 Release
Hello ROS community!
The Space ROS team is excited to announce that Space ROS Jazzy 2026.01.0 was released last week and is available as osrf/space-ros:jazzy-2026.01.0 on DockerHub. Additionally, MoveIt 2 and Navigation 2 builds on the jazzy-2026.01.0 underlay are available as osrf/space-ros-moveit2:jazzy-2026.01.0 and osrf/space-ros-nav2:jazzy-2026.01.0 on DockerHub, respectively, to accelerate work using these systems.
For an exhaustive list of all the issues addressed and PRs merged, check out the GitHub Project Board for this release here.
Code
Current versions of all packages released with Space ROS are available at:
- GitHub - space-ros/space-ros: The Space ROS meta operating system for space robotics.
- GitHub - space-ros/docker: Docker images to facilitate Docker-based development.
- GitHub - space-ros/simulation: Simulation assets of space-ros demos
- GitHub - space-ros/process_sarif: Tools to process and aggregate SARIF output from static analysis.
What’s Next
This release comes 3 months after the last release. The next release is planned for April 30, 2026. If you want to contribute to features, tests, demos, or documentation of Space ROS, get involved on the Space ROS GitHub issues and discussion board.
All the best,
The Space ROS Team
2 posts - 1 participant
ROS Discourse General: Abandoned joystick_drivers package
I noticed that the joystick drivers repository has not had any recent changes and there are several open pull requests which have not been addressed by the maintainers. Has this package been replaced or is it abandoned?
4 posts - 3 participants
ROS Discourse General: RealSense D435 mounted vertically (90° rotation) - What should camera_link and camera_depth_optical_frame TF orientations be?
Hi everyone,
I’m using an Intel RealSense D435 camera with ROS2 Jazzy and MoveIt2. My camera is mounted in a non-standard orientation: vertically rather than horizontally. More specifically, it is rotated 90° counterclockwise (USB port facing up) and tilted 8° downward.
I’ve set up my URDF with a camera_link joint that connects to my robot, and the RealSense ROS2 driver automatically publishes the camera_depth_optical_frame.
My questions:
Does camera_link need to follow a specific orientation convention? (I’ve read REP-103 says X=forward, Y=left, Z=up, but does this still apply when the camera is physically rotated?)
What should camera_depth_optical_frame look like in RViz after the 90° rotation? The driver creates this automatically - should I expect the axes to look different than a standard horizontal mount?
If my point cloud visually appears correctly aligned with reality (floor is horizontal, objects in correct positions), does the TF frame orientation actually matter? Or is it purely cosmetic at that point?
Is there a “correct” RPY for a vertically-mounted D435, or do I just need to ensure the point cloud aligns with my robot’s world frame?
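For context, here is the kind of thing I’ve been experimenting with to generate a candidate rotation (a minimal sketch, assuming scipy is available; the 90° rotation is treated as a roll about the camera’s forward X axis and the angles/signs are placeholders for my mounting):

# Illustrative sketch: quaternion for a hypothetical mounting RPY (90 deg roll, 8 deg pitch)
import numpy as np
from scipy.spatial.transform import Rotation as R

roll, pitch, yaw = np.deg2rad([90.0, 8.0, 0.0])  # placeholder values; adjust signs for the real mount
quat_xyzw = R.from_euler("xyz", [roll, pitch, yaw]).as_quat()
print("URDF rpy:", roll, pitch, yaw)
print("quaternion [x, y, z, w]:", quat_xyzw)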
Any guidance from anyone who has mounted a RealSense camera vertically would be really appreciated!
Thanks!
4 posts - 2 participants
ROS Discourse General: [Update] ros2_unbag: Fast, organized data extraction
Hi everyone,
I wanted to share an update on ros2_unbag, a tool I authored to simplify the process of getting data out of ROS 2 bags and into organized formats.
ros2_unbag is the ultimate “un-packer” for your robot’s messy suitcase. Since the initial release last year, the focus has been on making extraction as “painless” as possible, even for large-scale datasets. If you find yourself writing repetitive scripts to dump images or point clouds, this might save you some time.
Current Capabilities:
- Quick Export: Direct conversion to images (.png, .jpg), videos (.mp4, .avi), and point clouds (.pcd, .xyz), plus structured text (.json, .yaml, .csv) for any other message type.
- User-friendly GUI: Recently updated to make the workflow as intuitive as possible.
- Flexible Processors: Define your own routines to handle virtually any message type or custom export logic. We currently use this for cloudini support and specialized automated driving message types.
- Organized Output: Automatically sorts data into clear directory structures for easy downstream use.
- High Performance: Optimized through parallelization; the bottleneck is often your drive speed, not your compute.
I’ve been maintaining this since July 2025 and would love to hear if there are specific message types or features the community is still struggling to “unbag.”
https://github.com/ika-rwth-aachen/ros2_unbag
1 post - 1 participant
ROS Discourse General: Panthera-HT —— A Fully Open-Source 6-Axis Robotic Arm
Hello Developers,
Compact robotic arms play a vital role for individual developers, data acquisition centers, and task execution in small-scale scenarios. Now, after a long period of development, the 6-DOF (six degrees of freedom) robotic arm industry welcomes a new player: the Panthera-HT from hightorque.
The Panthera-HT currently offers control interfaces in C++, Python, and ROS2, featuring capabilities including:
- Position/Velocity/Torque control
- Impedance control
- Gravity compensation mode
- Gravity and friction compensation mode
- Master-slave teleoperation (dual-arm/bimanual)
- Hand-guided teaching / Drag-to-teach
Additionally, it supports data collection and inference within the LeRobot framework. For additional runtime scripts and implementation details, please refer to the SDK documentation.
Project Origin
This started as Ragtime-LAB/Ragtime_Panthera’s open-source project, and we’ve since taken it further with improvements and polish. Huge thanks to the original author wEch1ng (芝士榴莲肥牛) for sharing their work so generously with the community!
Repository
For quick access to the project, the repository links are listed below:
| Repository | License | Description |
|---|---|---|
| Panthera-HT_Main | MIT | Main project repository, including project introduction, repository links, and feature requests. |
| Panthera-HT_Model | MIT | SolidWorks original design files, sheet metal unfolding diagrams, 3D printing files, and Bill of Materials (BOM). |
| Panthera-HT_SDK | MIT | Python SDK development package, providing quick-start example code and development toolchain. |
| Panthera-HT_ROS2 | MIT | ROS2 development package providing robotic arm drivers, control, and simulation support. |
| Panthera-HT_lerobot | MIT | LeRobot integration package, supporting imitation learning and robot learning algorithms. |
Control Examples
Let’s dive into the project and run through the quick start of the Panthera-HT robot. You will find plenty of interesting functions waiting for you, and here is a preview of the control examples:
Position and Speed Control:
Master-Slave Teleoperation:
Master-Slave Teleoperated Grasping:

Epilogue
We sincerely thank you for your time in reviewing the content above, and extend our gratitude to all developers visiting the project on GitHub. Wishing you smooth development workflows and outstanding project performance!
2 posts - 2 participants
ROS Discourse General: Stop SSH-ing into robots to find the right rosbag. We built a visual Rolling Buffer for ROS2
Hi everyone,
I’m back with an update on INSAION, the observability platform my co-founder and I are building. Last time, we discussed general fleet monitoring, but today I want to share a specific feature we just released that targets a massive pain point we faced as roboticists: Managing local recordings without filling up the disk.
We’ve all been there: A robot fails in production, you SSH in, navigate to the log directory, and start playing “guess the timestamp” to find the right bag file. It’s tedious, and usually, you either missed the data or the disk is already full.
So, we built a smart Rolling Buffer to solve this.
How it actually works (It’s more than just a loop):
It’s not just a simple circular buffer. We built a storage management system directly into the agent. You allocate a specific amount of storage (e.g., 10GB) and select a policy via the Web UI (no config files!):
- FIFO: Oldest data gets evicted automatically when the limit is reached.
- HARD: Recording stops when the limit is reached to preserve exact history.
- NONE: Standard recording until disk saturation.
The “No-SSH” Workflow:
As you can see in the video attached, we visualized the timeline.
- The Timeline: You see exactly where the Incidents (red blocks) happened relative to the Recordings (yellow/green blocks).
- Visual correlation: No need to grep logs or match timestamps manually. You can see at a glance if you have data covering the crash.
- Selective Sync: You don’t need to upload terabytes of data. You just select the relevant block from the timeline and click “Sync.” The heavy sensor data (Lidar, Images, Costmaps) is then uploaded to the cloud for analysis.
Closing the Loop:
Our goal is to give you the full picture. We start with lightweight telemetry for live monitoring, which triggers alerts. Then, we close the loop by letting you easily grab the high-fidelity, heavy data stored locally—only when you actually need it.
We’re trying to build the tool we wish we had in our previous robotics jobs. I’d love to hear your thoughts on this “smart recording” approach—does this sound like something that would save you time debugging?
Check it out at app.insaion.com if you want to dig deeper. It’s free to get started.
Cheers!
1 post - 1 participant
ROS Discourse General: Implementation of UR Robotic Arm Teleoperation with PIKA SDK
Demo Demonstration
Pika Teleoperation of UR Robotic Arm Demo Video
Getting Started with PIKA Teleoperation (UR Edition)
We recommend reading [Methods for Teleoperating Any Robotic Arm with PIKA] before you begin.
Once you understand the underlying principles, let’s guide you through writing a teleoperation program step by step. To quickly implement teleoperation functionality, we will use the following tools:
- PIKA SDK: Enables fast access to all PIKA Sense data and out-of-the-box gripper control capabilities
- Various transformation tools: Such as converting XYZRPY to 4x4 homogeneous transformation matrices, converting XYZ and quaternions to 4x4 homogeneous transformation matrices, and converting RPY angles (rotations around X/Y/Z axes) to rotation vectors (a small sketch of such helpers follows this list)
- UR Robotic Arm Control Interface: This interface is primarily built on the ur-rtde library. It enables real-time control by sending target poses (XYZ and rotation vectors), speed, acceleration, control interval (frequency), lookahead time, and proportional gain
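To make the role of these helpers concrete, here is a minimal sketch of the two conversions used most below (illustrative only, using numpy and scipy; the actual PIKA SDK helpers may differ in naming, argument order, and Euler conventions):

import numpy as np
from scipy.spatial.transform import Rotation as R

def xyzrpy_to_mat(x, y, z, roll, pitch, yaw):
    # Build a 4x4 homogeneous transform from a translation and RPY angles
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

def mat_to_xyzrpy(T):
    # Inverse operation: recover translation and RPY angles from a 4x4 transform
    roll, pitch, yaw = R.from_matrix(T[:3, :3]).as_euler("xyz")
    x, y, z = T[:3, 3]
    return x, y, z, roll, pitch, yaw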
Environment Setup
1. Clone the code
git clone --recursive https://github.com/RoboPPN/pika_remote_ur.git
2. Install Dependencies
cd pika_remote_ur/pika_sdk
pip3 install -r requirements.txt
pip3 install -e .
pip3 install ur-rtde
UR Control Interface
Let's start with the control interface. To implement teleoperation, you first need to develop a proper control interface. For instance, the native control interface of UR robots accepts XYZ coordinates and rotation vectors as inputs, while teleoperation code typically outputs XYZRPY data. This requires a coordinate transformation, which can be implemented either in the control interface or the main teleoperation program. Here, we perform the transformation in the main teleoperation program. The UR robotic arm control interface code is located at pika_remote_ur/ur_control.py:
import rtde_control
import rtde_receive

class URCONTROL:
    def __init__(self, robot_ip):
        # Connect to the robot
        self.rtde_c = rtde_control.RTDEControlInterface(robot_ip)
        self.rtde_r = rtde_receive.RTDEReceiveInterface(robot_ip)
        if not self.rtde_c.isConnected():
            print("Failed to connect to the robot control interface.")
            return
        if not self.rtde_r.isConnected():
            print("Failed to connect to the robot receive interface.")
            return
        print("Connected to the robot.")
        # Define servoL parameters
        self.speed = 0.15          # m/s
        self.acceleration = 0.1    # m/s^2
        self.dt = 1.0/50           # 0.02 s control period (50 Hz); use 1.0/125 for 125 Hz
        self.lookahead_time = 0.1  # s
        self.gain = 300            # proportional gain

    def sevol_l(self, target_pose):
        # Thin wrapper around the ur_rtde servoL command
        self.rtde_c.servoL(target_pose, self.speed, self.acceleration, self.dt, self.lookahead_time, self.gain)

    def get_tcp_pose(self):
        return self.rtde_r.getActualTCPPose()

    def disconnect(self):
        if self.rtde_c:
            self.rtde_c.disconnect()
        if self.rtde_r:
            self.rtde_r.disconnect()
        print("Disconnected from UR robot")

# example
# if __name__ == "__main__":
#     ur = URCONTROL("192.168.1.15")
#     target_pose = [0.437, -0.1, 0.846, -0.11019068574221307, 1.59479642933605, 0.07061926626169934]
#     ur.sevol_l(target_pose)
The code defines a Python class named URCONTROL for communicating with and controlling UR robots. This class encapsulates the functionalities of the rtde_control and rtde_receive libraries, providing methods for connecting to the robot, disconnecting, sending servoL commands, and retrieving TCP poses.
Core Teleoperation Code
The teleoperation code is located at `pika_remote_ur/teleop_ur.py`. As outlined in [Methods for Teleoperating Any Robotic Arm with PIKA], the teleoperation principle can be summarized in four key steps:
- Obtain 6D Pose data
- Coordinate System Alignment
- Incremental Control
- Map 6D Pose data to the robotic arm
Obtaining Pose Data
The code is as follows:

# Get pose data of the tracker device
def get_tracker_pose(self):
    logger.info(f"Starting to obtain pose data of {self.target_device}...")
    while True:
        # Get pose data
        pose = self.sense.get_pose(self.target_device)
        if pose:
            # Extract position and rotation data for further processing
            position = pose.position  # [x, y, z]
            rotation = self.tools.quaternion_to_rpy(pose.rotation[0], pose.rotation[1], pose.rotation[2], pose.rotation[3])  # [x, y, z, w] quaternion
            self.x, self.y, self.z, self.roll, self.pitch, self.yaw = self.adjustment(position[0], position[1], position[2],
                                                                                      rotation[0], rotation[1], rotation[2])
        else:
            logger.warning(f"Failed to obtain pose data for {self.target_device}, retrying in the next cycle...")
        time.sleep(0.02)  # Obtain data every 0.02 seconds (50Hz)
This code retrieves the pose information of the tracker named “T20” every 0.02 seconds. There are two types of tracker device names: those starting with WM and those starting with T. When connecting trackers to the computer via a wired connection, the first connected tracker is named T20, the second T21, and so on. For wireless connections, the first connected tracker is named WM0, the second WM1, and so forth.
The acquired pose data requires further processing. The adjustment function is used to adjust the coordinates to match the coordinate system of the UR robotic arm’s end effector, achieving alignment between the two systems.
Coordinate System Alignment
The code is as follows:

# Coordinate transformation adjustment function
def adjustment(self, x, y, z, Rx, Ry, Rz):
    transform = self.tools.xyzrpy2Mat(x, y, z, Rx, Ry, Rz)
    r_adj = self.tools.xyzrpy2Mat(self.pika_to_arm[0], self.pika_to_arm[1], self.pika_to_arm[2],
                                  self.pika_to_arm[3], self.pika_to_arm[4], self.pika_to_arm[5])  # Adjust coordinate axis direction: Pika ---> Robotic Arm End Effector
    transform = np.dot(transform, r_adj)
    x_, y_, z_, Rx_, Ry_, Rz_ = self.tools.mat2xyzrpy(transform)
    return x_, y_, z_, Rx_, Ry_, Rz_
The function implements coordinate transformation and adjustment with the following steps:
- Convert the input pose (x,y,z,Rx,Ry,Rz) into a transformation matrix.
- Obtain the adjustment matrix for transforming the Pika coordinate system to the robotic arm’s end effector coordinate system.
- Combine the two transformations through matrix multiplication.
- Convert the final transformation matrix back to pose parameters and return the result.
The adjusted pose parameters matching the robotic arm’s coordinate system can be obtained through this function.
Incremental Control
In teleoperation, the pose data provided by Pika Sense is absolute. However, we do not want the robotic arm to jump directly to this absolute pose. Instead, we want the robotic arm to follow the relative movements of the operator starting from its current position. In simple terms, this involves converting the absolute pose changes of the control device into relative pose commands for the robotic arm. The code is as follows:
# Incremental control
def calc_pose_incre(self, base_pose, pose_data):
    begin_matrix = self.tools.xyzrpy2Mat(base_pose[0], base_pose[1], base_pose[2],
                                         base_pose[3], base_pose[4], base_pose[5])
    zero_matrix = self.tools.xyzrpy2Mat(self.initial_pose_rpy[0], self.initial_pose_rpy[1], self.initial_pose_rpy[2],
                                        self.initial_pose_rpy[3], self.initial_pose_rpy[4], self.initial_pose_rpy[5])
    end_matrix = self.tools.xyzrpy2Mat(pose_data[0], pose_data[1], pose_data[2],
                                       pose_data[3], pose_data[4], pose_data[5])
    result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))
    xyzrpy = self.tools.mat2xyzrpy(result_matrix)
    return xyzrpy
This function uses transformation matrix arithmetic to implement incremental control. Let’s break down the code step by step:
Input Parameters:
- base_pose: The reference pose at the start of teleoperation. When teleoperation is triggered, the system records the current pose of the control device and stores it as self.base_pose. This pose serves as the “starting point” or “reference zero point” for calculating all subsequent increments.
- pose_data: The real-time pose data received from the control device (Pika Sense) at the current moment.
Matrix Transformation: The function first converts three key poses (represented in [x, y, z, roll, pitch, yaw] format) into 4x4 homogeneous transformation matrices, typically implemented by the tools.xyzrpy2Mat function.
- begin_matrix: Converted from base_pose, representing the pose matrix of the control device at the start of teleoperation (denoted as T_begin).
- zero_matrix: Converted from self.initial_pose_rpy, representing the pose matrix of the robotic arm’s end effector at the start of teleoperation. This is the “starting point” for the robotic arm’s movement (denoted as T_zero).
- end_matrix: Converted from pose_data, representing the pose matrix of the control device at the current moment (denoted as T_end).
Core Calculation: This is the critical line of code:

result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))

Let’s analyze it using matrix multiplication. The formula can be expressed as: Result = T_zero * (T_begin)⁻¹ * T_end (a small numeric check of this formula follows below).
- np.linalg.inv(begin_matrix): Calculates the inverse matrix of begin_matrix, i.e., (T_begin)⁻¹. In robotics, the inverse of a transformation matrix represents the reverse transformation.
- np.dot(np.linalg.inv(begin_matrix), end_matrix): This calculates (T_begin)⁻¹ * T_end, which physically represents the transformation required to convert from the begin coordinate system to the end coordinate system. In other words, it accurately describes the relative pose change (increment) of the control device from the start of teleoperation to the current moment (denoted as ΔT).
- np.dot(zero_matrix, ...): This calculates T_zero * ΔT, which physically applies the calculated relative pose change (ΔT) to the initial pose of the robotic arm (T_zero).
Result Conversion and Return:
- xyzrpy = tools.mat2xyzrpy(result_matrix): Converts the calculated 4x4 target pose matrix result_matrix back to the [x, y, z, roll, pitch, yaw] format that the robot controller can interpret.
- return xyzrpy: Returns the calculated target pose.
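As a quick numeric check of the formula above (a standalone sketch using plain numpy and translation-only poses rather than the real tools helpers): if the control device moves 0.1 m along x from its reference pose, the commanded target ends up 0.1 m along x from the arm's starting pose.

import numpy as np

def translation(x, y, z):
    # Homogeneous transform with identity rotation (enough for this check)
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

T_zero  = translation(0.4, 0.0, 0.3)   # arm TCP pose when teleoperation started (illustrative values)
T_begin = translation(1.0, 2.0, 0.5)   # device pose when teleoperation started
T_end   = translation(1.1, 2.0, 0.5)   # device has moved 0.1 m along x

result = T_zero @ np.linalg.inv(T_begin) @ T_end
print(result[:3, 3])   # [0.5 0.  0.3] -> the arm target also moved 0.1 m along x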
Teleoperation Triggering
There are various ways to trigger teleoperation:
- Voice Trigger: The operator can trigger teleoperation using a wake word.
- Server Request Trigger: Teleoperation is triggered via a server request.
However, both methods have usability limitations. Voice triggering requires an additional voice input module and may suffer from low wake word recognition accuracy—you might have to repeat the wake word multiple times before successful triggering, leaving you frustrated before even starting teleoperation. Server request triggering requires sending a request from the control computer, which works well with two-person collaboration but becomes cumbersome when operating alone.
Instead, we use Pika Sense’s state transition detection to trigger teleoperation. The operator simply holds the Pika Sense and double-clicks it to reverse the state, thereby initiating teleoperation. The code is as follows:
# Teleoperation trigger
def handle_trigger(self):
    current_value = self.sense.get_command_state()
    if self.last_value is None:
        self.last_value = current_value
    if current_value != self.last_value:  # Detect state change
        self.bool_trigger = not self.bool_trigger  # Reverse bool_trigger
        self.last_value = current_value  # Update last_value
        # Perform corresponding operations based on the new bool_trigger value
        if self.bool_trigger:
            self.base_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
            self.flag = True
            print("Teleoperation started")
        elif not self.bool_trigger:
            self.flag = False
            # ------------------ Option 1: Robotic arm stops at current pose after teleoperation ends; resumes from current pose in next teleoperation ------------------
            self.initial_pose_rotvec = self.ur_control.get_tcp_pose()
            temp_rotvec = [self.initial_pose_rotvec[3], self.initial_pose_rotvec[4], self.initial_pose_rotvec[5]]
            # Convert rotation vector to Euler angles
            roll, pitch, yaw = self.tools.rotvec_to_rpy(temp_rotvec)
            self.initial_pose_rpy = self.initial_pose_rotvec[:]
            self.initial_pose_rpy[3] = roll
            self.initial_pose_rpy[4] = pitch
            self.initial_pose_rpy[5] = yaw
            self.base_pose = self.initial_pose_rpy  # Desired target pose data
            print("Teleoperation stopped")
            # ------------------ Option 2: Robotic arm returns to initial pose after teleoperation ends; starts from initial pose in next teleoperation ------------------
            # # Get current pose of the robotic arm
            # current_pose = self.ur_control.get_tcp_pose()
            # # Define interpolation steps
            # num_steps = 100  # Adjust steps as needed; more steps result in smoother transition
            # for i in range(1, num_steps + 1):
            #     # Calculate interpolated pose at current step
            #     # Assume initial_pose_rotvec and current_pose are both in [x, y, z, Rx, Ry, Rz] format
            #     interpolated_pose = [
            #         current_pose[j] + (self.initial_pose_rotvec[j] - current_pose[j]) * i / num_steps
            #         for j in range(6)
            #     ]
            #     self.ur_control.sevol_l(interpolated_pose)
            #     time.sleep(0.01)  # Short delay between interpolations to control speed
            # # Ensure the robotic arm reaches the initial position
            # self.ur_control.sevol_l(self.initial_pose_rotvec)
            # self.base_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
            # print("Teleoperation stopped")
The code continuously retrieves the current state of Pika Sense using self.sense.get_command_state(), which outputs either 0 or 1. When the program starts, bool_trigger defaults to False. On the first state reversal, bool_trigger is set to True—the tracker’s pose is set as the zero point, self.flag is set to True, and control data is sent to the robotic arm for motion control.
To stop teleoperation, double-click the Pika Sense again to reverse the state. The robotic arm will then stop at its current pose and resume from this pose in the next teleoperation session (Option 1). Option 2 allows the robotic arm to return to its initial pose after teleoperation stops and start from there in subsequent sessions. You can choose the appropriate option based on your specific needs.
Mapping Pika Pose Data to the Robotic Arm
The code for this section is as follows:

def start(self):
    self.tracker_thread.start()  # Start the thread
    # Main thread continues with other tasks
    while self.running:
        self.handle_trigger()
        self.control_gripper()
        current_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
        increment_pose = self.calc_pose_incre(self.base_pose, current_pose)
        finally_pose = self.tools.rpy_to_rotvec(increment_pose[3], increment_pose[4], increment_pose[5])
        increment_pose[3:6] = finally_pose
        # Send pose to robotic arm
        if self.flag:
            self.ur_control.sevol_l(increment_pose)
        time.sleep(0.02)  # Update at 50Hz
This section of code converts the RPY rotation data of the calculated increment_pose into a rotation vector and sends it to the robotic arm (UR robots accept XYZ coordinates and rotation vectors for control). Control data is only sent to the robotic arm when self.flag is set to True.
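The implementation of tools.rpy_to_rotvec is not shown here, but the conversion can be sketched with scipy (this sketch assumes extrinsic XYZ Euler angles; verify the convention against the SDK):

from scipy.spatial.transform import Rotation as R

def rpy_to_rotvec(roll, pitch, yaw):
    # Axis-angle rotation vector in radians, the rotation format UR's servoL expects
    return R.from_euler("xyz", [roll, pitch, yaw]).as_rotvec()

# Example: a 90 deg yaw becomes a rotation vector of magnitude pi/2 about z
print(rpy_to_rotvec(0.0, 0.0, 1.5708))   # approximately [0. 0. 1.5708]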
Practical Operation
The teleoperation code is located at `pika_remote_ur/teleop_ur.py`.
1. Power on the UR robotic arm and enable the joint motors. If the robotic arm’s end effector is equipped with a gripper or other actuators, enter the corresponding load parameters.
2. Configure the robotic arm’s IP address on the tablet.
3. Configure the Tool Coordinate System.
The end effector coordinate system must be set with the Z-axis pointing forward, X-axis pointing downward, and Y-axis pointing left. In the code, we rotate the Pika coordinate system 90° counterclockwise around the Y-axis, resulting in the Pika coordinate system having the Z-axis forward, X-axis downward, and Y-axis left. Therefore, the robotic arm’s end effector (tool) coordinate system must be aligned with this configuration; otherwise, the control will malfunction (a small check of this axis mapping follows this step).
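A small numpy check of this axis mapping (illustrative; it assumes Pika's native frame is X forward, Y left, Z up, which should be verified against the SDK):

import numpy as np
from scipy.spatial.transform import Rotation as R

r_adj = R.from_euler("y", 90, degrees=True).as_matrix()  # 90 deg rotation about Y
# Columns of r_adj are the adjusted frame's axes expressed in the original Pika frame
print("new X axis:", r_adj[:, 0])  # ~[ 0  0 -1] -> points along old -Z (downward)
print("new Y axis:", r_adj[:, 1])  # ~[ 0  1  0] -> unchanged (left)
print("new Z axis:", r_adj[:, 2])  # ~[ 1  0  0] -> points along old +X (forward)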
4. For first-time use, set the speed to 20-30% and enable remote control of the robotic arm.
5. Connect the tracker to the computer via a USB cable and calibrate the tracker and base station.
Navigate to the ~/pika_ros/scripts directory and run:
bash calibration.bash
Once positioning calibration is complete, close the program.
6. Connect Pika Sense and Pika Gripper to the computer using USB 3.0 cables. Note: Connect Pika Sense first (it should be assigned the port /dev/ttyUSB0), then connect the Pika Gripper (which requires a 24V DC power supply and should be assigned the port /dev/ttyUSB1).
7. Run the code:
cd pika_remote_ur
python3 teleop_ur.py
The terminal will output numerous logs, with the most common one being:
teleop_ur - WARNING - Failed to obtain pose data for T20, retrying in the next cycle...
Once the warning above stops appearing and is replaced by:
pika.vive_tracker - INFO - Detected new device update: T20
you can start teleoperation by double-clicking the Pika Sense.
1 post - 1 participant
ROS Discourse General: Should feature-adding/deprecating changes to core repos define feature flags?
When a new feature or deprecation is added to a C++ repo, it would be useful to have an easy way of detecting whether this feature is available.
Currently, it’s possible to use has_include (if the feature added a whole new header file), or you’re left with try_compile in CMake. Or version checks, which very quickly get complicated.
Take for example Add imu & mag support in `tf2_sensor_msgs` (#800) by roncapat · Pull Request #813 · ros2/geometry2 · GitHub which added support for transforming IMU messages. If my package uses this feature and has a fallback for the releases where it’s missing, I need a reliable way for detecting the presence of the feature. I went with try_compile and it works.
However, imagine that the tf2_sensor_msgs::tf2_sensor_msgs target automatically added a compile definition like `-DTF2_SENSOR_MSGS_HAS_IMU_SUPPORT`. It would be much easier for downstream packages.
As long as it’s feasible, I very much want to have single-branch ROS2 packages for all distros out there, and packages like these would benefit a lot.
Another example: ament_index_cpp recently added a std::filesystem interface. For downstream packages that want to work with both the old and new interfaces, some ifdefs are needed in the implementation. But it doesn’t make sense to me for each package using ament_index_cpp to do the try_compile check…
What do you think about adding such feature flags to packages? Would it be maintainable? Would there be any drawbacks?
1 post - 1 participant
ROS Discourse General: PlotJuggler 2026: it needs your
Why PlotJuggler 2026
Soon it will be the 10th anniversary of my first commit in the PlotJuggler repo.
I built this to help people visualize their data faster and almost everyone I know, in the ROS community, uses it (Bubble? False confirmation bias? We will never know).
What I do know is that in an era where we have impressive (and VC-backed) visualization tools, PlotJuggler is still used and loved by thousands of roboticists.
I believe the reason is that it is not just a “visualization” tool, but a debugging one; fast, nimble and effective, like vi… if you are into that.
I decided that PJ deserves better… my users do! And I have big plans to make that happen… with your help (mostly, your company’s help).
Crowdfunding: PlotJuggler’s future, shaped together
This is the reason why I am launching a crowdfunding campaign, targeted to companies, not individuals.
This worked for me quite well 3 years ago, when I partially financed the development of Groot2, the BehaviorTree.CPP editor. But this time is different: if I reach my goals, 100% of the result will be open source and truly free, forever.
This is my roadmap: PlotJuggler 2026 - Google Slides
- Extension Marketplace — discover and install plugins with one click.
- Connectors to data in the cloud — access your logs wherever they are.
- Connect to your robot from any computer, with or without ROS.
- New data transform editor — who needs Matlab?
- Efficient data storage for “big data”.
- Images and videos (at last?).
- PlotJuggler on the web? I want to believe. You want it too.
Contact me at dfaconti@aurynrobotics.com if you want to know more.
Why you should join
- You or your team already uses PlotJuggler. Invest in the tool that saves you debugging hours every week.
- Shape the roadmap. Backers get a voice in prioritizing features that matter to your workflow.
- Public recognition. Your company logo in the documentation and release announcements.
- Be the change you want to see in the open-source world. We all like a good underdog story. Davide VS Goliath (pun intended), open-source vs closed-source (reference intended). Yes, you can make that happen.
FAQ
What if I want another feature?
Contact me and tell me more.
What if I am not interested in all these features, but only 1 or 2?
We will find a way and negotiate a contribution that is proportional to what you care about.
How much should a backer contribute?
I am not giving you an upper limit, but use €5,000 as the smallest “quantum”. This is the reason why this is targeted to companies, not individuals.
How will you use that money?
I plan to hire 1-3 full-time employees for 1 year. The more budget I can obtain, the more I can build.
“I think it is great, but I am not in charge of making this decision at my company”
Give me the email of the decision maker I need to bother, and I will do it for you!
2 posts - 2 participants
ROS Discourse General: Ros2cli, meet fzf
ros2cli Interactive Selection: Fuzzy Finding
ros2cli just got a UX upgrade: fuzzy finding!

Tab completion is nice, but it still requires you to remember how the name starts. Now you can just type any part you remember and see everything that matches. Think “search” not “autocomplete”.
Tab Completion vs. Fuzzy Finding
Tab completion:
$ ros2 topic echo /rob<TAB>
# Shows: /robot/
$ ros2 topic echo /robot/<TAB><TAB><TAB>...
# Cycles through: base_controller, cmd_vel, diagnostics, joint_states...
# Was it under /robot? Or /robot/sensors? Or was it /sensors/robot?
# Start over: Ctrl+C
Fuzzy finding (new):
$ ros2 topic echo
# Type: "lidar"
# Instantly see ALL topics with "lidar" anywhere in the name:
/robot/sensors/lidar/scan
/front_lidar/points
/safety/lidar_monitor
# Pick with arrows, done!
What Works Now?
- ros2 topic echo/info/hz/bw/type - Find topics by any part of their name
- ros2 node info - Browse all nodes, filter as you type
- ros2 param get - Pick node, then browse its parameters
- ros2 run - Find packages/executables without remembering exact names
There are plenty more opportunities where we could integrate fzf, not only in more verbs of ros2cli (e.g. ros2 service) but also in other tools in the ROS ecosystem (e.g. colcon).
I’d love to see this practice propagate, but for this I need the help of the community!
Links
5 posts - 3 participants
ROS Discourse General: LinkForge 1.2.0: Centralized ros2_control Dashboard & Inertial Precision
Hi Everyone,
Following the initial announcement of LinkForge, I’m appreciative of the feedback. Today I’m releasing v1.2.0, focused on internal stability and better diagnostic visuals.
Key Technical Changes:
- Centralized ros2_control Dashboard: We’ve consolidated all hardware interfaces and transmissions into a single dashboard. This makes managing complex actuators much faster and prevents property-hunting across panels.
- Inertial Origins & CoM Editing: We’ve exposed the inertial origin in the UI and added a GPU-based overlay showing a persistent Center of Mass sphere. This allows for manual fine-tuning and immediate visual verification of your physics model directly in the 3D viewport.
- Hexagonal Architecture: The core logic is now decoupled from the Blender API, making the codebase more testable (now with near-full core coverage) and future-proof.
We also fixed several bugs related to Xacro generation and mesh cloning for export robustness. Getting the physics right in the editor is the best way to prevent “exploding robots” in simulation.
Download (Blender Extensions): LinkForge — Blender Extensions
Documentation: https://linkforge.readthedocs.io/
Source Code: GitHub - arounamounchili/linkforge: Build simulation-ready robots in Blender. Professional URDF/XACRO exporter with validation, sensors, and ros2_control support.
Feedback on the new dashboard workflow is very welcome!
1 post - 1 participant
ROS Discourse General: Native ROS 2 Jazzy Debian packages for Raspberry Pi OS / Debian Trixie (arm64)
After spending some time trying to get ROS 2 Jazzy working reliably on Raspberry Pi via Docker and Conda (and losing several rounds to OpenGL, Gazebo plugins, and cross-arch issues), I eventually concluded:
On Raspberry Pi, ROS really only behaves when it’s installed natively.
So I built the full ROS 2 Jazzy stack as native Debian packages for Raspberry Pi OS / Debian Trixie (arm64), using a reproducible build pipeline:
- bloom → dpkg-buildpackage → sbuild → reprepro
- signed packages
- rosdep-compatible
The result:
- Native ROS 2 Jazzy on Pi OS / Debian Trixie
- Uses system Mesa / OpenGL
- Gazebo plugins load correctly
- Cameras, udev, and ros2_control behave
- Installable via plain apt
Public APT repository
GitHub - rospian/rospian-repo: ROS2 Jazzy on Raspberry OS Trixie debian repo
Build farm (if you want to reproduce or extend it)
Includes the full mini build-farm pipeline.
This was motivated mainly by reliability on embedded systems and multi-machine setups (Gazebo on desktop, control on Pi).
Feedback, testing, or suggestions very welcome.
3 posts - 2 participants
ROS Discourse General: Ros2_yolos_cpp High-Performance ROS 2 Wrappers for YOLOs-CPP [All models + All tasks]
Hi everyone!
I’m the author of ros2_yolos_cpp and YOLOs-CPP. I’m excited to share the first public release of this ROS 2 package!
Repository: ros2_yolos_cpp
What Is ros2_yolos_cpp?
ros2_yolos_cpp is a production-ready ROS 2 interface for the YOLOs-CPP inference engine — a high-performance, unified C++ library for YOLO models (v5 through v12 and YOLO26) built on ONNX Runtime and OpenCV.
This package provides composable and lifecycle-managed ROS 2 nodes for real-time:
- Object Detection
- Instance Segmentation
- Pose Estimation
- Oriented Bounding Boxes (OBB)
- Image Classification
All powered through ONNX models and optimized for both CPU and GPU inference.
Key Features
- ROS 2 Lifecycle Nodes: Full support for the ROS 2 managed node lifecycle (configure, activate, etc.)
- Composable Nodes: Efficient multi-model, multi-node setups in a single process
- Zero-Copy Image Transport: Optimized subscription for high-throughput video pipelines
- All Major Vision Tasks: Detection, segmentation, pose, OBB, and classification in one stack
- Standardized ROS 2 Messages: Uses vision_msgs and custom OBB types for interoperability
- Production-Ready: CI/CD workflows, strict parameters, and reusable launch configurations
Regards,
1 post - 1 participant
ROS Discourse General: The Canonical Observability Stack with Guillaume Beuzeboc | Cloud Robotics WG Meeting 2026-01-28
For this coming session on Wed, Jan 28, 2026 4:00 PM UTC→Wed, Jan 28, 2026 5:00 PM UTC, the CRWG has invited Guillaume Beuzeboc from Canonical to present on the Canonical Observability Stack (COS). COS is a general observability stack for devices such as drones, robots, and IoT devices. It operates from telemetry data, and the COS team has extended it to support robot-specific use cases. Guillaume, a software engineer at Canonical, previously presented COS at ROSCon 2025 and has kindly agreed to join this meeting to discuss additional technical details with the CRWG.
At the previous meeting, the CRWG continued its review of the ROSCon 2025 talks, focusing on identifying the sessions most relevant to Logging and Observability. A blog post summarizing our findings will be published in the coming weeks. If you would like to watch the latest review meeting, it is available on YouTube.
The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.
Hopefully we will see you there!
1 post - 1 participant
ROS Discourse General: Deployment and Implementation of RDA_planner
We reproduce the RDA Planner project from the IEEE paper RDA: An Accelerated Collision-Free Motion Planner for Autonomous Navigation in Cluttered Environments. We provide a step-by-step guide to help you quickly reproduce the RDA path planning algorithm in this paper, enabling efficient obstacle avoidance for autonomous navigation in complex environments.
Abstract
RDA Planner is a high-performance, optimization-based Model Predictive Control (MPC) motion planner designed for autonomous navigation in complex and cluttered environments. By leveraging the Alternating Direction Method of Multipliers (ADMM), RDA decomposes complex optimization problems into several simple subproblems.
This project builds on the open-source RDA_ROS autonomous navigation project, proposed by researchers from the University of Hong Kong, Southern University of Science and Technology, University of Macau, Shenzhen Institutes of Advanced Technology of the Chinese Academy of Sciences, and Hong Kong University of Science and Technology (Guangzhou). It is developed based on the AgileX Limo simulator. Relevant papers have been published in IEEE Robotics and Automation Letters and IEEE Transactions on Mechatronics.
RDA planner: GitHub - hanruihua/RDA-planner: [RA-Letter 2023] RDA: An Accelerated Collision Free Motion Planner for Autonomous Navigation in Cluttered Environments
RDA_ROS: GitHub - hanruihua/rda_ros: ROS Wrapper of RDA planner
Tags
limo, RDA_planner, path planning
Repositories
- Navigation Repository: GitHub - agilexrobotics/Agilex-College: Agilex College
- Project Repository: https://github.com/agilexrobotics/limo/RDA_planner.git
Environment Requirements
System: Ubuntu 20.04
ROS Version: Noetic
Python Version: Python 3.9
Deployment Process
1、Download and Install Conda
Choose Anaconda or Miniconda based on your system storage capacity. After downloading, run the following commands to install:
- Miniconda: bash Miniconda3-latest-Linux-x86_64.sh
- Anaconda: bash Anaconda-latest-Linux-x86_64.sh
2、Create and Activate Conda Environment
conda create -n rda python=3.9
conda activate rda
3、Download RDA_planner
mkdir -p ~/rda_ws/src
cd ~/rda_ws/src
git clone https://github.com/hanruihua/RDA_planner
cd RDA_planner
pip install -e .
4、Download Simulator
pip install ir-sim
5、Run Examples in RDA_planner
cd RDA_planner/example/lidar_nav
python lidar_path_track_diff.py
The running effect is consistent with the official README.

Deployment Process of rda_ros
1、Install Dependencies in Conda Environment
conda activate rda
sudo apt install python3-empy
sudo apt install ros-noetic-costmap-converter
pip install empy==3.3.4
pip install rospkg
pip install catkin_pkg
2、Download Code
cd ~/rda_ws/src
git clone https://github.com/hanruihua/rda_ros
cd ~/rda_ws && catkin_make
cd ~/rda_ws/src/rda_ros
sh source_setup.sh && source ~/rda_ws/devel/setup.sh && rosdep install rda_ros
3、Download Simulation Components
This step will download two repositories: limo_ros and rvo_ros
limo_ros: Robot model for simulation
rvo_ros: Cylindrical obstacles used in the simulation environment
cd rda_ros/example/dynamic_collision_avoidance
sh gazebo_example_setup.sh
4、Run Gazebo Simulation
Run via Script
cd rda_ros/example/dynamic_collision_avoidance
sh run_rda_gazebo_scan.sh
Run via Individual Commands
Launch the simulation environment:
roslaunch rda_ros gazebo_limo_env10.launch
Launch RDA_planner
roslaunch rda_ros rda_gazebo_limo_scan.launch

1 post - 1 participant
ROS Discourse General: iRobot's ROS benchmarking suite now available!
We’ve just open-sourced our ROS benchmarking suite! Built on top of iRobot’s ros2-performance framework, this is a containerized environment for simulating arbitrary ROS2 systems and graph configurations both simple and complex, comparing the performance of various RMW implementations, and identifying performance issues and bottlenecks.
- Support for jazzy, kilted and rolling
- Fully containerized, with experimental support for ARM64 builds through docker bake
- Container includes fastdds, cyclonedds and zenoh out of the box.
- In-depth statistical analysis / performance graphs, wrapped up in a pretty PDF format like so. (3.8 MB)
Are you building a custom RMW or ROS executor not included in this tooling, and want to compare against the existing implementations? We provide instructions and examples for how to add them to this suite.
Huge shoutout to Leonardo Neumarkt Fernandez for owning and driving the development of this benchmarking suite!
5 posts - 4 participants
ROS Discourse General: Can anyone recommend a C++ GUI framework where I can embed or integrate a 3D engine?
I know that Qt offers this natively with Qt3D, but I didn’t find any examples demonstrating that I can rotate and view models in Qt3D with the mouse. There are also a lot of 3D engines which provide integration with Qt; they are listed here. But I don’t want to try each of them, so maybe someone already knows which one is suitable for me.
I am using C++ for everything, so it is better to use C++ for easier integration, but Rust and Python are also acceptable.
I am a big fan of Open3D, so if somebody knows how to integrate it with some GUI framework, I would be glad to hear it.
3 posts - 2 participants
ROS Discourse General: Announcement: rclrs 0.7.0 Release
We’re happy to announce the release of rclrs v0.7.0!
Just like v0.6.0 landed right before ROSCon in Singapore, this release is arriving just in time for FOSDEM at the end of the month. Welcome to Conference-Driven Development (CDD)!
If you’re attending FOSDEM, come check out my talk on ros2-rust in the Robotics & Simulation devroom.
What’s New
Dynamic Messages
This release adds support for dynamic message publishers and subscribers. You can now work with ROS 2 topics without compile-time knowledge of message types, enabling tools like rosbag recorders, topic inspection utilities, and message bridges to be written entirely in Rust.
Best Available QoS
Added support for best available QoS profiles. Applications can now automatically negotiate quality of service settings when connecting to existing publishers or subscribers.
Other Changes
- Fixed mismatched lifetime syntax warnings
- Fixed duplicate typesupport extern declarations
Breaking Changes
- Minimum Rust version is now 1.85
For the next release, we are planning to switch to Rust 2024, but wanted to give enough notice.
Contributors
A huge thank you to everyone who contributed to this release! Your contributions make ros2-rust better for the entire community.
- Esteve Fernández
- Geoff Sokoll
- Jacob Hassold
- Kimberly N. McGuire
- Luca Della Vedova
- Michael X. Grey
- Nikolai Morin
- Sam Privett
Links
- GitHub: GitHub - ros2-rust/ros2_rust: Rust bindings for ROS 2
- Examples: GitHub - ros2-rust/examples: Example packages for ros2-rust
- Changelog: ros2_rust/rclrs/CHANGELOG.md at main · ros2-rust/ros2_rust · GitHub
As always, we welcome feedback and contributions!
1 post - 1 participant
ROS Discourse General: LinkForge: Robot modeling does not have to be complicated
I recorded a short video to show how easy it is to build a simple mobile robot with LinkForge, a Blender extension designed to bridge the gap between 3D modeling and robotics simulation.
All in a few straightforward steps.
The goal is simple: remove friction from robot modeling so engineers can focus on simulation, control, and behavior, not file formats and repetitive setup.
If you are working with ROS or robot simulation and want a faster, cleaner workflow, this is worth a look.
Blender Extensions: https://extensions.blender.org/add-ons/linkforge/
GitHub: https://github.com/arounamounchili/linkforge
Documentation: https://linkforge.readthedocs.io/
1 post - 1 participant
ROS Industrial: First of 2026 ROS-I Developers' Meeting Looks at Upcoming Releases and Collaboration
The ROS-Industrial Developers’ Meeting provided updates on open-source robotics tools, with a focus on advancements in Tesseract, help for developers still using MoveIt 2, and Trajopt. These updates underscore the global push to innovate motion planning, perception, and tooling systems for industrial automation. Key developments revolved around stabilizing existing frameworks, improving performance, and leveraging modern technologies like GPUs for acceleration.
The Tesseract project, designed to address traditional motion planning tools' limitations, is moving steadily toward a 1.0 release. With about half of the work complete, remaining tasks include API polishing, unit test enhancements, and transitioning the motion planning pipeline to a plugin-based architecture. Tesseract is also integrating improved collision checkers and tools like the Task Composer, which supports modular backends, making it more adaptable for high-complexity manufacturing tasks.
On the MoveIt 2 front, ongoing community support will be critical as the prior support team shifts to supporting the commercial MoveItPro. To ensure Tesseract maintainability, updates include the migration of documentation directly into repositories via GitHub. This step simplifies synchronization between code and documentation, helping developers maintain robust, open-source solutions. There are plans to provide migration tutorials for those wanting to investigate Tesseract if MoveIt2 is not meeting development needs and they are not ready to move to MoveItPro. The ability to utilize MoveIt2 components within Tesseract is also being investigated.
Trajopt, another critical component of the Tesseract ecosystem, is undergoing a rewrite to better handle complex trajectories and cost constraints. The new version, expected within weeks, will enable better time parameterization and overall performance improvements. Discussions also explored GPU acceleration, focusing on opportunities to optimize constraint and cost calculations using emerging GPU libraries, though some modifications will be needed to fully realize this potential.
Toolpath optimization also gained attention, with updates on the noether repository, which supports industrial toolpath generation and reconstruction. While still a work in progress, noether is set to play a pivotal role in enabling advanced workflows once the planned updates are implemented.
As the meeting concluded, contributors emphasized the importance of community engagement to further modernize and refine these tools. Upcoming events across Europe and Asia will foster collaboration and showcase advancements in the ROS-Industrial ecosystem. This collective effort promises to drive a smarter, more adaptable industrial automation landscape, ensuring open-source solutions stay at the forefront of global manufacturing innovation.
The next Developers' Meeting is slated to be hosted by the ROS-I Consortium EU. You can find all the info for Developers' Meetings over at the Developer Meeting page.
ROS Discourse General: Simple status webpage for a robot in localhost?
Hi, I’m just collecting info on how you solve simple status pages running locally on robots, showing basic information like battery status, driver status, sensor health, etc. But nothing fancy like camera streaming, teleoperation and such. No cloud, everything local!
The use-case is just being able to quickly connect to a robot AP and see the status of important things. This can of course be done via rqt or remote desktop, but a status webpage is much more accessible from phones, tablets etc.
I’ve seen statically generated pages with autoreload (easiest to implement, but very custom).
I guess some people have something on top of rosbridge/RobotWebTools, right? But I haven’t found much info about this.
Introducing Robotics UI: A Web Interface Solution for ROS 2 Robots - sciota robotics seemed interesting, but it never got past 8 commits…
So what do you use?
Is there some automatic /diagnostics_agg → HTML+JS+WS framework?
And no, I don’t count Foxglove, because self-hosted costs… who knows what.
12 posts - 6 participants
ROS Discourse General: Tbai - towards better athletic intelligence
Introducing tbai, a framework designed to democratize robotics and embodied AI and to help us move towards better athletic intelligence.

Drawing inspiration from Hugging Face (more specifically lerobot), tbai implements and makes fully open-source countless state-of-the-art methods for controlling various sorts of robots, including quadrupeds, humanoids, and industrial robotic arms.
With its well-established API and levels of abstraction, users can easily add new controllers while reusing the rest of the infrastructure, including utilities for time synchronization, visualization, config interaction, and state estimation, to name a few.
Everything is built out of lego-like components that can be seamlessly combined into a single, high-performing robot controller pipeline. Its wide pool of already implemented state-of-the-art controllers (many from Robotic Systems Lab), state estimators, and robot interfaces, together with simulation or real-robot deployment abstractions, allows anyone using tbai to easily start playing around and working on novel methods, using the existing framework as a baseline, or to change one component while keeping the rest, thus accelerating the iteration cycle.
No more starting from scratch, no more boilerplate code. Tbai takes care of all of that.
Tbai seeks to support as many robotic platforms as possible. Currently, there are nine robots that have at least one demo prepared, with many more to come. Specifically, we have controllers readily available for ANYmal B, ANYmal C, and ANYmal D from ANYbotics; Go2, Go2W, and G1 from Unitree Robotics; Franka Emika from Franka Robotics; and finally, Spot and Spot with arm from Boston Dynamics.
Tbai is an ongoing project that will continue making strides towards democratizing robotics and embodied AI. If you are a researcher or a tinkerer who is building cool controllers for a robot, be it an already supported robot or a completely new one, please do consider contributing to tbai so that as many people can benefit from your work as possible.
Finally, a huge thanks goes to all researchers and tinkerers who do robotics and publish papers together with their code for other people to learn from. Tbai would not be where it is now if it weren’t for the countless open-source projects it has drawn inspiration from. I hope tbai becomes an inspiration for other projects too.
Thank you all!
Link: https://github.com/tbai-lab/tbai
Link: https://github.com/tbai-lab/tbai_ros
3 posts - 2 participants
ROS Discourse General: [Humble] Upcoming behavior change: Improved log file flushing in rcl_logging_spdlog
Summary
The ROS PMC has approved backporting an improved log file flushing behavior to ROS 2 Humble. This change will be included in an upcoming Humble sync and affects how rcl_logging_spdlog flushes log data to the filesystem.
What’s Changing?
Previously, rcl_logging_spdlog did not explicitly configure flushing behavior, which could result in:
- Missing log messages when an application crashes
- Empty or incomplete log files during debugging sessions
After this update, the logging behavior will:
- Flush log files every 5 seconds (periodic flush)
- Immediately flush on ERROR level messages (flush on error)
This provides a much better debugging experience, especially when investigating crashes or unexpected application terminations.
Compatibility
API/ABI compatible — No rebuild of your packages is required
Behavior change — Log files will be flushed more frequently
How to Revert to the Old Behavior
If you need to restore the previous flushing behavior (no explicit flushing), you can set the following environment variable:
export RCL_LOGGING_SPDLOG_EXPERIMENTAL_OLD_FLUSHING_BEHAVIOR=1
Note: This environment variable is marked as EXPERIMENTAL and is intended as a temporary measure. It may be removed in future ROS 2 releases when full logging configuration file support is implemented. Please do not rely on this variable being available in future versions.
Related Links
- Original PR (rolling): https://github.com/ros2/rcl_logging/pull/95
- Backport PR (humble): change flushing behavior for spdlog log files, and add env var to use old style (no explicit flushing) (backport #95) by mergify[bot] · Pull Request #136 · ros2/rcl_logging · GitHub
- Future logging configuration plans: [rolling] Update maintainers - 2022-11-07 by audrow · Pull Request #96 · ros2/rcl_logging · GitHub
Questions or Concerns?
If you experience any issues with this change or have feedback, please:
- Comment on this thread
- Open an issue on GitHub
Thanks,
Tomoya
2 posts - 2 participants
ROS Discourse General: Guidance on next steps after ROS 2 Jazzy fundamentals for a hospitality robot project
I’m keenly working on a hospitality robot project driven by personal interest and a genuine enthusiasm for robotics, and I’m seeking guidance on what to focus on next.
I currently have a solid grasp of ROS 2 Jazzy fundamentals, including nodes, topics, services, actions, lifecycle nodes, URDF/Xacro, launch files, and executors. I’m comfortable bringing up a robot model and understanding how the ROS 2 system fits together.
My aim is to build a simulation-first MVP for a lobby scenario (greeter, wayfinding, and escort use cases). I’m deliberately keeping the scope practical and do not plan to add arms initially unless they become necessary.
At this stage, I would really value direction from more experienced practitioners on how to progress from foundational ROS knowledge toward a real, working robot.
In particular, I’d appreciate insights on:
- What are the most important areas to focus on after mastering ROS 2 basics?
- Which subsystems are best tackled first, and in what sequence?
- What level of completeness is typically expected in simulation before transitioning to physical hardware?
- Are there recommended ROS 2 packages, example bringups, or architectural patterns well suited for this type of robot?
Any advice, lessons learned, or references that could help shape the next phase of development would be greatly appreciated.
1 post - 1 participant