Planet ROS

Planet ROS - http://planet.ros.org


ROS Discourse General: Announcing rosetta: a ROS 2 ⇄ LeRobot bridge

Announcing rosetta: a ROS 2 ⇄ LeRobot bridge

LeRobot is great for experimenting with state-of-the-art policies and sharing datasets/pretrained models. But actually using or fine-tuning those models on ROS 2 robots isn’t easy.

I’m working on rosetta, a ROS 2 package that helps LeRobot models play nicely with ROS 2 robots.

What’s inside

  • EpisodeRecorderServer — Action-driven recording to rosbag2; each episode stores a task/prompt in bag metadata for later export.
  • bag_to_lerobot.py — Converts one or more bags into a ready-to-train LeRobot v3 dataset (Parquet + MP4 with rich metadata).
  • PolicyBridge — Runs a pretrained LeRobot policy at the contract rate, subscribes to ROS 2 topics as observations, and publishes actions (e.g., geometry_msgs/Twist).
  • Contract YAML — Declares which topics serve as observations and which serve as actions, along with specifics on handling, timing, and rates. The same contract is used from data collection through to inference to keep everything aligned (a rough sketch of the bridging idea follows this list).
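
To make the bridge idea concrete, here is a minimal rclpy sketch in the same spirit: buffer the latest observation, run a policy at a fixed rate, and publish the resulting action. The topic names, the 10 Hz rate, and the policy callable are placeholders for illustration, not rosetta’s actual API:

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist

class MiniPolicyBridge(Node):
    """Illustrative only: observations in, policy actions out."""

    def __init__(self, policy):
        super().__init__('mini_policy_bridge')
        self.policy = policy          # placeholder for a pretrained LeRobot policy
        self.latest_image = None
        self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.create_timer(0.1, self.step)  # the "contract rate", here 10 Hz

    def on_image(self, msg):
        self.latest_image = msg       # keep only the most recent observation

    def step(self):
        if self.latest_image is None:
            return                    # no observation received yet
        vx, wz = self.policy(self.latest_image)
        cmd = Twist()
        cmd.linear.x = float(vx)
        cmd.angular.z = float(wz)
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    # Dummy policy: creep forward regardless of the image
    rclpy.spin(MiniPolicyBridge(policy=lambda image: (0.1, 0.0)))

if __name__ == '__main__':
    main()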

A simple example to start

I’ve posted a TurtleBot3 demo dataset (53 short episodes) and a lightweight ACT checkpoint trained on that data for a couple of hours on a laptop. It’s far from a great dataset (the demonstrations include crashes in places, and I kept the episode count modest), but it should help you get going; then swap in your robot by editing the contract and start iterating.

What’s next

I’ve got a backlog I’m actively working through:

  • More built-in decoders/encoders for common message types
  • A refactor into separate client/policy composable nodes
  • Leveraging an async policy server so inference can run off-robot (no ROS 2 on the GPU box)
  • Fixes for a growing list of bugs

I’d love feedback

Thanks for taking a look!

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/announcing-rosetta-a-ros-2-lerobot-bridge/50657

ROS Discourse General: Extending meta-ros with ros2 + openembedded (yocto)

Hey all,

not sure if this is the right place for it or if I should jump over to Discord. We have our own ROS 2 workspace extending Kilted that we’d like to include in our Yocto image. Is there a preferred way of creating the recipes, and if so, how? After a bit of looking around it seems there’s superflore, but I’m not sure whether it is intended for use outside of meta-ros.
It seems the other option is to manually create the recipes for each package in our workspace, based on the autogenerated ones?
Is there a third option I am missing?

Thanks in advance,
Nico

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/extending-meta-ros-with-ros2-openembedded-yocto/50650

ROS Discourse General: Announcing GitHub syntax highlighting for ROS msg files

Hi folks – wanted to share that msg/srv/action files are now highlighted with pretty colors on GitHub! :tada::rainbow: This works via a new ROS tmLanguage grammar that I added to GitHub Linguist.

For example, Imu.msg now looks like this:
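
The screenshot doesn’t survive in this text digest; for reference, here is an excerpt of the sensor_msgs/Imu.msg definition that now renders with highlighting:

# This is a message to hold data from an IMU (Inertial Measurement Unit)
std_msgs/Header header

geometry_msgs/Quaternion orientation
float64[9] orientation_covariance  # Row major about x, y, z axes

geometry_msgs/Vector3 angular_velocity
float64[9] angular_velocity_covariance  # Row major about x, y, z axes

geometry_msgs/Vector3 linear_acceleration
float64[9] linear_acceleration_covariance  # Row major x, y, z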

You can also get highlighting in multi-line code blocks by tagging them with ```rosmsg.

The same syntax highlighting support is also available for Visual Studio Code via the Robotics Development Extensions maintained by @ooeygui.

Enjoy! I hope this makes your life just a little bit easier :smiling_face:

5 posts - 4 participants

Read full topic

https://discourse.openrobotics.org/t/announcing-github-syntax-highlighting-for-ros-msg-files/50642

ROS Discourse General: Come Join Me At ROSCon 2025 in Singapore

Can’t wait to see you all at ROSCon 2025 in Singapore! :waving_hand:

I’ll be there with my NVIDIA colleagues, and we’re excited to connect with you at our workshops and talks—an awesome chance to learn directly from the experts.

:date: Here are some talks you won’t want to miss:

Monday, October 27th - 8am-12pm SST: Scalable Multi-Robot Scene Workflows Using ROS Simulation Interfaces Standard in Isaac Sim

The new ROS Simulation Interfaces standard (developed in collaboration with O3DE, NVIDIA Isaac Sim, and Gazebo) provides a unified approach to interacting with simulation environments, regardless of the engine used. This workshop highlights the practical implementation of Simulation Interfaces within NVIDIA Isaac Sim and will showcase how the standard facilitates multi-robot scene generation, deployment, and environmental manipulation in a reproducible manner suited to automated testing. Isaac Sim’s implementation offers GPU-accelerated, photorealistic simulation. By harnessing the capabilities of the new Simulation Interfaces standard, researchers and developers in perception, navigation, and manipulation can quickly build workflows, switch between simulators, or use multiple simulators in parallel.

Monday, October 27th - 12:30pm-1:30pm SST: Evolving ROS 2 for Real-Time Control of Learned Policy-Driven Robots

Advancing the state of the art in robotics today requires tackling the challenges of physical AI. For humanoids, this requires running computationally intensive AI models with extensive sets of sensors and actuators all with the strictest demands of real-time performance. There is a clear opportunity for a collaborative effort to grow ROS 2 as the ideal platform and ecosystem for enabling embodied AI agents. This talk will explore a vision for enhancing the ROS 2 core to become accelerator-aware yet accelerator-agnostic, where accelerated compute becomes a first-class partner in learned, real-time control. We will discuss open, vendor-neutral strategies for this integration, aiming to provide the entire community with a more powerful and reliable foundation for developing and deploying intelligent robots.

Tuesday, October 28th - 11:10am SST: On Use of Nav2 Route Server

We introduce Nav2’s newest server, Nav2 Route. It performs sparse route-graph planning and progress tracking, enabling real-time routing across massive regions, deterministic execution, and operations triggered throughout task execution via arbitrary graph-defined metadata. This fills a broad need in the community for production capabilities in indoor logistics and massive outdoor spaces, and for placing limits on navigable regions. The package uses several unique plugin interfaces that let use-case-specific behaviors be composed into bespoke applications easily. This talk will go over the key points of the server, demonstrations, and how to enable it in user applications.

Tuesday, October 28th - 15:50-16:00 SST: SWAGGER: Sparse Waypoint Graph Generation for Efficient Routing

SWAGGER (Sparse WAypoint Graph Generation for Efficient Routing) automatically generates sparse waypoint graphs from occupancy grid maps, enabling efficient path planning through pre-computed connectivity.

Using a multi-stage approach combining skeleton extraction and strategic node placement, SWAGGER creates optimized graphs that reduce computational complexity from O(n²) grid-based search to O(k²) graph search where k << n. Beyond single-robot navigation, these sparse graphs enable efficient large-scale multi-robot planning. We demonstrate practical integration as a drop-in ROS 2 Nav2 global planner plugin and show how the same graphs can guide neural navigation systems like NVIDIA’s COMPASS. This lightning talk will present the core algorithm, showcase integration patterns, and provide actionable insights for immediate adoption in existing ROS 2 navigation stacks.
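
To make the complexity claim concrete, here is a small generic sketch (not SWAGGER’s actual code) of graph search over a sparse waypoint graph: once the graph is precomputed, planning only ever touches the k waypoints rather than every grid cell.

import heapq

def dijkstra(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, edge_cost) pairs."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float('inf')):
            continue  # stale queue entry
        for neighbor, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(neighbor, float('inf')):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    path, node = [goal], goal
    while node != start:  # walk back to reconstruct the route
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Tiny waypoint graph: four nodes instead of thousands of grid cells
graph = {'a': [('b', 1.0)], 'b': [('a', 1.0), ('c', 2.0)],
         'c': [('b', 2.0), ('d', 1.0)], 'd': [('c', 1.0)]}
print(dijkstra(graph, 'a', 'd'))  # ['a', 'b', 'c', 'd']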

Wednesday, October 29th - 17:00-17:10 SST: Building Foundation Models for Generalist Robots: Insights and Challenges in Robot Learning

This session focuses on recent advancements in robot learning, specifically the development of foundation models for general-purpose robots. Key areas will be explored, including imitation learning, reinforcement learning, and vision-language-action (VLA) models. A highlight will be NVIDIA’s GR00T family of open-source VLA models. We will discuss the research work, infrastructure, ROS implementation, and challenges encountered in the domain. Finally, we will discuss the future of robotics enhanced by learning-based systems.

Wednesday, October 29th - 17:00-17:20 SST: Introducing the New ROS Simulation Standard

This talk introduces the simulation interfaces package, a new standard for ROS, and its implementation in popular simulators. There are multiple simulators with ROS integrations; each has unique strengths, and none is best for everyone. Still, the way they are used is often very similar. The new standard makes it easier to build integrations, switch between simulators, or even use multiple simulators in parallel. You will learn how to use features such as spawning robots and other objects, moving things around for testing, stepping through simulation, and querying the virtual world for ground-truth data.

Looking forward to connecting at ROSCon. Hope to see you there!

To learn more about NVIDIA at ROSCon, go here: https://www.nvidia.com/en-us/events/roscon/

Full ROSCon Singapore details here :right_arrow: ROSCon 2025

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/come-join-me-at-roscon-2025-in-singapore/50637

ROS Discourse General: ManyMove v0.2.0 - Setup with Docker container and unified main branch

Hi!
I just created a new main branch for ManyMove: a unified codebase that compiles on both Humble and Jazzy while retaining all previous functionality.

To make testing ManyMove easier, I also created Docker builds that prepare a container ready to run the examples. The most complete container is the one that also sets up xarm_ros2, so you’ll be able to run all the examples.

Here’s an even shorter startup than the one in the docs, for a quick Humble setup. Change MANYMOVE_ROS_WS to your desired folder:

export MANYMOVE_ROS_WS=~/workspaces/dev_ws
mkdir -p ${MANYMOVE_ROS_WS}/src
cd ${MANYMOVE_ROS_WS}/src
git clone https://github.com/pastoriomarco/manymove.git -b main
${MANYMOVE_ROS_WS}/src/manymove/manymove_bringup/docker/run_manymove_xarm_container.sh humble

Then, inside the container, for Panda example:

ros2 launch manymove_bringup panda_moveitcpp_fake_cpp_trees.launch.py

Still inside the container, for UFactory dual robot example:

ros2 launch manymove_bringup dual_moveitcpp_fake_app.launch.py

I’m not sure this unified Humble/Jazzy approach is a good one, but I was investing too much time propagating development across both branches. At least now I have some simple CI and tests to keep it in check, and I only need to update one branch. The dev branch is where I’ll keep developing, merging into main when the new code is stable and checked.

If anyone finds trouble setting up or running ManyMove let me know!


PS: I mainly tested it on Ubuntu 22.04 and 24.04, but it seems to work on WSL2 on Windows 11 too; just make sure to install Docker following the Ubuntu instructions, including the Linux post-install steps.

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/manymove-v0-2-0-setup-with-docker-container-and-unified-main-branch/50630

ROS Discourse General: Join My Workshop at ROSCon Singapore

Excited to head to Singapore next week for ROSCon!

Join me Monday at 12:30 SST for my workshop:
Evolving ROS 2 for Real-Time Control of Learned Policy-Driven Robots

We’ll explore how ROS 2 can evolve to power the next generation of embodied AI, and how we can make it both accelerator-aware and accelerator-agnostic to drive physical AI forward together.
See you there! :robot:
More about ROSCon Singapore :right_arrow: ROSCon 2025

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/join-my-workshop-at-roscon-singapore/50629

ROS Discourse General: New packages for Humble Hawksbill 2025-10-20

Package Updates for humble

Added Packages [61]:

  • ros-humble-bcr-arm: 0.1.1-1
  • ros-humble-bcr-arm-description: 0.1.1-1
  • ros-humble-bcr-arm-gazebo: 0.1.1-1
  • ros-humble-bcr-arm-moveit-config: 0.1.1-1
  • ros-humble-bcr-arm-ros2: 0.1.1-1
  • ros-humble-etsi-its-conversion-srvs: 3.4.0-1
  • ros-humble-etsi-its-conversion-srvs-dbgsym: 3.4.0-1
  • ros-humble-jrl-cmakemodules: 1.1.0-2
  • ros-humble-kuka-kss-message-handler: 1.0.0-1
  • ros-humble-kuka-kss-message-handler-dbgsym: 1.0.0-1
  • ros-humble-kuka-rsi-driver: 1.0.0-1
  • ros-humble-kuka-rsi-driver-dbgsym: 1.0.0-1
  • ros-humble-launch-frontend-py: 0.1.0-1
  • ros-humble-mola-input-lidar-bin-dataset: 2.0.0-1
  • ros-humble-mola-input-lidar-bin-dataset-dbgsym: 2.0.0-1
  • ros-humble-moveit-task-constructor-capabilities: 0.1.3-1
  • ros-humble-moveit-task-constructor-capabilities-dbgsym: 0.1.3-1
  • ros-humble-moveit-task-constructor-core: 0.1.3-1
  • ros-humble-moveit-task-constructor-core-dbgsym: 0.1.3-1
  • ros-humble-moveit-task-constructor-demo: 0.1.3-1
  • ros-humble-moveit-task-constructor-demo-dbgsym: 0.1.3-1
  • ros-humble-moveit-task-constructor-msgs: 0.1.3-1
  • ros-humble-moveit-task-constructor-msgs-dbgsym: 0.1.3-1
  • ros-humble-moveit-task-constructor-visualization: 0.1.3-1
  • ros-humble-moveit-task-constructor-visualization-dbgsym: 0.1.3-1
  • ros-humble-orbbec-camera: 1.5.14-1
  • ros-humble-orbbec-camera-dbgsym: 1.5.14-1
  • ros-humble-orbbec-camera-msgs: 1.5.14-1
  • ros-humble-orbbec-camera-msgs-dbgsym: 1.5.14-1
  • ros-humble-orbbec-description: 1.5.14-1
  • ros-humble-ouster-sensor-msgs: 0.13.14-2
  • ros-humble-ouster-sensor-msgs-dbgsym: 0.13.14-2
  • ros-humble-pal-pro-gripper: 1.7.2-1
  • ros-humble-pal-pro-gripper-bringup: 1.7.2-1
  • ros-humble-pal-pro-gripper-controller-configuration: 1.7.2-1
  • ros-humble-pal-pro-gripper-description: 1.7.2-1
  • ros-humble-pal-pro-gripper-wrapper: 1.7.2-1
  • ros-humble-play-motion-builder: 1.4.0-1
  • ros-humble-play-motion-builder-dbgsym: 1.4.0-1
  • ros-humble-play-motion-builder-msgs: 1.4.0-1
  • ros-humble-play-motion-builder-msgs-dbgsym: 1.4.0-1
  • ros-humble-pluginlib-dbgsym: 5.1.2-1
  • ros-humble-pymoveit2: 4.0.0-1
  • ros-humble-rko-lio: 0.1.6-1
  • ros-humble-rko-lio-dbgsym: 0.1.6-1
  • ros-humble-rmw-stats-shim: 0.2.2-1
  • ros-humble-rmw-stats-shim-dbgsym: 0.2.2-1
  • ros-humble-ros2-fmt-logger: 1.0.1-1
  • ros-humble-ros2-fmt-logger-dbgsym: 1.0.1-1
  • ros-humble-ros2plugin: 5.1.2-1
  • ros-humble-rosgraph-monitor: 0.2.2-1
  • ros-humble-rosgraph-monitor-dbgsym: 0.2.2-1
  • ros-humble-rosgraph-monitor-msgs: 0.2.2-1
  • ros-humble-rosgraph-monitor-msgs-dbgsym: 0.2.2-1
  • ros-humble-rqt-play-motion-builder: 1.4.0-1
  • ros-humble-rqt-play-motion-builder-dbgsym: 1.4.0-1
  • ros-humble-rviz-marker-tools: 0.1.3-1
  • ros-humble-rviz-marker-tools-dbgsym: 0.1.3-1
  • ros-humble-tiago-pro-head-bringup: 1.6.0-1
  • ros-humble-tiago-pro-head-controller-configuration: 1.6.0-1
  • ros-humble-tiago-pro-head-description: 1.6.0-1

Updated Packages [475]:

  • ros-humble-ackermann-steering-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-ackermann-steering-controller-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-action-tutorials-cpp: 0.20.5-1 → 0.20.6-1
  • ros-humble-action-tutorials-cpp-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-action-tutorials-interfaces: 0.20.5-1 → 0.20.6-1
  • ros-humble-action-tutorials-interfaces-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-action-tutorials-py: 0.20.5-1 → 0.20.6-1
  • ros-humble-admittance-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-admittance-controller-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-bicycle-steering-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-bicycle-steering-controller-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-clearpath-common: 1.3.6-1 → 1.3.7-1
  • ros-humble-clearpath-config: 1.3.2-1 → 1.3.3-1
  • ros-humble-clearpath-control: 1.3.6-1 → 1.3.7-1
  • ros-humble-clearpath-customization: 1.3.6-1 → 1.3.7-1
  • ros-humble-clearpath-description: 1.3.6-1 → 1.3.7-1
  • ros-humble-clearpath-generator-common: 1.3.6-1 → 1.3.7-1
  • ros-humble-clearpath-generator-common-dbgsym: 1.3.6-1 → 1.3.7-1
  • ros-humble-clearpath-manipulators: 1.3.6-1 → 1.3.7-1
  • ros-humble-clearpath-manipulators-description: 1.3.6-1 → 1.3.7-1
  • ros-humble-clearpath-mounts-description: 1.3.6-1 → 1.3.7-1
  • ros-humble-clearpath-platform-description: 1.3.6-1 → 1.3.7-1
  • ros-humble-clearpath-sensors-description: 1.3.6-1 → 1.3.7-1
  • ros-humble-coal: 3.0.1-1 → 3.0.2-1
  • ros-humble-coal-dbgsym: 3.0.1-1 → 3.0.2-1
  • ros-humble-composition: 0.20.5-1 → 0.20.6-1
  • ros-humble-composition-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-control-toolbox: 3.6.1-1 → 3.6.2-1
  • ros-humble-control-toolbox-dbgsym: 3.6.1-1 → 3.6.2-1
  • ros-humble-costmap-queue: 1.1.18-1 → 1.1.19-1
  • ros-humble-costmap-queue-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-demo-nodes-cpp: 0.20.5-1 → 0.20.6-1
  • ros-humble-demo-nodes-cpp-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-demo-nodes-cpp-native: 0.20.5-1 → 0.20.6-1
  • ros-humble-demo-nodes-cpp-native-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-demo-nodes-py: 0.20.5-1 → 0.20.6-1
  • ros-humble-diff-drive-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-diff-drive-controller-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-draco-point-cloud-transport: 1.0.11-1 → 1.0.12-1
  • ros-humble-draco-point-cloud-transport-dbgsym: 1.0.11-1 → 1.0.12-1
  • ros-humble-dummy-map-server: 0.20.5-1 → 0.20.6-1
  • ros-humble-dummy-map-server-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-dummy-robot-bringup: 0.20.5-1 → 0.20.6-1
  • ros-humble-dummy-sensors: 0.20.5-1 → 0.20.6-1
  • ros-humble-dummy-sensors-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-dwb-core: 1.1.18-1 → 1.1.19-1
  • ros-humble-dwb-core-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-dwb-critics: 1.1.18-1 → 1.1.19-1
  • ros-humble-dwb-critics-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-dwb-msgs: 1.1.18-1 → 1.1.19-1
  • ros-humble-dwb-msgs-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-dwb-plugins: 1.1.18-1 → 1.1.19-1
  • ros-humble-dwb-plugins-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-dynamixel-hardware-interface: 1.4.14-1 → 1.4.16-1
  • ros-humble-dynamixel-hardware-interface-dbgsym: 1.4.14-1 → 1.4.16-1
  • ros-humble-effort-controllers: 2.50.0-1 → 2.50.1-1
  • ros-humble-effort-controllers-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-eigenpy: 3.11.0-1 → 3.12.0-1
  • ros-humble-eigenpy-dbgsym: 3.11.0-1 → 3.12.0-1
  • ros-humble-eiquadprog: 1.2.9-2 → 1.3.0-1
  • ros-humble-eiquadprog-dbgsym: 1.2.9-2 → 1.3.0-1
  • ros-humble-etsi-its-cam-coding: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cam-coding-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cam-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cam-msgs: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cam-msgs-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cam-ts-coding: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cam-ts-coding-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cam-ts-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cam-ts-msgs: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cam-ts-msgs-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-coding: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-conversion-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cpm-ts-coding: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cpm-ts-coding-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cpm-ts-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cpm-ts-msgs: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-cpm-ts-msgs-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-denm-coding: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-denm-coding-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-denm-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-denm-msgs: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-denm-msgs-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-denm-ts-coding: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-denm-ts-coding-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-denm-ts-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-denm-ts-msgs: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-denm-ts-msgs-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-mapem-ts-coding: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-mapem-ts-coding-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-mapem-ts-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-mapem-ts-msgs: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-mapem-ts-msgs-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-mcm-uulm-coding: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-mcm-uulm-coding-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-mcm-uulm-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-mcm-uulm-msgs: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-mcm-uulm-msgs-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-messages: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-msgs: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-msgs-utils: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-primitives-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-rviz-plugins: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-rviz-plugins-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-spatem-ts-coding: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-spatem-ts-coding-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-spatem-ts-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-spatem-ts-msgs: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-spatem-ts-msgs-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-vam-ts-coding: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-vam-ts-coding-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-vam-ts-conversion: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-vam-ts-msgs: 3.3.0-1 → 3.4.0-1
  • ros-humble-etsi-its-vam-ts-msgs-dbgsym: 3.3.0-1 → 3.4.0-1
  • ros-humble-fastrtps-cmake-module: 2.2.2-2 → 2.2.3-1
  • ros-humble-force-torque-sensor-broadcaster: 2.50.0-1 → 2.50.1-1
  • ros-humble-force-torque-sensor-broadcaster-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-forward-command-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-forward-command-controller-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-foxglove-bridge: 0.8.5-1 → 3.2.1-1
  • ros-humble-foxglove-bridge-dbgsym: 0.8.5-1 → 3.2.1-1
  • ros-humble-foxglove-msgs: 2.3.0-1 → 3.2.1-1
  • ros-humble-foxglove-msgs-dbgsym: 2.3.0-1 → 3.2.1-1
  • ros-humble-fri-configuration-controller: 0.9.2-1 → 1.0.0-1
  • ros-humble-fri-configuration-controller-dbgsym: 0.9.2-1 → 1.0.0-1
  • ros-humble-fri-state-broadcaster: 0.9.2-1 → 1.0.0-1
  • ros-humble-fri-state-broadcaster-dbgsym: 0.9.2-1 → 1.0.0-1
  • ros-humble-gpio-controllers: 2.50.0-1 → 2.50.1-1
  • ros-humble-gpio-controllers-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-gripper-controllers: 2.50.0-1 → 2.50.1-1
  • ros-humble-gripper-controllers-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-gz-ros2-control: 0.7.16-1 → 0.7.17-1
  • ros-humble-gz-ros2-control-dbgsym: 0.7.16-1 → 0.7.17-1
  • ros-humble-gz-ros2-control-demos: 0.7.16-1 → 0.7.17-1
  • ros-humble-gz-ros2-control-demos-dbgsym: 0.7.16-1 → 0.7.17-1
  • ros-humble-gz-ros2-control-tests: 0.7.16-1 → 0.7.17-1
  • ros-humble-hebi-cpp-api: 3.13.0-1 → 3.15.0-1
  • ros-humble-hebi-cpp-api-dbgsym: 3.13.0-1 → 3.15.0-1
  • ros-humble-ign-ros2-control: 0.7.16-1 → 0.7.17-1
  • ros-humble-ign-ros2-control-demos: 0.7.16-1 → 0.7.17-1
  • ros-humble-ign-ros2-control-demos-dbgsym: 0.7.16-1 → 0.7.17-1
  • ros-humble-iiqka-moveit-example: 0.9.2-1 → 1.0.0-1
  • ros-humble-iiqka-moveit-example-dbgsym: 0.9.2-1 → 1.0.0-1
  • ros-humble-image-tools: 0.20.5-1 → 0.20.6-1
  • ros-humble-image-tools-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-imu-sensor-broadcaster: 2.50.0-1 → 2.50.1-1
  • ros-humble-imu-sensor-broadcaster-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-intra-process-demo: 0.20.5-1 → 0.20.6-1
  • ros-humble-intra-process-demo-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-joint-group-impedance-controller: 0.9.2-1 → 1.0.0-1
  • ros-humble-joint-group-impedance-controller-dbgsym: 0.9.2-1 → 1.0.0-1
  • ros-humble-joint-state-broadcaster: 2.50.0-1 → 2.50.1-1
  • ros-humble-joint-state-broadcaster-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-joint-trajectory-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-joint-trajectory-controller-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-kinematics-interface: 0.4.0-1 → 0.4.1-1
  • ros-humble-kinematics-interface-dbgsym: 0.4.0-1 → 0.4.1-1
  • ros-humble-kinematics-interface-kdl: 0.4.0-1 → 0.4.1-1
  • ros-humble-kinematics-interface-kdl-dbgsym: 0.4.0-1 → 0.4.1-1
  • ros-humble-kitti-metrics-eval: 1.9.0-1 → 2.0.0-1
  • ros-humble-kitti-metrics-eval-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-kompass: 0.3.1-1 → 0.3.2-1
  • ros-humble-kompass-interfaces: 0.3.1-1 → 0.3.2-1
  • ros-humble-kompass-interfaces-dbgsym: 0.3.1-1 → 0.3.2-1
  • ros-humble-kuka-control-mode-handler: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-control-mode-handler-dbgsym: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-controllers: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-driver-interfaces: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-driver-interfaces-dbgsym: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-drivers: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-drivers-core: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-drivers-core-dbgsym: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-event-broadcaster: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-event-broadcaster-dbgsym: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-iiqka-eac-driver: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-iiqka-eac-driver-dbgsym: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-rsi-simulator: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-sunrise-fri-driver: 0.9.2-1 → 1.0.0-1
  • ros-humble-kuka-sunrise-fri-driver-dbgsym: 0.9.2-1 → 1.0.0-1
  • ros-humble-laser-geometry: 2.4.0-2 → 2.4.1-1
  • ros-humble-laser-geometry-dbgsym: 2.4.0-2 → 2.4.1-1
  • ros-humble-launch: 1.0.10-1 → 1.0.11-1
  • ros-humble-launch-pytest: 1.0.10-1 → 1.0.11-1
  • ros-humble-launch-testing: 1.0.10-1 → 1.0.11-1
  • ros-humble-launch-testing-ament-cmake: 1.0.10-1 → 1.0.11-1
  • ros-humble-launch-xml: 1.0.10-1 → 1.0.11-1
  • ros-humble-launch-yaml: 1.0.10-1 → 1.0.11-1
  • ros-humble-lifecycle: 0.20.5-1 → 0.20.6-1
  • ros-humble-lifecycle-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-lifecycle-py: 0.20.5-1 → 0.20.6-1
  • ros-humble-logging-demo: 0.20.5-1 → 0.20.6-1
  • ros-humble-logging-demo-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-mecanum-drive-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-mecanum-drive-controller-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-message-filters: 4.3.8-1 → 4.3.11-1
  • ros-humble-message-filters-dbgsym: 4.3.8-1 → 4.3.11-1
  • ros-humble-mola: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-bridge-ros2: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-bridge-ros2-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-demos: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-imu-preintegration: 1.10.0-1 → 1.13.1-1
  • ros-humble-mola-imu-preintegration-dbgsym: 1.10.0-1 → 1.13.1-1
  • ros-humble-mola-input-euroc-dataset: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-euroc-dataset-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-kitti-dataset: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-kitti-dataset-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-kitti360-dataset: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-kitti360-dataset-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-mulran-dataset: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-mulran-dataset-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-paris-luco-dataset: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-paris-luco-dataset-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-rawlog: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-rawlog-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-rosbag2: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-rosbag2-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-video: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-input-video-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-kernel: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-kernel-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-launcher: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-launcher-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-lidar-odometry: 0.9.0-1 → 1.0.0-1
  • ros-humble-mola-lidar-odometry-dbgsym: 0.9.0-1 → 1.0.0-1
  • ros-humble-mola-metric-maps: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-metric-maps-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-msgs: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-msgs-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-pose-list: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-pose-list-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-relocalization: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-relocalization-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-state-estimation: 1.10.0-1 → 1.11.0-1
  • ros-humble-mola-state-estimation-simple: 1.10.0-1 → 1.11.0-1
  • ros-humble-mola-state-estimation-simple-dbgsym: 1.10.0-1 → 1.11.0-1
  • ros-humble-mola-state-estimation-smoother: 1.10.0-1 → 1.11.0-1
  • ros-humble-mola-state-estimation-smoother-dbgsym: 1.10.0-1 → 1.11.0-1
  • ros-humble-mola-traj-tools: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-traj-tools-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-viz: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-viz-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-yaml: 1.9.0-1 → 2.0.0-1
  • ros-humble-mola-yaml-dbgsym: 1.9.0-1 → 2.0.0-1
  • ros-humble-mp2p-icp: 1.8.0-1 → 2.0.0-1
  • ros-humble-mp2p-icp-dbgsym: 1.8.0-1 → 2.0.0-1
  • ros-humble-mqtt-client: 2.4.1-1 → 2.4.1-2
  • ros-humble-mqtt-client-dbgsym: 2.4.1-1 → 2.4.1-2
  • ros-humble-mqtt-client-interfaces: 2.4.1-1 → 2.4.1-2
  • ros-humble-mqtt-client-interfaces-dbgsym: 2.4.1-1 → 2.4.1-2
  • ros-humble-mrpt-apps: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-apps-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libapps: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libapps-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libbase: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libbase-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libgui: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libgui-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libhwdrivers: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libhwdrivers-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libmaps: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libmaps-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libmath: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libmath-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libnav: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libnav-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libobs: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libobs-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libopengl: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libopengl-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libposes: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libposes-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libros-bridge: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libros-bridge-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libslam: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libslam-dbgsym: 2.14.12-1 → 2.14.16-1
  • ros-humble-mrpt-libtclap: 2.14.12-1 → 2.14.16-1
  • ros-humble-namosim: 0.0.3-1 → 0.0.4-2
  • ros-humble-nav-2d-msgs: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav-2d-msgs-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav-2d-utils: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav-2d-utils-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-amcl: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-amcl-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-behavior-tree: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-behavior-tree-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-behaviors: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-behaviors-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-bringup: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-bt-navigator: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-bt-navigator-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-collision-monitor: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-collision-monitor-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-common: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-constrained-smoother: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-constrained-smoother-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-controller: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-controller-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-core: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-costmap-2d: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-costmap-2d-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-dwb-controller: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-graceful-controller: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-graceful-controller-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-lifecycle-manager: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-lifecycle-manager-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-map-server: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-map-server-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-mppi-controller: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-mppi-controller-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-msgs: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-msgs-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-navfn-planner: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-navfn-planner-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-planner: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-planner-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-regulated-pure-pursuit-controller: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-regulated-pure-pursuit-controller-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-rotation-shim-controller: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-rotation-shim-controller-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-rviz-plugins: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-rviz-plugins-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-simple-commander: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-smac-planner: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-smac-planner-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-smoother: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-smoother-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-system-tests: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-system-tests-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-theta-star-planner: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-theta-star-planner-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-util: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-util-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-velocity-smoother: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-velocity-smoother-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-voxel-grid: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-voxel-grid-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-waypoint-follower: 1.1.18-1 → 1.1.19-1
  • ros-humble-nav2-waypoint-follower-dbgsym: 1.1.18-1 → 1.1.19-1
  • ros-humble-navigation2: 1.1.18-1 → 1.1.19-1
  • ros-humble-novatel-oem7-driver: 20.7.0-1 → 20.8.0-1
  • ros-humble-novatel-oem7-driver-dbgsym: 20.7.0-1 → 20.8.0-1
  • ros-humble-novatel-oem7-msgs: 20.7.0-1 → 20.8.0-1
  • ros-humble-novatel-oem7-msgs-dbgsym: 20.7.0-1 → 20.8.0-1
  • ros-humble-pal-gazebo-plugins: 4.0.6-1 → 4.1.0-1
  • ros-humble-pal-gazebo-plugins-dbgsym: 4.0.6-1 → 4.1.0-1
  • ros-humble-pal-statistics: 2.6.4-1 → 2.7.0-1
  • ros-humble-pal-statistics-dbgsym: 2.6.4-1 → 2.7.0-1
  • ros-humble-pal-statistics-msgs: 2.6.4-1 → 2.7.0-1
  • ros-humble-pal-statistics-msgs-dbgsym: 2.6.4-1 → 2.7.0-1
  • ros-humble-pendulum-control: 0.20.5-1 → 0.20.6-1
  • ros-humble-pendulum-control-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-pendulum-msgs: 0.20.5-1 → 0.20.6-1
  • ros-humble-pendulum-msgs-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-pid-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-pid-controller-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-pinocchio: 3.6.0-1 → 3.8.0-1
  • ros-humble-pinocchio-dbgsym: 3.6.0-1 → 3.8.0-1
  • ros-humble-play-motion2: 1.5.3-1 → 1.7.0-1
  • ros-humble-play-motion2-dbgsym: 1.5.3-1 → 1.7.0-1
  • ros-humble-play-motion2-msgs: 1.5.3-1 → 1.7.0-1
  • ros-humble-play-motion2-msgs-dbgsym: 1.5.3-1 → 1.7.0-1
  • ros-humble-plotjuggler: 3.10.11-1 → 3.13.2-1
  • ros-humble-plotjuggler-dbgsym: 3.10.11-1 → 3.13.2-1
  • ros-humble-pluginlib: 5.1.0-3 → 5.1.2-1
  • ros-humble-point-cloud-interfaces: 1.0.11-1 → 1.0.12-1
  • ros-humble-point-cloud-interfaces-dbgsym: 1.0.11-1 → 1.0.12-1
  • ros-humble-point-cloud-transport: 1.0.18-1 → 1.0.19-1
  • ros-humble-point-cloud-transport-dbgsym: 1.0.18-1 → 1.0.19-1
  • ros-humble-point-cloud-transport-plugins: 1.0.11-1 → 1.0.12-1
  • ros-humble-point-cloud-transport-py: 1.0.18-1 → 1.0.19-1
  • ros-humble-pose-broadcaster: 2.50.0-1 → 2.50.1-1
  • ros-humble-pose-broadcaster-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-position-controllers: 2.50.0-1 → 2.50.1-1
  • ros-humble-position-controllers-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-python-qt-binding: 1.1.2-1 → 1.1.3-1
  • ros-humble-quality-of-service-demo-cpp: 0.20.5-1 → 0.20.6-1
  • ros-humble-quality-of-service-demo-cpp-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-quality-of-service-demo-py: 0.20.5-1 → 0.20.6-1
  • ros-humble-range-sensor-broadcaster: 2.50.0-1 → 2.50.1-1
  • ros-humble-range-sensor-broadcaster-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-rcpputils: 2.4.5-1 → 2.4.6-1
  • ros-humble-rcpputils-dbgsym: 2.4.5-1 → 2.4.6-1
  • ros-humble-realtime-tools: 2.14.0-1 → 2.14.1-1
  • ros-humble-realtime-tools-dbgsym: 2.14.0-1 → 2.14.1-1
  • ros-humble-rmw-fastrtps-cpp: 6.2.8-1 → 6.2.9-1
  • ros-humble-rmw-fastrtps-cpp-dbgsym: 6.2.8-1 → 6.2.9-1
  • ros-humble-rmw-fastrtps-dynamic-cpp: 6.2.8-1 → 6.2.9-1
  • ros-humble-rmw-fastrtps-dynamic-cpp-dbgsym: 6.2.8-1 → 6.2.9-1
  • ros-humble-rmw-fastrtps-shared-cpp: 6.2.8-1 → 6.2.9-1
  • ros-humble-rmw-fastrtps-shared-cpp-dbgsym: 6.2.8-1 → 6.2.9-1
  • ros-humble-rmw-zenoh-cpp: 0.1.6-1 → 0.1.7-1
  • ros-humble-rmw-zenoh-cpp-dbgsym: 0.1.6-1 → 0.1.7-1
  • ros-humble-ros2-controllers: 2.50.0-1 → 2.50.1-1
  • ros-humble-ros2-controllers-test-nodes: 2.50.0-1 → 2.50.1-1
  • ros-humble-rosidl-generator-py: 0.14.5-1 → 0.14.6-1
  • ros-humble-rosidl-typesupport-fastrtps-c: 2.2.2-2 → 2.2.3-1
  • ros-humble-rosidl-typesupport-fastrtps-c-dbgsym: 2.2.2-2 → 2.2.3-1
  • ros-humble-rosidl-typesupport-fastrtps-cpp: 2.2.2-2 → 2.2.3-1
  • ros-humble-rosidl-typesupport-fastrtps-cpp-dbgsym: 2.2.2-2 → 2.2.3-1
  • ros-humble-rpyutils: 0.2.1-2 → 0.2.2-1
  • ros-humble-rqt: 1.1.7-1 → 1.1.8-1
  • ros-humble-rqt-gui: 1.1.7-1 → 1.1.8-1
  • ros-humble-rqt-gui-cpp: 1.1.7-1 → 1.1.8-1
  • ros-humble-rqt-gui-cpp-dbgsym: 1.1.7-1 → 1.1.8-1
  • ros-humble-rqt-gui-py: 1.1.7-1 → 1.1.8-1
  • ros-humble-rqt-joint-trajectory-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-rqt-py-common: 1.1.7-1 → 1.1.8-1
  • ros-humble-rviz-assimp-vendor: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz-common: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz-common-dbgsym: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz-default-plugins: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz-default-plugins-dbgsym: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz-ogre-vendor: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz-ogre-vendor-dbgsym: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz-rendering: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz-rendering-dbgsym: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz-rendering-tests: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz-visual-testing-framework: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz2: 11.2.20-1 → 11.2.22-1
  • ros-humble-rviz2-dbgsym: 11.2.20-1 → 11.2.22-1
  • ros-humble-steering-controllers-library: 2.50.0-1 → 2.50.1-1
  • ros-humble-steering-controllers-library-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-tecgihan-driver: 0.1.1-1 → 0.1.2-1
  • ros-humble-teleop-twist-keyboard: 2.4.0-1 → 2.4.1-1
  • ros-humble-topic-monitor: 0.20.5-1 → 0.20.6-1
  • ros-humble-topic-statistics-demo: 0.20.5-1 → 0.20.6-1
  • ros-humble-topic-statistics-demo-dbgsym: 0.20.5-1 → 0.20.6-1
  • ros-humble-tricycle-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-tricycle-controller-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-tricycle-steering-controller: 2.50.0-1 → 2.50.1-1
  • ros-humble-tricycle-steering-controller-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-tsid: 1.8.0-1 → 1.9.0-1
  • ros-humble-tsid-dbgsym: 1.8.0-1 → 1.9.0-1
  • ros-humble-turtle-nest: 1.2.0-1 → 1.2.1-1
  • ros-humble-turtle-nest-dbgsym: 1.2.0-1 → 1.2.1-1
  • ros-humble-turtlesim: 1.4.2-1 → 1.4.3-1
  • ros-humble-turtlesim-dbgsym: 1.4.2-1 → 1.4.3-1
  • ros-humble-ur: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-bringup: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-calibration: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-calibration-dbgsym: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-client-library: 2.3.0-1 → 2.4.0-1
  • ros-humble-ur-client-library-dbgsym: 2.3.0-1 → 2.4.0-1
  • ros-humble-ur-controllers: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-controllers-dbgsym: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-dashboard-msgs: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-dashboard-msgs-dbgsym: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-description: 2.7.0-1 → 2.8.0-1
  • ros-humble-ur-moveit-config: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-robot-driver: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-robot-driver-dbgsym: 2.8.1-1 → 2.9.0-1
  • ros-humble-ur-simulation-gz: 0.3.0-1 → 0.5.0-1
  • ros-humble-velocity-controllers: 2.50.0-1 → 2.50.1-1
  • ros-humble-velocity-controllers-dbgsym: 2.50.0-1 → 2.50.1-1
  • ros-humble-yasmin: 3.4.0-1 → 3.5.0-1
  • ros-humble-yasmin-dbgsym: 3.4.0-1 → 3.5.0-1
  • ros-humble-yasmin-demos: 3.4.0-1 → 3.5.0-1
  • ros-humble-yasmin-demos-dbgsym: 3.4.0-1 → 3.5.0-1
  • ros-humble-yasmin-msgs: 3.4.0-1 → 3.5.0-1
  • ros-humble-yasmin-msgs-dbgsym: 3.4.0-1 → 3.5.0-1
  • ros-humble-yasmin-ros: 3.4.0-1 → 3.5.0-1
  • ros-humble-yasmin-ros-dbgsym: 3.4.0-1 → 3.5.0-1
  • ros-humble-yasmin-viewer: 3.4.0-1 → 3.5.0-1
  • ros-humble-yasmin-viewer-dbgsym: 3.4.0-1 → 3.5.0-1
  • ros-humble-zed-msgs: 5.0.1-2 → 5.1.0-1
  • ros-humble-zed-msgs-dbgsym: 5.0.1-2 → 5.1.0-1
  • ros-humble-zenoh-cpp-vendor: 0.1.6-1 → 0.1.7-1
  • ros-humble-zenoh-cpp-vendor-dbgsym: 0.1.6-1 → 0.1.7-1
  • ros-humble-zenoh-security-tools: 0.1.6-1 → 0.1.7-1
  • ros-humble-zenoh-security-tools-dbgsym: 0.1.6-1 → 0.1.7-1
  • ros-humble-zlib-point-cloud-transport: 1.0.11-1 → 1.0.12-1
  • ros-humble-zlib-point-cloud-transport-dbgsym: 1.0.11-1 → 1.0.12-1
  • ros-humble-zstd-point-cloud-transport: 1.0.11-1 → 1.0.12-1
  • ros-humble-zstd-point-cloud-transport-dbgsym: 1.0.11-1 → 1.0.12-1

Removed Packages [2]:

  • ros-humble-kuka-kss-rsi-driver
  • ros-humble-kuka-kss-rsi-driver-dbgsym

Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:

  • Aditya Pande
  • Aina Irisarri
  • Alberto Tudela
  • Alejandro Hernandez Cordero
  • Alexey Merzlyakov
  • Andrej Orsula
  • Aron Svastits
  • Audrow Nash
  • Automatika Robotics
  • Bence Magyar
  • Brian Wilcox
  • Carl Delsey
  • Carlos Orduno
  • Chris Bollinger
  • Chris Lalancette
  • David Brown
  • David V. Lu!!
  • David ter Kuile
  • Davide Faconti
  • Denis Štogl
  • Dorian Scholz
  • Emerson Knapp
  • Ethan Gao
  • Felix Exner
  • Foxglove
  • Guilhem Saurel
  • Jacob Perron
  • Janne Karttunen
  • Jean-Pierre Busch
  • Jeremie Deray
  • Joe Dong
  • Jordan Palacios
  • Jose Luis Blanco-Claraco
  • Joseph Mirabel
  • Justin Carpentier
  • Kristof Matyas Pasztor
  • Lennart Reiher
  • Luis Camero
  • Mabel Zhang
  • Martin Pecka
  • Matej Vargovcik
  • Meher Malladi
  • Michael Goerner
  • Michael Jeronimo
  • Michael v4hn Goerner
  • Michel Hidalgo
  • Miguel Ángel González Santamarta
  • Mohammad Haghighipanah
  • Noel Jimenez
  • NovAtel Support
  • Oscar Martinez
  • Pyo
  • Robert Haschke
  • STEREOLABS
  • Shane Loretz
  • Shigeru Wakida
  • Steve Macenski
  • Tim Clephas
  • Vimarsh Shah
  • Yadunund
  • davidfernandez
  • miguel
  • ouster developers
  • steve
  • toosimple
  • user

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/new-packages-for-humble-hawksbill-2025-10-20/50628

ROS Discourse General: RclGo v0.4.1 Release

:rocket: rclgo v0.4.1 Released - ROS 2 Filesystem Logging & CLI Parameter Overrides

I’m excited to announce rclgo v0.4.1, a Go client library for ROS 2 Humble! This release brings two critical features for production robotics deployments.

What’s New

:white_check_mark: ROS 2 Filesystem Logging

rclgo nodes now properly write logs to ~/.ros/log/ with full ROS 2 formatting, matching rclcpp/rclpy behavior:

  • Logs automatically written to ~/.ros/log/<node_name>.log
  • Compatible with ROS 2 logging infrastructure (spdlog backend)
  • Works seamlessly with ros2 launch and standalone execution

:white_check_mark: CLI Parameter Overrides

Full support for command-line parameter overrides, a critical feature for dynamic configuration:
ros2 run my_package my_node --ros-args -p camera.fps:=60 -p exposure:=0.05

  • Compatible with launch file LaunchConfiguration substitutions (illustrated below)
  • Updates existing declared parameters or declares new ones
  • Supports all ROS 2 parameter types (bool, int64, double, string, arrays)
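
As a hypothetical illustration of that launch-file compatibility (the package, executable, and parameter names are borrowed from the example command above, not taken from rclgo itself), a standard ROS 2 Python launch file can forward a LaunchConfiguration into a parameter override:

from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Exposed as `ros2 launch ... fps:=30` and forwarded as a parameter
        DeclareLaunchArgument('fps', default_value='60'),
        Node(
            package='my_package',
            executable='my_node',
            parameters=[{'camera.fps': LaunchConfiguration('fps')}],
        ),
    ])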

Why rclgo?

rclgo enables writing ROS 2 nodes in Go, bringing:

  • Performance: Native compiled binaries, efficient concurrency with goroutines
  • Simplicity: Clean, idiomatic Go APIs for ROS 2 concepts
  • Production-ready: Growing feature parity with rclcpp/rclpy

Current Feature Support

  • :white_check_mark: Publisher/Subscriber
  • :white_check_mark: Services
  • :white_check_mark: Parameters (declare, get, set, YAML, CLI overrides)
  • :white_check_mark: QoS Policies
  • :white_check_mark: ROS Time & /use_sim_time
  • :white_check_mark: Named loggers with filesystem output

:construction: Coming soon: Actions, Lifecycle nodes, Multi-threaded executor

Installation

go get github.com/merlindrones/rclgo@v0.4.1

Full documentation and examples: https://github.com/MerlinDrones/rclgo (Go bindings for ROS 2, forked from https://github.com/tiiuae/rclgo)

Feedback Welcome

This project targets ROS 2 Humble and aims for production-grade parity with rclcpp/rclpy. Feedback, bug reports, and contributions are very welcome!

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/rclgo-v0-4-1-release/50619

ROS Discourse General: SDF2MAP → Generate ROS 2D Maps Directly from GzSim Worlds

SDF2MAP — SDF to Occupancy Map Converter

SDF2MAP is a lightweight desktop tool that converts Gazebo SDF / World files into 2D occupancy grid maps compatible with the ROS navigation stack.

It automates generating .pgm and .yaml maps directly from simulation environments — eliminating the need for manual map creation.

  • Quickly test navigation and localization algorithms using GzSim maps.

  • Extract 2D map layers at specific heights to match your robot’s LIDAR sensor placement.

  • Seamlessly handle worlds generated by PyRoboSim.
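
For context, the .yaml file that accompanies a .pgm map follows the standard ROS map server schema; a typical (illustrative) file looks like this:

image: map.pgm            # the generated occupancy image
resolution: 0.05          # meters per pixel
origin: [0.0, 0.0, 0.0]   # [x, y, yaw] of the map's lower-left corner
negate: 0                 # 0: white is free, black is occupied
occupied_thresh: 0.65     # occupancy above this is treated as occupied
free_thresh: 0.196        # occupancy below this is treated as free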

Download & Run:

Get the latest release and instructions from GitHub:



1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/sdf2map-generate-ros-2d-maps-directly-from-gzsim-worlds/50598

ROS Discourse General: Wanted your feedback on an Agentic ROS Robotics development platform

Hey everyone,

I’ve been working with my team on a project that might interest those here who spend time with ROS, Gazebo, or general robotics tooling.

We recently opened up an early version of OORB Studio, a browser-based environment where you can build and test robots using natural-language commands.
You can see a quick demo video here: LinkedIn post

It’s still very raw, plenty of bugs, rough edges, and missing features, but we’re sharing it early to get feedback from people who actually use ROS in practice.

If you’ve ever found yourself juggling too many tools just to get a basic prototype running, I’d love for you to give it a spin and tell us what works, what doesn’t, and what’s confusing.
You can try it out here: oorb.io

We also recently joined the Blueprint Residency program in San Francisco, where we’re focusing on improving the system architecture and learning from other robotics builders.

Any insights or feedback from this community would mean a lot, especially from those teaching, experimenting, or deploying with ROS2. The goal isn’t to promote anything, just to learn from real users and keep improving.

Thanks for reading, and I’m looking forward to your thoughts.

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/wanted-your-feedback-on-an-agentic-ros-robotics-development-platform/50592

ROS Discourse General: Hector Recorder: A terminal-based rosbag UI

Hi everyone,

I’d like to share our Hector Recorder with you, a terminal-based UI for recording ROS2 bags.

It can be used just like ros2 bag record, but it also displays live statistics: topics, message counts, bandwidth, …

In addition, you can load all settings from a YAML file, allowing for convenient configuration and reproducible recording setups.

Optionally, the statistics can be published on a topic for further live inspection.

This tool is part of Team Hector’s default onboard logging setup and has been used in the creation of multiple datasets. We hope it proves useful for the community!

On a related note, I also made a small Nautilus extension that lets you view ROS 2 bag metadata right in your file browser: https://github.com/JoLichtenfeld/nautilus_ros2_bag_info

Best,
Jonathan Lichtenfeld

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/hector-recorder-a-terminal-based-rosbag-ui/50577

ROS Discourse General: ROS2 support for Daheng Imaging VEN's cameras

Hi to everyone again! I’ve recently published a ROS 2 package for Daheng Imaging VEN cameras. Take a look if you have one or are interested :slight_smile:

2 posts - 2 participants

Read full topic

https://discourse.openrobotics.org/t/ros2-support-for-daheng-imaging-vens-cameras/50568

ROS Discourse General: INSAION with Sergi Grau-Moya and Victor Massagué Respall | Cloud Robotics WG Meeting 2025-10-22

The CRWG is pleased to welcome Sergi Grau-Moya, Co-founder and CEO of INSAION, and Victor Massagué Respall, Co-founder and CTO of INSAION, to our next meeting on Wed, Oct 22, 2025, from 4:00 to 5:00 PM UTC. INSAION provides an observability platform for your robot fleet, allowing users to optimise robot operations and explore advanced robot diagnostics. Sergi and Victor will share the purpose of the company and show some of the capabilities of the software, adding to the group’s research on Logging & Observability.

Last meeting, Carter Schultz of AMP joined the CRWG to discuss how AMP manages large deployments and the pain points they see from doing so. If you would like to see the talk for yourself, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/insaion-with-sergi-grau-moya-and-victor-massague-respall-cloud-robotics-wg-meeting-2025-10-22/50547

ROS Discourse General: Announcing DepthAI ROS 2 Kilted Driver Release for Luxonis OAK cameras

Big update for ROS developers!
We’ve just released new DepthAI ROS packages - now built for ROS 2 Kilted and powered by DepthAI V3. :tada:
Highlights include:
:small_blue_diamond: RVC4 support – works with new OAK4 devices
:small_blue_diamond: Streamlined NN creation & renamed parameters for cleaner configs
:small_blue_diamond: New RGBD & Pointcloud nodes for faster, colored pointclouds
:small_blue_diamond: Thermal node support for thermal-equipped devices
:small_blue_diamond: Improved Camera nodes with undistorted stream options
:small_blue_diamond: Experimental VIO node (RVC4 support coming soon!)
We’ve also refined socket/frame naming, simplified examples, and added updated Rviz configs.
:backhand_index_pointing_right: See the release blogpost: Luxonis Forum
:backhand_index_pointing_right: Dive into the details and full changelog in our docs: Driver

5 posts - 4 participants

Read full topic

https://discourse.openrobotics.org/t/announcing-depthai-ros-2-kilted-driver-release-for-luxonis-oak-cameras/50546

ROS Discourse General: How do you access changelogs for ROS 2 releases?

Back in the times of ROS 1, I used to read the “New packages for Turtly Turtle” posts, click on the packages I was interested in (which usually led to the ROS wiki), and right there I could look at the changelog for that particular distro.

I haven’t found anything like that in ROS 2. The links usually lead either to external doc sites (like Nav2 or ros2_control) or to GitHub (if lucky, directly to the correct branch). Then I have to go through the commit history, find the changelog-updating commit, and look around (or open one package and check its CHANGELOG.rst). There is also an attempt at changelogs on index.ros.org, but it apparently doesn’t work: the nav2_planner package overview, for example, shows an empty changelog. And even if it worked correctly, there’s no direct link from the news post to the package’s page on index.ros.org (but this shouldn’t be that difficult, right?).

Is there something more comfortable?

3 posts - 2 participants

Read full topic

https://discourse.openrobotics.org/t/how-do-you-access-changelogs-for-ros-2-releases/50530

ROS Discourse General: Announcing Robot Lock: coordinate work on dev robots

Announcing a tiny but useful new Transitive capability!

Robot Lock

Indicate to others that you are working on a robot, and, optionally, what you are doing.

Most robotics companies suffer from a lack of development robots to test on. As a result, team members spend time tracking down robots that are available for testing, often checking Slack and other free-form communication channels to find out whether anyone else is using a particular robot or not. This is a waste of time.

Robot Lock solves this by giving members the ability to “lock” a robot, optionally adding a note describing what they are doing. Once locked, only they are able to unlock the robot again. As with all Transitive capabilities, the front-end UI, showing the locked status and a toggle, can be embedded in any web page, showing either the status of just one specific robot or the locks of all robots in the fleet. In addition, the shell on the robot itself can be configured to show the lock status directly in the prompt (PS1).

Demo


Getting Started

Setting this up on your robots only takes a few minutes:

  • create an account on https://transitiverobotics.com,
  • add your robot to your account by installing the agent using the provided curl command or by running the Docker image we provide, and
  • add the Robot Lock capability from your Device page.

If you prefer to self-host, follow Setup | Transitive Robotics.

This capability is open-source.

About Transitive

Transitive is an open-source framework for building full-stack robotic capabilities that combine functionality on robot, cloud, and web. Transitive provides data-synchronization, deployment, and sandboxing, making it easy to build components for fleet management and operation that are accessible to anyone over the web – no VPN required. Transitive Robotics operates a hosted offering of Transitive and offers a number of ready-to-go, commercially supported capabilities such as low-latency video streaming, remote teleoperation, live map display, a React ROS SDK, and configuration management.

1 post - 1 participant

Read full topic

https://discourse.openrobotics.org/t/announcing-robot-lock-coordinate-work-on-dev-robots/50513

ROS Discourse General: 【PIKA】Achievement of PIKA SDK for Teleoperating UR Robotic Arm

Hi everyone! :waving_hand:

I’m excited to share our beginner-friendly guide on implementing teleoperation for UR robotic arms using PIKA Sense. This tutorial walks through the complete setup process, from environment configuration to running your first teleoperation session.

Check out the demo video to see it in action:
https://youtu.be/TNB1lPitM4Y?si=LIw0abAQMvbngZ4C

A Beginner’s Guide to PIKA Teleoperation (UR Edition)

Before starting, it is recommended to read Methods for Teleoperating Any Robotic Arm with PIKA first.

Now that you understand the principles, let’s start building a teleoperation program step by step. To quickly implement the teleoperation function, we will use the following tools:

  • PIKA SDK: It enables quick access to all data from PIKA Sense and easy control of the gripper.
  • Various conversion tools: e.g., converting XYZRPY (X/Y/Z coordinates plus roll/pitch/yaw angles) to a 4x4 homogeneous transformation matrix, converting XYZ coordinates plus a quaternion to a 4x4 homogeneous transformation matrix, and converting RPY angles (rotations around the X/Y/Z axes) to a rotation vector (a sketch of these conversions follows this list).
  • UR robotic arm control interface: This interface mainly uses the ur-rtde library. It achieves real-time control by sending target poses (XYZ coordinates plus a rotation vector), speed, acceleration, control period (frequency), look-ahead time, and proportional gain.
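
A generic sketch of those conversions using NumPy and SciPy (illustrative helpers; not the SDK’s actual code):

import numpy as np
from scipy.spatial.transform import Rotation as R

def xyzrpy_to_matrix(x, y, z, roll, pitch, yaw):
    """XYZ + RPY -> 4x4 homogeneous transformation matrix."""
    T = np.eye(4)
    T[:3, :3] = R.from_euler('xyz', [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

def xyzquat_to_matrix(x, y, z, qx, qy, qz, qw):
    """XYZ + quaternion -> 4x4 homogeneous transformation matrix."""
    T = np.eye(4)
    T[:3, :3] = R.from_quat([qx, qy, qz, qw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

def rpy_to_rotvec(roll, pitch, yaw):
    """RPY -> rotation vector (axis * angle), the form UR's servoL expects."""
    return R.from_euler('xyz', [roll, pitch, yaw]).as_rotvec()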

Environment Setup

1. Clone the Code

git clone --recursive https://github.com/RoboPPN/pika_remote_ur.git

2. Install Dependencies

cd pika_remote_ur/pika_sdk
pip3 install -r requirements.txt  
pip3 install -e .
pip3 install ur-rtde

UR Control Interface

Let’s start with the control interface. The native UR interface takes XYZ coordinates plus a rotation vector, while the data we usually send in the teleoperation code is XYZRPY. This means we need to perform a conversion, which can be done either in the control interface or in the main teleoperation program; here, we do it in the main teleoperation program. The following is the control interface code for the UR robotic arm, and the code file is located at pika_remote_ur/ur_control.py:

import rtde_control
import rtde_receive

class URCONTROL:
    def __init__(self, robot_ip):
        # Connect to the robot
        self.rtde_c = rtde_control.RTDEControlInterface(robot_ip)
        self.rtde_r = rtde_receive.RTDEReceiveInterface(robot_ip)
        if not self.rtde_c.isConnected():
            raise ConnectionError("Failed to connect to the robot control interface.")
        if not self.rtde_r.isConnected():
            raise ConnectionError("Failed to connect to the robot receive interface.")
        print("Connected to the robot.")

        # Define servoL parameters
        self.speed = 0.15  # m/s
        self.acceleration = 0.1  # m/s^2
        self.dt = 1.0 / 50  # dt for 50Hz, or 1.0/125 for 125Hz
        self.lookahead_time = 0.1  # s
        self.gain = 300  # proportional gain

    def sevol_l(self, target_pose):
        # Stream a target pose [x, y, z, Rx, Ry, Rz] (rotation vector) via servoL
        self.rtde_c.servoL(target_pose, self.speed, self.acceleration,
                           self.dt, self.lookahead_time, self.gain)

    def get_tcp_pose(self):
        return self.rtde_r.getActualTCPPose()

    def disconnect(self):
        if self.rtde_c:
            self.rtde_c.disconnect()
        if self.rtde_r:
            self.rtde_r.disconnect()
        print("UR disconnected")

# example
# if __name__ == "__main__":
#     ur = URCONTROL("192.168.1.15")
#     target_pose = [0.437, -0.1, 0.846, -0.11019068574221307, 1.59479642933605, 0.07061926626169934]
#     ur.sevol_l(target_pose)

This code defines a Python class named URCONTROL for communicating with and controlling the UR robot. The class encapsulates the functions of the rtde_control and rtde_receive libraries, and provides methods for connecting to the robot, disconnecting from it, sending servoL commands, and obtaining the TCP (Tool Center Point) pose.
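
To make the flow concrete, here is a hedged usage sketch. It assumes the class above is importable as shown (the module path is hypothetical), that a robot is reachable at the example IP, and that remote control is enabled:

import time
from ur_control import URCONTROL  # hypothetical import; matches the file above

ur = URCONTROL("192.168.1.15")  # example IP
pose = ur.get_tcp_pose()        # [x, y, z, Rx, Ry, Rz], rotation-vector form

# Stream small Z offsets at 50 Hz; servoL expects a fresh target every cycle.
for _ in range(100):
    pose[2] += 0.0005           # raise the TCP by 0.5 mm per cycle
    ur.sevol_l(pose)
    time.sleep(ur.dt)

ur.disconnect()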


Core Teleoperation Code

The teleoperation code is located at pika_remote_ur/teleop_ur.py.

From Methods for Teleoperating Any Robotic Arm with PIKA, we know that the principle of teleoperation can be generally divided into four steps:

  1. Acquire 6D pose data.
  2. Align coordinate systems.
  3. Implement incremental control.
  4. Map the 6D pose data to the robotic arm.

Acquiring Pose Data

The code is as follows:

# Get tracker device pose data
def get_tracker_pose(self):
    logger.info(f"Starting to get pose data from {self.target_device} device...")
    while True:
        # Get pose data
        pose = self.sense.get_pose(self.target_device)
        if pose:
            # Extract position [x, y, z] and convert the [x, y, z, w] quaternion to RPY
            position = pose.position
            rotation = self.tools.quaternion_to_rpy(pose.rotation[0], pose.rotation[1],
                                                    pose.rotation[2], pose.rotation[3])

            self.x, self.y, self.z, self.roll, self.pitch, self.yaw = self.adjustment(
                position[0], position[1], position[2],
                rotation[0], rotation[1], rotation[2])
        else:
            logger.warning(f"Failed to get pose data from {self.target_device}, waiting for next attempt...")

        time.sleep(0.02)  # Get data every 0.02 seconds (50Hz)

This code acquires the pose information of the tracker named “T20” every 0.02 seconds. There are two types of tracker names: one starts with “WM” and the other starts with “T”. When the tracker is connected to the computer with a wire, the first connected tracker is named “T20”, the second one is “T21”, and so on. When connected wirelessly, the first tracker connected to the computer is named “WM0”, the second one is “WM1”, and so on.

The acquired pose data needs further processing. The adjustment function adjusts the coordinates so that they match the coordinate system of the UR robotic arm’s end effector, aligning the two frames.


Coordinate System Alignment

The code is as follows:

# Adjustment matrix function
def adjustment(self, x, y, z, Rx, Ry, Rz):
    transform = self.tools.xyzrpy2Mat(x, y, z, Rx, Ry, Rz)

    # Adjust coordinate axis direction: pika ---> robot arm end
    r_adj = self.tools.xyzrpy2Mat(self.pika_to_arm[0], self.pika_to_arm[1], self.pika_to_arm[2],
                                  self.pika_to_arm[3], self.pika_to_arm[4], self.pika_to_arm[5])

    transform = np.dot(transform, r_adj)

    x_, y_, z_, Rx_, Ry_, Rz_ = self.tools.mat2xyzrpy(transform)

    return x_, y_, z_, Rx_, Ry_, Rz_

This function does the following:

  1. Convert the input pose (x, y, z, Rx, Ry, Rz) to a transformation matrix.
  2. Obtain the adjustment matrix that transforms the PIKA coordinate system to the robotic arm end-effector coordinate system.
  3. Combine the two transformations by matrix multiplication.
  4. Convert the final transformation matrix back to pose parameters and return them.

Through this function, we can obtain the pose information that has been converted to match the robotic arm’s coordinate system.
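
As a small illustration, reusing the xyzrpy_to_mat helper sketched earlier: the pika_to_arm value below is hypothetical, chosen to match the 90° rotation about the Y-axis mentioned in the setup steps later on.

import numpy as np

# Hypothetical offset: a pure 90-degree rotation about Y
pika_to_arm = [0.0, 0.0, 0.0, 0.0, np.pi / 2, 0.0]

raw = xyzrpy_to_mat(0.1, 0.2, 0.3, 0.0, 0.0, 0.0)  # pose straight from the tracker
r_adj = xyzrpy_to_mat(*pika_to_arm)
aligned = raw @ r_adj  # same composition as np.dot(transform, r_adj) above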


Incremental Control

In teleoperation, the pose data provided by PIKA Sense is an absolute pose. However, we do not want the robotic arm to jump directly to this absolute pose. Instead, we want the robotic arm to follow the operator’s relative movement, starting from its current position. In short, we convert the change in the absolute pose of the operating device into the relative pose command that the robotic arm needs to execute.

The code is as follows:

# Incremental control
def calc_pose_incre(self, base_pose, pose_data):
    begin_matrix = self.tools.xyzrpy2Mat(base_pose[0], base_pose[1], base_pose[2],
                                         base_pose[3], base_pose[4], base_pose[5])
    zero_matrix = self.tools.xyzrpy2Mat(self.initial_pose_rpy[0], self.initial_pose_rpy[1], self.initial_pose_rpy[2],
                                        self.initial_pose_rpy[3], self.initial_pose_rpy[4], self.initial_pose_rpy[5])
    end_matrix = self.tools.xyzrpy2Mat(pose_data[0], pose_data[1], pose_data[2],
                                       pose_data[3], pose_data[4], pose_data[5])
    result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))
    xyzrpy = self.tools.mat2xyzrpy(result_matrix)
    return xyzrpy

This function uses the operation rules of transformation matrices to implement incremental control. Let’s break down the code step by step:

Input Parameters

  • base_pose: This is the reference pose at the start of teleoperation. When teleoperation is triggered, the system records the pose of the operating device at that moment and stores it as self.base_pose. This pose serves as the “starting point” or “reference zero point” for calculating all subsequent increments.
  • pose_data: This is the real-time pose data received from the operating device (PIKA Sense) at the current moment.

Matrix Conversion

The function first converts three key poses (represented in the [x, y, z, roll, pitch, yaw] format) into 4x4 homogeneous transformation matrices. This conversion is performed by the tools.xyzrpy2Mat function.

  • begin_matrix: Converted from base_pose, it represents the pose matrix of the operating device at the start of teleoperation, denoted as T_begin.
  • zero_matrix: Converted from self.initial_pose_rpy, it represents the pose matrix of the robotic arm’s end-effector at the start of teleoperation. This is the “starting point” of the robotic arm’s movement, denoted as T_zero.
  • end_matrix: Converted from pose_data, it represents the pose matrix of the operating device at the current moment, denoted as T_end.

Core Calculation

This is the most critical line of code:

result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))

We can analyze it using matrix multiplication:

The formula is expressed as: Result = T_zero * (T_begin)⁻¹ * T_end

  • np.linalg.inv(begin_matrix): Calculates the inverse matrix of begin_matrix, i.e., (T_begin)⁻¹. In robotics, the inverse matrix of a transformation matrix represents the reverse transformation.
  • np.dot(np.linalg.inv(begin_matrix), end_matrix): This step calculates (T_begin)⁻¹ * T_end. The physical meaning of this operation is the transformation required to go from the begin coordinate system to the end coordinate system. In other words, it accurately describes the relative pose change (increment) of the operating device from the start of teleoperation to the current moment, denoted as ΔT.
  • np.dot(zero_matrix, ...): This step calculates T_zero * ΔT. Its physical meaning is to apply the previously calculated relative pose change (ΔT) to the initial pose (T_zero) of the robotic arm’s end-effector.

Result Conversion and Return

  • xyzrpy = self.tools.mat2xyzrpy(result_matrix): Converts the calculated 4x4 target pose matrix result_matrix back to the [x, y, z, roll, pitch, yaw] format that the robot controller can understand.
  • return xyzrpy: Returns the calculated target pose.
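
As a quick numerical sanity check of the formula (reusing the xyzrpy_to_mat helper sketched earlier; the pose values are made up), a pure 5 cm translation of the device produces the same 5 cm shift of the arm target:

import numpy as np

# Result = T_zero * inv(T_begin) * T_end, with identity rotations for clarity
T_zero  = xyzrpy_to_mat(0.4, 0.0, 0.3, 0, 0, 0)   # arm pose when teleop was triggered
T_begin = xyzrpy_to_mat(1.0, 1.0, 1.0, 0, 0, 0)   # device pose when teleop was triggered
T_end   = xyzrpy_to_mat(1.05, 1.0, 1.0, 0, 0, 0)  # device has moved +5 cm in x

result = T_zero @ np.linalg.inv(T_begin) @ T_end
print(result[:3, 3])  # [0.45 0.   0.3 ] -- the arm target shifts by the same 5 cm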

Teleoperation Trigger

In fact, there are many ways to trigger teleoperation:

  • Voice trigger: The operator can trigger teleoperation using a wake-up word.
  • Server request trigger: Teleoperation is triggered by sending a request.

However, neither of the above methods is very convenient. For the voice trigger, an additional voice input module is required, and wake-up word recognition can be unreliable: sometimes you have to say the wake-up word many times to trigger teleoperation successfully, and your mouth may get dry before teleoperation even starts. For the server request trigger, you need a control computer to send the request. It works well when two people cooperate, but it becomes a hassle when there is only one person.

The method we use is to trigger teleoperation by detecting the state change of PIKA Sense. The operator only needs to hold PIKA Sense and double-click it quickly to reverse its state, thereby triggering teleoperation. The code is as follows:

# Teleoperation trigger
def handle_trigger(self):
    current_value = self.sense.get_command_state()

    if self.last_value is None:
        self.last_value = current_value
    if current_value != self.last_value:  # State change detected
        self.bool_trigger = not self.bool_trigger  # Flip bool_trigger
        self.last_value = current_value  # Update last_value
        # Execute corresponding operations based on the new bool_trigger value
        if self.bool_trigger:
            self.base_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
            self.flag = True
            print("Start teleoperation")

        else:
            self.flag = False

            # ----- Option 1: When teleoperation ends, the robot stops at its current pose; the next teleoperation starts from that pose -----

            self.initial_pose_rotvec = self.ur_control.get_tcp_pose()

            temp_rotvec = [self.initial_pose_rotvec[3], self.initial_pose_rotvec[4], self.initial_pose_rotvec[5]]

            # Convert rotation vector to Euler angles
            roll, pitch, yaw = self.tools.rotvec_to_rpy(temp_rotvec)

            self.initial_pose_rpy = self.initial_pose_rotvec[:]
            self.initial_pose_rpy[3] = roll
            self.initial_pose_rpy[4] = pitch
            self.initial_pose_rpy[5] = yaw

            self.base_pose = self.initial_pose_rpy  # Target pose data
            print("Stop teleoperation")

            # ----- Option 2: When teleoperation ends, the robot returns to the initial pose; the next teleoperation starts from the initial pose -----

            # # Get current robot arm pose
            # current_pose = self.ur_control.get_tcp_pose()

            # # Define interpolation steps
            # num_steps = 100  # Adjust as needed; more steps = smoother transition

            # for i in range(1, num_steps + 1):
            #     # Calculate the pose at the current interpolation point
            #     # Assumes initial_pose_rotvec and current_pose are both in [x, y, z, Rx, Ry, Rz] format
            #     interpolated_pose = [
            #         current_pose[j] + (self.initial_pose_rotvec[j] - current_pose[j]) * i / num_steps
            #         for j in range(6)
            #     ]
            #     self.ur_control.sevol_l(interpolated_pose)
            #     time.sleep(0.01)  # Small delay between steps to control the speed

            # # Ensure final arrival at the initial position
            # self.ur_control.sevol_l(self.initial_pose_rotvec)

            # self.base_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
            # print("Stop teleoperation")

This code continuously reads the current state of PIKA Sense through self.sense.get_command_state(), which only ever returns one of two states: 0 or 1. When the program starts, bool_trigger defaults to False. On the first state reversal, bool_trigger is set to True: the tracker’s pose at that moment is recorded as the zero point, self.flag is set to True, and control data is sent to the robotic arm. To stop teleoperation, double-click quickly again to reverse the state; the robotic arm then stops at its current pose, and the next teleoperation starts from that pose.

That is the stop behavior of Option 1. In Option 2, when teleoperation stops, the robotic arm returns to its initial pose, and the next teleoperation starts from there. You can choose whichever stop behavior suits your situation.


Mapping PIKA Pose Data to the Robotic Arm

The code for this part is as follows:

def start(self):
    self.tracker_thread.start()  # Start the pose-acquisition thread
    # Main thread continues with other tasks
    while self.running:
        self.handle_trigger()
        self.control_gripper()
        current_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
        increment_pose = self.calc_pose_incre(self.base_pose, current_pose)

        # Convert the RPY part to a rotation vector for the UR controller
        finally_pose = self.tools.rpy_to_rotvec(increment_pose[3], increment_pose[4], increment_pose[5])
        increment_pose[3:6] = finally_pose

        # Send pose to robot arm
        if self.flag:
            self.ur_control.sevol_l(increment_pose)

        time.sleep(0.02)  # 50Hz update

In this part of the code, the increment_pose calculated through incremental control is used. The RPY rotation in increment_pose is converted to a rotation vector, and then the resulting data is sent to the robotic arm (the UR robotic arm is controlled by receiving XYZ coordinates and a rotation vector). Only when self.flag is True (i.e., teleoperation is enabled) will the control data be sent to the robotic arm.
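
For reference, the RPY-to-rotation-vector step can be reproduced with SciPy. This is a sketch under the assumption that the angles follow the extrinsic xyz Euler convention; the repository's tools module may use a different convention:

from scipy.spatial.transform import Rotation as R

# Turn an [x, y, z, roll, pitch, yaw] pose into the [x, y, z, Rx, Ry, Rz]
# rotation-vector form that ur_rtde's servoL expects.
pose_rpy = [0.437, -0.1, 0.846, 0.0, 1.57, 0.0]
rotvec = R.from_euler("xyz", pose_rpy[3:6]).as_rotvec()
pose_rotvec = pose_rpy[:3] + list(rotvec)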


Practical Operation

The teleoperation code is located in pika_remote_ur/teleop_ur.py.

Step-by-Step Guide

  1. Power on the UR robotic arm and enable the joint motors. If the end of the robotic arm is equipped with an actuator such as a gripper, enter the corresponding load parameters.

  2. Configure the robot IP address on the tablet.

  3. Set up the tool coordinate system.

    Important: It is crucial to set the end-effector coordinate system such that the Z-axis points forward, the X-axis points downward, and the Y-axis points to the left. In the code, we rotate the PIKA coordinate system 90° counterclockwise around the Y-axis. After this rotation, the PIKA coordinate system has the Z-axis forward, X-axis downward, and Y-axis to the left. Therefore, the end-effector (tool) coordinate system of the robotic arm must be set to match this; otherwise, the control will be chaotic.

  4. For the first use, adjust the speed to 20-30%, and then enable the remote control of the robotic arm.

  5. Connect the tracker to the computer using a USB cable, and calibrate the tracker and the base station.

    Navigate to the ~/pika_ros/scripts directory and run the following command:

    bash calibration.bash 
    

    After the positioning calibration is completed, close the program.

  6. Connect PIKA Sense and PIKA Gripper to the computer using a USB 3.0 cable.

    Note: First, plug in PIKA Sense, which should be recognized as the /dev/ttyUSB0 port. Then, plug in the gripper (the gripper requires 24V DC power supply), which should be recognized as the /dev/ttyUSB1 port.

  7. Run the code:

    cd pika_remote_ur
    python3 teleop_ur.py
    

    The terminal will output a lot of logs. Most of the initial logs will be:

    teleop_ur - WARNING - Failed to get pose data from T20, waiting for next attempt...
    

    Wait until the above log stops appearing and the following log is output:

    pika.vive_tracker - INFO - New device update detected: T20
    

    At this point, you can double-click PIKA Sense to start teleoperation.


Wrapping Up

That’s it! You should now have a working teleoperation setup for your UR arm. If you run into any issues or have questions, feel free to drop a comment below. I’d also love to hear about your experiences and any improvements you make to the code.

The full repository is available on GitHub – contributions and feedback are always welcome!

Happy teleoperating! :robot:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/pika-achievement-of-pika-sdk-for-teleoperating-ur-robotic-arm/50508

ROS Discourse General: :new: ROSCon 2025 Mini-Workshops

Hi everyone,

I have some exciting news about ROSCon 2025! We’ve just added ten mini-workshops to the schedule.

These free, one-hour workshops are organized by our OSRA members and sponsors. They will take place between 12:00 AM UTC and 9:00 AM UTC on Mon, Oct 27, 2025, the first full day of ROSCon. No workshop registration is required—just show up with your standard ROSCon registration!

I’ve included the list of workshops and organizers below. Full details are available on the ROSCon website.

  • “Hands-On Robot Arm Control with ROS 2 and MoveIt Pro” with PickNik.
  • “Zenoh Drakes of a Flame” with Angelo Corsaro, Julien Enoch, and Yu-Yuan Yuan from Zettascale.
  • “Automate Your ROS Bag Analysis: Getting Started with Roboto” with Yves Albers and Benji Barash from Roboto AI.
  • “Demonstrating the Canonical Observability Stack for Devices” with Guillaume Beuzeboc from Canonical.
  • “Hands-on ROS 2 with Rubik Pi 3” with Qualcomm Technologies.
  • “Evolving ROS 2 for Real-Time Control of Learned Policy-Driven Robots” with Hemal Shah from NVIDIA.
  • “b-controlled Box - a real-time ROS 2 gateway for industrial 24/7 applications” with Denis Stogl and Yara Shahin from b-robotized (Stogl Robotics).
  • “Simulate. Verify. Improve: ROS 2 Sim↔Real for Environmental Robotics in Tropical Cities” with David Tan and Daniel Wong of NETATECH.
  • “ROS 2 Native IMU Sensors with AI-Powered Sensor Fusion in Mobile Robotics” with M. Leox Karimi and Edwin Babaians from Olive Robotics.
  • “ROS-Industrial Scan-N-Plan” with Michael Ripperger from ROS-Industrial / Southwest Research Institute.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/roscon-2025-mini-workshops/50479

ROS Discourse General: NVIDIA's Greenwave Monitor - A tool for high-performance topic monitoring and diagnostics

The Isaac team at NVIDIA is open sourcing Greenwave Monitor, a tool we use internally to monitor and debug topics.

It provides the following features:

  1. A node similar to a C++-based ros2 topic hz, i.e., it subscribes to topics to determine message rate and latency. The Greenwave node is performant, publishes Diagnostics, and offers services to manage topics and expected frequencies. (A generic sketch of the rate-measuring idea follows this list.)

  2. Terminal-based dashboards that display topic rates, latency, and status, and allow you to add/remove topics and set expected frequencies.

  3. A header-only C++ library so you can publish compatible diagnostics directly from your own nodes for reduced overhead.
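
For readers new to the concept, here is a minimal, generic sketch of what a topic-rate monitor does, written with plain rclpy. This illustrates the idea only and is not Greenwave Monitor's actual API; the topic and message type are placeholders:

import rclpy
from rclpy.node import Node
from std_msgs.msg import String  # placeholder message type

class HzMonitor(Node):
    def __init__(self):
        super().__init__("hz_monitor")
        self.count = 0
        # Count messages on a placeholder topic
        self.sub = self.create_subscription(String, "/chatter", self.on_msg, 10)
        # Report the observed rate once per second
        self.timer = self.create_timer(1.0, self.report)

    def on_msg(self, msg):
        self.count += 1

    def report(self):
        self.get_logger().info(f"/chatter: {self.count} msgs in the last second")
        self.count = 0

rclpy.init()
rclpy.spin(HzMonitor())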

This diagram shows an overview of the architecture:

We provide two different TUI frontends for you to try. One is a fork of the excellent r2s project; it is powerful but can become slow when there are many topics. The other is a basic, fast, lightweight curses-based frontend.

This project grew out of Isaac ROS NITROS diagnostics, but is completely standalone, with no dependency on Isaac / Isaac ROS.

Try it out (instructions in the README)! Let us know if you find it useful or if there are features you would like to see, and if you find bugs please raise an issue (or open a PR) on the GitHub page.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/nvidias-greenwave-monitor-a-tool-for-high-performance-topic-monitoring-and-diagnostics/50477

ROS Discourse General: Paper: A Modular ROS2 Gateway for CAN-based Systems: Architecture and Performance Evaluation

Hello Open Robotics Community,

I’m excited to share my latest paper, “A Modular ROS2 Gateway for CAN-based Systems: Architecture and Performance Evaluation,” which I believe addresses a critical challenge in modern robotics: the robust integration of real-time Controller Area Network (CAN) systems with higher-level robotics frameworks like ROS 2.

The Challenge:
The paper tackles the fundamental conflict between low-level fieldbuses like CAN, which ensure data integrity and real-time control, and high-level middleware like ROS 2, which offers powerful tooling but typically runs on non-real-time operating systems. Bridging this gap is essential for creating intelligent machines.

Our Approach & Contributions:
We present a novel, modular gateway architecture designed to treat the gateway as a digital twin of the CAN system. This approach aims to abstract low-level CAN complexities while preserving safety features and data fidelity. Our key contributions are threefold:

  1. A novel, modular ROS 2 gateway architecture based on the digital twin pattern, providing semantic abstraction for arbitrary CAN-based systems.

  2. An open-source example implementation of this architecture, offering a practical and reusable tool for the robotics community.

  3. A rigorous performance evaluation of the open-source implementation on a CAN-based mobile robot, quantifying resource usage, latency, and jitter of the system.

Key Findings:
Validated on a 1:5 scale model car (the CanEduDev Rover platform, Fig. 1 in the paper), our performance evaluation revealed:

  • Average latencies of 170.4 µs and average jitter of 74.0 µs for CAN to ROS message transmission, demonstrating sub-millisecond performance suitable for soft real-time control.

  • Bimodal latency distribution and periodic performance spikes, suggesting areas for optimization, potentially related to ROS 2 middleware (e.g., FastDDS) or OS scheduling artifacts.

  • Low CPU usage overhead for the gateway application, especially with hardware-level CAN filtering.

  • Linear scaling of memory consumption with the number of nodes, primarily due to Python-based ROS nodes running in separate processes without node composition.

System Implications & Benefits:
This integration significantly lowers the barrier for simulating entire systems with tools like Gazebo. It unlocks the full potential of ROS tooling (e.g., ROS bags for data recording and analysis). Crucially, it makes the system more accessible to individuals with a robotics or research background by abstracting away low-level CAN complexities.

Limitations & Future Work:
While providing significant benefits, the current implementation has limitations, such as the lack of robust safeguards against critical ROS node crashes and the observed bimodal latency distribution. Future work will involve C++ re-implementation for better resource efficiency, detailed root cause analysis of latency variations (e.g., using ros2_tracing), and comparative analysis with different ROS middleware implementations or a PREEMPT_RT Linux kernel.

We believe this work provides a concrete design pattern and a performance roadmap for developers bridging industrial control protocols with modern robotics frameworks.

You can read the full paper here: Resources - CanEduDev

I welcome your thoughts, questions, and feedback on the architecture, performance analysis, and broader implications of this work.

Thank you!

Hashem Hashem
Co-founder, CanEduDev

3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/paper-a-modular-ros2-gateway-for-can-based-systems-architecture-and-performance-evaluation/50461

ROS Discourse General: ROSCon 2025 Exhibit Map Now Available

ROSCon 2025 Exhibit Map Now Available :world_map:


Hi Everyone,

We just posted the final exhibitor map for ROSCon 2025 in Singapore. An interactive version of the map is also available here.

Our sponsors and exhibitors help make ROSCon possible, and many of them have fantastic live demonstrations planned for the event! I’ve included a full list of sponsors and exhibitors below so you can start building your itinerary for ROSCon.

:1st_place_medal: Gold Sponsors

:2nd_place_medal: Silver Sponsors

:3rd_place_medal: Bronze Sponsors

:seedling: Startup Alley Sponsors

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/roscon-2025-exhibit-map-now-available/50455

ROS Discourse General: What's your stack for debugging robots in the wild?

Hi there :waving_hand: ROS community,

My colleague and I are mapping out best practices for managing deployed robot fleets, and we’d love to learn from your real-world experience.

As robots move from the lab into the wild, the process for debugging and resolving issues gets complicated. We’re trying to move past our own ad-hoc methods and are curious about how your teams handle the entire lifecycle of an incident.

Specifically, we’re focused on these four areas:

  1. Incident & Resolution Tracking
    When a novel issue is solved, how do you capture that hard-won knowledge for the rest of the team? We’re curious about your process for creating a durable record of the diagnostic path and the fix, so the next engineer doesn’t have to solve the same problem from scratch six months from now.

  2. Hardware & Software Context
    How do you correlate a failure with the specific context of the robot? We’ve found it’s often crucial to know the exact firmware of a sensor, the driver version, the OS patch level, or even the manufacturing batch of a component. How do you capture and surface this data during an investigation?

  3. Remote vs. On-Site Debugging
    What is your decision tree for debugging? How much can you solve remotely with the data you have? What are the specific triggers that force you to accept defeat and send a person on-site? What’s the one piece of data you wish you had to avoid that trip?

  4. Fleet-Wide Failure Analysis
    How do you identify systemic issues across your fleet? For example, discovering that a specific component fails more often under certain circumstances. What does your data analysis pipeline look like for finding these patterns—the “what, why, when, and where” of recurring failures?

We’re hoping to get a good public discussion going in this thread about the tools and workflows you’re using today, whether that’s custom scripts, Telegraf, Prometheus, Grafana dashboards, or something else.

On a separate note, this problem space is our team’s entire focus at INSAION. If you’re wrestling with these challenges daily and find the current tooling inadequate, we’d be very interested to hear your perspective. Please feel free to send me a DM for an honest, engineer-to-engineer conversation.

Keep your robots healthy and running!

Sergi from INSAION

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/whats-your-stack-for-debugging-robots-in-the-wild/50451

ROS Discourse General: Invite: How to Make the Most of ROSCon (for Women in Robotics)

WomeninRobotics.org members are encouraged to check the Slack group for an invitation to the following:

Ahead of ROSCon 2025, we’re hosting a prep session “How to Make the Most of ROSCon, as a Speaker, Regular, or First-timer” on Wednesday Oct 8th/Thursday Oct 9th, depending on your timezone.

This will be a structured, facilitated session for anyone attending ROSCon (roscon.ros.org), or anyone still considering it!

Know someone who’d be interested? Given the relatively small intersection of the robotics community, your help reaching interested attendees would be very appreciated! :folded_hands:

Till soon,
Deanna (2024 keynote)

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/invite-how-to-make-the-most-of-roscon-for-women-in-robotics/50431

ROS Discourse General: ANNOUNCEMENT: October 9 7:00pm: Boston Robot Hackers Meetup

REGISTER: Eventbrite Link

Greetings! I am excited to announce the next meeting of the Boston Robot Hackers!

Date: Thursday October 9 at 7:00pm
Location: Artisans Asylum, 96 Holton Street, Boston (Allston)

REGISTER: Eventbrite Link

We’re excited that our talk this month is by David Dorf on the topic “Affordable Biomimetic Robot Hands”. David will discuss new ways of building robot end-effectors (hands). He will share novel methods for tackling the challenges involved, using 3D-printed flexible materials and biomimetic design, and show how to interface them with ROS 2.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/announcement-october-9-7-boston-robot-hackers-meetup/50419

ROS Discourse General: Videos from ROSCon UK 2025 in Edinburgh 🇬🇧

Hi Everyone,

The entire program from our inaugural ROSCon UK in Edinburgh is now available :sparkles: ad free :sparkles: on the OSRF Vimeo account. You can find the full conference website here.


1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/videos-from-roscon-uk-2025-in-edinburgh/50390
