A project brought to you by RSL - ETH Zurich.
References • Hugging Face • ROS1 • Contributing • News • Citation
Please first visit the official webpage to learn more about the available data, hardware setup, and registration.
Visit our sponsors and partners.
| Project | Preview |
|---|---|
| **Physical Terrain Parameters Learning** Learning simulation parameters from RGB and proprioception. | ![]() |
| **Forward Dynamics Model Learning** Learning a dynamics model. | ![]() |
| **Holistic Fusion** Holistic state estimation. | ![]() |
| **RESPLE: Recursive Spline LIO** State-of-the-art LiDAR-inertial odometry. | ![]() |
You can find Jupyter Notebooks and scripts with full instructions in the `examples_hugging_face` directory.
Click for installation details
These steps assume you are using uv for dependency management (`uv sync` installs the project dependencies).

```shell
pip3 install uv
cd examples_hugging_face
uv sync
uv run scripts/download_data.py
```
- **Accessing/Downloading GrandTour Data**: learn how to download the GrandTour datasets from Hugging Face.
- **Exploring GrandTour Data**: explore the dataset structure and learn how to work with Zarr data.
- `zarr_transforms.py`: demonstrates how to use transforms and provides helper functions for Zarr data.
- `plot_lidar_3d.py`: visualizes LiDAR data in 3D space.
- `project_lidar_on_image.py`: projects LiDAR points onto camera images, accounting for camera distortion and relative motion.
- `dynamic_points_filtering_using_images.py`: removes dynamic objects from LiDAR point clouds using image segmentation and saves the results in Zarr format.
- `generate_elevation_maps.py`: generates elevation maps from LiDAR and depth cameras.
- `nerfstudio_convert.py`: converts datasets into the nerfstudio format for training Gaussian Splatting models.
To access and download the GrandTour dataset rosbags, please follow these steps:
- Register here: Google Form Registration
Option 1 – Command Line Interface (Recommended):

Install the CLI tool and log in:

```shell
pip3 install kleinkram
klein login
```

You can now explore the CLI using tab-completion or the `--help` flag.

Download multiple files via Python scripting:

```shell
python3 examples_kleinkram/kleinkram_cli_example.py
```

Directly convert rosbags to PNG images (requires a ROS1 installation):

```shell
python3 examples_kleinkram/kleinkram_extract_images.py
```
Option 2 – Web Interface:
- Use the GrandTour Dataset Web Interface to browse and download data directly.
```shell
mkdir -p ~/grand_tour_ws/src
mkdir -p ~/git
```
⚠️ Note: The `grand_tour_box` repository is currently private. We are actively working on making it public.
```shell
# Clone the repository
cd ~/git
git clone git@github.com:leggedrobotics/grand_tour_dataset.git
cd grand_tour_dataset; git submodule update --init

# Check out only the required packages from the grand_tour_box repository for simplicity
cd ~/git/grand_tour_dataset/examples_ros1/submodules/grand_tour_box
git sparse-checkout init --cone
git sparse-checkout set box_model box_calibration box_drivers/anymal_msgs box_drivers/gnss_msgs

# Link the repository into the workspace
ln -s ~/git/grand_tour_dataset/examples_ros1 ~/grand_tour_ws/src/

# Build the workspace
cd ~/grand_tour_ws
catkin init
catkin config --extend /opt/ros/noetic
catkin config --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo
catkin build grand_tour_ros1
source devel/setup.bash
```
```shell
mkdir -p ~/grand_tour_ws/src/examples_ros1/data
cd ~/grand_tour_ws/src/examples_ros1/data
pip3 install kleinkram
klein login
klein download --mission 3c97a27e-4180-4e40-b8af-59714de54a87
```
```shell
roslaunch grand_tour_ros1 lidars.launch
# URDFs are automatically loaded by:
#   Boxi:   box_model box_model.launch
#   ANYmal: anymal_d_simple_description load.launch

cd ~/grand_tour_ws/src/examples_ros1/data
# We provide an easy interface to replay the bags
rosrun grand_tour_ros1 rosbag_play.sh --help
rosrun grand_tour_ros1 rosbag_play.sh --lidars --tf_model

# We provide two tf bags:
#   tf_model   contains the frames required for the URDF models of ANYmal and Boxi.
#   tf_minimal contains only the core sensor frames.
```
You can also try the same for `cameras.launch`.
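The `tf_model` and `tf_minimal` bags publish chains of frame transforms; chaining them is plain 4x4 homogeneous-matrix multiplication. A minimal numpy sketch of that math — the frame names and numbers below are illustrative, and this is not the ROS tf API:

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical frames: the robot base in the world, and a lidar mounted on the base.
T_world_base = transform(np.eye(3), [1.0, 2.0, 0.0])
yaw90 = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
T_base_lidar = transform(yaw90, [0.1, 0.0, 0.5])

# Chain the transforms: express a point from the lidar frame in the world frame.
T_world_lidar = T_world_base @ T_base_lidar
p_lidar = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous point
p_world = T_world_lidar @ p_lidar
print(p_world[:3])  # [1.1 3.  0.5]
```

In ROS, the tf tree performs exactly this chaining for you as the replayed `/tf` messages arrive.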
Example Output:
| LiDAR Visualization | Camera Visualization |
|---|---|
| ![]() Visualization of LiDAR data using `lidars.launch`. | ![]() Visualization of images using `cameras.launch`. |
We provide a launch file to uncompress images and publish rectified images. Install the required dependencies, then launch it:

```shell
sudo apt-get install ros-noetic-image-transport ros-noetic-compressed-image-transport
roslaunch grand_tour_ros1 cameras_helpers.launch
```
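Once images are rectified, projecting 3D points (e.g. LiDAR returns already expressed in the camera frame) onto the image reduces to the plain pinhole model, with no distortion term. A small numpy sketch with made-up intrinsics — not the projection code shipped in the examples:

```python
import numpy as np

# Hypothetical rectified-camera intrinsics (fx, fy, cx, cy).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Points in the camera frame (z pointing forward), shape (N, 3).
pts_cam = np.array([[0.0,  0.0, 2.0],
                    [0.5, -0.2, 4.0]])

# Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy.
uv = (K @ pts_cam.T).T
uv = uv[:, :2] / uv[:, 2:3]
print(uv)  # [[320.  240. ], [382.5 215. ]]
```

For the raw (unrectified) images, `project_lidar_on_image.py` additionally accounts for lens distortion and relative motion.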
We use rqt_multiplot to visualize the IMU measurements. Install it:

```shell
sudo apt-get install ros-noetic-rqt-multiplot -y
```

Start rqt_multiplot and replay the bags:

```shell
roslaunch grand_tour_ros1 imus.launch
cd ~/grand_tour_ws/src/examples_ros1/data
rosrun grand_tour_ros1 rosbag_play.sh --imus --ap20
```
We warmly welcome contributions to help us improve and expand this project. Whether you're interested in adding new examples, enhancing existing ones, or simply offering suggestions — we'd love to hear from you! Feel free to open an issue or reach out directly.
We are particularly looking for contributions in the following areas:
- New and interesting benchmarks
- ROS2 integration and conversion
- Visualization tools (e.g., Viser, etc.)
- Hosting and deployment support in Asia
We're organizing a workshop at ICRA 2026 in Vienna and are currently looking for co-organizers and collaborators. We are also planning to write a community paper about this project. Everyone who contributes meaningfully will be included as a co-author.
Let’s build this together — your input matters!
@INPROCEEDINGS{Tuna-Frey-Fu-RSS-25,
AUTHOR = {Jonas Frey AND Turcan Tuna AND Lanke Frank Tarimo Fu AND Cedric Weibel AND Katharine Patterson AND Benjamin Krummenacher AND Matthias Müller AND Julian Nubert AND Maurice Fallon AND Cesar Cadena AND Marco Hutter},
TITLE = {{Boxi: Design Decisions in the Context of Algorithmic Performance for Robotics}},
BOOKTITLE = {Proceedings of Robotics: Science and Systems},
YEAR = {2025},
ADDRESS = {Los Angeles, United States},
MONTH = {June}
}
*shared first authorship: Frey, Tuna, Fu.