From 64f5207a3cc27c6056a8f1847ded6b410ad5a75b Mon Sep 17 00:00:00 2001 From: Nevin Valsaraj Date: Thu, 7 May 2026 15:18:11 -0700 Subject: [PATCH] Wiki markdown fixes and filename/slug normalization --- _data/navigation.yml | 26 +++---- wiki/actuation/__all_subsections.md | 4 +- wiki/actuation/index.md | 4 +- ...tion.md => moveit-and-hebi-integration.md} | 0 ...uit-controller-for-skid-steering-robot.md} | 0 wiki/common-platforms/__all_subsections.md | 6 +- ...=> husky-interfacing-and-communication.md} | 0 wiki/common-platforms/index.md | 2 +- wiki/common-platforms/ros/ros-intro.md | 9 +-- .../ros2-navigation-for-clearpath-husky.md | 4 +- wiki/fabrication/__all_subsections.md | 4 +- ...ication-considerations-for-3d-printing.md} | 0 wiki/fabrication/index.md | 4 +- .../{series-A-pro.md => series-a-pro.md} | 0 wiki/fabrication/ultimaker-series.md | 4 +- wiki/interfacing/__all_subsections.md | 2 +- wiki/interfacing/buffer-issues.md | 2 +- wiki/interfacing/index.md | 2 +- ...os1_ros2_bridge.md => ros1-ros2-bridge.md} | 0 wiki/machine-learning/intro-to-diffusion.md | 2 +- wiki/networking/__all_subsections.md | 2 +- wiki/networking/bluetooth-sockets.md | 2 +- wiki/planning/__all_subsections.md | 76 ++++++++++--------- ...=> astar-planning-implementation-guide.md} | 0 wiki/planning/chomp-planning.md | 4 +- wiki/planning/frenet-frame-planning.md | 68 ++++++++--------- wiki/planning/index.md | 2 +- .../{move_base_flex.md => move-base-flex.md} | 0 wiki/planning/non-a-star-planning.md | 2 +- wiki/planning/rrt-prm-planning.md | 2 +- wiki/robotics-project-guide/choose-a-robot.md | 2 +- ...ll_subsections.md => __all_subsections.md} | 2 +- wiki/sensing/index.md | 2 +- wiki/sensing/robotic-total-stations.md | 2 +- ...=> trajectory-extraction-static-camera.md} | 0 .../ultrawideband-beacon-positioning.md | 2 +- wiki/simulation/__all_subsections.md | 8 +- ...ilding-a-light-weight-custom-simulator.md} | 0 ...n-considerations-for-ros-architectures.md} | 0 wiki/simulation/index.md | 8 +- 
...oware.md => ndt-matching-with-autoware.md} | 0 ...ning-and-controlling-vehicles-in-carla.md} | 0 wiki/state-estimation/__all_subsections.md | 6 +- ...ion.md => cartographer-ros-integration.md} | 0 .../gps-lacking-state-estimation-sensors.md | 2 +- wiki/state-estimation/index.md | 4 +- ...vigation.md => oculus-prime-navigation.md} | 0 .../__all_subsections.md | 8 +- ...{In-Loop-Testing.md => in-loop-testing.md} | 29 ++----- wiki/system-design-development/index.md | 2 +- wiki/tools/__all_subsections.md | 2 +- .../{Qtcreator-ros.md => qt-creator-ros.md} | 0 wiki/tools/rosbags-matlab.md | 2 +- 53 files changed, 148 insertions(+), 166 deletions(-) rename wiki/actuation/{moveit-and-HEBI-integration.md => moveit-and-hebi-integration.md} (100%) rename wiki/actuation/{Pure-Pursuit-Controller-for-Skid-Steering-Robot.md => pure-pursuit-controller-for-skid-steering-robot.md} (100%) rename wiki/common-platforms/{husky_interfacing_and_communication.md => husky-interfacing-and-communication.md} (100%) rename wiki/fabrication/{fabrication_considerations_for_3D_printing.md => fabrication-considerations-for-3d-printing.md} (100%) rename wiki/fabrication/{series-A-pro.md => series-a-pro.md} (100%) rename wiki/interfacing/{ros1_ros2_bridge.md => ros1-ros2-bridge.md} (100%) rename wiki/planning/{astar_planning_implementation_guide.md => astar-planning-implementation-guide.md} (100%) rename wiki/planning/{move_base_flex.md => move-base-flex.md} (100%) rename wiki/sensing/{___all_subsections.md => __all_subsections.md} (99%) rename wiki/sensing/{trajectory_extraction_static_camera.md => trajectory-extraction-static-camera.md} (100%) rename wiki/simulation/{Building-a-Light-Weight-Custom-Simulator.md => building-a-light-weight-custom-simulator.md} (100%) rename wiki/simulation/{Design-considerations-for-ROS-architectures.md => design-considerations-for-ros-architectures.md} (100%) rename wiki/simulation/{NDT-Matching-with-Autoware.md => ndt-matching-with-autoware.md} (100%) rename 
wiki/simulation/{Spawning-and-Controlling-Vehicles-in-CARLA.md => spawning-and-controlling-vehicles-in-carla.md} (100%) rename wiki/state-estimation/{Cartographer-ROS-Integration.md => cartographer-ros-integration.md} (100%) rename wiki/state-estimation/{OculusPrimeNavigation.md => oculus-prime-navigation.md} (100%) rename wiki/system-design-development/{In-Loop-Testing.md => in-loop-testing.md} (95%) rename wiki/tools/{Qtcreator-ros.md => qt-creator-ros.md} (100%) diff --git a/_data/navigation.yml b/_data/navigation.yml index a28151b1..8bb3dddc 100644 --- a/_data/navigation.yml +++ b/_data/navigation.yml @@ -24,7 +24,7 @@ wiki: - title: Subsystem Interface Modeling url: /wiki/system-design-development/subsystem-interface-modeling/ - title: In Loop Testing - url: /wiki/system-design-development/In-Loop-Testing/ + url: /wiki/system-design-development/in-loop-testing/ - title: How to design a robotic state machine url: /wiki/system-design-development/how-to-design-a-robotic-state-machine/ - title: Project Management @@ -50,7 +50,7 @@ wiki: - title: Hello Robot Stretch RE1 url: /wiki/common-platforms/hello-robot - title: Husky Interfacing Procedure - url: /wiki/common-platforms/husky_interfacing_and_communication/ + url: /wiki/common-platforms/husky-interfacing-and-communication/ - title: Interfacing with the Nvidia Orin url: /wiki/common-platforms/interfacing-with-nvidia-orin/ - title: Khepera 4 @@ -141,7 +141,7 @@ wiki: - title: Thermal Cameras url: /wiki/sensing/thermal-cameras/ - title: Tracking vehicles using a static traffic camera - url: /wiki/sensing/trajectory_extraction_static_camera/ + url: /wiki/sensing/trajectory-extraction-static-camera/ - title: Actuation url: /wiki/actuation/ children: @@ -156,9 +156,9 @@ wiki: - title: Vedder Electronic Speed Controller url: /wiki/actuation/vedder-electronic-speed-controller/ - title: Pure Pursuit Controller for Skid Steering - url: /wiki/actuation/Pure-Pursuit-Controller-for-Skid-Steering-Robot.md + url: 
/wiki/actuation/pure-pursuit-controller-for-skid-steering-robot/ - title: MoveIt Motion Planning and HEBI Actuator Setup and Integration - url: /wiki/actuation/moveit-and-HEBI-integration.md + url: /wiki/actuation/moveit-and-hebi-integration/ - title: Model Predictive Control Introduction and Setup url: /wiki/actuation/model-predictive-control/ - title: Task Prioritization Control for Advanced Manipulator Control @@ -198,7 +198,7 @@ wiki: - title: Visual Servoing url: /wiki/state-estimation/visual-servoing/ - title: Cartographer SLAM ROS Integration - url: /wiki/state-estimation/Cartographer-ROS-Integration/ + url: /wiki/state-estimation/cartographer-ros-integration/ - title: External Position Estimation using OptiTrack Motion Capture System url: /wiki/state-estimation/optitrack-motion-capture/ - title: Programming @@ -239,13 +239,13 @@ wiki: url: /wiki/simulation/ children: - title: Building a Light Weight Custom Simulator - url: /wiki/simulation/Building-a-Light-Weight-Custom-Simulator/ + url: /wiki/simulation/building-a-light-weight-custom-simulator/ - title: Design considerations for ROS architectures - url: /wiki/simulation/Design-considerations-for-ROS-architectures + url: /wiki/simulation/design-considerations-for-ros-architectures/ - title: Spawning and Controlling Vehicles in CARLA - url: /wiki/simulation/Spawning-and-Controlling-Vehicles-in-CARLA + url: /wiki/simulation/spawning-and-controlling-vehicles-in-carla/ - title: NDT Matching with Autoware - url: /wiki/simulation/NDT-Matching-with-Autoware/ + url: /wiki/simulation/ndt-matching-with-autoware/ - title: An Introduction to Isaac Sim url: /wiki/simulation/an-introduction-to-isaac-sim/ - title: Simulating UGVs in Unity @@ -260,7 +260,7 @@ wiki: - title: micro-ROS for ROS2 on Microcontrollers url: /wiki/interfacing/microros-for-ros2-on-microcontrollers/ - title: ROS 1 - ROS 2 Bridge - url: /wiki/interfacing/ros1_ros2_bridge/ + url: /wiki/interfacing/ros1-ros2-bridge/ - title: Computing url: /wiki/computing/ children: @@ -290,7 +290,7 @@ wiki: - title: CubePro url: /wiki/fabrication/cube-pro/ - title: Fabrication Considerations for 3D
printing - url: /wiki/fabrication/fabrication_considerations_for_3D_printing/ + url: /wiki/fabrication/fabrication-considerations-for-3d-printing/ - title: Machining & Prototyping url: /wiki/fabrication/machining-prototyping/ - title: MakerBot Replicator 2x @@ -300,7 +300,7 @@ wiki: - title: Rapid Prototyping url: /wiki/fabrication/rapid-prototyping/ - title: Series A Pro Printer - url: /wiki/fabrication/series-A-pro/ + url: /wiki/fabrication/series-a-pro/ - title: Sheet Metal Fabrication url: /wiki/fabrication/sheet-metal-guidelines/ - title: Soldering @@ -346,7 +346,7 @@ wiki: - title: Code Editors - Introduction to VS Code and Vim url: /wiki/tools/code-editors-Introduction-to-vs-code-and-vim/ - title: Qtcreator UI development with ROS - url: /wiki/tools/Qtcreator-ros/ + url: /wiki/tools/qt-creator-ros/ - title: Tutorial on Using USB Compute Sticks url: /wiki/tools/usb-compute-sticks/ - title: Datasets @@ -362,7 +362,7 @@ wiki: - title: Planning Overview url: /wiki/planning/planning-overview/ - title: A* Planner Implementation Guide - url: /wiki/planning/astar_planning_implementation_guide/ + url: /wiki/planning/astar-planning-implementation-guide/ - title: Coverage Planner Implementation Guide url: /wiki/planning/coverage-planning-implementation-guide/ - title: Resolved Rates diff --git a/wiki/actuation/__all_subsections.md b/wiki/actuation/__all_subsections.md index fc9d9f2e..39379a57 100644 --- a/wiki/actuation/__all_subsections.md +++ b/wiki/actuation/__all_subsections.md @@ -439,7 +439,7 @@ In most of the applications for a DC geared motor speed control or position cont In order to use feedback control, we need information about the state of the motor. This is achieved through the use of encoders mounted on the motor shaft. Typically, data from this encoder is fed into a microcontroller, such as an Arduino. The microcontroller would need to have a code for PID control. 
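The encoder-feedback loop described above can be sketched as a minimal discrete PID update. This is a generic illustration only, not the Pololu Jrk or Arduino code the article refers to; the class name, gains, and time step are all assumptions:

```python
# Minimal discrete PID update -- an illustrative sketch, not the Pololu Jrk
# or Arduino implementation discussed in the article.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error for the I term
        self.prev_error = None   # last error, used by the D term

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a real controller, `update` would run at a fixed rate, with `measurement` read from the encoder and the return value written to the motor driver.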
An easier option would be to use a motor controller which has the ability to read data from the encoder. One such controller is the Pololu Jrk 21v3 USB Motor Controller with Feedback. More details about this component can be found [here](https://www.pololu.com/product/1392). -/wiki/actuation/moveit-and-HEBI-integration/ +/wiki/actuation/moveit-and-hebi-integration/ --- # Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be # overwritten except in special circumstances. @@ -709,7 +709,7 @@ If your sensor readings are very noise you might want to consider adding a Kalma Finally, you can write your own PID software using [this guide](http://brettbeauregard.com/blog/2011/04/improving-the-beginners-pid-introduction/). You might want to do this if you want to add custom features or just want to learn more about controls. Only recommended for advanced users. -/wiki/actuation/Pure-Pursuit-Controller-for-Skid-Steering-Robot/ +/wiki/actuation/pure-pursuit-controller-for-skid-steering-robot/ --- date: 2020-04-10 Title: Pure-Pursuit based Controller for Skid Steering Robot diff --git a/wiki/actuation/index.md b/wiki/actuation/index.md index fa0e0333..9cc21bea 100644 --- a/wiki/actuation/index.md +++ b/wiki/actuation/index.md @@ -21,13 +21,13 @@ The "Controls & Actuation" section provides a detailed guide to implementing and - **[Motor Controller with Feedback](/wiki/actuation/motor-controller-feedback/)** Introduces motor controllers with encoder feedback, highlighting the Pololu Jrk 21v3 USB Motor Controller as an example. -- **[MoveIt Motion Planning and HEBI Actuator Setup and Integration](/wiki/actuation/moveit-and-HEBI-integration/)** +- **[MoveIt Motion Planning and HEBI Actuator Setup and Integration](/wiki/actuation/moveit-and-hebi-integration/)** Outlines using MoveIt in ROS for robotic motion planning and integrating it with HEBI actuators for hardware execution.
- **[PID Control on Arduino](/wiki/actuation/pid-control-arduino/)** Explains implementing PID control on Arduino platforms, including tips for tuning and integrating Kalman filters for noisy sensors. -- **[Pure-Pursuit Based Controller for Skid Steering Robots](/wiki/actuation/Pure-Pursuit-Controller-for-Skid-Steering-Robot/)** +- **[Pure-Pursuit Based Controller for Skid Steering Robots](/wiki/actuation/pure-pursuit-controller-for-skid-steering-robot/)** Covers the Pure-Pursuit algorithm for trajectory tracking in skid-steering robots, including implementation steps and constraints. - **[Task Prioritization Control for Advanced Manipulator Control](/wiki/actuation/task-prioritization-control/)** diff --git a/wiki/actuation/moveit-and-HEBI-integration.md b/wiki/actuation/moveit-and-hebi-integration.md similarity index 100% rename from wiki/actuation/moveit-and-HEBI-integration.md rename to wiki/actuation/moveit-and-hebi-integration.md diff --git a/wiki/actuation/Pure-Pursuit-Controller-for-Skid-Steering-Robot.md b/wiki/actuation/pure-pursuit-controller-for-skid-steering-robot.md similarity index 100% rename from wiki/actuation/Pure-Pursuit-Controller-for-Skid-Steering-Robot.md rename to wiki/actuation/pure-pursuit-controller-for-skid-steering-robot.md diff --git a/wiki/common-platforms/__all_subsections.md b/wiki/common-platforms/__all_subsections.md index 7a0a9ec1..ae9b8b4a 100644 --- a/wiki/common-platforms/__all_subsections.md +++ b/wiki/common-platforms/__all_subsections.md @@ -555,7 +555,7 @@ Using the above tutorials, one can get started easily with the HelloNode and Too - [Tool Share examples](https://github.com/hello-robot/stretch_tool_share) -/wiki/common-platforms/husky_interfacing_and_communication/ +/wiki/common-platforms/husky-interfacing-and-communication/ --- date: 2019-05-14 title: Husky Interfacing and Communication @@ -897,7 +897,7 @@ For instance, in the [Husky](https://github.com/husky/husky/tree/humble-devel) r This tutorial aims to guide you 
through the process of setting up the ROS 1 navigation stack on the Clearpath Husky and seamlessly connecting it to ROS 2. It assumes a foundational understanding of both ROS 1 and ROS 2. ## ROS 1 - ROS 2 Bridge -To configure the Clearpath Husky hardware, we will be using the [husky_robot](https://github.com/husky/husky_robot) repository. This repository contains the ROS 1 packages for the Husky, including the navigation stack. To connect the ROS 1 packages to ROS 2, we will be using the [ros1_bridge](https://github.com/ros2/ros1_bridge) package. Detailed instruction on how to setup this is provided in [this tutorial](https://roboticsknowledgebase.com/wiki/interfacing/ros1_ros2_bridge/) on the Robotics Knowledgebase. Once the bridge is established, we can proceed to configure the Husky using the following steps. +To configure the Clearpath Husky hardware, we will be using the [husky_robot](https://github.com/husky/husky_robot) repository. This repository contains the ROS 1 packages for the Husky, including the navigation stack. To connect the ROS 1 packages to ROS 2, we will be using the [ros1_bridge](https://github.com/ros2/ros1_bridge) package. Detailed instructions on how to set this up are provided in [this tutorial](https://roboticsknowledgebase.com/wiki/interfacing/ros1-ros2-bridge/) on the Robotics Knowledgebase. Once the bridge is established, we can proceed to configure the Husky using the following steps. ``` # Install Husky Packages apt-get update && apt install ros-noetic-husky* -y @@ -1280,7 +1280,7 @@ This tutorial provides a step-by-step guide to configure the Clearpath Husky for It is recommended to read the [Nav2 documentation](https://navigation.ros.org/index.html) to understand the Nav2 stack in detail. The [Nav2 tutorials](https://navigation.ros.org/getting_started/index.html) are also a good place to start.
## See Also: -- [ROS1 - ROS2 Bridge](https://roboticsknowledgebase.com/wiki/interfacing/ros1_ros2_bridge/) +- [ROS1 - ROS2 Bridge](https://roboticsknowledgebase.com/wiki/interfacing/ros1-ros2-bridge/) ## Further Readings: - [Nav2 First Time Setup](https://navigation.ros.org/setup_guides/index.html) diff --git a/wiki/common-platforms/husky_interfacing_and_communication.md b/wiki/common-platforms/husky-interfacing-and-communication.md similarity index 100% rename from wiki/common-platforms/husky_interfacing_and_communication.md rename to wiki/common-platforms/husky-interfacing-and-communication.md diff --git a/wiki/common-platforms/index.md b/wiki/common-platforms/index.md index 560dc944..244e2adf 100644 --- a/wiki/common-platforms/index.md +++ b/wiki/common-platforms/index.md @@ -35,7 +35,7 @@ We encourage contributions to further enhance the knowledge base in this section - **[Interfacing with the Nvidia Orin](/wiki/common-platforms/interfacing-with-nvidia-orin/)** A comprehensive guide to using the Nvidia Jetson AGX Orin for robotics. Covers power delivery, GPIO pinouts, high-speed interfaces like USB and Ethernet, and debugging tools for reliable sensor integration. -- **[Husky Interfacing and Communication](/wiki/common-platforms/husky_interfacing_and_communication/)** +- **[Husky Interfacing and Communication](/wiki/common-platforms/husky-interfacing-and-communication/)** Discusses how to set up communication with the Clearpath Husky robot, including hardware setup and localization using GPS, IMU, and odometry. - **[Khepera 4 Robot Guide](/wiki/common-platforms/khepera4/)** diff --git a/wiki/common-platforms/ros/ros-intro.md b/wiki/common-platforms/ros/ros-intro.md index 855d4f02..1bbd098d 100644 --- a/wiki/common-platforms/ros/ros-intro.md +++ b/wiki/common-platforms/ros/ros-intro.md @@ -25,9 +25,8 @@ Actually, ROS is not an operating system but a meta operating system, which mean ROS is useless without knowing how it works. 
Merely reading through the tutorials is not enough; this cannot be stressed enough. Learning ROS takes time and effort, so when going through the tutorials, try to understand what you are seeing, and make sure you follow along by typing the example code, and run each tutorial to learn what is happening. - One of the most important package in ROS is [navigation stack](ros-navigation). Here are several topics of it covered in this directory for reference. + One of the most important packages in ROS is the [navigation stack](ros-navigation). Several of its topics are covered in this wiki for reference. 1. [Global Planner](ros-global-planner.md) - 2. [Local Planner](ros-local-planner) - 3. [Costmap](ros-cost-maps) - 4. [Mapping and Localization](ros-mapping-localization) - 5. [Motion Server](ros-motion-server-framework) + 2. [Costmap](ros-cost-maps) + 3. [Mapping and Localization](ros-mapping-localization) + 4. [Motion Server](ros-motion-server-framework) diff --git a/wiki/common-platforms/ros2-navigation-for-clearpath-husky.md b/wiki/common-platforms/ros2-navigation-for-clearpath-husky.md index 12efa8fb..0a805802 100644 --- a/wiki/common-platforms/ros2-navigation-for-clearpath-husky.md +++ b/wiki/common-platforms/ros2-navigation-for-clearpath-husky.md @@ -10,7 +10,7 @@ For instance, in the [Husky](https://github.com/husky/husky/tree/humble-devel) r This tutorial aims to guide you through the process of setting up the ROS 1 navigation stack on the Clearpath Husky and seamlessly connecting it to ROS 2. It assumes a foundational understanding of both ROS 1 and ROS 2. ## ROS 1 - ROS 2 Bridge -To configure the Clearpath Husky hardware, we will be using the [husky_robot](https://github.com/husky/husky_robot) repository. This repository contains the ROS 1 packages for the Husky, including the navigation stack. To connect the ROS 1 packages to ROS 2, we will be using the [ros1_bridge](https://github.com/ros2/ros1_bridge) package.
Detailed instruction on how to setup this is provided in [this tutorial](https://roboticsknowledgebase.com/wiki/interfacing/ros1_ros2_bridge/) on the Robotics Knowledgebase. Once the bridge is established, we can proceed to configure the Husky using the following steps. +To configure the Clearpath Husky hardware, we will be using the [husky_robot](https://github.com/husky/husky_robot) repository. This repository contains the ROS 1 packages for the Husky, including the navigation stack. To connect the ROS 1 packages to ROS 2, we will be using the [ros1_bridge](https://github.com/ros2/ros1_bridge) package. Detailed instructions on how to set this up are provided in [this tutorial](https://roboticsknowledgebase.com/wiki/interfacing/ros1-ros2-bridge/) on the Robotics Knowledgebase. Once the bridge is established, we can proceed to configure the Husky using the following steps. ``` # Install Husky Packages apt-get update && apt install ros-noetic-husky* -y @@ -393,7 +393,7 @@ This tutorial provides a step-by-step guide to configure the Clearpath Husky for It is recommended to read the [Nav2 documentation](https://navigation.ros.org/index.html) to understand the Nav2 stack in detail. The [Nav2 tutorials](https://navigation.ros.org/getting_started/index.html) are also a good place to start.
## See Also: -- [ROS1 - ROS2 Bridge](https://roboticsknowledgebase.com/wiki/interfacing/ros1_ros2_bridge/) +- [ROS1 - ROS2 Bridge](https://roboticsknowledgebase.com/wiki/interfacing/ros1-ros2-bridge/) ## Further Readings: - [Nav2 First Time Setup](https://navigation.ros.org/setup_guides/index.html) diff --git a/wiki/fabrication/__all_subsections.md b/wiki/fabrication/__all_subsections.md index 0a90162c..139b454e 100644 --- a/wiki/fabrication/__all_subsections.md +++ b/wiki/fabrication/__all_subsections.md @@ -142,7 +142,7 @@ After the build plate comes to rest, apply glue on it covering the area that is After the print has completed, wait for some time so that the heated plate can cool down. After the cool down, remove the build plate carefully from the Cube Pro and run the part and the plate under water (preferably hot). Water dissolves the glue and allows easier removal of the part. You might need to use a chisel to remove your part (depending on the material, print pattern, and the amount of glue you have used). After removing the part, clean the part and the plate using paper towels. Also remove any stray chips that might have been left during the part removal on the plate so that you and others have a smooth surface for the next print. -/wiki/fabrication/fabrication_considerations_for_3D_printing/ +/wiki/fabrication/fabrication-considerations-for-3d-printing/ --- date: 2019-05-16 title: Fabrication Considerations for 3D printing @@ -400,7 +400,7 @@ It is important to keep in mind what the purpose of the prototype is. A good rul The important thing to keep in mind is that rapid prototyping helps you test and fail early in your design process. Quickly developing systems and testing them allows you to understand the failure modes and work towards eliminating those. The faster you develop and test, the more time you have to build a robust and fail-proof system.
Ra -/wiki/fabrication/series-A-pro/ +/wiki/fabrication/series-a-pro/ --- date: 2017-12-16 title: Series A Pro diff --git a/wiki/fabrication/fabrication_considerations_for_3D_printing.md b/wiki/fabrication/fabrication-considerations-for-3d-printing.md similarity index 100% rename from wiki/fabrication/fabrication_considerations_for_3D_printing.md rename to wiki/fabrication/fabrication-considerations-for-3d-printing.md diff --git a/wiki/fabrication/index.md b/wiki/fabrication/index.md index dfa3443c..f72254de 100644 --- a/wiki/fabrication/index.md +++ b/wiki/fabrication/index.md @@ -15,7 +15,7 @@ This section provides in-depth resources and tutorials for various fabrication t - **[Building Prototypes in CubePro](/wiki/fabrication/cube-pro/)** A step-by-step guide for using the CubePro 3D printer, from setting up materials to leveling the plate and applying glue for optimal prints. Covers part removal and maintenance tips. -- **[Fabrication Considerations for 3D Printing](/wiki/fabrication/fabrication_considerations_for_3D_printing/)** +- **[Fabrication Considerations for 3D Printing](/wiki/fabrication/fabrication-considerations-for-3d-printing/)** Key factors to consider in 3D printing, such as wall thickness, part orientation, overhangs, warping, shrinkage, and material selection (e.g., PLA, ABS). Includes strategies to improve print quality and optimize design. - **[Machining and Prototyping](/wiki/fabrication/machining-prototyping/)** @@ -33,7 +33,7 @@ This section provides in-depth resources and tutorials for various fabrication t - **[Rapid Prototyping](/wiki/fabrication/rapid-prototyping/)** Discusses the iterative nature of rapid prototyping, highlighting techniques like CAD design, 3D printing, and laser cutting. Emphasizes the importance of validating designs early and cost-effectively. 
-- **[Series A Pro](/wiki/fabrication/series-A-pro/)** +- **[Series A Pro](/wiki/fabrication/series-a-pro/)** Explores the features of the Series A Pro 3D printer, including its web interface, print profiles, material options, and advanced control capabilities for high-quality and flexible printing. - **[Soldering](/wiki/fabrication/soldering/)** diff --git a/wiki/fabrication/series-A-pro.md b/wiki/fabrication/series-a-pro.md similarity index 100% rename from wiki/fabrication/series-A-pro.md rename to wiki/fabrication/series-a-pro.md diff --git a/wiki/fabrication/ultimaker-series.md b/wiki/fabrication/ultimaker-series.md index 332fe9e9..eee54829 100644 --- a/wiki/fabrication/ultimaker-series.md +++ b/wiki/fabrication/ultimaker-series.md @@ -25,7 +25,7 @@ Using the options on the left side of the screen (Circle 2), move your piece(s) Next, check that the material in the menu indicated by Circle 3 **matches what is actually loaded into your printer**. If you don't do this, your spliced file may not load correctly when you attempt to start the print. -In the menu indicated by Circle 4, there are a variety of print options for you to explore. You can learn more about different print options and how to choose them for your specific piece by heading over to our page on [Fabrication Considerations for 3D Printing](/wiki/fabrication/fabrication_considerations_for_3D_printing/). +In the menu indicated by Circle 4, there are a variety of print options for you to explore. You can learn more about different print options and how to choose them for your specific piece by heading over to our page on [Fabrication Considerations for 3D Printing](/wiki/fabrication/fabrication-considerations-for-3d-printing/). Finally, you can click the 'Slice' button indicated by Circle 5 when ready. As long as your material was loaded correctly, you should not have to worry about re-setting things like print temperatures or changing the core.
You *should* check those when changing the loaded material spools, however. @@ -93,7 +93,7 @@ All in all, the Ultimaker Series 3 is relatively capable 3D printer that is easy ## See Also - [3D Printers](/wiki/fabrication/3d-printers/) -- [Fabrication Considerations for 3D Printing](/wiki/fabrication/fabrication_considerations_for_3D_printing/) +- [Fabrication Considerations for 3D Printing](/wiki/fabrication/fabrication-considerations-for-3d-printing/) ## Further Reading diff --git a/wiki/interfacing/__all_subsections.md b/wiki/interfacing/__all_subsections.md index d5fa2e25..734fe427 100644 --- a/wiki/interfacing/__all_subsections.md +++ b/wiki/interfacing/__all_subsections.md @@ -591,7 +591,7 @@ You will soon realize that working with Lua is extremely useless as it does not - [ROS](https://github.com/roboTJ101/ros_myo) -/wiki/interfacing/ros1_ros2_bridge/ +/wiki/interfacing/ros1-ros2-bridge/ --- # Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be # overwritten except in special circumstances. 
diff --git a/wiki/interfacing/buffer-issues.md b/wiki/interfacing/buffer-issues.md index 52eedaf0..92df2c3a 100644 --- a/wiki/interfacing/buffer-issues.md +++ b/wiki/interfacing/buffer-issues.md @@ -100,7 +100,7 @@ Key takeaways: - [RC Cars for Autonomous Vehicle Research](/wiki/common-platforms/rccars-the-complete-guide/) - [ROS-Arduino Interface using rosserial](/wiki/common-platforms/ros/ros-arduino-interface/) - [Vedder Open-Source Electronic Speed Controller](/wiki/actuation/vedder-electronic-speed-controller/) -- [Pure Pursuit Controller for Skid-Steering Robot](/wiki/actuation/Pure-Pursuit-Controller-for-Skid-Steering-Robot/) +- [Pure Pursuit Controller for Skid-Steering Robot](/wiki/actuation/pure-pursuit-controller-for-skid-steering-robot/) ## Further Reading - https://vesc-project.com/ diff --git a/wiki/interfacing/index.md b/wiki/interfacing/index.md index 1eff3505..4274dee4 100644 --- a/wiki/interfacing/index.md +++ b/wiki/interfacing/index.md @@ -24,7 +24,7 @@ This section delves into **interfacing techniques and tools** for robotics appli - **[Getting Started with the Myo](/wiki/interfacing/myo/)** A beginner-friendly tutorial for the Myo Gesture Control Armband, covering Lua scripting basics, API access, and available language bindings (e.g., Python, Java, ROS). Guides users through creating simple gesture-based applications and integrating Myo with larger robotic systems. -- **[ROS 1 - ROS 2 Bridge](/wiki/interfacing/ros1_ros2_bridge/)** +- **[ROS 1 - ROS 2 Bridge](/wiki/interfacing/ros1-ros2-bridge/)** A detailed walkthrough of setting up the ROS 1 bridge to enable communication between ROS 1 and ROS 2 environments. Discusses dynamic and static bridges, installation tips, and best practices for sourcing. Also includes guidance for Docker-based deployment and examples of bridging specific topics and services. 
## Resources diff --git a/wiki/interfacing/ros1_ros2_bridge.md b/wiki/interfacing/ros1-ros2-bridge.md similarity index 100% rename from wiki/interfacing/ros1_ros2_bridge.md rename to wiki/interfacing/ros1-ros2-bridge.md diff --git a/wiki/machine-learning/intro-to-diffusion.md b/wiki/machine-learning/intro-to-diffusion.md index c220af0a..b12b089a 100644 --- a/wiki/machine-learning/intro-to-diffusion.md +++ b/wiki/machine-learning/intro-to-diffusion.md @@ -324,7 +324,7 @@ Diffusion models represent a powerful paradigm for generative modeling that has - [Introduction to Reinforcement Learning](/wiki/machine-learning/intro-to-rl/) - Alternative approaches to learning robot policies - [GRPO for Diffusion Policies in Robotics](/wiki/machine-learning/grpo-diffusion-policies/) - Using reinforcement learning to optimize diffusion policies with reward-based learning -- [NLP for Robotics](/wiki/machine-learning/nlp_for_robotics/) - Other applications of modern ML in robotics +- [NLP for Robotics](/wiki/machine-learning/nlp-for-robotics/) - Other applications of modern ML in robotics ## Further Reading diff --git a/wiki/networking/__all_subsections.md b/wiki/networking/__all_subsections.md index 84704199..7f85e20a 100644 --- a/wiki/networking/__all_subsections.md +++ b/wiki/networking/__all_subsections.md @@ -33,7 +33,7 @@ There are different transport protocols used in Bluetooth, the most important of ## Bluez stack and PyBluez Now coming to the actual application part of the post. PyBluez is a C (known as Bluez) and Python library for Bluetooth socket programming. PyBluez is a Python extension module written in C that provides access to system Bluetooth resources in an object oriented, modular manner. Although it provides the same functionality in both languages, only Python based implementation is discussed here. PyBluez is available for Microsoft Windows (XP onwards) and GNU/Linux. 
Basic knowledge of Python is assumed in this tutorial and familiarity with Linux operating system as well. -Instructions for installing all the required libraries can be found on [here](htp://www.bluez.org), but installation on Linux is fairly simple through an apt repository. On the terminal one can simply type: +Instructions for installing all the required libraries can be found on [here](http://www.bluez.org), but installation on Linux is fairly simple through an apt repository. On the terminal one can simply type: ``apt-get install libbluetooth1-dev bluez-utils`` diff --git a/wiki/networking/bluetooth-sockets.md b/wiki/networking/bluetooth-sockets.md index 0dcaa9f9..ac98d595 100644 --- a/wiki/networking/bluetooth-sockets.md +++ b/wiki/networking/bluetooth-sockets.md @@ -31,7 +31,7 @@ There are different transport protocols used in Bluetooth, the most important of ## Bluez stack and PyBluez Now coming to the actual application part of the post. PyBluez is a C (known as Bluez) and Python library for Bluetooth socket programming. PyBluez is a Python extension module written in C that provides access to system Bluetooth resources in an object oriented, modular manner. Although it provides the same functionality in both languages, only Python based implementation is discussed here. PyBluez is available for Microsoft Windows (XP onwards) and GNU/Linux. Basic knowledge of Python is assumed in this tutorial and familiarity with Linux operating system as well. -Instructions for installing all the required libraries can be found on [here](htp://www.bluez.org), but installation on Linux is fairly simple through an apt repository. On the terminal one can simply type: +Instructions for installing all the required libraries can be found on [here](http://www.bluez.org), but installation on Linux is fairly simple through an apt repository. 
On the terminal one can simply type: ``apt-get install libbluetooth1-dev bluez-utils`` diff --git a/wiki/planning/__all_subsections.md b/wiki/planning/__all_subsections.md index a1dfc2a0..0ec2119c 100644 --- a/wiki/planning/__all_subsections.md +++ b/wiki/planning/__all_subsections.md @@ -1,4 +1,4 @@ -/wiki/planning/astar_planning_implementation_guide/ +/wiki/planning/astar-planning-implementation-guide/ --- # Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be # overwritten except in special circumstances. @@ -25,7 +25,7 @@ A* is a popular search algorithm that is guaranteed to return an optimal path, a - h(s) value - estimate of the cost-to-go from the current state (s) to the goal state (sgoal) - f(s) value - total estimated cost from the start state (sstart) to the goal state (sgoal) - Admissibility - h(s) is an underestimate of the true cost to goal -- Monotonicity/Consistent - h(s) <= c(s,s’) + h(s’) for all successors s’ of s +- Monotonicity/Consistent - $h(s) \le c(s,s') + h(s')$ for all successors $s'$ of $s$ - Optimality - no path exists from the start to the goal with a lower cost within the constraints of the problem A* works by computing optimal g-values for all states along the search at any point in time. @@ -244,48 +244,48 @@ There are many ways to plan a trajectory for a robot. A trajectory can be seen a The Frenet frame (also called the moving trihedron or Frenet trihedron) along a curve is a moving (right-handed) coordinate system determined by the tangent line and curvature. The frame, which locally describes one point on a curve, changes orientation along the length of the curve. -More formally, the Frenet frame of a curve at a point is a triplet of three mutually [orthogonal](https://www.statisticshowto.com/orthogonal-functions/#definition) unit vectors {T, N, B}. 
In three-dimensions, the Frenet frame consists of [1]: -The unit tangent vector T, which is the [unit vector](https://www.statisticshowto.com/tangent-vector-velocity/) in the direction of what is being modeled (like velocity), -The [unit normal] (https://www.statisticshowto.com/unit-normal-vector/) N: the direction where the curve is turning. We can get the normal by taking the [derivative](https://www.statisticshowto.com/differentiate-definition/) of the tangent then dividing by its length. You can think of the normal as being the place the curve sits in [2]. -The unit binormal B = T x N, which is the cross product of the unit tangent and unit normal. +More formally, the Frenet frame of a curve at a point is a triplet of three mutually [orthogonal](https://www.statisticshowto.com/orthogonal-functions/#definition) unit vectors $\{T, N, B\}$. In three dimensions, the Frenet frame consists of [1]: +- The unit tangent vector $T$, which is the [unit vector](https://www.statisticshowto.com/tangent-vector-velocity/) in the direction of what is being modeled (like velocity). +- The [unit normal](https://www.statisticshowto.com/unit-normal-vector/) vector $N$: the direction where the curve is turning. We can get the normal by taking the [derivative](https://www.statisticshowto.com/differentiate-definition/) of the tangent, then dividing by its length [2]. +- The unit binormal vector $B = T \times N$, which is the cross product of the unit tangent and unit normal. -The tangent and normal unit vectors span a plane called the osculating plane at F(s). In four-dimensions, the Frenet frame contains an additional vector, the trinormal unit vector [3]. While vectors have no [origin](https://www.statisticshowto.com/calculus-definitions/cartesian-plane-quadrants-ordinate-abscissa/#origin) in space, it’s traditional with Frenet frames to think of the vectors as radiating from the point of interest. 
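The {T, N, B} construction above can be checked numerically. Below is a minimal sketch, assuming a helix as the test curve and illustrative names (this is not code from the wiki page): the unit tangent is the normalized first derivative, the unit normal is the normalized derivative of the tangent, and the binormal is their cross product.

```python
import numpy as np

def frenet_frame(r, t, h=1e-5):
    """Approximate the Frenet frame {T, N, B} of a curve r(t) with central differences."""
    def tangent(u):
        # Unit tangent T: normalized first derivative of r.
        dr = (r(u + h) - r(u - h)) / (2 * h)
        return dr / np.linalg.norm(dr)
    T = tangent(t)
    # Unit normal N: derivative of the tangent, divided by its length.
    dT = (tangent(t + h) - tangent(t - h)) / (2 * h)
    N = dT / np.linalg.norm(dT)
    # Unit binormal B = T x N.
    B = np.cross(T, N)
    return T, N, B

# Test curve: a helix r(t) = (cos t, sin t, t).
helix = lambda t: np.array([np.cos(t), np.sin(t), t])
T, N, B = frenet_frame(helix, 1.0)
```

For any smooth curve the three returned vectors should be mutually orthogonal unit vectors, which is a quick sanity check on the finite-difference step size.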
+The tangent and normal unit vectors span a plane called the osculating plane at $F(s)$. In four dimensions, the Frenet frame contains an additional vector, the trinormal unit vector [3]. While vectors have no [origin](https://www.statisticshowto.com/calculus-definitions/cartesian-plane-quadrants-ordinate-abscissa/#origin) in space, it is traditional with Frenet frames to think of the vectors as radiating from the point of interest. More details:[here](https://fjp.at/posts/optimal-frenet/#trajectory-planning-in-the-frenet-space) ## Algorithm -1. Determine the trajectory start state [x1,x2,θ,κ,v,a] +1. Determine the trajectory start state $[x_1, x_2, \theta, \kappa, v, a]$ The trajectory start state is obtained by evaluating the previously calculated trajectory at the prospective start state (low-level-stabilization). At system initialization and after reinitialization, the current vehicle position is used instead (high-level-stabilization). -2. Selection of the lateral mode -Depending on the velocity v the time based (d(t)) or running length / arc length based (d(s)) lateral planning mode is activated. By projecting the start state onto the reference curve the the longitudinal start position s(0) is determined. The frenet state vector [s,s˙,s¨,d,d′,d′′](0) can be determined using the frenet transformation. For the time based lateral planning mode, [d˙,d¨](0) +2. Selection of the lateral mode +Depending on the velocity $v$, the time-based ($d(t)$) or running-length/arc-length-based ($d(s)$) lateral planning mode is activated. By projecting the start state onto the reference curve, the longitudinal start position $s(0)$ is determined. The Frenet state vector $[s,\dot{s},\ddot{s},d,d',d''](0)$ can be determined using the Frenet transformation. For the time-based lateral planning mode, $[\dot{d},\ddot{d}](0)$ need to be calculated. 3. 
Generating the lateral and longitudinal trajectories Trajectories including their costs are generated for the lateral (mode dependent) as well as the longitudinal motion (velocity keeping, vehicle following / distance keeping) in the frenet space. In this stage, trajectories with high lateral accelerations with respect to the reference path can be neglected to improve the computational performance. -4. Combining lateral and longitudinal trajectories -Summing the partial costs of lateral and longitudinal costs using J(d(t),s(t))=Jd(d(t))+ks⋅Js(s(t)) +4. Combining lateral and longitudinal trajectories +Summing the partial costs of lateral and longitudinal costs using $J(d(t), s(t)) = J_d(d(t)) + k_s \cdot J_s(s(t))$ , for all active longitudinal mode every longitudinal trajectory is combined with every lateral trajectory and transformed back to world coordinates using the reference path. The trajectories are verified if they obey physical driving limits by subsequent point wise evaluation of curvature and acceleration. This leads to a set of potentially drivable maneuvers of a specific mode in world coordinates. 5. Static and dynamic collision check Every trajectory set is evaluated with increasing total costs if static and dynamic collisions are avoided. The trajectory with the lowest cost is then selected. 6. Longitudinal mode alternation -Using the sign based (in the beginning) jerk a(0), the trajectory with the strongest decceleration or the trajectory which accelerates the least respectivel +Using the sign-based (in the beginning) jerk `a(0)`, the trajectory with the strongest deceleration or the trajectory which accelerates the least, respectively, is chosen. -“Frenet Coordinates”, are a way of representing position on a road in a more intuitive way than traditional (x,y) Cartesian Coordinates. -With Frenet coordinates, we use the variables s and d to describe a vehicle’s position on the road or a reference path.
The s coordinate represents distance along the road (also known as longitudinal displacement) and the d coordinate represents side-to-side position on the road (relative to the reference path), and is also known as lateral displacement. +“Frenet Coordinates” are a way of representing position on a road in a more intuitive way than traditional `(x,y)` Cartesian Coordinates. +With Frenet coordinates, we use the variables `s` and `d` to describe a vehicle’s position on the road or a reference path. The `s` coordinate represents distance along the road (also known as longitudinal displacement) and the `d` coordinate represents side-to-side position on the road (relative to the reference path), and is also known as lateral displacement. In the following sections the advantages and disadvantages of Frenet coordinates are compared to the Cartesian coordinates. -##Frenet Features +## Frenet Features -The image below[frenet path] depicts a curvy road with a Cartesian coordinate system laid on top of it, as well as a curved (continuously curved) reference path (for example the middle of the road). +The image below ([frenet path]) depicts a curvy road with a Cartesian coordinate system laid on top of it, as well as a curved (continuously curved) reference path (for example, the middle of the road). The next image shows the same reference path together with its Frenet coordinates. -The s coordinate represents the run length and starts with s = 0 at the beginning of the reference path. Lateral positions relative to the reference path are are represented with the d coordinate. Positions on the reference path are represented with d = 0. d is positive to the left of the reference path and negative on the right of it, although this depends on the convention used for the local reference frame. -The image above[frenet path] shows that curved reference paths (such as curvy roads) are represented as straight lines on the s axis in Frenet coordinates.
However, motions that do not follow the reference path exactly result in non straight motions in Frenet coordinates. Instead such motions result in an offset from the reference path and therefore the s axis, which is described with the d coordinate. The following image shows the two different representations (Cartesian vs Frenet) +The `s` coordinate represents the run length and starts with `s = 0` at the beginning of the reference path. Lateral positions relative to the reference path are represented with the `d` coordinate. Positions on the reference path are represented with `d = 0`. `d` is positive to the left of the reference path and negative on the right of it, although this depends on the convention used for the local reference frame. +The image above ([frenet path]) shows that curved reference paths (such as curvy roads) are represented as straight lines on the $s$ axis in Frenet coordinates. However, motions that do not follow the reference path exactly result in non-straight motions in Frenet coordinates. Instead, such motions result in an offset from the reference path and therefore the $s$ axis, which is described with the $d$ coordinate. The following image shows the two different representations (Cartesian vs. Frenet). To use Frenet coordinates it is required to have a continouosly smooth reference path. -The `s` coordinate represents the run length and starts with `s = 0` at the beginning of the reference path. Lateral positions relative to the reference path are are represented with the d coordinate.
Positions on the reference path are represented with `d = 0`. `d` is positive to the left of the reference path and negative on the right of it, although this depends on the convention used for the local reference frame. The image above shows that curved reference paths (such as curvy roads) are represented as straight lines on the s axis in Frenet coordinates. However, motions that do not follow the reference path exactly result in non straight motions in Frenet coordinates. Instead such motions result in an offset from the reference path and therefore the s axis, which is described with the d coordinate. The following image shows the two different representations (Cartesian vs Frenet) ## Reference Path @@ -299,21 +299,23 @@ A reference path can be represented in two different forms although for all repr 3. Clothoid (special polynome) 4. Polyline (single points with run length information) -Clothoid - x(l)=c0+c1∗l +Clothoid: + +$$x(l) = c_0 + c_1 l$$ Polyline ## Transformation The transformation from local vehicle coordinates to Frenet coordinates is based on the relations. -Given a point PC in the vehicle frame search for the closest point RC on the reference path. The run length of RC, which is known from the reference path points, determins the s coordinate of the transformed point PF. If the reference path is sufficiently smooth (continuously differentiable) then the vector PR→ is orthogonal to the reference path at the point RC. The signed length of PR→ determines the d coordinate of PF. The sign is positive, if PC +Given a point $P_C$ in the vehicle frame, search for the closest point $R_C$ on the reference path. The run length of $R_C$, which is known from the reference path points, determines the $s$ coordinate of the transformed point $P_F$. If the reference path is sufficiently smooth (continuously differentiable), then the vector $\overrightarrow{PR}$ is orthogonal to the reference path at point $R_C$. 
The signed length of $\overrightarrow{PR}$ determines the $d$ coordinate of $P_F$. The sign is positive if $P_C$ + +lies on the left along the run length of the reference path. -lies on the left along the run lenght of the reference path. +The procedure to transform a point $P_F$ +from Frenet coordinates to the local vehicle frame in Cartesian coordinates is analogous. First, find point $R_C$, which lies on the reference path at run length $s$. Next, a normal unit vector $\vec{d}$ is determined, which, at this point, is orthogonal to the reference path. The direction of this vector points toward positive $d$ values and therefore points to the left with increasing run length $s$. Therefore, the vector $\vec{d}$ depends on run length, which leads to: -The procedure to transform a point PF -from Frenet coordinates to the local vehicle frame in Cartesian coordinates is analogous. First, the point RC, which lies on the reference path at run length s. Next, a normal unit vector d⃗ is determined, which, in this point, is orthogonal to the reference path. The direction of this vector points towards positive d values and therefore points to the left with increasing run length s. 
Therefore, the vector d⃗ depends on the run length, which leads to: - PC(s,d)=RC(s)+d⋅d⃗ (s)(2) +$$P_C(s,d) = R_C(s) + d \cdot \vec{d}(s)$$ @@ -330,18 +332,18 @@ The given article describes in detail what is Frenet Frame and how robot motion ## Further Reading -[https://fjp.at/posts/optimal-frenet/#frenet-coordinates](Frenet Cordinates) -[https://www.mathworks.com/help/nav/ug/highway-trajectory-planning-using-frenet.html](Highway Trajectory Planning Using Frenet Reference Path) -[https://www.researchgate.net/publication/224156269_Optimal_Trajectory_Generation_for_Dynamic_Street_Scenarios_in_a_Frenet_Frame](Optimal Trajectory Generation for Dynamic Street Scenarios in a Frenet Frame) +[Frenet Cordinates](https://fjp.at/posts/optimal-frenet/#frenet-coordinates) +[Highway Trajectory Planning Using Frenet Reference Path](https://www.mathworks.com/help/nav/ug/highway-trajectory-planning-using-frenet.html) +[Optimal Trajectory Generation for Dynamic Street Scenarios in a Frenet Frame](https://www.researchgate.net/publication/224156269_Optimal_Trajectory_Generation_for_Dynamic_Street_Scenarios_in_a_Frenet_Frame) ## References -[https://fjp.at/posts/optimal-frenet/#frenet-coordinates](Frenet Frame) +[Frenet Frame](https://fjp.at/posts/optimal-frenet/#frenet-coordinates) -/wiki/planning/move_base_flex/ +/wiki/planning/move-base-flex/ --- # Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be # overwritten except in special circumstances. @@ -718,7 +720,7 @@ This section is focused on search-based planning algorithms, and more specifical The A* algorithm is similar to Djikstra’s algorithm except that A* is incentivized to search towards the goal, thereby focusing the search. Specifically, the difference is an added heuristic that takes into account the cost of the node of interest to the goal node in addition to the cost to get to that node from the start node. 
Since A* prioritizes searching those nodes that are closer to the end goal, this typically results in faster computation time in comparison to Djikstra’s. Graph search algorithms like Djikstra’s and A* utilize a priority queue to determine which node in the graph to expand from. Graph search models use a function to assign a value to each node, and this value is used to determine which node to next expand from. The function is of the form: -f(s) = g(s) + h(s) +$$f(s) = g(s) + h(s)$$ With g(s) being the cost of the path to the current node, and h(s) some heuristic which changes depending on which algorithm is being implemented. A typically used heuristic is euclidean distance from the current node to the goal node. @@ -735,7 +737,7 @@ D* Lite’s main perk is that the algorithm is much more simple than A* or D*, a With these two estimates, we have consistency defined, where a node is **consistent** if g = rhs, and a node is inconsistent otherwise. Inconsistent nodes are then processed with higher priority, to be made consistent, based on a queue, which allows the algorithm to focus the search and order the cost updates more efficiently. The priority of a node on this list is based on: -minimum(g(s), rhs(s)) + heuristic +$$\min(g(s), rhs(s)) + \text{heuristic}$$ Other Terminology: - A **successor node** is defined as one that has a directed edge from another node to the node @@ -760,15 +762,15 @@ Nonholonomic systems are characterized by constraint equations involving the tim **Two-driving wheel robots**: The general dynamic model is given as: ![](/assets/images/planning/2dwr1.png) -The reference point of the robot is the midpoint of the two wheels; its coordinates, with respect to a fixed frame, are denoted by (x,y) and θ is the direction of the driving wheels and ℓ is the distance between the driving wheels. 
By setting v=½\*(v1+v2) and ω= 1/ℓ * (v1−v2) we get the kinematic model which is expressed as the following 3-dimensional system: +The reference point of the robot is the midpoint of the two wheels; its coordinates, with respect to a fixed frame, are denoted by $(x, y)$ and $\theta$ is the direction of the driving wheels while $\ell$ is the distance between the driving wheels. By setting $v=\frac{1}{2}(v_1+v_2)$ and $\omega=\frac{1}{\ell}(v_1-v_2)$, we get the kinematic model expressed as the following 3-dimensional system: ![](/assets/images/planning/clr2.png) -**Car-like robots**: The reference point with coordinates (x,y) is the midpoint of the rear wheels. We assume that the distance between both rear and front axles is unit length. We denote w as the speed of the front wheels of the car and ζ as the angle between the front wheels and the main direction θ of the car. Moreover a mechanical constraint imposes |ζ| ≤ ζmax and consequently a minimum turning radius. The general dynamic model is given as: +**Car-like robots**: The reference point with coordinates $(x,y)$ is the midpoint of the rear wheels. We assume that the distance between both rear and front axles is unit length. We denote $w$ as the speed of the front wheels of the car and $\zeta$ as the angle between the front wheels and the main direction $\theta$ of the car. Moreover, a mechanical constraint imposes $|\zeta| \le \zeta_{\max}$ and consequently a minimum turning radius. The general dynamic model is given as: ![](/assets/images/planning/clr1.png) -A first simplification consists in controlling w; it gives a 4-dimensional system. Let us assume that we do not care about the direction of the front wheels. We may still simplify the model. By setting v=wcosζ and ω=wsinζ we get a 3-dimensional system. +A first simplification consists in controlling $w$; it gives a 4-dimensional system. Let us assume that we do not care about the direction of the front wheels. We may still simplify the model. 
By setting $v=w\cos\zeta$ and $\omega=w\sin\zeta$, we get a 3-dimensional system. ![](/assets/images/planning/clr2.png) diff --git a/wiki/planning/astar_planning_implementation_guide.md b/wiki/planning/astar-planning-implementation-guide.md similarity index 100% rename from wiki/planning/astar_planning_implementation_guide.md rename to wiki/planning/astar-planning-implementation-guide.md diff --git a/wiki/planning/chomp-planning.md b/wiki/planning/chomp-planning.md index 8fdc2682..b6024a0e 100644 --- a/wiki/planning/chomp-planning.md +++ b/wiki/planning/chomp-planning.md @@ -79,8 +79,8 @@ CHOMP is a highly effective local trajectory optimizer that enhances motion plan ## See Also: - [xarm_ros Guide](/wiki/tools/xarm-ros-guide/) -- [MoveIt and HEBI Integration](/wiki/actuation/moveit-and-HEBI-integration/) -- [Pure Pursuit Controller for Skid Steering](/wiki/actuation/Pure-Pursuit-Controller-for-Skid-Steering-Robot/) +- [MoveIt and HEBI Integration](/wiki/actuation/moveit-and-hebi-integration/) +- [Pure Pursuit Controller for Skid Steering](/wiki/actuation/pure-pursuit-controller-for-skid-steering-robot/) ## Further Reading - [CHOMP: Gradient Optimization Techniques for Efficient Motion Planning](https://homes.cs.washington.edu/~joschu/docs/chomp.pdf) diff --git a/wiki/planning/frenet-frame-planning.md b/wiki/planning/frenet-frame-planning.md index 30e5e407..fe12587e 100644 --- a/wiki/planning/frenet-frame-planning.md +++ b/wiki/planning/frenet-frame-planning.md @@ -21,76 +21,78 @@ There are many ways to plan a trajectory for a robot. A trajectory can be seen a The Frenet frame (also called the moving trihedron or Frenet trihedron) along a curve is a moving (right-handed) coordinate system determined by the tangent line and curvature. The frame, which locally describes one point on a curve, changes orientation along the length of the curve. 
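The simplified kinematic models above reduce both the two-driving-wheel and the car-like robot to the state $(x, y, \theta)$ driven by $v$ and $\omega$. A minimal Euler-integration sketch of the two-wheel case, with $v=\frac{1}{2}(v_1+v_2)$ and $\omega=\frac{1}{\ell}(v_1-v_2)$ as in the text (names here are illustrative, not code from the wiki):

```python
import math

def unicycle_step(x, y, theta, v1, v2, ell, dt):
    """One Euler step of the two-driving-wheel model: v = (v1 + v2)/2, w = (v1 - v2)/ell."""
    v = 0.5 * (v1 + v2)       # forward speed of the midpoint between the wheels
    omega = (v1 - v2) / ell   # yaw rate
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds give straight-line motion along the initial heading.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = unicycle_step(*state, v1=1.0, v2=1.0, ell=0.5, dt=0.01)
# state is now approximately (1.0, 0.0, 0.0)
```

Unequal wheel speeds produce a nonzero $\omega$ and the robot traces an arc, which matches the minimum-turning-radius intuition in the car-like discussion.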
-More formally, the Frenet frame of a curve at a point is a triplet of three mutually [orthogonal](https://www.statisticshowto.com/orthogonal-functions/#definition) unit vectors {T, N, B}. In three-dimensions, the Frenet frame consists of [1]: -The unit tangent vector T, which is the [unit vector](https://www.statisticshowto.com/tangent-vector-velocity/) in the direction of what is being modeled (like velocity), -The [unit normal] (https://www.statisticshowto.com/unit-normal-vector/) N: the direction where the curve is turning. We can get the normal by taking the [derivative](https://www.statisticshowto.com/differentiate-definition/) of the tangent then dividing by its length. You can think of the normal as being the place the curve sits in [2]. -The unit binormal B = T x N, which is the cross product of the unit tangent and unit normal. +More formally, the Frenet frame of a curve at a point is a triplet of three mutually [orthogonal](https://www.statisticshowto.com/orthogonal-functions/#definition) unit vectors $\{T, N, B\}$. In three dimensions, the Frenet frame consists of [1]: +- The unit tangent vector $T$, which is the [unit vector](https://www.statisticshowto.com/tangent-vector-velocity/) in the direction of what is being modeled (like velocity). +- The [unit normal](https://www.statisticshowto.com/unit-normal-vector/) vector $N$: the direction where the curve is turning. We can get the normal by taking the [derivative](https://www.statisticshowto.com/differentiate-definition/) of the tangent, then dividing by its length [2]. +- The unit binormal vector $B = T \times N$, which is the cross product of the unit tangent and unit normal. -The tangent and normal unit vectors span a plane called the osculating plane at F(s). In four-dimensions, the Frenet frame contains an additional vector, the trinormal unit vector [3]. 
While vectors have no [origin](https://www.statisticshowto.com/calculus-definitions/cartesian-plane-quadrants-ordinate-abscissa/#origin) in space, it’s traditional with Frenet frames to think of the vectors as radiating from the point of interest. +The tangent and normal unit vectors span a plane called the osculating plane at $F(s)$. In four dimensions, the Frenet frame contains an additional vector, the trinormal unit vector [3]. While vectors have no [origin](https://www.statisticshowto.com/calculus-definitions/cartesian-plane-quadrants-ordinate-abscissa/#origin) in space, it is traditional with Frenet frames to think of the vectors as radiating from the point of interest. More details:[here](https://fjp.at/posts/optimal-frenet/#trajectory-planning-in-the-frenet-space) ## Algorithm -1. Determine the trajectory start state [x1,x2,θ,κ,v,a] +1. Determine the trajectory start state $[x_1, x_2, \theta, \kappa, v, a]$ The trajectory start state is obtained by evaluating the previously calculated trajectory at the prospective start state (low-level-stabilization). At system initialization and after reinitialization, the current vehicle position is used instead (high-level-stabilization). -2. Selection of the lateral mode -Depending on the velocity v the time based (d(t)) or running length / arc length based (d(s)) lateral planning mode is activated. By projecting the start state onto the reference curve the the longitudinal start position s(0) is determined. The frenet state vector [s,s˙,s¨,d,d′,d′′](0) can be determined using the frenet transformation. For the time based lateral planning mode, [d˙,d¨](0) +2. Selection of the lateral mode +Depending on the velocity $v$, the time-based ($d(t)$) or running-length/arc-length-based ($d(s)$) lateral planning mode is activated. By projecting the start state onto the reference curve, the longitudinal start position $s(0)$ is determined. 
The Frenet state vector $(s,\dot{s},\ddot{s},d,d',d'')$ at $t=0$ can be determined using the Frenet transformation. For the time-based lateral planning mode, $(\dot{d},\ddot{d})$ at $t=0$ need to be calculated. 3. Generating the lateral and longitudinal trajectories Trajectories including their costs are generated for the lateral (mode dependent) as well as the longitudinal motion (velocity keeping, vehicle following / distance keeping) in the frenet space. In this stage, trajectories with high lateral accelerations with respect to the reference path can be neglected to improve the computational performance. -4. Combining lateral and longitudinal trajectories -Summing the partial costs of lateral and longitudinal costs using J(d(t),s(t))=Jd(d(t))+ks⋅Js(s(t)) +4. Combining lateral and longitudinal trajectories +Summing the partial costs of lateral and longitudinal costs using $J(d(t), s(t)) = J_d(d(t)) + k_s \cdot J_s(s(t))$ , for all active longitudinal mode every longitudinal trajectory is combined with every lateral trajectory and transformed back to world coordinates using the reference path. The trajectories are verified if they obey physical driving limits by subsequent point wise evaluation of curvature and acceleration. This leads to a set of potentially drivable maneuvers of a specific mode in world coordinates. 5. Static and dynamic collision check Every trajectory set is evaluated with increasing total costs if static and dynamic collisions are avoided. The trajectory with the lowest cost is then selected. 6. 
Longitudinal mode alternation -Using the sign based (in the beginning) jerk a(0), the trajectory with the strongest decceleration or the trajectory which accelerates the least respectivel +Using the sign-based (in the beginning) jerk $a(0)$, the trajectory with the strongest deceleration or the trajectory which accelerates the least, respectively, is chosen. -“Frenet Coordinates”, are a way of representing position on a road in a more intuitive way than traditional (x,y) Cartesian Coordinates. -With Frenet coordinates, we use the variables s and d to describe a vehicle’s position on the road or a reference path. The s coordinate represents distance along the road (also known as longitudinal displacement) and the d coordinate represents side-to-side position on the road (relative to the reference path), and is also known as lateral displacement. +“Frenet Coordinates” are a way of representing position on a road in a more intuitive way than traditional $(x,y)$ Cartesian coordinates. +With Frenet coordinates, we use the variables $s$ and $d$ to describe a vehicle’s position on the road or a reference path. The $s$ coordinate represents distance along the road (also known as longitudinal displacement) and the $d$ coordinate represents side-to-side position on the road (relative to the reference path), and is also known as lateral displacement. In the following sections the advantages and disadvantages of Frenet coordinates are compared to the Cartesian coordinates. -##Frenet Features +## Frenet Features -The image below[frenet path] depicts a curvy road with a Cartesian coordinate system laid on top of it, as well as a curved (continuously curved) reference path (for example the middle of the road). +The image below ([frenet path]) depicts a curvy road with a Cartesian coordinate system laid on top of it, as well as a curved (continuously curved) reference path (for example, the middle of the road). The next image shows the same reference path together with its Frenet coordinates.
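Steps 4 and 5 of the algorithm above combine every lateral candidate with every longitudinal candidate via $J(d(t), s(t)) = J_d(d(t)) + k_s \cdot J_s(s(t))$ and keep the cheapest pair that survives the drivability and collision checks. A minimal sketch of that selection loop; the candidate costs and the validity predicate are hypothetical placeholders, not the wiki's implementation:

```python
def select_trajectory(lateral, longitudinal, k_s, is_drivable):
    """Score every (lateral, longitudinal) pair with J = J_d + k_s * J_s and
    return the cheapest combination that passes the validity check."""
    best, best_cost = None, float("inf")
    for d_traj, J_d in lateral:            # (trajectory, lateral cost) pairs
        for s_traj, J_s in longitudinal:   # (trajectory, longitudinal cost) pairs
            J = J_d + k_s * J_s
            if J < best_cost and is_drivable(d_traj, s_traj):
                best, best_cost = (d_traj, s_traj), J
    return best, best_cost

# Toy candidates: trajectories are labels, costs are numbers.
lateral = [("d0", 1.0), ("d1", 3.0)]
longitudinal = [("s0", 2.0), ("s1", 0.5)]
best, cost = select_trajectory(lateral, longitudinal, k_s=1.0,
                               is_drivable=lambda d, s: True)
# best == ("d0", "s1"), cost == 1.5
```

In practice the validity check is where the curvature, acceleration, and collision tests from steps 4 and 5 would live, and checking cheap candidates first lets the loop stop early.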
-The s coordinate represents the run length and starts with s = 0 at the beginning of the reference path. Lateral positions relative to the reference path are are represented with the d coordinate. Positions on the reference path are represented with d = 0. d is positive to the left of the reference path and negative on the right of it, although this depends on the convention used for the local reference frame. -The image above[frenet path] shows that curved reference paths (such as curvy roads) are represented as straight lines on the s axis in Frenet coordinates. However, motions that do not follow the reference path exactly result in non straight motions in Frenet coordinates. Instead such motions result in an offset from the reference path and therefore the s axis, which is described with the d coordinate. The following image shows the two different representations (Cartesian vs Frenet) +The $s$ coordinate represents the run length and starts with $s = 0$ at the beginning of the reference path. Lateral positions relative to the reference path are represented with the $d$ coordinate. Positions on the reference path are represented with $d = 0$. $d$ is positive to the left of the reference path and negative on the right of it, although this depends on the convention used for the local reference frame. +The image above ([frenet path]) shows that curved reference paths (such as curvy roads) are represented as straight lines on the $s$ axis in Frenet coordinates. However, motions that do not follow the reference path exactly result in non-straight motions in Frenet coordinates. Instead, such motions result in an offset from the reference path and therefore the $s$ axis, which is described with the $d$ coordinate. The following image shows the two different representations (Cartesian vs. Frenet). To use Frenet coordinates it is required to have a continouosly smooth reference path. 
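The $s$/$d$ convention described here can be illustrated by projecting a point onto a densely sampled polyline reference path: the run length to the projection gives $s$, and the signed lateral offset (positive to the left of the direction of travel) gives $d$. A minimal sketch under a closest-vertex approximation, with illustrative names only:

```python
import math

def cartesian_to_frenet(path, point):
    """Approximate (s, d) of `point` relative to a dense polyline `path`
    (list of (x, y) vertices): s is the run length to the closest vertex,
    d is the signed lateral offset, positive to the left of travel."""
    # Cumulative run length at each vertex (the s axis).
    s_at = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        s_at.append(s_at[-1] + math.hypot(x1 - x0, y1 - y0))
    # Closest vertex stands in for the exact foot point R_C.
    i = min(range(len(path)), key=lambda k: math.dist(path[k], point))
    j = min(i, len(path) - 2)  # segment giving the local tangent direction
    (x0, y0), (x1, y1) = path[j], path[j + 1]
    norm = math.hypot(x1 - x0, y1 - y0)
    tx, ty = (x1 - x0) / norm, (y1 - y0) / norm
    # 2-D cross product of tangent and (point - R_C): > 0 means left of path.
    px, py = point[0] - path[i][0], point[1] - path[i][1]
    return s_at[i], tx * py - ty * px

# Straight reference path along the x axis, sampled every 0.1 m.
path = [(k * 0.1, 0.0) for k in range(101)]
s, d = cartesian_to_frenet(path, (2.0, 0.5))
# s ≈ 2.0, d ≈ 0.5 (point lies 0.5 m to the left of the path)
```

A production planner would interpolate within the closest segment rather than snap to a vertex, but the sign and run-length conventions are the same.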
-The s coordinate represents the run length and starts with s = 0 at the beginning of the reference path. Lateral positions relative to the reference path are are represented with the d coordinate. Positions on the reference path are represented with d = 0. d is positive to the left of the reference path and negative on the right of it, although this depends on the convention used for the local reference frame. -The image above shows that curved reference paths (such as curvy roads) are represented as straight lines on the s axis in Frenet coordinates. However, motions that do not follow the reference path exactly result in non straight motions in Frenet coordinates. Instead such motions result in an offset from the reference path and therefore the s axis, which is described with the d coordinate. The following image shows the two different representations (Cartesian vs Frenet) +The $s$ coordinate represents the run length and starts with $s = 0$ at the beginning of the reference path. Lateral positions relative to the reference path are represented with the $d$ coordinate. Positions on the reference path are represented with $d = 0$. $d$ is positive to the left of the reference path and negative on the right of it, although this depends on the convention used for the local reference frame. +The image above shows that curved reference paths (such as curvy roads) are represented as straight lines on the $s$ axis in Frenet coordinates. However, motions that do not follow the reference path exactly result in non-straight motions in Frenet coordinates. Instead, such motions result in an offset from the reference path and therefore the $s$ axis, which is described with the $d$ coordinate. The following image shows the two different representations (Cartesian vs. Frenet). ## Reference Path -Frenet coordinates provide a mathematically simpler representation of a reference path, because its run length is described with the s axis. 
This reference path provides a rough reference to follow an arbitrary but curvature continuous course of the road. To avoid collisions, the planner must take care of other objects in the environment, either static or dynamic. Such objects are usually not avoided by the reference path. +Frenet coordinates provide a mathematically simpler representation of a reference path, because its run length is described with the $s$ axis. This reference path provides a rough reference to follow an arbitrary but curvature-continuous course of the road. To avoid collisions, the planner must take care of other objects in the environment, either static or dynamic. Such objects are usually not avoided by the reference path. -A reference path can be represented in two different forms although for all representations a run length information, which represents the s axis, is required for the transformation. +A reference path can be represented in several different forms, although every representation requires run-length information, which represents the $s$ axis, for the transformation. 1. Polynome 2. Spline (multiple polynomes) 3. Clothoid (special polynome) 4. Polyline (single points with run length information) -Clothoid - x(l)=c0+c1∗l +Clothoid: + +$$x(l) = c_0 + c_1 l$$ Polyline ## Transformation The transformation from local vehicle coordinates to Frenet coordinates is based on the relations. -Given a point PC in the vehicle frame search for the closest point RC on the reference path. The run length of RC, which is known from the reference path points, determins the s coordinate of the transformed point PF. If the reference path is sufficiently smooth (continuously differentiable) then the vector PR→ is orthogonal to the reference path at the point RC. The signed length of PR→ determines the d coordinate of PF. The sign is positive, if PC +Given a point $P_C$ in the vehicle frame, search for the closest point $R_C$ on the reference path.
The run length of $R_C$, which is known from the reference path points, determines the $s$ coordinate of the transformed point $P_F$. If the reference path is sufficiently smooth (continuously differentiable), then the vector $\overrightarrow{PR}$ is orthogonal to the reference path at point $R_C$. The signed length of $\overrightarrow{PR}$ determines the $d$ coordinate of $P_F$. The sign is positive if $P_C$ lies on the left along the run length of the reference path. -lies on the left along the run lenght of the reference path. +The procedure to transform a point $P_F$ +from Frenet coordinates to the local vehicle frame in Cartesian coordinates is analogous. First, find point $R_C$, which lies on the reference path at run length $s$. Next, a normal unit vector $\vec{d}$ is determined, which, at this point, is orthogonal to the reference path. The direction of this vector points toward positive $d$ values and therefore points to the left with increasing run length $s$. Therefore, the vector $\vec{d}$ depends on run length, which leads to: -The procedure to transform a point PF -from Frenet coordinates to the local vehicle frame in Cartesian coordinates is analogous. First, the point RC, which lies on the reference path at run length s. Next, a normal unit vector d⃗ is determined, which, in this point, is orthogonal to the reference path. The direction of this vector points towards positive d values and therefore points to the left with increasing run length s.
Therefore, the vector d⃗ depends on the run length, which leads to: - PC(s,d)=RC(s)+d⋅d⃗ (s)(2) +$$P_C(s,d) = R_C(s) + d \cdot \vec{d}(s)$$ @@ -107,13 +109,11 @@ The given article describes in detail what is Frenet Frame and how robot motion ## Further Reading -[https://fjp.at/posts/optimal-frenet/#frenet-coordinates](Frenet Cordinates) -[https://www.mathworks.com/help/nav/ug/highway-trajectory-planning-using-frenet.html](Highway Trajectory Planning Using Frenet Reference Path) -[https://www.researchgate.net/publication/224156269_Optimal_Trajectory_Generation_for_Dynamic_Street_Scenarios_in_a_Frenet_Frame](Optimal Trajectory Generation for Dynamic Street Scenarios in a Frenet Frame) +[Frenet Coordinates](https://fjp.at/posts/optimal-frenet/#frenet-coordinates) +[Highway Trajectory Planning Using Frenet Reference Path](https://www.mathworks.com/help/nav/ug/highway-trajectory-planning-using-frenet.html) +[Optimal Trajectory Generation for Dynamic Street Scenarios in a Frenet Frame](https://www.researchgate.net/publication/224156269_Optimal_Trajectory_Generation_for_Dynamic_Street_Scenarios_in_a_Frenet_Frame) ## References -[https://fjp.at/posts/optimal-frenet/#frenet-coordinates](Frenet Frame) - - +[Frenet Frame](https://fjp.at/posts/optimal-frenet/#frenet-coordinates) diff --git a/wiki/planning/index.md b/wiki/planning/index.md index 1fb7ed44..108e657f 100644 --- a/wiki/planning/index.md +++ b/wiki/planning/index.md @@ -11,7 +11,7 @@ We are actively seeking contributions to expand the resources available in this - **[Advanced MoveIt usage for Manipulator Motion Planning](/wiki/planning/advanced-moveit-manipulator-planning/)** Discusses motion planning for dual XArm 7 manipulators using RRTStar and Cartesian planners. Includes details on custom cost objectives and testing in simulation.
-- **[A* Implementation Guide](/wiki/planning/astar_planning_implementation_guide/)** +- **[A* Implementation Guide](/wiki/planning/astar-planning-implementation-guide/)** A step-by-step tutorial on implementing the A* algorithm for robot motion planning. Covers key concepts such as heuristic design, map representation, and non-holonomic motion primitives for Ackermann vehicles. - **[Extensions To A* for Dynamic Planning](/wiki/planning/non-a-star-planning/)** diff --git a/wiki/planning/move_base_flex.md b/wiki/planning/move-base-flex.md similarity index 100% rename from wiki/planning/move_base_flex.md rename to wiki/planning/move-base-flex.md diff --git a/wiki/planning/non-a-star-planning.md b/wiki/planning/non-a-star-planning.md index 0aaa9b24..5445623d 100644 --- a/wiki/planning/non-a-star-planning.md +++ b/wiki/planning/non-a-star-planning.md @@ -108,7 +108,7 @@ Over the course of this article, we explored the key implementation details of e Future work can include plugins for the same that are readily integrated into popular ROS-based packages like MoveIt!, allowing users to simply tune the parameters based on their usecase and use it on a variety of diverse systems ## See Also -- [A\* Planning Implementation Guide](/wiki/planning/astar_planning_implementation_guide/) +- [A\* Planning Implementation Guide](/wiki/planning/astar-planning-implementation-guide/) - [Planning Overview](/wiki/planning/planning-overview/) ## Further Reading diff --git a/wiki/planning/rrt-prm-planning.md b/wiki/planning/rrt-prm-planning.md index fd8687ff..10ae108e 100644 --- a/wiki/planning/rrt-prm-planning.md +++ b/wiki/planning/rrt-prm-planning.md @@ -180,7 +180,7 @@ We implemented and tested several RRT and PRM variants, extended them with plann ## See Also: - [Planning Overview](/wiki/planning/planning-overview/) -- [A* Implementation Guide](/wiki/planning/astar_planning_implementation_guide/) +- [A* Implementation Guide](/wiki/planning/astar-planning-implementation-guide/) - 
[Multi Robot Navigation Stack Design](/wiki/planning/multi-robot-planning/) ## Further Reading diff --git a/wiki/robotics-project-guide/choose-a-robot.md b/wiki/robotics-project-guide/choose-a-robot.md index fe009be6..90723410 100644 --- a/wiki/robotics-project-guide/choose-a-robot.md +++ b/wiki/robotics-project-guide/choose-a-robot.md @@ -225,7 +225,7 @@ These resources provide step-by-step instructions and examples to help users eff - **[TurtleBot in ROS2](https://ros2-industrial-workshop.readthedocs.io/en/latest/_source/navigation/ROS2-Turtlebot.html):** A beginner-friendly guide to getting started with TurtleBot and ROS2. - **[ROSbot 2.0 Documentation](https://husarion.com/manuals/rosbot/):** Detailed manuals and tutorials for operating and programming ROSbot. - **[F1Tenth Build Documentation](https://f1tenth.readthedocs.io/):** Step-by-step instructions for building and programming the F1Tenth autonomous racing car. -- **[F1Tenth GitHub Repository]((https://github.com/f1tenth)):** Access to the open-source codebase and community contributions for the F1Tenth platform. +- **[F1Tenth GitHub Repository](https://github.com/f1tenth):** Access to the open-source codebase and community contributions for the F1Tenth platform. 
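(Editorial aside, not part of any patched file: the Frenet-frame material revised earlier in this patch describes finding $s$ as the run length of the closest reference-path point, $d$ as the signed lateral offset, and the inverse map $P_C(s,d) = R_C(s) + d \cdot \vec{d}(s)$. A minimal sketch of both conversions over a polyline reference path might look like the following; every function and variable name here is hypothetical, and consecutive waypoints are assumed distinct.)

```python
# Illustrative sketch of Cartesian <-> Frenet conversion over a polyline
# reference path, following P_C(s, d) = R_C(s) + d * n(s), where n(s) is
# the left-pointing unit normal. Names are hypothetical.
import math

def cartesian_to_frenet(path, p):
    """path: ordered list of (x, y) waypoints; p: Cartesian point (x, y)."""
    best = None
    s_acc = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg = math.hypot(dx, dy)
        # Project p onto the segment, clamped to its endpoints.
        t = max(0.0, min(1.0, ((p[0] - x0) * dx + (p[1] - y0) * dy) / seg**2))
        rx, ry = x0 + t * dx, y0 + t * dy
        dist = math.hypot(p[0] - rx, p[1] - ry)
        if best is None or dist < best[0]:
            # Signed offset: positive if p lies left of the path direction.
            cross = dx * (p[1] - ry) - dy * (p[0] - rx)
            best = (dist, s_acc + t * seg, math.copysign(dist, cross))
        s_acc += seg
    return best[1], best[2]  # (s, d)

def frenet_to_cartesian(path, s, d):
    """Inverse map: walk the polyline to run length s, offset by d along n(s)."""
    s_acc = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg = math.hypot(dx, dy)
        if s_acc + seg >= s:
            t = (s - s_acc) / seg
            ux, uy = dx / seg, dy / seg   # unit tangent
            nx, ny = -uy, ux              # left-pointing unit normal
            return (x0 + t * dx + d * nx, y0 + t * dy + d * ny)
        s_acc += seg
    raise ValueError("s exceeds reference path length")

path = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
s, d = cartesian_to_frenet(path, (5.0, 2.0))  # point left of the first segment
print(s, d)                                   # 5.0 2.0
p = frenet_to_cartesian(path, 12.0, -1.0)     # right of the second segment
print(p)                                      # (11.0, 2.0)
```

A real implementation would use a smooth (curvature-continuous) reference path rather than a raw polyline, as the patched text requires, but the closest-point projection and signed-offset logic carry over unchanged.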
diff --git a/wiki/sensing/___all_subsections.md b/wiki/sensing/__all_subsections.md similarity index 99% rename from wiki/sensing/___all_subsections.md rename to wiki/sensing/__all_subsections.md index c17e97a6..152d499c 100644 --- a/wiki/sensing/___all_subsections.md +++ b/wiki/sensing/__all_subsections.md @@ -1425,7 +1425,7 @@ There are two types of NUC calibration: - www.flir.com/discover/professional-tools/what-is-a-non-uniformity-correction-nuc -/wiki/sensing/trajectory_extraction_static_camera.md +/wiki/sensing/trajectory-extraction-static-camera.md --- date: 2020-12-07 title: Tracking vehicles using a static traffic camera diff --git a/wiki/sensing/index.md b/wiki/sensing/index.md index d774d4c7..5de59aa5 100644 --- a/wiki/sensing/index.md +++ b/wiki/sensing/index.md @@ -83,7 +83,7 @@ This section dives into various sensing modalities such as GPS modules, fiducial - **[Thermal Cameras](/wiki/sensing/thermal-cameras/):** Examines the use of thermal cameras in robotics, including types of thermal cameras, calibration techniques, and debug tips. -- **[Tracking vehicles using a static traffic camera](/wiki/sensing/trajectory_extraction_static_camera/):** +- **[Tracking vehicles using a static traffic camera](/wiki/sensing/trajectory-extraction-static-camera/):** Describes a system for extracting vehicle trajectories using static traffic cameras, incorporating detection, tracking, and homography estimation. ### Resources diff --git a/wiki/sensing/robotic-total-stations.md b/wiki/sensing/robotic-total-stations.md index bceb5e89..0aa8ac6f 100644 --- a/wiki/sensing/robotic-total-stations.md +++ b/wiki/sensing/robotic-total-stations.md @@ -6,7 +6,7 @@ This page will go into detail about to get started with the TS16 Total Station, ### How does it work? -Total stations have an extended heritage in civil engineering, where they have been used to precisely survey worksites since the 1970s. 
The total station sends beams of light directly to a glass reflective prism, and uses the time-of-flight properties of the beam to measure distances. The robotic total station tracks it's calibration orientaiton to high precision, such that the measured distance can be converted into a high-precision 3D position mesaurement. Total stations, depending on the prism type and other factors, can accurate track with in millimeter range at up to 3.5km [Leica-Geosystems](file:///home/john/Downloads/Leica_Viva_TS16_DS-2.pdf). +Total stations have an extended heritage in civil engineering, where they have been used to precisely survey worksites since the 1970s. The total station sends beams of light directly to a glass reflective prism, and uses the time-of-flight properties of the beam to measure distances. The robotic total station tracks its calibration orientation to high precision, such that the measured distance can be converted into a high-precision 3D position measurement. Total stations, depending on the prism type and other factors, can accurately track within millimeter range at up to 3.5 km. ![Example usage of a Total Station in the Field](/assets/images/sensing/assets_leica_field_image.jpg) [Source](https://leica-geosystems.com/) diff --git a/wiki/sensing/trajectory_extraction_static_camera.md b/wiki/sensing/trajectory-extraction-static-camera.md similarity index 100% rename from wiki/sensing/trajectory_extraction_static_camera.md rename to wiki/sensing/trajectory-extraction-static-camera.md diff --git a/wiki/sensing/ultrawideband-beacon-positioning.md b/wiki/sensing/ultrawideband-beacon-positioning.md index fb0ab27c..e5965030 100644 --- a/wiki/sensing/ultrawideband-beacon-positioning.md +++ b/wiki/sensing/ultrawideband-beacon-positioning.md @@ -17,7 +17,7 @@ By using multiple stationary devices, a single or multiple mobile beacons can be ### Best Use Cases & Expected Quality -At the time of writing, one of the most common modules for UWB is the DWM1001.
Information of how to get started with that can be seen [here](LINK TO WIKI PAGE ON DWM1001). Since these modules are mass manufactured, they can be purchased very inexpensively and should be considered one of the most affordable options for positioning systems. +At the time of writing, one of the most common modules for UWB is the DWM1001. Information on how to get started with it can be found [here](https://wiki.ros.org/localizer_dwm1001). Since these modules are mass-manufactured, they can be purchased very inexpensively and should be considered one of the most affordable options for positioning systems. ### Limitations diff --git a/wiki/simulation/__all_subsections.md b/wiki/simulation/__all_subsections.md index b5f66772..30ff93fa 100644 --- a/wiki/simulation/__all_subsections.md +++ b/wiki/simulation/__all_subsections.md @@ -1,4 +1,4 @@ -/wiki/simulation/Building-a-Light-Weight-Custom-Simulator/ +/wiki/simulation/building-a-light-weight-custom-simulator/ --- date: 2020-05-11 title: Building a Light Weight Custom Simulator @@ -38,7 +38,7 @@ Since this simulator is meant to test early stage implementation, its developmen The purpose of this simulator is to test other algorithms, and thus it should not take longer to debug than the system to be tested. Most of the time, simpler code structures are more reliable than complex ones, and especially multi-processing structures are the hardest to debug. -/wiki/simulation/Design-considerations-for-ROS-architectures/ +/wiki/simulation/design-considerations-for-ros-architectures/ --- date: 2020-05-11 title: Design considerations for ROS Architectures @@ -90,7 +90,7 @@ Asynchronous communication is more practical and useful in real-time systems. Sy In general, separation of code into modular components is recommended for clarity and the cleanliness of our development process. This rule of thumb is not the best idea when applied to ROS nodes.
If nodes, which can be merged are split, for the sake of keeping code modular and clean, we simultaneously pay a price in communication delays. It is better to merge the nodes and keep them modular by maintaining classes and importing them into one node. Separate the tasks only if you feel those tasks can be performed faster and concurrently if it was given another core of the CPU. -/wiki/simulation/NDT-Matching-with-Autoware/ +/wiki/simulation/ndt-matching-with-autoware/ --- date: 2020-05-11 title: NDT Matching with Autoware @@ -260,7 +260,7 @@ Now you can spawn a custom vehicle with a customed sensor stack in simulation. Y Read the Autoware documentation from [Autoware Documentation](https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/) -/wiki/simulation/Spawning-and-Controlling-Vehicles-in-CARLA/ +/wiki/simulation/spawning-and-controlling-vehicles-in-carla/ --- date: 2020-04-10 title: Spawning and Controlling Vehicles in CARLA diff --git a/wiki/simulation/Building-a-Light-Weight-Custom-Simulator.md b/wiki/simulation/building-a-light-weight-custom-simulator.md similarity index 100% rename from wiki/simulation/Building-a-Light-Weight-Custom-Simulator.md rename to wiki/simulation/building-a-light-weight-custom-simulator.md diff --git a/wiki/simulation/Design-considerations-for-ROS-architectures.md b/wiki/simulation/design-considerations-for-ros-architectures.md similarity index 100% rename from wiki/simulation/Design-considerations-for-ROS-architectures.md rename to wiki/simulation/design-considerations-for-ros-architectures.md diff --git a/wiki/simulation/index.md b/wiki/simulation/index.md index 88c24058..d233056d 100644 --- a/wiki/simulation/index.md +++ b/wiki/simulation/index.md @@ -12,13 +12,13 @@ This section focuses on **simulation tools, techniques, and environments** for r - **[Gazebo Classic Simulation of Graspable and Breakable Objects](/wiki/simulation/gazebo-classic-simulation-of-graspable-and-breakable-objects/)** Details the setup of a 
Gazebo Classic simulator for bimanual manipulation, featuring breakable joints and robust grasping plugins. -- **[Building a Light-Weight Custom Simulator](/wiki/simulation/Building-a-Light-Weight-Custom-Simulator/)** +- **[Building a Light-Weight Custom Simulator](/wiki/simulation/building-a-light-weight-custom-simulator/)** Discusses the design and implementation of a minimal simulator for testing reinforcement learning algorithms. Focuses on simplicity, customizability, and minimal noise. Highlights considerations like minimizing external disturbances, reducing development effort, and maintaining a reliable architecture. -- **[Design Considerations for ROS Architectures](/wiki/simulation/Design-considerations-for-ROS-architectures/)** +- **[Design Considerations for ROS Architectures](/wiki/simulation/design-considerations-for-ros-architectures/)** Provides a comprehensive guide to designing efficient ROS architectures for simulation and robotics. Covers critical aspects like message dropout tolerance, latency, synchronous vs asynchronous communication, and task separation. Includes practical tips for optimizing communication and node performance in ROS-based systems. -- **[NDT Matching with Autoware](/wiki/simulation/NDT-Matching-with-Autoware/)** +- **[NDT Matching with Autoware](/wiki/simulation/ndt-matching-with-autoware/)** Explains the Normal Distribution Transform (NDT) for mapping and localization in autonomous driving. Includes step-by-step instructions for setting up LiDAR sensors, generating NDT maps, and performing localization. Covers hardware and software requirements, troubleshooting, and visualization techniques using Autoware and RViz. 
- **[Simulating Vehicles Using Autoware](/wiki/simulation/simulating-vehicle-using-autoware/)** @@ -27,7 +27,7 @@ This section focuses on **simulation tools, techniques, and environments** for r - **[NVIDIA Isaac Sim Setup and ROS2 Workflow](/wiki/simulation/simulation-isaacsim-setup/)** Provides a complete guide for installing Isaac Sim, configuring sensor modules, and integrating it with ROS 2 frameworks like Nav2 and MoveIt. Covers both local and remote (headless) installations, and demonstrates scene management and robot model imports for MRSD projects. -- **[Spawning and Controlling Vehicles in CARLA](/wiki/simulation/Spawning-and-Controlling-Vehicles-in-CARLA/)** +- **[Spawning and Controlling Vehicles in CARLA](/wiki/simulation/spawning-and-controlling-vehicles-in-carla/)** A hands-on tutorial for spawning and controlling vehicles in the CARLA simulator. Covers connecting to the CARLA server, visualizing waypoints, spawning vehicles, and using PID controllers for motion control. Demonstrates waypoint tracking with visual aids and includes example scripts for quick implementation. ## Resources diff --git a/wiki/simulation/NDT-Matching-with-Autoware.md b/wiki/simulation/ndt-matching-with-autoware.md similarity index 100% rename from wiki/simulation/NDT-Matching-with-Autoware.md rename to wiki/simulation/ndt-matching-with-autoware.md diff --git a/wiki/simulation/Spawning-and-Controlling-Vehicles-in-CARLA.md b/wiki/simulation/spawning-and-controlling-vehicles-in-carla.md similarity index 100% rename from wiki/simulation/Spawning-and-Controlling-Vehicles-in-CARLA.md rename to wiki/simulation/spawning-and-controlling-vehicles-in-carla.md diff --git a/wiki/state-estimation/__all_subsections.md b/wiki/state-estimation/__all_subsections.md index d3419343..4f527468 100644 --- a/wiki/state-estimation/__all_subsections.md +++ b/wiki/state-estimation/__all_subsections.md @@ -93,7 +93,7 @@ Here is a sample launch file. 
Generally you can leave many parameters at their d Best way to tune these parameters is to record a ROS bag file, with odometry and laser scan data, and play it back while tuning AMCL and visualizing it on RViz. This helps in tracking the performance based on the changes being made on a fixed data-set. -/wiki/state-estimation/Cartographer-ROS-Integration/ +/wiki/state-estimation/cartographer-ros-integration/ --- # Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be # overwritten except in special circumstances. @@ -299,7 +299,7 @@ As an important note, ultrasonic pulses are harmful to human hearing over an ext ### Robotic Total Stations -Total stations have an extended heritage in civil engineering, where they have been used to precisely survey worksites since the 1970s. The total station sends beams of light directly to a glass reflective prism, and uses the time-of-flight properties of the beam to measure distances. The robotic total station tracks it's calibration orientaiton to high precision, such that the measured distance can be converted into a high-precision 3D position mesaurement. Total stations, depending on the prism type and other factors, can accurate track with in millimeter range at up to 3.5km [Leica-Geosystems](file:///home/john/Downloads/Leica_Viva_TS16_DS-2.pdf). +Total stations have an extended heritage in civil engineering, where they have been used to precisely survey worksites since the 1970s. The total station sends beams of light directly to a glass reflective prism, and uses the time-of-flight properties of the beam to measure distances. The robotic total station tracks its calibration orientation to high precision, such that the measured distance can be converted into a high-precision 3D position measurement. Total stations, depending on the prism type and other factors, can accurately track within millimeter range at up to 3.5 km.
![Example usage of a Total Station in the Field](/assets/images/state-estimation/leica_field_image.jpg) [Source](https://leica-geosystems.com/) @@ -314,7 +314,7 @@ When considering external referencing positioning systems, it's important to con - What are the accuracy and precision requirements? -/wiki/state-estimation/OculusPrimeNavigation/ +/wiki/state-estimation/oculus-prime-navigation/ --- date: 2017-08-15 title: Oculus Prime Navigation diff --git a/wiki/state-estimation/Cartographer-ROS-Integration.md b/wiki/state-estimation/cartographer-ros-integration.md similarity index 100% rename from wiki/state-estimation/Cartographer-ROS-Integration.md rename to wiki/state-estimation/cartographer-ros-integration.md diff --git a/wiki/state-estimation/gps-lacking-state-estimation-sensors.md b/wiki/state-estimation/gps-lacking-state-estimation-sensors.md index 7f0f4951..b5b540f5 100644 --- a/wiki/state-estimation/gps-lacking-state-estimation-sensors.md +++ b/wiki/state-estimation/gps-lacking-state-estimation-sensors.md @@ -39,7 +39,7 @@ As an important note, ultrasonic pulses are harmful to human hearing over an ext ### Robotic Total Stations -Total stations have an extended heritage in civil engineering, where they have been used to precisely survey worksites since the 1970s. The total station sends beams of light directly to a glass reflective prism, and uses the time-of-flight properties of the beam to measure distances. The robotic total station tracks it's calibration orientaiton to high precision, such that the measured distance can be converted into a high-precision 3D position mesaurement. Total stations, depending on the prism type and other factors, can accurate track with in millimeter range at up to 3.5km [Leica-Geosystems](file:///home/john/Downloads/Leica_Viva_TS16_DS-2.pdf). +Total stations have an extended heritage in civil engineering, where they have been used to precisely survey worksites since the 1970s. 
The total station sends beams of light directly to a glass reflective prism, and uses the time-of-flight properties of the beam to measure distances. The robotic total station tracks its calibration orientation to high precision, such that the measured distance can be converted into a high-precision 3D position measurement. Total stations, depending on the prism type and other factors, can accurately track within millimeter range at up to 3.5 km. ![Example usage of a Total Station in the Field](/assets/images/state-estimation/leica_field_image.jpg) [Source](https://leica-geosystems.com/) diff --git a/wiki/state-estimation/index.md b/wiki/state-estimation/index.md index 59e0c7bf..0577395d 100644 --- a/wiki/state-estimation/index.md +++ b/wiki/state-estimation/index.md @@ -18,7 +18,7 @@ The "State Estimation" section provides a comprehensive understanding of how to - Application to localization with a predefined map. - Configuring and tuning the ROS AMCL package. -- **[Cartographer ROS Integration](/wiki/state-estimation/Cartographer-ROS-Integration/)** +- **[Cartographer ROS Integration](/wiki/state-estimation/cartographer-ros-integration/)** Details the integration of Google Cartographer's SLAM algorithm with ROS for 2D mapping. Discusses the configuration of LiDAR, IMU, and odometry inputs, as well as custom inflation layers for navigation. - Sensor compatibility: LiDAR, IMU, odometry. - Transform preparation and costmap integration for navigation. @@ -29,7 +29,7 @@ The "State Estimation" section provides a comprehensive understanding of how to - Comparison of different external systems with pros and cons. - Applications in uniform and dynamic environments. -- **[Oculus Prime Navigation](/wiki/state-estimation/OculusPrimeNavigation/)** +- **[Oculus Prime Navigation](/wiki/state-estimation/oculus-prime-navigation/)** Covers waypoint navigation techniques for the Oculus Prime platform, from ROS command-line methods to graphical waypoint selection using Rviz.
Provides examples for remote navigation setup. - **[ORB SLAM2 Setup Guidance](/wiki/state-estimation/orb-slam2-setup/)** diff --git a/wiki/state-estimation/OculusPrimeNavigation.md b/wiki/state-estimation/oculus-prime-navigation.md similarity index 100% rename from wiki/state-estimation/OculusPrimeNavigation.md rename to wiki/state-estimation/oculus-prime-navigation.md diff --git a/wiki/system-design-development/__all_subsections.md b/wiki/system-design-development/__all_subsections.md index cd1081ee..e73fc0ca 100644 --- a/wiki/system-design-development/__all_subsections.md +++ b/wiki/system-design-development/__all_subsections.md @@ -111,7 +111,7 @@ If you have a lot of cables in your system, and even if you did a good job at gr 3. http://inlowsound.weebly.com/uploads/1/9/8/5/1985965/5192126_orig.jpgE -/wiki/system-design-development/In-Loop-Testing/ +/wiki/system-design-development/in-loop-testing/ --- # Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be # overwritten except in special circumstances. 
@@ -223,18 +223,18 @@ The given article describes in detail what is in-loop testing with specific focu ## See Also: -[https://github.com/shahrathin/roboticsknowledgebase.github.io/blob/master/wiki/system-design-development/subsystem-interface-modeling.md](Subsystem Interface Modeling) +[Subsystem Interface Modeling](https://github.com/shahrathin/roboticsknowledgebase.github.io/blob/master/wiki/system-design-development/subsystem-interface-modeling.md) ## Further Reading -[https://www.guru99.com/loop-testing.html](Loop Testing) +[Loop Testing](https://www.guru99.com/loop-testing.html) ## References -[https://en.wikipedia.org/wiki/Hardware-in-the-loop_simulation](Hardware-in-the-loop simulation) +[Hardware-in-the-loop simulation](https://en.wikipedia.org/wiki/Hardware-in-the-loop_simulation) /wiki/system-design-development/mechanical-design/ diff --git a/wiki/system-design-development/In-Loop-Testing.md b/wiki/system-design-development/in-loop-testing.md similarity index 95% rename from wiki/system-design-development/In-Loop-Testing.md rename to wiki/system-design-development/in-loop-testing.md index f8b6b8ce..4851d08e 100644 --- a/wiki/system-design-development/In-Loop-Testing.md +++ b/wiki/system-design-development/in-loop-testing.md @@ -1,16 +1,8 @@ --- -# Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be -# overwritten except in special circumstances. -# You should set the date the article was last updated like this: -date: 2022-04-29 # YYYY-MM-DD -# This will be displayed at the bottom of the article -# You should set the article's title: +date: 2022-04-29 title: Hardware-in-Loop and Software-in-Loop Testing -# The 'title' is automatically displayed at the top of the page -# and used in other parts of the site. --- - Hardware-in-the-loop (HIL) testing is a test methodology that can be used throughout the development of real-time embedded controllers to reduce development time and improve the effectiveness of testing. 
As the complexity of electronic control units (ECUs) increases, the number of combinations of tests required to ensure correct functionality and response increases exponentially. Older testing methodology tended to wait for the controller design to be completed and integrated into the final system before system issues could be identified. @@ -23,8 +15,7 @@ The term "software-in-the-loop testing", or SIL testing, is used to describe a t Primitive techniques for collision and lane change avoidance are being replaced with advanced drive assistance systems (ADAS). These new systems introduce new design and test challenges. Modern ADAS architectures combine complex sensing, processing, and algorithmic technologies into what will ultimately become the guts of autonomous vehicles. As ADASs evolve from simple collision-avoidance systems to fully autonomous vehicles, they demand sensing and computing technologies that are complex. For example, consider the sensing technology on the Tesla Model S, which combines information from eight cameras and 12 ultrasonic sensors as part of its autopilot technology. Many experts are claiming that the 2018 Audi A8 is the first car to hit Level 3 autonomous operation. At speeds up to 37 mph, the A8 will start, accelerate, steer and brake on roads with a central barrier without help from the driver. The car contains 12 ultrasonic sensors on the front, sides, and rear, four 360° cameras on the front, rear and side mirrors, a long-range radar and laser scanner at the front, a front camera at the top of the windscreen and a mid-range radar at each corner. As a result, autonomous vehicles employ significantly more complex processing technologies and generate more data than ever before. As an example, the Tesla Model S contains 62 microprocessors, more than three times the number of moving parts in the vehicle. In addition, Intel recently estimated that tomorrow's autonomous vehicles will produce four terabytes of data every second.
Making sense of all this data is a significant challenge, and engineers have experimented with everything from simple PID loops to deep neural networks to improve autonomous navigation. -This is the reason why we need Hardware-in-Loop and Software-in-Loop Testing and the importance of this has increased its importance more - +This is why we need Hardware-in-Loop and Software-in-Loop testing, and its importance continues to grow. Increasingly complex ADAS technology makes a lot of demands on test regimes. In particular, hardware-in-the-loop test methods, long used in developing engine and vehicle dynamics controllers, are being adapted to ADAS setups. Hardware-in-the-loop testing offers a few important advantages over live road testing. Most notably, a HIL setup can help you understand how hardware will perform in the real world, without having to take it outdoors. @@ -35,7 +26,6 @@ Hardware-in-the-loop testing offers a few important advantages over live road te - Build a repeatable test process - Test weather or time-dependent edge cases at any time - ## Hardware-in-Loop HIL test systems use mathematical representations of dynamic systems which react with the embedded systems being tested. An HIL simulation may emulate the electrical behavior of sensors and actuators and send these signals to the vehicle electronic control module (ECM). Likewise, an ADAS HIL simulation may use real sensors to stimulate an emulation of the ECM and generate actuator control signals. @@ -93,11 +83,8 @@ In the case of running the simulation on an FPGA instead of an embedded processo https://www.mathworks.com/help/hdlverifier/ug/fpga-in-the-loop-fil-simulation.html - - ## In-Loop Testing and V-Loop Model - The V model is frequently used in the automotive industry to depict the relationships of vehicle-in-the-loop, hardware-in-the-loop, software-in-the-loop, and model-in-the-loop methods.
Stages depicted on the left have a corresponding testing counterpart on the right. MiL settings test a model of the functions to be developed. MiL is applied in early stages to verify basic decisions about architecture and design. SiL setups test the functions of the program code compiled for the target ECU, but without including real hardware. HiL methods verify and validate the software on the physical target ECUs. ViL tests replace the vehicle simulation and most of the virtual ECUs with a real vehicle. The simulated parts of the environment are injected into the vehicle sensors or ECUs. Advanced systems like autonomous vehicles are quickly rewriting the rules for how test and measurement equipment vendors must design instrumentation. In the past, test software was merely a mechanism to communicate a measurement result or measure a voltage. Going forward, test software is the technology that allows engineers to construct increasingly complex measurement systems capable of characterizing everything from the simplest RF component to comprehensive autonomous vehicle simulation. As a result, software remains a key investment area for test equipment vendors, and the ability to differentiate products with software will ultimately define the winners and losers in the industry. @@ -106,20 +93,14 @@ Advanced systems like autonomous vehicles are quickly rewriting the rules for ho This article describes in detail what in-loop testing is, with a specific focus on ADAS as a use case. The importance and relevance of in-loop testing within the V-model of systems engineering is also highlighted. 
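All of these in-the-loop methods share the same closed-loop structure: a simulated plant exchanges signals with the component under test, whether that component is a model (MiL), compiled code (SiL), or a physical ECU (HiL). The sketch below illustrates that loop in minimal Python; the longitudinal vehicle model, proportional controller, gains, and speed target are all illustrative assumptions, not part of any specific tool.

```python
def simulate_plant(speed, throttle, dt=0.1, drag=0.1):
    """Toy longitudinal vehicle model: throttle accelerates, drag decelerates.
    In a real HIL rig this role is played by the real-time plant simulator."""
    accel = throttle - drag * speed
    return speed + accel * dt

def controller_under_test(target, measured, kp=0.5):
    """Stand-in for the ECU software under test: a proportional speed controller."""
    return max(0.0, kp * (target - measured))

def run_in_loop_test(target_speed=10.0, steps=500):
    """Close the loop: feed plant outputs to the controller and controller
    outputs back to the plant, then return the final speed for checking."""
    speed = 0.0
    for _ in range(steps):
        throttle = controller_under_test(target_speed, speed)
        speed = simulate_plant(speed, throttle)
    return speed
```

Swapping `controller_under_test` for compiled target code, or for an interface to a physical ECU, turns the same loop into a SiL or HiL setup, which is exactly the progression the V model describes.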
- ## See Also: -[https://github.com/shahrathin/roboticsknowledgebase.github.io/blob/master/wiki/system-design-development/subsystem-interface-modeling.md](Subsystem Interface Modeling) - +[Subsystem Interface Modeling](https://github.com/shahrathin/roboticsknowledgebase.github.io/blob/master/wiki/system-design-development/subsystem-interface-modeling.md) ## Further Reading -[https://www.guru99.com/loop-testing.html](Loop Testing) - - +[Loop Testing](https://www.guru99.com/loop-testing.html) ## References -[https://en.wikipedia.org/wiki/Hardware-in-the-loop_simulation](Hardware-in-the-loop simulation) - - +[Hardware-in-the-loop simulation](https://en.wikipedia.org/wiki/Hardware-in-the-loop_simulation) diff --git a/wiki/system-design-development/index.md b/wiki/system-design-development/index.md index abdd4486..2dcbc26b 100644 --- a/wiki/system-design-development/index.md +++ b/wiki/system-design-development/index.md @@ -16,7 +16,7 @@ The **System Design and Development** section provides comprehensive resources a - **[Finite State Machine Implementation Guide for Robotics](/wiki/system-design-development/finite-state-machine/)** Provides a structured approach to managing complex robot behaviors through states, transitions, and actions. Covers theoretical foundations and implementation in ROS and embedded systems. -- **[Hardware-in-Loop and Software-in-Loop Testing](/wiki/system-design-development/In-Loop-Testing/)** +- **[Hardware-in-Loop and Software-in-Loop Testing](/wiki/system-design-development/in-loop-testing/)** Discusses HIL and SIL methodologies for testing real-time embedded controllers, with specific use cases like ADAS and autonomous vehicles. 
- **[Mechanical Design](/wiki/system-design-development/mechanical-design/)** diff --git a/wiki/tools/__all_subsections.md b/wiki/tools/__all_subsections.md index c9e219a4..495ea4c9 100644 --- a/wiki/tools/__all_subsections.md +++ b/wiki/tools/__all_subsections.md @@ -1322,7 +1322,7 @@ You can create a custom launch file to load Mapviz with a custom configuration a ``` -/wiki/tools/Qtcreator-ros/ +/wiki/tools/qt-creator-ros/ --- # Jekyll 'Front Matter' goes here. Most are set by default, and should NOT be # overwritten except in special circumstances. diff --git a/wiki/tools/Qtcreator-ros.md b/wiki/tools/qt-creator-ros.md similarity index 100% rename from wiki/tools/Qtcreator-ros.md rename to wiki/tools/qt-creator-ros.md diff --git a/wiki/tools/rosbags-matlab.md b/wiki/tools/rosbags-matlab.md index e6feb040..0de8ff17 100644 --- a/wiki/tools/rosbags-matlab.md +++ b/wiki/tools/rosbags-matlab.md @@ -13,7 +13,7 @@ The following steps provide the parsing setup: Basic rosbag parsing is as follows: 1. Run `rosbag(filepath)` on the [rosbag](https://www.mathworks.com/help/robotics/ref/rosbag.html) file of interest -2. [Select](](https://www.mathworks.com/help/robotics/ref/select.html)) only the topic of interest with `select(bag,'Topic','')` +2. [Select](https://www.mathworks.com/help/robotics/ref/select.html) only the topic of interest with `select(bag,'Topic','')` 3. For straightforward values in a message, run `timeseries(bag_select,'')` 4. For more complicated messages, run `readMessages`
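The select-then-extract workflow in the steps above can be illustrated language-agnostically. The following plain-Python sketch mimics the pattern on an in-memory stand-in for a bag; the record layout and the `select`/`timeseries` helpers are hypothetical illustrations of the workflow, not the MATLAB API itself.

```python
# Hypothetical in-memory stand-in for a bag: (timestamp, topic, value) records.
records = [
    (0.0, "/odom", 0.0),
    (0.1, "/imu", 9.8),
    (0.2, "/odom", 0.5),
    (0.3, "/odom", 1.0),
]

def select(bag, topic):
    """Mimics select(bag,'Topic',...): keep only records for one topic."""
    return [r for r in bag if r[1] == topic]

def timeseries(selection):
    """Mimics timeseries(bag_select,...): split into time and value lists."""
    times = [t for t, _, _ in selection]
    values = [v for _, _, v in selection]
    return times, values

# Step 2: narrow to the topic of interest; step 3: extract a timeseries.
odom = select(records, "/odom")
times, values = timeseries(odom)
```

For messages with nested fields, the MATLAB steps fall back to `readMessages`, which returns full message objects rather than a flat timeseries.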