Mastering ROS for Robotics Programming (2015)

Chapter 9. Building and Interfacing Differential Drive Mobile Robot Hardware in ROS

In the previous chapter, we discussed robotic vision using ROS. In this chapter, we will discuss how to build autonomous mobile robot hardware with a differential drive configuration and how to interface it with ROS. We will see how to configure the ROS Navigation stack for this robot and perform SLAM and AMCL to move the robot autonomously. This chapter aims to give you an idea of building a custom mobile robot and interfacing it with ROS.

You will see the following topics in this chapter:

·        Introduction to Chefbot: a DIY autonomous mobile robot

·        Flashing Chefbot firmware using Energia IDE

·        Discussing Chefbot interface package in ROS

·        Developing base controller and odometry node for Chefbot in ROS

·        Configuring Navigation stack for Chefbot

·        Understanding AMCL

·        Understanding RViz for working with Navigation stack

·        Obstacle avoidance using Navigation stack

·        Working with Chefbot simulation

·        Sending a goal to the Navigation stack from a ROS node

The first topic we are going to discuss in this chapter is how to build a DIY (Do It Yourself) autonomous mobile robot, develop its firmware, and interface it with the ROS Navigation stack. The robot, called Chefbot, was built as a part of my first book, Learning Robotics Using Python, for Packt. That book discusses the step-by-step procedure to build this robot and its interfacing with ROS.

In this chapter, we will cover abstract information about this robot's hardware and learn more about configuring the ROS Navigation stack and fine tuning it to perform autonomous navigation using SLAM and AMCL. We already discussed the ROS Navigation stack in Chapter 4, Using the ROS MoveIt! and Navigation Stack, where we simulated a differential drive robot using Gazebo and performed SLAM and AMCL. In this chapter, we will see how to interface real differential drive robot hardware with the navigation package.

Introduction to Chefbot: a DIY mobile robot and its hardware configuration

In Chapter 4, Using the ROS MoveIt! and Navigation Stack, we discussed the mandatory requirements for interfacing a mobile robot with the ROS navigation package. The following are the mandatory requirements:

·        Odometry source: The robot should publish its odometry/position data with respect to the starting position. The hardware components that can provide odometry information are wheel encoders, an IMU, and 2D/3D cameras (visual odometry).

·        Sensor source: There should be a laser scanner or a 3D vision sensor that can act as a laser scanner. The laser scan data is essential for the map building process using SLAM.

·        Sensor transform using tf: The robot should publish the transforms of its sensors and other robot components using the ROS transform library.

·        Base controller: The base controller is a ROS node that converts a twist message from the Navigation stack into corresponding motor velocities.


Figure 1: Chefbot prototype

We can check the components present in the robot and determine whether they satisfy the Navigation stack requirements. The following components are present in the robot:

·        Pololu DC gear motor with quadrature encoder: The motor operates at 12 V, 80 RPM, and 18 kg-cm torque. It draws 300 mA in free run and 5 A in the stall condition. The motor shaft is attached to a quadrature encoder, which can deliver a maximum of 8400 counts per revolution of the gearbox's output shaft. The motor encoders are one source of odometry for the robot.

·        Pololu motor drivers: These are dual motor controllers for Pololu motors that support up to 30 A and motor voltages from 5.5 V to 16 V.

·        Tiva C LaunchPad controller: This robot has a Tiva C LaunchPad controller for interfacing the motors, encoders, sensors, and so on. It receives control commands from the PC and sends appropriate signals to the motors according to the command. This board acts as the embedded controller board of the robot and runs at 80 MHz.

·        MPU 6050 IMU: The IMU used in this robot is the MPU 6050, which combines an accelerometer, a gyroscope, and a Digital Motion Processor (DMP). The motion processor can run sensor fusion algorithms onboard and provide accurate roll, pitch, and yaw estimates. The IMU values can be combined with the wheel encoders to calculate the odometry.

·        Xbox Kinect/Asus Xtion Pro: These are 3D vision sensors that we can use to mock a laser scanner. The point cloud generated from these sensors can be converted into laser scan data and used in the Navigation stack.

·        Intel NUC PC: This is a mini PC from Intel, which we load with Ubuntu and ROS. The PC is connected to the Kinect and the LaunchPad to retrieve the sensor values and the odometry details. The program running on the PC computes the TF of the robot and runs the Navigation stack and associated packages such as SLAM and AMCL. This PC is placed on the robot itself.

From the robot component list, it is clear that the robot satisfies the requirements of the ROS navigation packages. The following figure shows the block diagram of this robot:


Figure 2: Block diagram of Chefbot

In this robot, the embedded controller board is the Tiva C LaunchPad. All the sensors and actuators are connected to the controller board, which in turn is connected to the Intel NUC PC for receiving higher level commands. The board and the PC communicate over the UART protocol, the IMU and the board communicate using I2C, the Kinect is interfaced with the PC via USB, and all the other sensors are interfaced through GPIO pins. A detailed connection diagram of the robot components follows:


Figure 3: Connection diagram of Chefbot

Flashing Chefbot firmware using Energia IDE

After making the preceding connections, we can program the LaunchPad using the Energia IDE. After setting up the Energia IDE on the PC (Ubuntu is preferred), we can flash the robot firmware to the board. We will get the firmware code and the ROS interface package by using the following command:

$ git clone

The repository contains a folder called tiva_c_energia_code, which has the firmware code that is flashed to the board after compilation in the Energia IDE.

The firmware can read the encoder, ultrasonic sensor, and IMU values, and can receive motor velocity commands.

The important sections of the firmware are discussed here. The programming language used on the LaunchPad is the same as Arduino's; here we are using the Energia IDE, which is built from the Arduino IDE, to program the controller.

The following code snippet is the setup() function definition. This function starts serial communication at a baud rate of 115200 and configures the pins of the motor encoders, motor drivers, ultrasonic distance sensor, and IMU. It also configures a pin for resetting the LaunchPad.

void setup()
{
  //Init Serial port with 115200 baud rate

  //Setup Encoders

  //Setup Motors

  //Setup Ultrasonic

  //Setup MPU 6050

  //Setup Reset pins

  //Set up Messenger
}



In the loop() function, the sensor values are continuously polled and sent through the serial port, and the incoming serial data is continuously polled for robot commands. The following protocol conventions are used to send each sensor value between the LaunchPad and the PC over serial communication (UART).

Serial data sending protocol from LaunchPad to PC

For the encoder, the protocol will be as follows:


For the ultrasonic sensor, the protocol will be as follows:


For IMU, the protocol will be as follows:


Serial data sending protocol from PC to Launchpad

For the motor, the protocol will be as follows:


For resetting the device, the protocol will be as follows:


We can check the serial values from the LaunchPad using a command-line tool called miniterm.py, which can view the serial data coming from a device. This script is installed with the python-serial package, which is installed along with the rosserial-python Debian package. The following command will display the serial values from the robot controller:

$ miniterm.py /dev/ttyACM0 115200

We will get values like the following screenshot:


Figure 4: Checking serial data using miniterm.py
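Although the exact message strings are not reproduced in this excerpt, a line-oriented serial protocol of this kind is straightforward to frame and parse. The sketch below uses a hypothetical format (a single-letter header followed by tab-separated values), purely to illustrate the idea; the actual Chefbot firmware defines its own message layout:

```python
def frame_message(header, values):
    """Build one serial line: a header letter, tab-separated values, a newline.
    The format here is a hypothetical example, not the Chefbot protocol itself."""
    return header + "\t" + "\t".join(str(v) for v in values) + "\n"

def parse_message(line):
    """Split one received line back into its header and integer payload."""
    parts = line.strip().split("\t")
    return parts[0], [int(p) for p in parts[1:]]

# Example: an encoder-style message carrying left and right tick counts
msg = frame_message("e", [345, 346])
header, ticks = parse_message(msg)
print(header, ticks)  # e [345, 346]
```

On the PC side, the ROS driver node performs exactly this kind of framing and parsing on top of pyserial reads and writes.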

Discussing Chefbot interface packages on ROS

After confirming the serial values from the board, we can install the Chefbot ROS package. The Chefbot package contains the following files and folders:

·        chefbot_bringup: This package contains Python scripts, C++ nodes, and launch files to start publishing the robot odometry and tf, and to perform gmapping and AMCL. It contains the Python/C++ nodes to read/write values from the LaunchPad, convert the encoder ticks to tf, and convert twist messages to motor commands. It also has the PID node, which generates motor commands from the velocity targets.

·        chefbot_description: This package contains the Chefbot URDF model.

·        chefbot_simulator: This package contains launch files to simulate the robot in Gazebo.

·        chefbot_navig_cpp: This package contains C++ implementations of a few nodes that are already implemented in chefbot_bringup as Python nodes.

The following launch file will start the robot odometry and tf publishing nodes:

$ roslaunch chefbot_bringup robot_standalone.launch

The following figure shows the nodes started by this launch file and how they are interconnected:


Figure 5: Interconnection of each node in Chefbot

The nodes run by this launch file and their functions are described next:

· launchpad_node: We know that this robot uses the Tiva C LaunchPad board as its controller. This node acts as a bridge between the robot controller and ROS. Its basic functionality is to receive serial values from the LaunchPad and convert each sensor reading into ROS topics. It acts as the ROS driver for the LaunchPad board.

· twist_to_motors: This node converts a geometry_msgs/Twist message into motor velocity targets. It subscribes to the command velocity, which comes either from a teleop node or from the ROS Navigation stack, and publishes lwheel_vtarget and rwheel_vtarget.

· pid_velocity: This node subscribes to wheel_vtarget from the twist_to_motors node and to the wheel topic, which carries the encoder ticks from launchpad_node. We have to start two PID nodes, one for each wheel of the robot, as shown in the previous figure. This node finally generates the motor speed commands for each motor.

· diff_tf: This node subscribes to the encoder ticks from the two motors, computes the odometry, and publishes the tf for the Navigation stack.
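The control loop that each PID velocity node runs can be sketched in plain Python. This is a minimal illustration of PID wheel-velocity control using the gains and output limits set in the launch file below (Kp = 400, Ki = 100, Kd = 0, output clamped to ±1023), not the actual node's code; in the real node, the measured velocity comes from the wheel encoder topic:

```python
class VelocityPID:
    """Minimal PID controller: turns a target wheel velocity and the
    measured velocity (derived from encoder ticks) into a motor command."""

    def __init__(self, kp=400.0, ki=100.0, kd=0.0,
                 out_min=-1023.0, out_max=1023.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to the motor command range, as out_min/out_max do in the launch file
        return max(self.out_min, min(self.out_max, out))

pid = VelocityPID()
# One control step at the node's 30 Hz rate, target 0.2 m/s, robot at rest
cmd = pid.update(target=0.2, measured=0.0, dt=1.0 / 30.0)
print(cmd)  # about 80.67 for this first step with these gains
```

The rate parameter (30) in the launch file corresponds to dt = 1/30 s here; the integral term is what eventually removes the steady-state velocity error.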

The list of topics generated after running robot_standalone.launch is shown in the following image:


Figure 6: List of topic generated when executing robot_standalone.launch

The following is the content of the robot_standalone.launch file:


<launch>

  <arg name="simulation" default="$(optenv TURTLEBOT_SIMULATION false)"/>
  <param name="/use_sim_time" value="$(arg simulation)"/>

  <!-- URDF robot model -->
  <arg name="urdf_file" default="$(find xacro)/ '$(find chefbot_description)/urdf/chefbot_base.xacro'" />
  <param name="robot_description" command="$(arg urdf_file)" />

  <!-- important generally, but specifically utilised by the current app manager -->
  <param name="robot/name" value="$(optenv ROBOT turtlebot)"/>
  <param name="robot/type" value="turtlebot"/>

  <!-- Starting robot state publisher -->
  <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher">
    <param name="publish_frequency" type="double" value="5.0" />
  </node>

  <!-- Robot parameters -->
  <rosparam param="base_width">0.3</rosparam>
  <rosparam param="ticks_meter">14865</rosparam>

  <!-- Starting launchpad_node -->
  <node name="launchpad_node" pkg="chefbot_bringup" type="">
    <rosparam file="$(find chefbot_bringup)/param/serial.yaml" command="load" />
  </node>

  <!-- PID node for left motor, setting PID parameters -->
  <node name="lpid_velocity" pkg="chefbot_bringup" type="" output="screen">
    <remap from="wheel" to="lwheel"/>
    <remap from="motor_cmd" to="left_wheel_speed"/>
    <remap from="wheel_vtarget" to="lwheel_vtarget"/>
    <remap from="wheel_vel" to="lwheel_vel"/>
    <rosparam param="Kp">400</rosparam>
    <rosparam param="Ki">100</rosparam>
    <rosparam param="Kd">0</rosparam>
    <rosparam param="out_min">-1023</rosparam>
    <rosparam param="out_max">1023</rosparam>
    <rosparam param="rate">30</rosparam>
    <rosparam param="timeout_ticks">4</rosparam>
    <rosparam param="rolling_pts">5</rosparam>
  </node>

  <!-- PID node for right motor, setting PID parameters -->
  <node name="rpid_velocity" pkg="chefbot_bringup" type="" output="screen">
    <remap from="wheel" to="rwheel"/>
    <remap from="motor_cmd" to="right_wheel_speed"/>
    <remap from="wheel_vtarget" to="rwheel_vtarget"/>
    <remap from="wheel_vel" to="rwheel_vel"/>
    <rosparam param="Kp">400</rosparam>
    <rosparam param="Ki">100</rosparam>
    <rosparam param="Kd">0</rosparam>
    <rosparam param="out_min">-1023</rosparam>
    <rosparam param="out_max">1023</rosparam>
    <rosparam param="rate">30</rosparam>
    <rosparam param="timeout_ticks">4</rosparam>
    <rosparam param="rolling_pts">5</rosparam>
  </node>

  <!-- Starting twist to motor and diff_tf nodes -->
  <node pkg="chefbot_bringup" type="" name="twist_to_motors" output="screen"/>
  <node pkg="chefbot_bringup" type="" name="diff_tf" output="screen"/>

</launch>


After running robot_standalone.launch, we can visualize the robot in RViz using the following command:

$ roslaunch chefbot_bringup view_robot.launch

We will see the robot model as shown in this next screenshot:


Figure 7: Visualization of robot model using real robot values.

Launch the keyboard teleop node to start moving the robot:

$ roslaunch chefbot_bringup keyboard_teleop.launch

Move the robot using the keys and we will see it moving around. If we enable the TF display of the robot in RViz, we can view the odometry as shown in the following screenshot:


Figure 8: Visualizing robot odometry

The graph of the connection between each node is given next. We can view it using the rqt_graph tool.

$ rqt_graph


Figure 9: Interconnection of nodes in Chefbot

Till now we have discussed the Chefbot interfacing with ROS. The Chefbot nodes are mainly written in Python, but some nodes are also implemented in C++ for computing the odometry from the encoder ticks and generating the motor speed commands from the twist messages.

Computing odometry from encoder ticks

In this section, we will see the C++ implementation of the node that subscribes to the encoder data, computes the odometry, and publishes the odometry and tf of the robot. This node, diff_tf.cpp, can be found in the src folder of the package named chefbot_navig_cpp.

Discussed next are the important snippets of this code and their explanations. The following code snippet is the constructor of the class Odometry_calc, which contains the odometry computation. It declares the subscribers for the left and right wheel encoders along with the publisher for the odom value:


Odometry_calc::Odometry_calc()
{
  //Initialize variables used in the node

  ROS_INFO("Started odometry computing node");

  //Subscribing left and right wheel encoder values
  l_wheel_sub = n.subscribe("/lwheel", 10, &Odometry_calc::leftencoderCb, this);
  r_wheel_sub = n.subscribe("/rwheel", 10, &Odometry_calc::rightencoderCb, this);

  //Creating a publisher for odom
  odom_pub = n.advertise<nav_msgs::Odometry>("odom", 50);

  //Retrieving parameters of this node
}

The following code is the update loop for computing the odometry. It computes the delta distance moved and the angle rotated by the robot using the encoder values, the base width of the robot, and the ticks-per-meter value of the encoder. After calculating the delta distance and the delta theta, we can compute the final x, y, and theta using the standard differential drive robot equations.

  if ( now > t_next) {

    elapsed = now.toSec() - then.toSec();

    if(enc_left == 0){
      d_left = 0;
      d_right = 0;
    }
    else{
      d_left = (left - enc_left) / ( ticks_meter);
      d_right = (right - enc_right) / ( ticks_meter);
    }

    enc_left = left;
    enc_right = right;

    d = (d_left + d_right ) / 2.0;
    th = ( d_right - d_left ) / base_width;

    dx = d / elapsed;
    dr = th / elapsed;

    if ( d != 0){
      x = cos( th ) * d;
      y = -sin( th ) * d;

      // calculate the final position of the robot
      x_final = x_final + ( cos( theta_final ) * x - sin( theta_final ) * y );
      y_final = y_final + ( sin( theta_final ) * x + cos( theta_final ) * y );
    }

    if( th != 0)
      theta_final = theta_final + th;
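For reference, the same dead-reckoning update can be written as a short, self-contained Python function. This is a sketch of the standard differential drive equations used above; the tick distances and base width in the example call are illustrative values only:

```python
from math import sin, cos

def update_odometry(x, y, theta, d_left, d_right, base_width):
    """Advance the pose (x, y, theta) by the distance each wheel travelled (meters)."""
    d = (d_left + d_right) / 2.0          # distance moved by the robot center
    th = (d_right - d_left) / base_width  # change in heading

    if d != 0:
        # Motion in the robot's local frame, then rotated into the world frame
        dx_local = cos(th) * d
        dy_local = -sin(th) * d
        x += cos(theta) * dx_local - sin(theta) * dy_local
        y += sin(theta) * dx_local + cos(theta) * dy_local
    if th != 0:
        theta += th
    return x, y, theta

# Both wheels move 0.1 m: the robot drives straight ahead
x, y, theta = update_odometry(0.0, 0.0, 0.0, 0.1, 0.1, base_width=0.3)
print(x, y, theta)  # 0.1 0.0 0.0
```

Calling this once per control cycle with the per-cycle tick distances reproduces the accumulation of x_final, y_final, and theta_final in the C++ node.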

After computing the robot's position and orientation in the preceding code snippet, we can feed the odometry values into the odometry message and the tf transform, which are published on the /odom and /tf topics.

        geometry_msgs::Quaternion odom_quat;

        odom_quat.x = 0.0;
        odom_quat.y = 0.0;
        odom_quat.z = sin( theta_final / 2 );
        odom_quat.w = cos( theta_final / 2 );

        //first, we'll publish the transform over tf

        geometry_msgs::TransformStamped odom_trans;

        odom_trans.header.stamp = now;

        odom_trans.header.frame_id = "odom";

        odom_trans.child_frame_id = "base_footprint";

        odom_trans.transform.translation.x = x_final;

        odom_trans.transform.translation.y = y_final;

        odom_trans.transform.translation.z = 0.0;

        odom_trans.transform.rotation = odom_quat;

        //send the transform


        //next, we'll publish the odometry message over ROS

        nav_msgs::Odometry odom;

        odom.header.stamp = now;

        odom.header.frame_id = "odom";

        //set the position

        odom.pose.pose.position.x = x_final;

        odom.pose.pose.position.y = y_final;

        odom.pose.pose.position.z = 0.0;

        odom.pose.pose.orientation = odom_quat;

        //set the velocity

        odom.child_frame_id = "base_footprint";

        odom.twist.twist.linear.x = dx;

        odom.twist.twist.linear.y = 0;

        odom.twist.twist.angular.z = dr;

        //publish the message
        odom_pub.publish(odom);

Computing motor velocities from ROS twist message

The C++ implementation of the twist_to_motors node is discussed in this section. This node converts a twist message (geometry_msgs/Twist) into motor target velocities. It subscribes to the twist message, which comes from the teleop node or the Navigation stack, and publishes the target velocities for the two motors. The target velocities are fed into the PID nodes, which send appropriate commands to each motor. The C++ file is named twist_to_motor.cpp and you can get it from the chapter_9_codes/chefbot_navig_cpp/src folder.





TwistToMotors::TwistToMotors()
{
  ROS_INFO("Started Twist to Motor node");

  cmd_vel_sub = n.subscribe("cmd_vel_mux/input/teleop", 10, &TwistToMotors::twistCallback, this);

  pub_lmotor = n.advertise<std_msgs::Float32>("lwheel_vtarget", 50);
  pub_rmotor = n.advertise<std_msgs::Float32>("rwheel_vtarget", 50);
}


The following code snippet is the callback function for the twist message. It assigns the linear velocity X to dx, Y to dy, and the angular velocity Z to dr.

void TwistToMotors::twistCallback(const geometry_msgs::Twist &msg)
{
  ticks_since_target = 0;

  dx = msg.linear.x;
  dy = msg.linear.y;
  dr = msg.angular.z;
}


After getting dx, dy, and dr, we can compute the motor velocities using the following equations:

dx = (l + r) / 2

dr = (r - l) / w

Here r and l are the right and left wheel velocities, and w is the base width. The preceding equations are implemented in the following code snippet. After computing the wheel velocities, they are published to the lwheel_vtarget and rwheel_vtarget topics.

  right = ( 1.0 * dx ) + (dr * w / 2);
  left = ( 1.0 * dx ) - (dr * w / 2);

  std_msgs::Float32 left_;
  std_msgs::Float32 right_;

  left_.data = left;
  right_.data = right;

  pub_lmotor.publish(left_);
  pub_rmotor.publish(right_);

  ticks_since_target += 1;
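The inversion of the two equations above can be checked with a few lines of Python; the base width in the example is an illustrative value:

```python
def twist_to_wheels(dx, dr, w):
    """Invert dx = (l + r) / 2 and dr = (r - l) / w into per-wheel velocities."""
    right = dx + dr * w / 2.0
    left = dx - dr * w / 2.0
    return left, right

# Pure rotation: the wheels turn at equal speed in opposite directions
left, right = twist_to_wheels(dx=0.0, dr=1.0, w=0.3)
print(left, right)  # -0.15 0.15
```

Substituting the results back gives (left + right) / 2 = 0 and (right - left) / w = 1, recovering the commanded twist.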


Running the robot standalone launch file using C++ nodes

The following command launches robot_standalone.launch using the C++ nodes:

$ roslaunch chefbot_navig_cpp robot_standalone.launch

Configuring the Navigation stack for Chefbot

After setting up the odometry node, the base controller node, and the PID nodes, we need to configure the Navigation stack to perform SLAM and Adaptive Monte Carlo Localization (AMCL) for building the map, localizing the robot, and performing autonomous navigation.

In Chapter 4, Using the ROS MoveIt! and Navigation Stack, we saw the basic packages in the Navigation stack. To build the map of the environment, we mainly need to configure two nodes: the gmapping node for performing SLAM, and the move_base node. We also need to configure the global planner, the local planner, the global costmap, and the local costmap inside the Navigation stack. Let's look at the configuration of the gmapping node first.

Configuring the gmapping node

gmapping is the ROS package used to perform SLAM.

The gmapping node inside this package mainly subscribes to and publishes the following topics:

The following are the subscribed topics:

·        tf (tf/tfMessage): Robot transforms relating the Kinect, the robot base, and the odometry frames

·        scan (sensor_msgs/LaserScan): Laser scan data that is required to create the map

The following are the published topics:

·        map (nav_msgs/OccupancyGrid): Publishes the occupancy grid map data

·        map_metadata (nav_msgs/MapMetaData): Basic information about the occupancy grid

The gmapping node is highly configurable through various parameters, which are defined inside the chapter_9_codes/chefbot/chefbot_bringup/launch/include/gmapping.launch.xml file. Following is a code snippet from this file and its uses:


<launch>

  <arg name="scan_topic" default="scan" />

  <!-- Starting gmapping node -->
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">

    <!-- Frame of mobile base -->
    <param name="base_frame" value="base_footprint"/>
    <param name="odom_frame" value="odom"/>

    <!-- The map update interval; reducing this value speeds up map generation but increases the computational load -->
    <param name="map_update_interval" value="5.0"/>

    <!-- Maximum usable range of laser/kinect -->
    <param name="maxUrange" value="6.0"/>

    <!-- Maximum range of sensor; max range should be > maxUrange -->
    <param name="maxRange" value="8.0"/>

    <param name="sigma" value="0.05"/>
    <param name="kernelSize" value="1"/>
  </node>
</launch>



By fine tuning these parameters, we can improve the accuracy of the gmapping node.

The main gmapping launch file is given next; it is placed in chefbot_bringup/launch/includes/gmapping_demo.launch. This launch file starts the OpenNI drivers and the depthimage_to_laserscan node to convert the depth image into a laser scan. After launching the Kinect nodes, it launches the gmapping node and the move_base configuration.


<launch>

  <!-- Launches 3D sensor nodes -->
  <include file="$(find chefbot_bringup)/launch/3dsensor.launch">
    <arg name="rgb_processing" value="false" />
    <arg name="depth_registration" value="false" />
    <arg name="depth_processing" value="false" />
    <arg name="scan_topic" value="/scan" />
  </include>

  <!-- Start gmapping node and its configurations -->
  <include file="$(find chefbot_bringup)/launch/includes/gmapping.launch.xml"/>

  <!-- Start move_base node and its configuration -->
  <include file="$(find chefbot_bringup)/launch/includes/move_base.launch.xml"/>

</launch>


Configuring the Navigation stack packages

The next node we need to configure is move_base. Along with the move_base node, we need to configure the global and local planners, and the global and local costmaps. We will first look at the launch file that loads all these configuration files. The launch file chefbot_bringup/launch/includes/move_base.launch.xml loads all the parameters of move_base, the planners, and the costmaps:


<launch>

  <arg name="odom_topic" default="odom" />

  <!-- Starting move_base node -->
  <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">

    <!-- common parameters of global costmap -->
    <rosparam file="$(find chefbot_bringup)/param/costmap_common_params.yaml" command="load" ns="global_costmap" />

    <!-- common parameters of local costmap -->
    <rosparam file="$(find chefbot_bringup)/param/costmap_common_params.yaml" command="load" ns="local_costmap" />

    <!-- local costmap parameters -->
    <rosparam file="$(find chefbot_bringup)/param/local_costmap_params.yaml" command="load" />

    <!-- global costmap parameters -->
    <rosparam file="$(find chefbot_bringup)/param/global_costmap_params.yaml" command="load" />

    <!-- base local planner parameters -->
    <rosparam file="$(find chefbot_bringup)/param/base_local_planner_params.yaml" command="load" />

    <!-- dwa local planner parameters -->
    <rosparam file="$(find chefbot_bringup)/param/dwa_local_planner_params.yaml" command="load" />

    <!-- move_base node parameters -->
    <rosparam file="$(find chefbot_bringup)/param/move_base_params.yaml" command="load" />

    <remap from="cmd_vel" to="/cmd_vel_mux/input/navi"/>
    <remap from="odom" to="$(arg odom_topic)"/>
  </node>

</launch>



We will now take a look at each configuration file and its parameters.

Common configuration (local_costmap and global_costmap)

The common parameters of the local and global costmaps are discussed in this section. The costmap is built from the obstacles present around the robot, and fine tuning its parameters can increase the accuracy of map generation. The customized costmap_common_params.yaml file of Chefbot follows. This configuration file contains the parameters common to both the global and the local costmaps, and it is present in the chefbot_bringup/param folder. For more about the costmap common parameters, check the costmap_2d page on the ROS wiki.

#The maximum height (in meters) of a sensor reading considered to be an obstacle
max_obstacle_height: 0.60

#This parameter sets the maximum obstacle range. In this case, the robot only considers obstacles within 2.5 meters of its base
obstacle_range: 2.5

#This parameter lets the robot clear out space up to 3.0 meters away given a sensor reading
raytrace_range: 3.0

#If the robot is circular, we can define the robot radius; otherwise we need to mention the robot footprint
robot_radius: 0.45
#footprint: [[-0.,-0.1],[-0.1,0.1], [0.1, 0.1], [0.1,-0.1]]

#This parameter inflates obstacles up to this distance from the actual obstacle. It can be taken as a tolerance around obstacles; the map cost will be the same as the actual obstacle up to the inflation radius
inflation_radius: 0.50

#This factor is used for computing the cost during inflation
cost_scaling_factor: 5

#We can choose the map type as voxel, which gives a 3D view of the world, or as costmap, which is a 2D view of the map. Here we opt for voxel
map_type: voxel

#This is the z origin of the map if the type is voxel
origin_z: 0.0

#The z resolution of the map in meters
z_resolution: 0.2

#Number of voxels in a vertical column
z_voxels: 2

#This flag sets whether the voxel map needs to be published for visualization
publish_voxel_map: false

#A list of observation sources from which we get scan data
observation_sources: scan

#For the scan source: the data type (LaserScan), the topic, and the marking and clearing flags, which indicate whether the laser data is used for marking and clearing the costmap
scan: {data_type: LaserScan, topic: scan, marking: true, clearing: true, min_obstacle_height: 0.0, max_obstacle_height: 3}

After discussing the common parameters, we will now look at the global costmap configuration.

Configuring global costmap parameters

The following are the main configurations required for building the global costmap. The costmap parameters are defined in chefbot_bringup/param/global_costmap_params.yaml. The following is the content of this file and its uses:


   global_frame: /map

   robot_base_frame: /base_footprint

   update_frequency: 1.0

   publish_frequency: 0.5

   static_map: true

   transform_tolerance: 0.5

The global_frame here is /map, which is the coordinate frame of the costmap. The robot_base_frame parameter is /base_footprint; it is the coordinate frame that the costmap references as the robot base. update_frequency is the frequency at which the costmap runs its main update loop, and publish_frequency (here 0.5) is the rate at which the costmap publishes its data. If we are using an existing map, we have to set static_map to true, otherwise to false. transform_tolerance is the maximum allowable latency of the transform data; the robot will stop if the transforms are not updated within this interval.

Configuring local costmap parameters

Following is the local costmap configuration of this robot, located in chefbot_bringup/param/local_costmap_params.yaml:


   global_frame: odom

   robot_base_frame: /base_footprint

   update_frequency: 5.0

   publish_frequency: 2.0

   static_map: false

   rolling_window: true

   width: 4.0

   height: 4.0

   resolution: 0.05

   transform_tolerance: 0.5

The global_frame, robot_base_frame, update_frequency, publish_frequency, and static_map parameters serve the same purposes as in the global costmap. The rolling_window parameter keeps the costmap centered on the robot: if we set this parameter to true, the costmap is built centered around the robot as it moves. The width, height, and resolution parameters set the width, height, and resolution of the costmap.

The next step is to configure the base local planner.

Configuring base local planner parameters

The main function of the base local planner is to compute the velocity commands that drive the robot toward the goal sent by the ROS nodes. This file mainly contains the configurations related to velocity, acceleration, and so on. The base local planner configuration file of this robot is chefbot_bringup/param/base_local_planner_params.yaml. Its definition is as follows:


# Robot Configuration Parameters: these are the velocity limits of the robot

  max_vel_x: 0.3

  min_vel_x: 0.1

#Angular velocity limit

  max_vel_theta:  1.0

  min_vel_theta: -1.0

  min_in_place_vel_theta: 0.6

#These are the acceleration limits of the robot 

  acc_lim_x: 0.5

  acc_lim_theta: 1.0

# Goal Tolerance Parameters: the tolerance of the robot when it reaches the goal position

  yaw_goal_tolerance: 0.3

  xy_goal_tolerance: 0.15

# Forward Simulation Parameters

  sim_time: 3.0

  vx_samples: 6

  vtheta_samples: 20

# Trajectory Scoring Parameters

  meter_scoring: true

  pdist_scale: 0.6

  gdist_scale: 0.8

  occdist_scale: 0.01

  heading_lookahead: 0.325

  dwa: true

# Oscillation Prevention Parameters

  oscillation_reset_dist: 0.05

# Differential-drive robot configuration: if the robot is holonomic, set to true; otherwise set to false. Chefbot is a nonholonomic robot.

  holonomic_robot: false

  max_vel_y: 0.0

  min_vel_y: 0.0

  acc_lim_y: 0.0

  vy_samples: 1

Configuring DWA local planner parameters

The DWA planner is another local planner in ROS. Its configuration is almost the same as the base local planner's, and it is located in chefbot_bringup/param/dwa_local_planner_params.yaml. We can use either the base local planner or the DWA local planner for our robot.

Configuring move_base node parameters

There are some configurations for the move_base node too. The move_base node configuration is placed in the param folder. Following is the definition of move_base_params.yaml:

#This parameter determines whether the costmaps need to shut down when move_base is in an inactive state

shutdown_costmaps: false

#The rate at which move_base runs the update loop and sends the velocity commands

controller_frequency: 5.0

#How long the controller will wait for a valid command before performing space-clearing operations

controller_patience: 3.0

#The rate at which the global planning loop runs; if it is 0, the planner only plans when a new goal is received

planner_frequency: 1.0

#How long the planner will wait to find a valid path before the space-clearing operations

planner_patience: 5.0

#Time allowed for oscillation before starting robot recovery operations

oscillation_timeout: 10.0

#Distance the robot must move to not be considered oscillating. Moving beyond this distance resets oscillation_timeout

oscillation_distance: 0.2

# local planner - default is trajectory rollout

base_local_planner: "dwa_local_planner/DWAPlannerROS"
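The oscillation parameters can be illustrated with a toy check (a deliberately simplified model, not the real move_base state machine): if the robot fails to move at least oscillation_distance within oscillation_timeout seconds, recovery behaviors start.

```python
import math

def check_oscillation(poses, oscillation_distance=0.2, oscillation_timeout=10.0):
    """poses: list of (t, x, y) samples. Returns True if recovery should start."""
    t_ref, x_ref, y_ref = poses[0]
    for t, x, y in poses[1:]:
        if math.hypot(x - x_ref, y - y_ref) > oscillation_distance:
            t_ref, x_ref, y_ref = t, x, y      # real progress: reset the timer
        elif t - t_ref > oscillation_timeout:
            return True                        # stuck in place for too long
    return False

# Robot jitters in place for more than 10 s -> oscillation detected
stuck = [(t, 0.01 * (t % 2), 0.0) for t in range(13)]
print(check_oscillation(stuck))        # True

# Robot makes steady progress -> no recovery needed
moving = [(t, 0.3 * t, 0.0) for t in range(13)]
print(check_oscillation(moving))       # False
```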

We have discussed most of the parameters used in the Navigation stack, the gmapping node, and the move_base node. Now we can start running a gmapping demo for building the map.

Start the robot's tf nodes and base controller nodes:

$ roslaunch chefbot_bringup robot_standalone.launch

Start the gmapping nodes using the following command:

$ roslaunch chefbot_bringup gmapping_demo.launch

This gmapping_demo.launch file launches the 3D sensor nodes, including the OpenNI drivers and the depth-image-to-laser-scan node, and also launches the gmapping node and the move_base node with the necessary parameters.

We can launch a teleop node to move the robot around and build the map of the environment. The following command will launch the teleop node:

$ roslaunch chefbot_bringup keyboard_teleop.launch

We can see the map building in RViz, which can be invoked using the following command:

$ roslaunch chefbot_bringup view_navigation.launch

We are testing this robot in a flat-floored room; we can move the robot through all areas inside it. If we cover all the areas, we will get a map as shown in the following screenshot:


Figure 10: Creating a map using gmapping, shown in RViz

After completing the mapping process, we can save the map using the following command:

$ rosrun map_server map_saver -f /home/lentin/room

The map_server package in ROS contains the map_server node, which provides the current map data as a ROS service. It also provides a command-line utility called map_saver, which helps save the map.

It will save the current map as two files: room.pgm and room.yaml. The first is the map data; the second is its metadata, which contains the map image's name and its parameters. The following screenshot shows map generation using the map_server tool, with the map saved in the home folder:


Figure 11: Terminal messages while saving a map

The following is the room.yaml:

image: room.pgm

resolution: 0.010000

origin: [-11.560000, -11.240000, 0.000000]

negate: 0

occupied_thresh: 0.65

free_thresh: 0.196

The definition of each parameter follows:

·        image: The path of the image containing the occupancy data; it can be absolute or relative to the location of the YAML file.

·        resolution: This parameter is the resolution of the map, which is meters/pixels.

·        origin: This is the 2D pose of the lower-left pixel in the map as (x, y, yaw), with yaw measured counterclockwise (yaw = 0 means no rotation).

·        negate: This parameter can reverse the semantics of white/black in the map and the free space/occupied space representation.

·        occupied_thresh: This is the threshold deciding whether a pixel is occupied. If the occupancy probability is greater than this threshold, the pixel is considered occupied.

·        free_thresh: A map pixel with occupancy probability less than this threshold is considered completely free.
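A short sketch shows how these thresholds classify a pixel (following the documented map_server semantics; for an unnegated map, the occupancy probability of a pixel is p = (255 - pixel_value) / 255):

```python
# Darker pixels mean higher occupancy probability. The two thresholds split
# pixels into occupied, free, and unknown cells.

def classify_pixel(pixel, negate=0, occupied_thresh=0.65, free_thresh=0.196):
    p = pixel / 255.0 if negate else (255 - pixel) / 255.0
    if p > occupied_thresh:
        return "occupied"
    if p < free_thresh:
        return "free"
    return "unknown"

print(classify_pixel(0))      # occupied: black pixel, p = 1.0
print(classify_pixel(254))    # free: nearly white, p ~ 0.004
print(classify_pixel(128))    # unknown: mid-gray, p ~ 0.498
```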

After mapping the environment, we can quit all the terminals and rerun the following commands to start AMCL. Before starting the amcl nodes, we will look at the configuration and main application of AMCL.

Understanding AMCL

After building a map of the environment, the next thing we need to implement is localization: the robot should localize itself on the generated map. We have worked with AMCL in Chapter 4, Using the ROS MoveIt! and Navigation Stack. In this section, we will see a detailed study of the amcl package and the amcl launch files used in Chefbot.

AMCL is a probabilistic localization technique for robots working in 2D. The algorithm uses a particle filter to track the pose of the robot with respect to a known map. To learn more about this localization technique, you can refer to the book Probabilistic Robotics by Thrun et al.
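The particle-filter idea can be sketched in a few lines of Python (a 1D toy, not the amcl implementation; the world model, sensor model, and noise values here are made up purely for illustration): particles are moved by the commanded motion, weighted by how well each particle explains the measurement, and resampled, so the cloud converges toward the true position.

```python
import random

random.seed(42)

TRUE_POS = 5.0
WALL = 10.0  # position of a wall that the range sensor measures against

def measure(pos):
    """Ideal range reading: distance from pos to the wall."""
    return WALL - pos

def mcl_step(particles, motion, z):
    # 1. Predict: apply the commanded motion with a little noise
    moved = [p + motion + random.gauss(0, 0.05) for p in particles]
    # 2. Weight: particles whose expected reading matches z score higher
    weights = [1.0 / (1e-6 + abs(measure(p) - z)) for p in moved]
    # 3. Resample in proportion to the weights
    return random.choices(moved, weights=weights, k=len(moved))

# Start with no idea where the robot is: particles spread over the whole map
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(10):
    particles = mcl_step(particles, 0.0, measure(TRUE_POS))

estimate = sum(particles) / len(particles)
print(f"estimated position: {estimate:.2f} (true position: {TRUE_POS})")
```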

The AMCL algorithm is implemented in the amcl ROS package, which has an amcl node that subscribes to scan (sensor_msgs/LaserScan), tf (tf/tfMessage), initialpose (geometry_msgs/PoseWithCovarianceStamped), and map (nav_msgs/OccupancyGrid).

After processing the sensor data, it publishes amcl_pose (geometry_msgs/PoseWithCovarianceStamped), particlecloud (geometry_msgs/PoseArray), and tf (tf/tfMessage).

The amcl_pose topic carries the estimated pose of the robot after processing, while particlecloud is the set of pose estimates maintained by the filter.

If the initial pose of the robot is not given, the particles will be spread around the origin. We can set the initial pose of the robot in RViz using the 2D Pose Estimate button. Let's look at the amcl launch files used in this robot. Following is the main launch file for starting amcl, called amcl_demo.launch:


<launch>
  <rosparam command="delete" ns="move_base" />

  <include file="$(find chefbot_bringup)/launch/3dsensor.launch">
    <arg name="rgb_processing" value="false" />
    <arg name="depth_registration" value="false" />
    <arg name="depth_processing" value="false" />
    <!-- We must specify an absolute topic name because if not it will be prefixed by "$(arg camera)". -->
    <arg name="scan_topic" value="/scan" />
  </include>

  <!-- Map server -->
  <arg name="map_file" default="$(find turtlebot_navigation)/maps/willow-2010-02-18-0.10.yaml"/>
  <node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)" />

  <arg name="initial_pose_x" default="0.0"/> <!-- Use 17.0 for willow's map in simulation -->
  <arg name="initial_pose_y" default="0.0"/> <!-- Use 17.0 for willow's map in simulation -->
  <arg name="initial_pose_a" default="0.0"/>

  <include file="$(find chefbot_bringup)/launch/includes/amcl.launch.xml">
    <arg name="initial_pose_x" value="$(arg initial_pose_x)"/>
    <arg name="initial_pose_y" value="$(arg initial_pose_y)"/>
    <arg name="initial_pose_a" value="$(arg initial_pose_a)"/>
  </include>

  <include file="$(find chefbot_bringup)/launch/includes/move_base.launch.xml"/>
</launch>


The preceding launch file starts the 3D sensor nodes, the map server for providing the map data, the amcl node for performing localization, and the move_base node for moving the robot based on the commands coming from higher levels.

The complete amcl launch parameters are set inside another sub-file called amcl.launch.xml. It is placed in chefbot_bringup/launch/include. Following is the definition of this file:


<launch>
  <arg name="use_map_topic"  default="false"/>
  <arg name="scan_topic"     default="scan"/>
  <arg name="initial_pose_x" default="0.0"/>
  <arg name="initial_pose_y" default="0.0"/>
  <arg name="initial_pose_a" default="0.0"/>

  <node pkg="amcl" type="amcl" name="amcl">
    <param name="use_map_topic"             value="$(arg use_map_topic)"/>

    <!-- Increase tolerance because the computer can get quite busy -->
    <param name="transform_tolerance"       value="1.0"/>
    <param name="recovery_alpha_slow"       value="0.0"/>
    <param name="recovery_alpha_fast"       value="0.0"/>
    <param name="initial_pose_x"            value="$(arg initial_pose_x)"/>
    <param name="initial_pose_y"            value="$(arg initial_pose_y)"/>
    <param name="initial_pose_a"            value="$(arg initial_pose_a)"/>
    <remap from="scan"                      to="$(arg scan_topic)"/>
  </node>
</launch>



We can refer to the ROS amcl package wiki for more details about each parameter.

We will now see how to localize the robot and perform path planning using the existing map.

Rerun the robot hardware nodes using the following command:

$ roslaunch chefbot_bringup robot_standalone.launch

Run the amcl launch file using the following command:

$ roslaunch chefbot_bringup amcl_demo.launch map_file:=/home/lentin/room.yaml

We can launch RViz for commanding the robot to move to a particular pose on the map.

We can launch RViz for navigation using the following command:

$ roslaunch chefbot_bringup view_navigation.launch

The following is the screenshot of RViz:


Figure 12: Robot autonomous navigation using AMCL

We will see more about each option in RViz and how to command the robot in the map in the following section.

Understanding RViz for working with the Navigation stack

We will explore various GUI options inside RViz to visualize each parameter in the Navigation stack.

2D Pose Estimate button

The first step in RViz is to set the initial position of the robot on the map. If the robot is able to localize on the map by itself, there is no need to set the initial position. Otherwise, we have to set the initial position using the 2D Pose Estimate button in RViz, as shown in the following screenshot:


Figure 13: RViz 2D Pose Estimate button

Press the 2D Pose Estimate button and select a pose for the robot using the left mouse button, as shown in the preceding figure. Check that the actual pose of the robot and the robot model in RViz are the same. After setting the pose, we can start path planning for the robot.

The green cloud around the robot is the particle cloud of amcl. A widely spread cloud means the uncertainty in the robot's position is high; a tight cloud means the uncertainty is low and the robot is almost sure about its position. The topic handling the robot's initial pose is:

·        Topic Name: initialpose

·        Topic Type: geometry_msgs/PoseWithCovarianceStamped
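The relation between cloud spread and uncertainty can be made concrete with a small sketch (the particle values below are made up purely for illustration):

```python
# The standard deviation of the particle x/y positions is a simple proxy for
# localization uncertainty: a tight cloud (small spread) means the filter is
# confident about the robot's position.

from statistics import pstdev

def cloud_spread(particles):
    """particles: list of (x, y) pose estimates from /particlecloud."""
    xs = [p[0] for p in particles]
    ys = [p[1] for p in particles]
    return pstdev(xs), pstdev(ys)

confident = [(1.0, 2.0), (1.01, 2.02), (0.99, 1.98)]
uncertain = [(0.0, 0.0), (2.0, 1.0), (-1.5, 3.0), (1.0, -2.0)]

sx_c, _ = cloud_spread(confident)
sx_u, _ = cloud_spread(uncertain)
print(sx_c < sx_u)   # True: the confident cloud is much tighter
```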

Visualizing the particle cloud

The particle cloud around the robot can be enabled using the PoseArray display type. Here, the PoseArray topic /particlecloud is displayed in RViz, renamed as Amcl Particles.

·        Topic: /particlecloud

·        Type: geometry_msgs/PoseArray


Figure 14: Visualizing AMCL particles

The 2D Nav Goal button

The 2D Nav Goal button is used to give a goal position to the move_base node in the ROS Navigation stack through RViz. We can select this button from the top panel of RViz and set the goal position inside the map by left-clicking on the map with the mouse. The goal position will be sent to the move_base node, which moves the robot to that location.

·        Topic: move_base_simple/goal

·        Topic Type: geometry_msgs/PoseStamped


Figure 15: Setting robot goal position in RViz using 2D Nav Goal

Displaying the static map

The static map is the map that we feed into the map_server node. The map_server node serves the static map in the /map topic.

·        Topic: /map

·        Type: nav_msgs/OccupancyGrid
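As a quick illustration of how a consumer of the map uses this message: nav_msgs/OccupancyGrid stores cells as a flat row-major array, together with the resolution and origin. The sketch below (the map width is an illustrative assumption; the origin and resolution match room.yaml) maps a world coordinate to a cell index:

```python
def world_to_cell(x, y, origin, resolution, width):
    """Index into the flat row-major data array of an OccupancyGrid.

    For this sketch we round to the nearest cell; real implementations use
    a careful floor-based mapping.
    """
    col = int(round((x - origin[0]) / resolution))
    row = int(round((y - origin[1]) / resolution))
    return row * width + col

# 0.01 m/cell map, assumed 2000 cells wide, origin taken from room.yaml
idx = world_to_cell(0.0, 0.0, origin=(-11.56, -11.24), resolution=0.01,
                    width=2000)
print(idx)   # row 1124, col 1156 -> 1124 * 2000 + 1156 = 2249156
```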

The following is the static map in RViz:


Figure 16: Visualizing static map in RViz

Displaying the robot footprint

We have defined the robot footprint in the configuration file called costmap_common_params.yaml. This robot has a circular shape, and we have given the radius as 0.45 meters. The footprint can be visualized using the Polygon display type in RViz. The following are the circular footprint of the robot around the robot model and its topics:

·        Topic: /move_base/global_costmap/obstacle_layer_footprint/footprint_stamped

·        Topic: /move_base/local_costmap/obstacle_layer_footprint/footprint_stamped

·        Type: geometry_msgs/Polygon


Figure 17: Global and local robot footprints in RViz

Displaying the global and local cost map

The following RViz screenshot shows the local costmap, the global costmap, the real obstacles, and the inflated obstacles. The display type used for each of these is Map.

·        Local cost map topic: /move_base/local_costmap/costmap

·        Local cost map topic type: nav_msgs/OccupancyGrid

·        Global cost map topic: /move_base/global_costmap/costmap

·        Global cost map topic type: nav_msgs/OccupancyGrid


Figure 18: Visualizing the global and local maps, and the real and inflated obstacles in RViz

To avoid collisions with real obstacles, each obstacle is inflated by some distance, as per the values in the configuration files; this inflated region is called the inflated obstacle. The robot only plans a path outside the inflated obstacles; inflation is a technique to keep a safe margin from the real obstacles.

Displaying the global plan, local plan, and planner plan

The global plan from the global planner is shown in green in the next screenshot. The local plan is shown in red and the planner plan in black. The local plan covers each short section of the global plan, and the planner plan is the complete plan to the goal. The global plan and the planner plan can change if there are any obstacles. The plans can be displayed using the Path display type in RViz.

·        Global plan topic: /move_base/DWAPlannerROS/global_plan

·        Global plan topic type: nav_msgs/Path

·        Local plan topic: /move_base/DWAPlannerROS/local_plan

·        Local plan topic type: nav_msgs/Path

·        Planner plan topic: /move_base/NavfnROS/plan

·        Planner plan topic type: nav_msgs/Path


Figure 19: Visualizing global, local, and planner plan in RViz

The current goal

The current goal is the commanded position of the robot, set using the 2D Nav Goal button or ROS client nodes. The red arrow indicates the current goal of the robot.

·        Topic: /move_base/current_goal

·        Topic type: geometry_msgs/PoseStamped


Figure 20: Visualizing robot goal position

Obstacle avoidance using the Navigation stack

The Navigation stack can avoid a random obstacle in the path. The following is a scenario where we placed a dynamic obstacle in the planned path of the robot.

The first figure shows path planning without any obstacle on the path. When we place a dynamic obstacle on the robot's path, we can see it plan a new path that avoids the obstacle.


Figure 21: Visualizing obstacle avoidance capabilities in RViz

Working with Chefbot simulation

The chefbot_gazebo simulator package is available along with the chefbot_bringup package, and we can simulate the robot in Gazebo. We will see how to build a room similar to the room we tested with hardware. First we will check how to build a virtual room in Gazebo.

Building a room in Gazebo

We will start building the room in Gazebo, save it in the Simulation Description Format (SDF), and insert it into the Gazebo environment.

Launch Gazebo with Chefbot robot in an empty world:

$ roslaunch chefbot_gazebo chefbot_empty_world.launch

It will open the Chefbot model in an empty world in Gazebo. We can build the room using walls, windows, doors, and stairs.

Gazebo has a Building Editor. We can open this editor from the menu Edit | Building Editor. We will get an editor in the Gazebo viewport.


Figure 22: Building walls in Gazebo

We can add walls by clicking the Add Wall option on the left-side pane of Gazebo. In the Building Editor, we draw the walls by clicking the left mouse button. We can see that adding walls in the editor builds real 3D walls in Gazebo. We are building a layout similar to the room we tested with the real robot.

Save the room through the Save As option, or press the Done button; a dialog box will pop up to save the file. The file will be saved in the .sdf format. We can save this example as final_room.

After saving the room file, we can add the model of this room to the Gazebo model folder, so that we can access the model in any simulation.

Adding model files to the Gazebo model folder

The following procedure adds a model to the Gazebo model folder:

1.    Locate the default model folder of Gazebo, which is located in the folder ~/.gazebo/models.

2.    Create a folder called final_room and copy final_room.sdf inside this folder. Also, create a file called model.config, which contains the details of the model file. The definition of this file follows:

<?xml version="1.0"?>
<model>
  <!-- Name of the model displayed in Gazebo -->
  <name>Test Room</name>
  <version>1.0</version>
  <!-- Model file name -->
  <sdf version="1.2">final_room.sdf</sdf>

  <author>
    <name>Lentin Joseph</name>
    <email></email>
  </author>

  <description>
    A test room for performing SLAM
  </description>
</model>


After adding this model to the model folder, restart Gazebo, and we can see a model named Test Room in the Insert tab, as shown in the next screenshot. We named this model Test Room in the model.config file, so that is the name shown in the list. We can select this model and add it to the viewport, as shown next:


Figure 23: Inserting the walls in Chefbot simulation

After adding it to the viewport, we can save the current world configuration. Choose File from the Gazebo menu and select the Save World As option. Save the file as test_room.sdf in the worlds folder of the chefbot_gazebo ROS package.

After saving the world file, we can add it into the chefbot_empty_world.launch file and save this launch file as the chefbot_room_world.launch file, which is shown next:

<launch>
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="use_sim_time" value="true"/>
    <arg name="debug" value="false"/>
    <!-- Adding world test_room.sdf as argument -->
    <arg name="world_name" value="$(find chefbot_gazebo)/worlds/test_room.sdf"/>
  </include>
</launch>


After saving this launch file, we can start the launch file chefbot_room_world.launch for simulating the same environment as the hardware robot. We can add obstacles in Gazebo using the primitive shapes available in it.

Instead of launching the robot_standalone.launch file from chefbot_bringup, as we did for the hardware, we can start chefbot_room_world.launch to get the same environment as the robot, along with the odom and tf data, in simulation.

$ roslaunch chefbot_gazebo chefbot_room_world.launch

Other operations, such as SLAM and AMCL, have the same procedure as we followed for the hardware. The following launch files are used to perform SLAM and AMCL in simulation:

Running SLAM in simulation:

$ roslaunch chefbot_gazebo gmapping_demo.launch

Running the Teleop node:

$ roslaunch chefbot_bringup keyboard_teleop.launch

Running AMCL in simulation:

$ roslaunch chefbot_gazebo amcl_demo.launch

Sending a goal to the Navigation stack from a ROS node

We have seen how to send a goal position to the robot to move it from point A to point B using the RViz 2D Nav Goal button. Now we will see how to command the robot using an actionlib client and the ROS C++ APIs. Following is a sample package and node for communicating with the Navigation stack's move_base node.

The move_base node is a SimpleActionServer. Using the actionlib interface, we can send goals to the robot and cancel them if the task takes too long to complete.

The following code is a SimpleActionClient for the move_base node, which takes x, y, and the orientation from the command-line arguments. The code is in the chefbot_bringup/src folder, with the name send_robot_goal.cpp:

#include <ros/ros.h>
#include <move_base_msgs/MoveBaseAction.h>
#include <actionlib/client/simple_action_client.h>
#include <cstdlib>

//Declaring a new SimpleActionClient with the move_base_msgs::MoveBaseAction action
typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

int main(int argc, char** argv){
  ros::init(argc, argv, "navigation_goals");

  //Initiating the move_base client
  MoveBaseClient ac("move_base", true);

  //Waiting for the action server to start
  while(!ac.waitForServer(ros::Duration(5.0))){
    ROS_INFO("Waiting for the move_base action server");
  }

  //Declaring the move_base goal
  move_base_msgs::MoveBaseGoal goal;

  //Setting target frame id and time in the goal action
  goal.target_pose.header.frame_id = "map";
  goal.target_pose.header.stamp = ros::Time::now();

  //Retrieving the pose from the command line; otherwise using default values
  if(argc >= 4){
    goal.target_pose.pose.position.x = atof(argv[1]);
    goal.target_pose.pose.position.y = atof(argv[2]);
    goal.target_pose.pose.orientation.w = atof(argv[3]);
  }
  else{
    goal.target_pose.pose.position.x = 1.0;
    goal.target_pose.pose.position.y = 1.0;
    goal.target_pose.pose.orientation.w = 1.0;
  }

  ROS_INFO("Sending move base goal");

  //Sending the goal and waiting for the result
  ac.sendGoal(goal);
  ac.waitForResult();

  if(ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
    ROS_INFO("Robot has arrived at the goal position");
  else
    ROS_INFO("The base failed to reach the goal for some reason");

  return 0;
}


The following lines are added to CMakeLists.txt for building this node:

add_executable(send_goal src/send_robot_goal.cpp)

target_link_libraries(send_goal  ${catkin_LIBRARIES}  )

Build the package using catkin_make and test the client with the following set of commands in Gazebo.

Start Gazebo simulation in a room:

$ roslaunch chefbot_gazebo chefbot_room_world.launch

Start the amcl node with the generated map:

$ roslaunch chefbot_gazebo amcl_demo.launch map_file:=final_room.yaml

Start RViz for navigation:

$ roslaunch chefbot_bringup view_navigation.launch

Run the send goal node for sending the move base goal:

$ rosrun chefbot_bringup send_goal 1 0 1

We will see a red arrow appear when this node runs, which shows that the goal pose has been set in RViz.


Figure 24: Sending a goal to move_base node from C++ APIs

After completing the operation, we will see the following messages in the send goal terminal:


Figure 25: Terminal messages printed when a goal is sent from the action client

We can get the desired pose of the robot on the map by using the RViz 2D Nav Goal button. Simply echoing the topic /move_base/goal will print the pose that we commanded through RViz. We can use these values as command-line arguments in the send_goal node.
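Note that the orientation field in a move_base goal is a quaternion, not a plain angle. A desired yaw (heading) in radians can be converted to the quaternion z/w components used in geometry_msgs/PoseStamped with the standard yaw-only conversion (the helper name below is ours):

```python
import math

def yaw_to_quaternion(yaw):
    """Quaternion for a rotation of `yaw` radians about the vertical axis."""
    return {"x": 0.0, "y": 0.0,
            "z": math.sin(yaw / 2.0),
            "w": math.cos(yaw / 2.0)}

q = yaw_to_quaternion(0.0)                 # facing forward: identity rotation
print(q["w"], q["z"])                      # 1.0 0.0

q = yaw_to_quaternion(math.pi)             # facing backwards
print(round(q["w"], 6), round(q["z"], 6))  # 0.0 1.0
```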

Questions
1.    What are the basic requirements for working with ROS Navigation stack?

2.    What are the main configuration files for working with ROS Navigation stack?

3.    How does the AMCL package in ROS work?

4.    What are the methods to send a goal pose to Navigation stack?

Summary
In this chapter, we mainly covered interfacing a DIY autonomous mobile robot to ROS and the Navigation stack. We saw an introduction to this robot, with the necessary components and connection diagrams. We saw the robot firmware and how to flash it to the real robot. After flashing the firmware, we learned how to interface the robot to ROS, and saw the Python nodes for interfacing the LaunchPad controller in the robot, along with the nodes for converting twist messages to motor velocities and encoder ticks to odom and tf.

After discussing the interconnections of the Chefbot nodes, we covered the C++ ports of some important nodes: the odometry-calculation node and the base controller node. After discussing these nodes, we saw the detailed configuration of the ROS Navigation stack. We also performed gmapping and AMCL, and went through a detailed description of each option in RViz for working with the Navigation stack. We covered obstacle avoidance using the Navigation stack and worked with the Chefbot simulation. We set up an environment in Gazebo similar to that of the real robot and went through the steps to perform SLAM and AMCL. At the end of this chapter, we saw how to send a goal pose to the Navigation stack using actionlib.