Project Description

The main goal of the project was to develop a system capable of autonomously moving to designated locations while handling realistic challenges such as moving obstacles.

Implementation Details

Robot

The main piece of equipment used in this project was the TurtleBot 2, a robot offered to hobbyists as well as professionals to implement ideas easily and at fairly low cost. It comes with open-source software and has grown a large community around it. A Hokuyo sensor was mounted on the TurtleBot to provide laser readings of the environment for mapping and localization purposes.

Robot Platform

ROS (Robot Operating System) is a BSD-licensed system for controlling robotic components from a PC. A ROS system is composed of a number of independent nodes, each of which communicates with the other nodes using a publish/subscribe messaging model.

Hector SLAM

Hector Mapping is a SLAM approach that can be used without odometry, as well as on platforms that exhibit roll/pitch motion of the sensor or platform. It leverages the high update rate of the LIDAR sensor and provides 2D pose estimates at the scan rate of the sensor [1].

Localization: Monte Carlo Localization

Monte Carlo localization (MCL), also known as particle filter localization, is an algorithm for robots to localize using a particle filter. Given a map of the environment, the algorithm estimates the position and orientation of a robot as it moves and senses the environment. The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e., a hypothesis of where the robot is. The algorithm typically starts with a uniform random distribution of particles over the configuration space, meaning the robot has no information about where it is and assumes it is equally likely to be at any point in space. Whenever the robot moves, it shifts the particles to predict its new state after the movement. Whenever the robot senses something, the particles are resampled based on recursive Bayesian estimation, i.e., how well the actual sensed data correlate with the predicted state. Ultimately, the particles should converge towards the actual position of the robot [2].
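The predict/weight/resample cycle described above can be sketched in a few lines of Python. The one-dimensional corridor world, the point-landmark sensor model, and the noise parameters below are illustrative assumptions, not the project's actual code:

```python
import math
import random

def mcl_step(particles, move, measured_dist, landmarks,
             motion_noise=0.1, sensor_noise=0.5):
    """One predict/weight/resample cycle of Monte Carlo localization
    on a 1-D corridor with point landmarks (a toy world)."""
    # Prediction: shift every particle by the commanded motion plus noise.
    moved = [p + move + random.gauss(0.0, motion_noise) for p in particles]
    # Correction: weight each particle by how well its predicted distance
    # to the nearest landmark matches the actual measurement.
    weights = []
    for p in moved:
        expected = min(abs(p - lm) for lm in landmarks)
        err = expected - measured_dist
        weights.append(math.exp(-err * err / (2.0 * sensor_noise ** 2)))
    total = sum(weights) or 1e-12
    # Resampling: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(particles))
```

Starting from particles spread uniformly over the corridor and repeating this step as the robot moves, the particle cloud collapses around the true pose; amcl does the same in 2D with a laser-based sensor model.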

Collision Avoidance: Dynamic Window Approach

Unlike other avoidance methods, the dynamic window approach is derived directly from the dynamics of the robot and is specially designed to deal with the constraints imposed by the robot's limited velocities and accelerations. The dynamic window approach first prunes the overall search space by considering only the next steering command. This results in a two-dimensional search space of circular trajectories. After that, the search space is reduced to the admissible velocities, those allowing the robot to stop safely without colliding with an obstacle. Finally, the dynamic window restricts the admissible velocities to those that can be reached within a short time interval given the limited accelerations of the robot. This ensures that the dynamics constraints are respected. The robot constantly picks the trajectory that maximizes its translational velocity and its distance to obstacles while minimizing the angle to its goal relative to its own heading direction. This is done by maximizing an objective function [3].
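As a sketch of this idea (the helper functions `goal_heading_fn` and `clearance_fn`, the window discretization, and the weights are illustrative assumptions, not the actual DWA implementation in ROS):

```python
def dwa_choose(v_cur, w_cur, goal_heading_fn, clearance_fn,
               a_max=0.5, aw_max=1.0, dt=0.25,
               alpha=0.8, beta=0.1, gamma=0.1):
    """Pick the (v, w) pair maximizing a DWA-style objective.
    goal_heading_fn(v, w) -> alignment with the goal, in [0, 1];
    clearance_fn(v, w)    -> normalized obstacle distance, in [0, 1];
    both are hypothetical hooks standing in for trajectory rollout."""
    best, best_score = (0.0, 0.0), -1.0
    # Dynamic window: velocities reachable within one control interval.
    v_lo, v_hi = max(0.0, v_cur - a_max * dt), v_cur + a_max * dt
    w_lo, w_hi = w_cur - aw_max * dt, w_cur + aw_max * dt
    steps = 10
    for i in range(steps + 1):
        v = v_lo + (v_hi - v_lo) * i / steps
        for j in range(steps + 1):
            w = w_lo + (w_hi - w_lo) * j / steps
            clear = clearance_fn(v, w)
            if clear <= 0.0:   # inadmissible: cannot stop before an obstacle
                continue
            # Weighted sum of heading alignment, clearance, and speed.
            score = (alpha * goal_heading_fn(v, w)
                     + beta * clear
                     + gamma * v / max(v_hi, 1e-6))
            if score > best_score:
                best, best_score = (v, w), score
    return best
```

With open space ahead (`clearance_fn` returning 1 everywhere) and a goal straight ahead, the maximizer picks the fastest straight trajectory in the window, which matches the intuition in the paragraph above.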

Project Implementation

The project implementation was done in two phases. Phase I involved mapping the environment to be used by the robot for autonomous delivery in Phase II.

Phase I: Creating a Map of the Environment using SLAM

The map of the environment was built using the Hector SLAM algorithm while teleoperating the TurtleBot.

Phase II: Autonomous Delivery Using Turtlebot

General Architecture

Turtlebot

Transform Configuration

The TurtleBot publishes the transform configuration on the /tf topic, which keeps track of multiple coordinate frames (base, sensor, etc.) over time. tf maintains the relationships between coordinate frames in a tree structure buffered in time, and can transform points, vectors, etc. between any two coordinate frames at any desired point in time.
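The core operation tf performs can be illustrated with a minimal 2D example; real tf handles full 3D transforms and time buffering, so this is only a sketch. The mounting offset and poses below are made-up numbers:

```python
import math

def transform_point(tx, ty, theta, px, py):
    """Apply a planar rigid transform (translation tx, ty and rotation
    theta, i.e. the child frame expressed in the parent frame) to a point
    given in the child frame; return the point in parent-frame coords."""
    c, s = math.cos(theta), math.sin(theta)
    return (tx + c * px - s * py, ty + s * px + c * py)

# A laser hit 1 m ahead of the sensor, with the sensor mounted 0.1 m in
# front of the base, and the base at (2, 3) facing +y in the map frame:
bx, by = transform_point(0.1, 0.0, 0.0, 1.0, 0.0)        # sensor -> base
mx, my = transform_point(2.0, 3.0, math.pi / 2, bx, by)  # base -> map
```

Chaining the sensor→base and base→map transforms like this is exactly what walking up the tf tree does for every scan point.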

Sensor Information

The lidar sensor on the TurtleBot publishes laser readings as sensor_msgs/LaserScan messages on the /scan topic, which are used to avoid static and dynamic obstacles.

Base Controller

The navigation stack sends velocity commands in the base coordinate frame of the robot on the /cmd_vel topic.

Odometry Information

The navigation stack uses tf to determine the robot's location in the world and to relate sensor data to a static map. However, tf does not provide any information about the velocity of the robot. Because of this, the navigation stack requires that any odometry source publish both a transform and a nav_msgs/Odometry message containing velocity information, which the TurtleBot does.
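A minimal sketch of what an odometry source does internally, assuming simple Euler integration of commanded velocities (the real TurtleBot integrates wheel encoder readings, but the idea is the same):

```python
import math

def integrate_odom(x, y, theta, v, w, dt):
    """Advance a planar pose one time step from linear velocity v and
    angular velocity w (Euler dead reckoning)."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)
```

Publishing the resulting pose together with the (v, w) pair as a nav_msgs/Odometry message, plus the matching odom→base transform, is what the navigation stack asks of an odometry source.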

Map Server

The map server offers the saved map as a ROS service and/or publishes it on the /map topic.

We make use of the move_base package of the ROS navigation stack [4], [5]. It takes in information from odometry and sensor streams and outputs velocity commands to send to a mobile base.

AMCL Localization

Localization is done using the amcl package, a probabilistic localization system that implements the adaptive (KLD-sampling) Monte Carlo localization approach, which uses a particle filter to track the pose of a robot against the known map provided by the map server. During operation, amcl estimates the transform of the base frame (~base_frame_id) with respect to the global frame (~global_frame_id), but it only publishes the transform between the global frame and the odometry frame (~odom_frame_id).
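The transform amcl publishes can be illustrated in 2D: the map→odom correction is the estimated map→base pose composed with the inverse of the odometry's odom→base transform, so that chaining the published correction with live odometry recovers the estimated pose. A toy planar sketch (real tf works in 3D):

```python
import math

def compose(a, b):
    """Compose two planar transforms a∘b, each given as (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    c, s = math.cos(ath), math.sin(ath)
    return (ax + c * bx - s * by, ay + s * bx + c * by, ath + bth)

def invert(t):
    """Invert a planar transform."""
    x, y, th = t
    c, s = math.cos(th), math.sin(th)
    return (-c * x - s * y, s * x - c * y, -th)

def map_to_odom(map_to_base, odom_to_base):
    """The correction amcl publishes:
    map->odom = map->base composed with (odom->base) inverted."""
    return compose(map_to_base, invert(odom_to_base))
```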

Global and Local Cost Maps

The navigation stack uses two costmaps to store information about obstacles in the world. One costmap is used for global planning, i.e., creating long-term plans over the entire environment; the other is used for local planning and obstacle avoidance. The global costmap takes the static map from the map_server node via the /map topic. The local costmap takes in sensor data from the world, builds a 2D occupancy grid of the data, and inflates costs in a 2D costmap based on the occupancy grid and a user-specified inflation radius.
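The inflation step can be sketched with brute force on a small grid. The linear cost falloff below is an illustrative simplification (the actual costmap_2d package uses an exponential decay controlled by a cost-scaling factor; the lethal value 254 follows its convention):

```python
import math

def inflate(grid, radius, lethal=254):
    """Brute-force cost inflation on an occupancy grid (1 = obstacle,
    0 = free): each cell gets a cost that falls off linearly with the
    Euclidean distance to the nearest obstacle, reaching 0 at `radius`."""
    h, w = len(grid), len(grid[0])
    obstacles = [(r, c) for r in range(h) for c in range(w) if grid[r][c]]
    cost = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if not obstacles:
                continue
            d = min(math.hypot(r - orow, c - ocol) for orow, ocol in obstacles)
            if d == 0:
                cost[r][c] = lethal          # the obstacle cell itself
            elif d <= radius:
                cost[r][c] = int(lethal * (1.0 - d / radius))
    return cost
```

The planners then treat high-cost cells as expensive to traverse, which keeps planned paths away from obstacles by roughly the robot's footprint.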

The global planner uses the global costmap to generate a long-term plan using Dijkstra's algorithm. The last plan computed is published every time the planner computes a new path, primarily for visualization purposes. The local planner creates, locally around the robot, a value function represented as a grid map. This value function encodes the cost of traversing the grid cells. The planner's job is to use this value function to determine the dx, dy, dtheta velocities to send to the robot.
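The global planning step can be sketched as plain Dijkstra search over a grid. The real global planner works on the costmap's per-cell costs and supports richer connectivity; this sketch assumes uniform cost and 4-connected moves for illustration:

```python
import heapq

def dijkstra_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid
    (0 = free, 1 = obstacle); returns a list of cells or None."""
    h, w = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and not grid[nr][nc]:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None  # no feasible path
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```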

Recovery Behaviors

Should the robot get stuck, recovery behaviors execute in the following order:

  • Obstacles outside a user-specified region are cleared from the robot's map.
  • The robot performs an in-place rotation to clear out space.
  • The robot clears its map more aggressively, removing all obstacles outside the rectangular region in which it can rotate in place.
  • Another in-place rotation follows.
  • If the goal is still infeasible, the robot notifies the user and aborts.
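The escalation above amounts to trying behaviors in order until one frees the robot. A minimal sketch, where the behavior list and the `is_stuck` check are hypothetical hooks rather than the move_base API:

```python
def run_recovery(behaviors, is_stuck):
    """Try recovery behaviors in order until the robot is no longer stuck.
    `behaviors` is an ordered list of (name, action) pairs; `is_stuck`
    re-checks the robot after each attempt. Returns the name of the
    behavior that freed the robot, or None if every behavior failed
    (goal infeasible: notify the user and abort)."""
    for name, action in behaviors:
        action()
        if not is_stuck():
            return name
    return None
```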

User Interface and its Integration

Disclaimer: This section and the functionality it describes were developed by Palash.

Objectives

Extendable:

The rqt package was used to develop the custom user interface. rqt is a software framework of ROS that implements the various GUI tools in the form of plugins. All of the existing GUI tools can run as dockable windows within rqt. The tools can still run in the traditional standalone way, but rqt makes it easier to manage all the various windows on the screen at once.

Advantages of the rqt framework:

  • Standardized common GUI procedures (start/shutdown hooks, restoring previous states).
  • Multiple widgets can be docked in a single window (no need to open multiple windows).
  • Existing Qt widgets are easily turned into rqt plugins.
  • Good support can be expected at answers.ros.org (the ROS community Q&A website), since the rqt developers are active.
  • Multi-platform (basically wherever Qt and ROS run) and multi-language (Python, C++) support.
  • Manageable lifecycle: rqt plugins use a common API, which makes maintenance and reuse easier.

Functionalities:

The following functionalities were implemented in the project:

  • The user can select the place where the package is received and the place where it is delivered. The following figure shows a drop-down list for selecting destinations.

  • The GUI can display messages according to the state of the robot. In the figure below, the GUI confirms initiation of the delivery process.

Similarly, other messages are displayed, e.g., start position reached, goal position reached, or help required when the TurtleBot is stuck.

  • The GUI displays the Kinect camera feed.
  • All existing rqt functionalities can be docked in the same window, e.g., the topic list.

Implementation

The implementation principle is based on the following figure:

  • amcl_pose_call_back: The GUI subscribes to the amcl pose topic and receives pose updates of the TurtleBot in the map that was built. This callback function generates the necessary messages and checks when the TurtleBot reaches a destination.
  • The states for the turtlebot:
    • Idle state: robot is at docking station.
    • Delivery state, towards start point: robot is in motion towards start point to receive package.
    • Delivery state, waiting at start point: robot waits until the user submits the package and presses “GoToGoal”.
    • Delivery state, towards goal point: robot is in motion towards goal point to deliver package.
    • Idle state, towards docking station: robot is in motion towards the docking station.

User commands are only possible when the robot is in the idle state; any user command issued during a delivery state is ignored.
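The state handling above can be sketched as a small transition table. The state names follow this report, while the event names and the table itself are illustrative assumptions about the callback logic:

```python
# States the GUI tracks for the TurtleBot.
IDLE = "idle"
TO_START = "delivery: towards start point"
WAIT_START = "delivery: waiting at start point"
TO_GOAL = "delivery: towards goal point"
TO_DOCK = "idle: towards docking station"

TRANSITIONS = {
    (IDLE, "deliver"): TO_START,
    (TO_START, "reached_start"): WAIT_START,
    (WAIT_START, "go_to_goal"): TO_GOAL,
    (TO_GOAL, "reached_goal"): TO_DOCK,
    (TO_DOCK, "reached_dock"): IDLE,
}

def next_state(state, event):
    """Events without a transition entry leave the state unchanged, so
    user commands only take effect in the idle state, as described above."""
    return TRANSITIONS.get((state, event), state)
```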

References

[1] S. Kohlbrecher, “Hector Mapping - ROS Wiki.” [Online]. Available: http://wiki.ros.org/hector_mapping. [Accessed: 28-May-2018].

[2] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, “Monte Carlo localization for mobile robots,” in Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No.99CH36288C), vol. 2, pp. 1322–1328.

[3] D. Fox, W. Burgard, and S. Thrun, “The dynamic window approach to collision avoidance,” IEEE Robot. Autom. Mag., vol. 4, no. 1, pp. 23–33, Mar. 1997.

[4] E. Marder-Eppstein, D. V. Lu, M. Ferguson, and A. Hoy, “navigation - ROS Wiki.” [Online]. Available: http://wiki.ros.org/navigation. [Accessed: 28-May-2018].

[5] E. Marder-Eppstein, D. V. Lu, M. Ferguson, and A. Hoy, “move_base - ROS Wiki.” [Online]. Available: http://wiki.ros.org/move_base. [Accessed: 28-May-2018].