Robot Operating System

Overview

This page serves as a short introduction to ROS for the new or potential user. Although ROS is a tremendously complex and multifaceted software package, this page endeavors to outline the basic uses and functionality provided by the ROS framework. It does this by example, discussing the high-level design of a ROS system developed by Jake Ware and Jarvis Schultz in 2011 for the puppeteer robot system. There is also a short "highlights" section that directs new users towards some useful ROS features that might not be readily apparent.

Introduction

Above all else, ROS should be seen as a tool to create and manage complex electromechanical systems. Originally developed by the Stanford Artificial Intelligence Laboratory in 2007, the ROS project was adopted by Willow Garage in 2008 and remains in their care. The following is Willow Garage's description of ROS:

"ROS is an open-source, meta-operating system for your robot. It provides the services you would expect from an operating system, including hardware abstraction, low-level device control, implementation of commonly-used functionality, message-passing between processes, and package management. It also provides tools and libraries for obtaining, building, writing, and running code across multiple computers." [Source: ROS Intro]

All of this is true, but beneath the technical sophistication the central point is that ROS enables groups of people to work on complex projects by providing a common, well-organized framework while adding minimal overhead.

Jake Ware and Jarvis Schultz gave a presentation on ROS to the LIMS lab in the spring of 2011. Although not comprehensive, it covers the overall structure and use of ROS and some of the utilities it provides, discusses some applications, and goes over the pros and cons of using ROS in a project.

LIMS ROS Presentation: Part 1, Part 2, Part 3, Part 4

Willow Garage ROS Compilation: Three Years

Getting Started

Installation

Currently, ROS is fully supported only on Ubuntu Linux. The full list of supported systems can be found here, and a detailed installation walkthrough can be found here. Installation can take anywhere from 45 minutes to several hours, depending on the speed of your internet connection.

Tutorials

If you are planning on using ROS for a long-term project, it is absolutely worth investing the time to work through the tutorials provided on the ROS website. Although there are many more tutorials focused on specific stacks and packages, the introductory tutorials are the best place to start and can be found here. If you need a quick refresher, or are having trouble remembering some of the command line tools, you can use this cheat sheet.

Example

In the interest of demonstrating some basic techniques and good practices in ROS, the following example system is presented and described. First, the overall layout and structure of the system will be explained and justified. This is followed by a short description of each of the packages and a link to download the actual code. The purpose of this system is to perform open- and closed-loop control of a mass hung from a winch on a puppeteer robot, using the Microsoft Kinect for object tracking. The majority of this code was written by either Jarvis Schultz or Jake Ware; credit is given for the parts that were not original.

System Overview

As mentioned above, the purpose of this system is to perform open- and closed-loop tracking of a mass hanging from a puppeteer robot's winch. A video of the open-loop version of the system can be found here. For the purposes of this article, we will not discuss the robot's code or the code in the Kinect stack. Treating the Kinect as a black box is particularly appropriate because that software is updated and maintained by Willow Garage. Assuming these two systems perform as they should, we can focus on the six nodes written specifically for this system, plus a shared message package. They are as follows:


Original Nodes:

Serial Node (C++): Interfaces with the robot through a serial port assigned to the FTDI cable that attaches to the XBee wireless chip. As a safety failsafe, it also watches stdin for any key strike and executes an emergency stop when it sees one.

Estimator Node (C++): Collects state information about the hanging mass from the object tracker, and about the robot's current position from the robot's encoder-based odometry calculations. Although the current version of the Control Node performs these calculations, this node will eventually be responsible for computing the string length and the robot and mass velocities.

Control Node (Python): Uses the current state, last state, and time to calculate the proper gains and control inputs for the next time step.

Object Tracker Node (C++): Finds the hanging mass's location in the point cloud data generated by the openni_camera node developed by Willow Garage.

Marker Node (C++): Generates 3D visuals for the robot and mass and displays them in the proper orientation and position in rviz, ROS's visualization software.

Keyboard Node (C++): This node watches stdin for keyboard input and modifies the operating condition of the system according to a defined command set.

Puppeteer Messages (NA): A collection of all the message definitions for the topics and services used in this system.

Borrowed Nodes

Kinect Nodes: This is a black box for us and consists of openni_camera and several other ROS packages. This software must be started before the system can function.

rviz: ROS's visualization software that is extremely useful for debugging and working with 3D data.

The following block diagram illustrates the flow of information between these nodes. The timing of the system is driven by the 30 Hz rate of the Kinect: the Kinect drives the openni (Kinect) nodes, the openni nodes drive the object tracker, the object tracker drives the estimator, the estimator drives the control node, the control node sends a new command to the robot, and the process repeats. In the long run, the system won't be driven by the 30 Hz rate of the Kinect, but by an independent timer that will get robot position updates more frequently.

Puppeteer Block Diagram

Details and videos of the system can be found on the main research page for the puppeteer project.

Kinect Overview

It is helpful to have some background information on the Kinect to understand how this system operates. The Microsoft Kinect is a motion-sensing device, built on depth-sensing technology from PrimeSense, released for the Xbox 360 gaming platform to let the console track the user's motion and gestures. Because of its low cost and relatively accurate sensor, the open source software community quickly rallied around it. Within three hours of its release, the Kinect's protocol had been hacked and drivers were released under an open source license. This resulted in the OpenKinect project and the implementation of these drivers in several different languages and frameworks. The most significant of these efforts was Willow Garage's decision to support the Kinect and create a stack for it in ROS. Shortly after, PrimeSense released its own open source drivers and NITE middleware, which gave users similar versions of its skeleton tracking and edge detection algorithms. All of this software was grouped under the OpenNI project. Willow Garage quickly adopted these drivers and stopped supporting the OpenKinect version. In parallel, Willow Garage also began developing a new version of its point cloud library, PCL (the Point Cloud Library), which made working with the raw Kinect data much easier. See below for a hardware summary and relevant links.


Kinect Hardware Summary:

1 RGB camera (640x480)

1 infrared camera (640x480 with 2048 depth levels)

1 infrared emitter

Structured Light approach to measuring depth

30 Hz update rate


Relevant Links:

PrimeSense Homepage

PrimeSense NITE Middleware

OpenNI Homepage

Point Cloud Library Homepage

Microsoft Kinect Homepage

Kinect Teardown - iFixit

Structured Light - Wikipedia

Kinect Projects

Installation

All of the packages used in this example can be found on the following github page. Here are individual links to the packages: (Serial Node, Estimator Node, Control Node, Object Tracker Node, Marker Node, Keyboard Node, Puppeteer Messages)

Once you are in a folder on the ROS package path, enter the following commands to download and build the package:

roscreate-pkg PACKAGE_NAME                           # create an empty ROS package
cd PACKAGE_NAME
git clone git@github.com:jakeware/PACKAGE_NAME.git   # clone the source into it
cd PACKAGE_NAME
mv * .git .gitignore ../                             # move the repository contents (including the git metadata) up into the ROS package
cd ..
rm -r PACKAGE_NAME                                   # remove the now-empty clone directory
rosmake                                              # build the package and its dependencies

If you would just like to browse the code, a zip file with all of the source can be found here.

Serial Node

This package is the only node with access to the serial port, and is therefore the only node that can use the XBee to talk with the robot. Its primary function is to provide two services to the rest of the system: a speed command service and a position request service. When another node calls either of these services, the serial node takes the incoming message and compiles a string of custom floats to send to the robot. For a speed command, it simply sends the string out over the XBee and replies to the requesting node with whether or not the operation was successful. For a position request, it compiles a similar string and then waits for the reply from the robot; when the reply arrives, it passes this information back to the node that made the original request. If it does not get a reply from the robot, it eventually times out and returns a failure to the requesting node. This node also has an added safety feature: it watches stdin on its terminal for a key strike and, if it sees one, shuts down both services and begins sending the stop string repeatedly.
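
Since the exact service names and message types live in the Puppeteer Messages package, the following client-side sketch is only illustrative: the type names SpeedCommand and PositionRequest, the service names, and the field names are assumptions standing in for the real definitions.

#!/usr/bin/env python
import rospy
# Hypothetical service types standing in for the real Puppeteer Messages definitions:
from puppeteer_msgs.srv import SpeedCommand, PositionRequest

def send_speed(left, right, winch):
    rospy.wait_for_service('speed_command')  # block until the serial node is up
    try:
        speed = rospy.ServiceProxy('speed_command', SpeedCommand)
        return speed(left, right, winch).success  # did the serial write succeed?
    except rospy.ServiceException as e:
        rospy.logerr("speed_command failed: %s", e)
        return False

def request_position():
    rospy.wait_for_service('position_request')
    try:
        position = rospy.ServiceProxy('position_request', PositionRequest)
        resp = position()  # the serial node returns a failure if the robot times out
        return resp.x, resp.y, resp.theta
    except rospy.ServiceException as e:
        rospy.logerr("position_request failed: %s", e)
        return None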

Estimator Node

The estimator node is responsible for collecting the state of both the mass and the robot. Currently, it is driven by the Kinect frequency and will not function unless the object tracker publishes a new mass position on its outgoing topic. Once the estimator node gets a new mass position, it calls the position request service and waits for the serial node to pass back the robot's reported position. Once it has all of this information, it assembles the system state and publishes it on the system_state topic.
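
A stripped-down sketch of that flow is shown below; the message and service types (and their fields) are hypothetical stand-ins for the real Puppeteer Messages definitions.

#!/usr/bin/env python
import rospy
# Hypothetical types standing in for the real Puppeteer Messages definitions:
from puppeteer_msgs.msg import ObjectPosition, SystemState
from puppeteer_msgs.srv import PositionRequest

def mass_cb(mass_pos):
    # Each new mass position from the object tracker triggers one
    # position request to the serial node.
    try:
        robot = position_srv()
    except rospy.ServiceException as e:
        rospy.logerr("could not get robot position: %s", e)
        return
    state = SystemState()             # assemble the full system state...
    state.mass = mass_pos
    state.robot_x, state.robot_y = robot.x, robot.y
    state_pub.publish(state)          # ...and hand it to the control node

rospy.init_node('estimator')
rospy.wait_for_service('position_request')
position_srv = rospy.ServiceProxy('position_request', PositionRequest)
state_pub = rospy.Publisher('system_state', SystemState)
rospy.Subscriber('object1_position', ObjectPosition, mass_cb)
rospy.spin()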

Object Tracker

The object tracker is responsible for finding the mass position given an RGBD point cloud from the Kinect. It has two modes of operation. First, it looks at the point cloud covering the entire area under the puppeteer stage and finds the centroid of all the points in this cloud. Once it has found the object, it looks only at a cube several inches wide around the last valid centroid position. This reduces the computation time and cuts out noise dispersed across the rest of the point cloud. If it ever computes a centroid from a cloud with very few data points, it assumes it has lost the mass and returns to searching the entire puppeteer stage. Although the mass position is passed to the estimator node through the object1_position topic, the node also publishes a point cloud for the object, a point cloud for all of the Kinect data, and a frame for the mass position; all three can be viewed in rviz.
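
The real node is written in C++ against the point cloud library, but the two-mode search logic is simple enough to sketch in Python with numpy, treating the cloud as an Nx3 array of XYZ points (the window size and point-count threshold below are illustrative, not the values the node actually uses):

import numpy as np

WINDOW = 0.05      # half-width of the search cube, in meters (illustrative)
MIN_POINTS = 20    # fewer points than this means the mass is lost (illustrative)

last_centroid = None   # None means "search the whole stage"

def find_mass(cloud):
    # cloud: an Nx3 numpy array of XYZ points from the Kinect
    global last_centroid
    if last_centroid is not None:
        # Restrict the search to a small cube around the last valid
        # centroid; this is cheaper and rejects noise elsewhere.
        mask = np.all(np.abs(cloud - last_centroid) < WINDOW, axis=1)
        cloud = cloud[mask]
    if len(cloud) < MIN_POINTS:
        # Too few points: assume the object is lost and fall back to
        # searching the entire stage on the next cloud.
        last_centroid = None
        return None
    last_centroid = cloud.mean(axis=0)   # centroid of the remaining points
    return last_centroid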

Control Node

The control node lies downstream of the estimator node and is driven by updates to the system_state topic. As described in the system overview, it uses the current state, the last state, and the elapsed time to calculate the proper gains and control inputs for the next time step.
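
A skeletal version of that pattern is sketched below; the type and service names are again hypothetical stand-ins, and compute_control is a placeholder for the actual control law.

#!/usr/bin/env python
import rospy
# Hypothetical types standing in for the real Puppeteer Messages definitions:
from puppeteer_msgs.msg import SystemState
from puppeteer_msgs.srv import SpeedCommand

last_state = None
last_time = None

def compute_control(state, prev, dt):
    # Placeholder for the real control law, which computes gains and
    # inputs from the current state, the last state, and the time step.
    return 0.0, 0.0, 0.0

def state_cb(state):
    global last_state, last_time
    now = rospy.Time.now()
    if last_state is not None:
        dt = (now - last_time).to_sec()
        left, right, winch = compute_control(state, last_state, dt)
        try:
            speed_srv(left, right, winch)  # hand the command to the serial node
        except rospy.ServiceException as e:
            rospy.logerr("speed command failed: %s", e)
    last_state, last_time = state, now

rospy.init_node('controller')
rospy.wait_for_service('speed_command')
speed_srv = rospy.ServiceProxy('speed_command', SpeedCommand)
rospy.Subscriber('system_state', SystemState, state_cb)
rospy.spin()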

Keyboard Node
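
As summarized in the system overview, this node watches stdin for keyboard input and modifies the operating condition of the system according to a defined command set.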

Camera Node

Highlights
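
The items below are a few pieces of ROS functionality that new users might not discover on their own but that prove useful in practice; short, hedged sketches are given where a snippet helps.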

Timers
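
ROS provides timer APIs that call a function at a fixed rate, which is how a node can run off its own clock instead of being driven by incoming messages (as the example system eventually plans to do). A minimal rospy sketch:

import rospy

def timer_cb(event):
    # event is a rospy TimerEvent carrying timing diagnostics
    rospy.loginfo("timer fired at %s", event.current_real)

rospy.init_node('timer_demo')
rospy.Timer(rospy.Duration(0.1), timer_cb)  # fire timer_cb every 100 ms
rospy.spin()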

Parameter Server
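
The parameter server is a shared dictionary, hosted by the ROS master, that nodes can use for configuration values such as gains, so they can be set at launch time rather than hard-coded. A short sketch (the parameter names are illustrative):

import rospy

rospy.init_node('param_demo')
kp = rospy.get_param('~kp', 1.0)                 # read a private parameter, with a default
rospy.set_param('/operating_condition', 'idle')  # set a global parameter (illustrative name)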

Severity Levels
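
ROS logging statements are tagged with a severity level (DEBUG, INFO, WARN, ERROR, or FATAL), so output can be filtered by importance instead of dumped to the terminal with print statements. In rospy:

import rospy

rospy.init_node('log_demo')
rospy.logdebug("hidden unless the node's log level is DEBUG")
rospy.loginfo("normal status message")
rospy.logwarn("unexpected but recoverable")
rospy.logerr("an operation failed")
rospy.logfatal("a failure severe enough to stop the node")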

Launch Files
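
Launch files are XML files that let roslaunch start a whole collection of nodes (and a roscore, if one is not already running) with a single command of the form roslaunch PACKAGE_NAME FILE.launch, and they can also set values on the parameter server. For a system like the one above, this replaces opening a separate terminal for each node.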

Command Line Tools
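
ROS ships with command line tools for inspecting a running system, which are invaluable for debugging. A few examples: rosnode list shows the running nodes, rostopic list and rostopic echo show the active topics and the messages on them, rostopic hz reports how fast a topic is publishing (useful for checking the 30 Hz Kinect chain above), rosservice call invokes a service by hand, and rosbag records topics to a file for later playback. The cheat sheet linked in the Tutorials section covers these and more.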

Resources

ROS on Wikipedia

ROS Homepage

ROS Documentation

ROS Getting Started

ROS Tutorials

git Homepage

github Homepage