High Speed Vision System and Object Tracking (wiki page revision by ClaraSmart, 2010-06-05)
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, a number of intrinsic and extrinsic parameters must be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scaling and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
<br />
Once downloaded, open the gui in matlab by going to the toolbox directory and typing 'calib_gui' in the command window, then selecting standard memory.<br />
<br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 30mm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same base name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be used to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
x_{c1}/z_c & x_{c2}/z_c & \cdots & x_{cn}/z_c \\<br />
y_{c1}/z_c & y_{c2}/z_c & \cdots & y_{cn}/z_c \\<br />
z_c/z_c & z_c/z_c & \cdots & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, and z_c denote coordinates in the camera frame, and z_c is the focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
xn = Rc_1*xn;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scaling and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (\text{focal length}) \times (\text{millimeters per pixel}) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(1,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, T1, S2, T2:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, percentage error can be calculated by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
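As a worked illustration, the scale-and-translation step above (building A and b from the normalized, rotated corner matrix and solving Ax = b) can be sketched in NumPy. This is a hypothetical translation of the Matlab procedure, not part of the Calibration Toolbox; `xn` is the [3xn] normalized, rotated corner matrix and `X1` holds the toolbox's world coordinates.

```python
import numpy as np

def fit_scale_translation(xn, X1):
    """Fit S1, T1, S2, T2 so that X1 ~ [S1*xn_x + T1; S2*xn_y + T2].

    xn : (3, n) normalized, rotated grid-corner coordinates
    X1 : (2, n) world coordinates of the same corners (from the toolbox)
    """
    n = xn.shape[1]
    A = np.zeros((2 * n, 4))
    b = np.zeros(2 * n)
    A[0::2, 0] = xn[0]   # x rows: S1*xn_x + T1 = X1_x
    A[0::2, 1] = 1.0
    A[1::2, 2] = xn[1]   # y rows: S2*xn_y + T2 = X1_y
    A[1::2, 3] = 1.0
    b[0::2] = X1[0]
    b[1::2] = X1[1]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares A\b
    return params  # [S1, T1, S2, T2]
```

Reapplying the fitted parameters to `xn` and comparing against `X1` gives the percent error described above.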
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible to both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be computed. The raw data is inaccurate because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken: the resulting camera shutter time is very small, so it is important to provide plenty of light. The high frequency also has to be taken into consideration when processing the images. Processing one thousand 1024x1024-pixel images per second in real time is both infeasible and unnecessary, so the accompanying C program works by selecting one or several regions of interest to focus on. The program applies a threshold to these regions, calculates an area centroid based on the number of white pixels, and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control the number of images taken in one second, and are measured in microseconds. FRAME_TIME is the amount of time required to take one image. EXPOSURE is the amount of time the camera shutter is open, and should always be less than the frame time. A FRAME_TIME:EXPOSURE ratio of 5:2 seems to work well. To run the camera at 1000 images/s, for example, FRAME_TIME and EXPOSURE should be set to 1000 and 400, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so at 1000Hz with two regions, each region would be sampled at 500 images/s, with the camera alternating between the two. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The threshold (0-255) is the cutoff value for the black/white boundary. It will have to be adjusted depending on ambient light and the exposure time, and should be set such that the object of interest or fiducial marker falls in the white realm while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to either one or zero. When equal to one, the program displays the region or regions of interest, which is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
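Putting the descriptions above together, a constants.h configured for 1000 images/s over two alternating regions might look like the following sketch. The macro names come from the description above; the numeric values (and the blob geometry in particular) are illustrative placeholders, not the lab's actual settings.

```c
/* Illustrative constants.h fragment -- values are example placeholders. */

#define FRAME_TIME  1000   /* us per frame -> 1000 images/s              */
#define EXPOSURE     400   /* us shutter open; keep < FRAME_TIME (~5:2)  */

#define SEQ      { ROI_0, ROI_1 }  /* cycle through two regions...       */
#define SEQ_LEN  2                 /* ...so each is sampled at 500 img/s */

#define INITIAL_BLOB_XMIN    400   /* blob must enclose the object's     */
#define INITIAL_BLOB_YMIN    400   /* initial position                   */
#define INITIAL_BLOB_WIDTH   128
#define INITIAL_BLOB_HEIGHT  128

#define THRESHOLD        128   /* 0-255 black/white cutoff               */
#define DISPLAY_TRACKING   1   /* 1 while tuning, 0 when taking data     */
#define DTIME             10   /* seconds of data collected after 'd'    */
```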
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (\text{focal length}) \times (\text{millimeters per pixel}) \, </math><br />
<br />
Once this distance is calculated, the parallel equations can be obtained for a line in 3D with 2 known points:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + z_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + z_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
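As a sanity check on the six-equation system, here is a hypothetical NumPy sketch that stacks the three row equations for each of the two rays and solves the [6x3] least-squares problem. It is illustrative only and independent of the Matlab implementation that follows.

```python
import numpy as np

def line_rows(P1, P2):
    """Three linear equations satisfied by points on the line P1 -> P2."""
    x1, y1, z1 = P1
    x2, y2, z2 = P2
    A = np.array([[y2 - y1, x1 - x2, 0.0],
                  [z2 - z1, 0.0,     x1 - x2],
                  [0.0,     z2 - z1, y1 - y2]])
    b = np.array([x1 * (y2 - y1) + y1 * (x1 - x2),
                  x1 * (z2 - z1) + z1 * (x1 - x2),
                  y1 * (z2 - z1) + z1 * (y1 - y2)])
    return A, b

def intersect(P1a, P1b, P2a, P2b):
    """Least-squares intersection of line P1a->P1b with line P2a->P2b."""
    A1, b1 = line_rows(np.asarray(P1a, float), np.asarray(P1b, float))
    A2, b2 = line_rows(np.asarray(P2a, float), np.asarray(P2b, float))
    A = np.vstack([A1, A2])          # [6x3]
    b = np.concatenate([b1, b2])     # length 6
    xyz, *_ = np.linalg.lstsq(A, b, rcond=None)  # A\b
    return xyz
```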
<br />
<nowiki>% The function "calc_coords" takes as inputs the raw data for vision in<br />
% the XZ and YZ planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx4]: column 1 holds the time<br />
% stamp of each data point and columns 2-4 hold the least squares<br />
% approximation of the (x,y,z) coordinates based off of the vectors from<br />
% the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = ((FC_xz(1,1) + FC_xz(2,1)) / 2) / conv_xz; % Take the average of the two focal lengths (pixels),<br />
dist_origin_yz = ((FC_yz(1,1) + FC_yz(2,1)) / 2) / conv_yz; % then divide by the conversion factor (pixels/mm)<br />
% to get the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined (in world coordinates) based on the intrinsic and extrinsic parameters from calibration with a MatLab function.<br />
(It is assumed that the object lies in a plane of constant, known Z.)<br />
<br />
[World_x; World_y; World_z] = Rc_ext' * (zdist_pix * [normalize([pix_x; pix_y], fc, cc, kc, alpha_c); 1] - Tc_ext)<br />
<br />
Where:<br />
Rc_ext and Tc_ext are the rotation matrix and translation vector from the extrinsic calibration<br />
zdist_pix is the distance from the focal point to the object plane, in pixels. (Measure the distance from the camera to the object with a meter stick, then convert to pixels by determining the number of pixels per cm in a calibration image at that distance.)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_c are parameters from the intrinsic camera calibration<br />
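A hedged NumPy sketch of this planar back-projection follows. It assumes consistent units for the depth and for Tc_ext, and that the toolbox's normalization (undistortion and division by focal length) has already been applied; the function name `pixel_to_world` is made up for illustration.

```python
import numpy as np

def pixel_to_world(xn, z_cam, Rc_ext, Tc_ext):
    """Back-project a normalized image point onto a plane of known depth.

    xn      : (x, y) distortion-free, focal-length-normalized image
              coordinates (output of the toolbox's normalize step)
    z_cam   : depth of the object plane along the optical axis, in the
              same units as Tc_ext
    Rc_ext  : 3x3 extrinsic rotation,  X_cam = Rc_ext @ X_world + Tc_ext
    Tc_ext  : length-3 extrinsic translation
    """
    X_cam = z_cam * np.array([xn[0], xn[1], 1.0])  # point in camera frame
    return Rc_ext.T @ (X_cam - Tc_ext)             # back to world frame
```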
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(:,1) = image number from which data was taken. <br />
% ydisp(:,2) = time at which data was taken.<br />
% ydisp(:,3) = x or y position of object.<br />
% ydisp(:,4) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the row index,<br />
% image number, and height of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 1);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 1);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v / mu(2,1); % rescale derivative from normalized x back to time units<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end<br />
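The same per-bounce fit can be sketched with NumPy's polyfit/polyval. This hypothetical version skips the centering-and-scaling (`mu`) that the Matlab code uses for numerical conditioning, so the vertex and velocity come out directly in time units.

```python
import numpy as np

def fit_parabola_segment(t, z):
    """Second-order fit to one segment between two consecutive minima.

    Returns fitted heights, fitted velocities (dz/dt), and the vertex
    (apex) of the parabola.
    """
    coeffs = np.polyfit(t, z, 2)      # z ~ a*t^2 + b*t + c
    a, b, c = coeffs
    z_fit = np.polyval(coeffs, t)
    v_fit = 2 * a * t + b             # analytic derivative of the fit
    t_apex = -b / (2 * a)             # vertex of the parabola
    z_apex = np.polyval(coeffs, t_apex)
    return z_fit, v_fit, (t_apex, z_apex)
```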
<br />
<br />
==Error Minimization of Calibration Coefficients ==<br />
<br />
Following the computation of the intrinsic parameters, the relative error can be analyzed using the calibration toolbox. To see the error in graphical form, click 'Analyze Error' in the main gui. This plot shows the error in pixel units, with different point colors representing the different images used for calibration. To begin to reduce this error, click on an outlier point; the image number and point number will be displayed in the matlab command window. Click other outliers of the same color to see whether a specific image accounts for many of the errors.<br />
<br />
You can also view the projected calibration points on a specific image by clicking 'Reproject on Image' and entering the number of the error-prone image at the prompt. This will allow you to see the points which are creating the most error. If the calibration points are significantly off the corners of the grid, it may be a good idea to use 'Add/Suppress Images' to remove that image and then recalibrate. (You can also click all corners manually.)<br />
<br />
However, if the entire image is not error prone, it is possible to reduce the error by changing the dimensions of the search box in which the calibration algorithm looks for a corner. This is accomplished by changing the values of wintx and winty, whose default values during calibration are 5. To do this, click 'Recomp Corners' in the calibration gui, then at the prompt for wintx and winty enter a different (typically larger) value, e.g. 8. Then, at the prompt for the numbers of the images for which to recompute the corners, enter the numbers of the images with the error outliers as determined above. <br />
<br />
Repeat this process until the error is within reason (typically 3-4 pixels, but dependent on resolution, distance from the object, pixel size, and application).<br />
<br />
Further information and examples can be found in [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html the first calibration example].<br />
<br />
==Example Intrinsic Parameters (LIMS Lab, Winter 2010) ==<br />
<br />
The below example was completed for the high speed vision system in the LIMS lab during Winter Quarter 2010.<br />
<br />
Number of Images = 15<br />
<br />
Error optimization completed by adjusting the values of winty and wintx for various images. <br />
<br />
Calibration results after optimization (with uncertainties):<br />
<br />
Focal Length: fc = [ 1219.49486 1227.09415 ] ± [ 1.98092 1.95916 ]<br />
<br />
Principal point: cc = [ 502.28854 524.25824 ] ± [ 1.26522 1.07298 ]<br />
<br />
Skew: alpha_c = [ 0.00000 ] ± [ 0.00000 ] => angle of pixel axes = 90.00000 ± 0.00000 degrees<br />
<br />
Distortion: kc = [ -0.39504 0.36943 -0.00018 -0.00105 0.00000 ] ± [ 0.00307 0.01033 0.00016 0.00021 0.00000 ]<br />
<br />
Pixel error: err = [ 0.27578 0.25400 ]<br />
<br />
Note: The numerical errors are approximately three times the standard deviations (for reference).<br />
<br />
[[Image:CalibImages.jpg|thumb|200px|Mosaic of Calibration Images|left]]<br />
[[Image:ErrorPlot.jpg|thumb|400px|Error Plot following optimization|center]]</div>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
<br />
Once downloaded, to open the gui in matlab, go to the toolbox directory then type 'calib_gui' in the command window. (Then select standard memory).<br />
<br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be use to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
<br />
x_c1/z_c & x_c2/z_c &... & x_cn/z_c \\<br />
y_c1/z_c & y_c2/z_c &... & y_cn/z_c \\<br />
z_c/z_c & z_c/z_c &... & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, z_c, denote coordinates in the camera frame. z_c = focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
[xn] = Rc_1*x_n;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (\text{focal length})\times(\text{number of millimeters per pixel}) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(1,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, T1, S2, T2:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, the percentage error can be computed by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
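Note that the x-rows and y-rows of A are decoupled, so the least squares problem splits into two independent line fits of the form X = S*xn + T. A minimal Python sketch with synthetic data (S = 600 and T = 15 are made-up values, not calibration results):

```python
def fit_scale_translation(xn_axis, X_axis):
    """Ordinary least squares fit of X = S*xn + T for one axis.
    Equivalent to solving the corresponding rows of the A*x = b system."""
    n = len(xn_axis)
    mean_xn = sum(xn_axis) / n
    mean_X = sum(X_axis) / n
    S = (sum((a - mean_xn) * (b - mean_X) for a, b in zip(xn_axis, X_axis))
         / sum((a - mean_xn) ** 2 for a in xn_axis))
    T = mean_X - S * mean_xn
    return S, T

# Synthetic data generated with S = 600, T = 15; the fit recovers them.
xn_x = [0.0, 0.1, 0.2, 0.3]
X_x = [600.0 * v + 15.0 for v in xn_x]
S1, T1 = fit_scale_translation(xn_x, X_x)
```

Running the same fit on the y-rows yields S2 and T2; stacking everything into one A\b solve, as the text does, gives the identical answer.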
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of a single flat plane, a grid has to be constructed consisting of two planes at 90 degrees, as shown in the figure. This grid must be visible to both the "real" and "virtual" cameras. The purpose of this is to create a common origin for both cameras, which will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be computed. The raw data alone is not sufficient because it represents the projection of the object's position onto each calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very short, so it is important to provide plenty of light. The high frequency also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images per second in real time proves impossible as well as unnecessary. The accompanying C program works by selecting one or several regions of interest to focus on. The program applies a threshold to these regions, calculates an area centroid based on the white pixels, and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
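As a toy illustration of the threshold-and-centroid step (a Python sketch, not the actual C source; the image values and threshold are made up):

```python
def centroid_of_white(image, threshold):
    """Binarize a grayscale image (list of rows, values 0-255) and return
    the area centroid (row, col) of the white pixels, or None if none."""
    count = row_sum = col_sum = 0
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value >= threshold:
                count += 1
                row_sum += r
                col_sum += c
    if count == 0:
        return None
    return (row_sum / count, col_sum / count)

# A bright 2x2 blob on a dark background; its centroid is (1.5, 1.5).
img = [[0,   0,   0,   0],
       [0, 200, 210,   0],
       [0, 220, 255,   0],
       [0,   0,   0,   0]]
center = centroid_of_white(img, 128)
```

The real program does this only inside the current region of interest and then shifts that region so the blob stays centered between frames.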
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control the number of images taken in one second, and are measured in microseconds. Frame time is the amount of time required to take one image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A ratio of 5:2, FRAME:EXPOSURE, seems to work well. To run the camera at 1000 images/s, for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
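The 5:2 rule of thumb can be captured in a small helper (a hypothetical Python sketch, not part of the actual 'constants.h'):

```python
def frame_and_exposure(images_per_second, exposure_fraction=0.4):
    """Return (FRAME_TIME, EXPOSURE) in microseconds for a target frame
    rate, keeping exposure at 2/5 of the frame time (the 5:2 ratio)."""
    frame_time = round(1_000_000 / images_per_second)
    exposure = round(frame_time * exposure_fraction)
    return frame_time, exposure
```

For 1000 images/s this reproduces the FRAME_TIME = 1000, EXPOSURE = 400 values quoted above.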
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so for 1000Hz with two regions, each region would have a frequency of 500 images/s, and the camera would alternate between the two regions. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The cutoff value (0-255) for the black/white boundary. The threshold will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiducial marker falls in the white range while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to one or zero. When equal to one, it displays the region or regions of interest. This is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (\text{focal length})\times(\text{number of millimeters per pixel}) \, </math><br />
<br />
Once this distance is calculated, the equations for a line in 3D through two known points can be obtained:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + z_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + z_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
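The stacked six-equation system can be sketched end to end in Python (an illustration with two made-up rays; in the real setup each line runs from a camera's principal point through the projected object coordinates):

```python
def line_rows(p1, p2):
    """Rows of A and entries of b for the 3D line through p1 and p2,
    in the same form as the three rearranged equations above."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    A = [[y2 - y1, x1 - x2, 0.0],
         [z2 - z1, 0.0, x1 - x2],
         [0.0, z2 - z1, y1 - y2]]
    b = [x1 * (y2 - y1) + y1 * (x1 - x2),
         x1 * (z2 - z1) + z1 * (x1 - x2),
         y1 * (z2 - z1) + z1 * (y1 - y2)]
    return A, b

def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    aug = [list(M[i]) + [v[i]] for i in range(3)]
    for i in range(3):
        pivot = max(range(i, 3), key=lambda r: abs(aug[r][i]))
        aug[i], aug[pivot] = aug[pivot], aug[i]
        for r in range(i + 1, 3):
            f = aug[r][i] / aug[i][i]
            for c in range(i, 4):
                aug[r][c] -= f * aug[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (aug[i][3] - sum(aug[i][c] * x[c] for c in (1, 2) if c > i)) / aug[i][i]
    return x

def intersect_lines(line_a, line_b):
    """Least-squares intersection of two 3D lines: stack the six equations
    and solve the normal equations (A'A)x = A'b, mimicking Matlab's A\b."""
    Aa, ba = line_rows(*line_a)
    Ab, bb = line_rows(*line_b)
    A, b = Aa + Ab, ba + bb
    AtA = [[sum(A[r][i] * A[r][j] for r in range(6)) for j in range(3)]
           for i in range(3)]
    Atb = [sum(A[r][i] * b[r] for r in range(6)) for i in range(3)]
    return solve3(AtA, Atb)

# Two made-up rays that actually cross at (1, 1, 1):
point = intersect_lines(((0.0, 0.0, 0.0), (2.0, 2.0, 2.0)),
                        ((0.0, 2.0, 0.0), (2.0, 0.0, 2.0)))
```

When the two rays do not intersect exactly, as with real measurements, the same normal-equations solve returns the least-squares compromise point.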
<br />
<nowiki>% The function "calc_coords" takes as inputs the raw data for vision in<br />
% the XZ and YZ planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx4] containing the time stamp and<br />
% the least squares approximation of the (x,y,z) coordinates based on the<br />
% vectors from the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = [(FC_xz(1,1) + FC_xz(2,1)) / 2] / [conv_xz]; % Taking an average of the two focal lengths,<br />
dist_origin_yz = [(FC_yz(1,1) + FC_yz(2,1)) / 2] / [conv_yz]; % dividing by the conversion factor to get <br />
% the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined (in world coordinates) based on the intrinsic and extrinsic parameters from calibration with a Matlab expression.<br />
(It is assumed that the object lies in a plane at a known, constant distance from the camera.)<br />
<br />
[World_x; World_y; World_z] = Rc_ext' * (zdist_pix * [normalize([pix_x; pix_y], fc, cc, kc, alpha_c); 1] - Tc_ext)<br />
<br />
Where:<br />
Rc_ext and Tc_ext are the rotation matrix and translation vector from the extrinsic calibration<br />
zdist_pix is the distance from the focal point to the object, in pixels. (Measure the distance from the camera to the object with a meter stick, then convert to pixels using the number of pixels per cm in a calibration image at that distance.)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_c are parameters from the intrinsic camera calibration<br />
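A Python sketch of this back-projection (the rotation, translation, and distance below are placeholders; the real Rc_ext and Tc_ext come from the Calibration Toolbox):

```python
def pixel_to_world(xn, z_dist, R, T):
    """Back-project a normalized, distortion-free image point onto the
    tracking plane: world = R' * (z_dist * [xn_x; xn_y; 1] - T)."""
    p = [z_dist * xn[0] - T[0], z_dist * xn[1] - T[1], z_dist - T[2]]
    # Multiplying by the transpose of R: column i of R dotted with p.
    return [sum(R[r][i] * p[r] for r in range(3)) for i in range(3)]

# Identity extrinsics (placeholder values): the result is just the scaled ray.
R_identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T_zero = [0.0, 0.0, 0.0]
world = pixel_to_world((0.05, -0.02), 1000.0, R_identity, T_zero)
```

Here the normalized point (0.05, -0.02) scaled out to a distance of 1000 pixels lands at (50, -20, 1000) in the camera-aligned world frame.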
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(:,1) = image number from which data was taken. <br />
% ydisp(:,2) = time at which data was taken.<br />
% ydisp(:,3) = x or y position of object.<br />
% ydisp(:,4) = z position (height) of object.<br />
% The function returns a matrix of size [nx3] containing the image number,<br />
% time, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 2);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 2);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v;<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end<br />
<br />
<br />
==Error Minimization of Calibration Coefficients ==<br />
<br />
Following the computation of the intrinsic parameters, the relative error can be analyzed using the Calibration Toolbox. To see the error in graphical form, click 'Analyze Error' in the main gui. This plot shows the error in pixel units; the different point colors represent the different images used for calibration. To begin to optimize this error, click on an outlier point. The image number and location of that point will be displayed in the Matlab command window. Click outliers of the same color: is there a specific image that accounts for many of the errors?<br />
<br />
You can also view the projected calibration points on a specific image by clicking 'Reproject on Image' and entering the number of the error-prone image at the prompt. This will allow you to see the points which are creating the most error. If the calibration points are significantly off the corners of the grid, it may be a good idea to use 'Add/Suppress Images' to remove the image and then recalibrate. (You can also click all the corners manually.)<br />
<br />
However, if the entire image is not error prone, it is possible to minimize the error by changing the dimensions of the "search box" within which the calibration algorithm looks for a corner. This is accomplished by changing the values of wintx and winty; the default value during calibration is 5. To do this, click 'Recomp Corners' in the calibration gui, then at the prompt for wintx and winty enter a different (typically larger) value, e.g. 8. Then, at the prompt for the numbers of the images for which to recompute the corners, enter the numbers of the images with the error outliers as determined above. <br />
<br />
Repeat this process until the error is within reason (typically 3-4 pixels, but dependent on resolution, distance from the object, pixel size, and application).<br />
<br />
Further information and examples can be found in [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html the first calibration example].<br />
<br />
==Example Intrinsic Parameters (LIMS Lab, Winter 2010) ==<br />
<br />
The below example was completed for the high speed vision system in the LIMS lab during Winter Quarter 2010.<br />
<br />
Number of Images = 15<br />
<br />
Error optimization completed by adjusting the values of winty and wintx for various images. <br />
<br />
Calibration results after optimization (with uncertainties):<br />
<br />
Focal Length: fc = [ 1219.49486 1227.09415 ] ± [ 1.98092 1.95916 ]<br />
<br />
Principal point: cc = [ 502.28854 524.25824 ] ± [ 1.26522 1.07298 ]<br />
<br />
Skew: alpha_c = [ 0.00000 ] ± [ 0.00000 ] => angle of pixel axes = 90.00000 ± 0.00000 degrees<br />
<br />
Distortion: kc = [ -0.39504 0.36943 -0.00018 -0.00105 0.00000 ] ± [ 0.00307 0.01033 0.00016 0.00021 0.00000 ]<br />
<br />
Pixel error: err = [ 0.27578 0.25400 ]<br />
<br />
Note: The numerical errors are approximately three times the standard deviations (for reference).<br />
<br />
[[Image:CalibImages.jpg|thumb|200px|Mosaic of Calibration Images|left]]<br />
[[Image:ErrorPlot.jpg|thumb|400px|Error Plot following optimization|center]]</div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/High_Speed_Vision_System_and_Object_TrackingHigh Speed Vision System and Object Tracking2010-06-05T21:45:28Z<p>ClaraSmart: /* Example Intrinsic Parameters (LIMS Lab, Winter 2010) */</p>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
<br />
Once downloaded, to open the gui in matlab, go to the toolbox directory then type 'calib_gui' in the command window. (Then select standard memory).<br />
<br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be use to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
<br />
x_c1/z_c & x_c2/z_c &... & x_cn/z_c \\<br />
y_c1/z_c & y_c2/z_c &... & y_cn/z_c \\<br />
z_c/z_c & z_c/z_c &... & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, z_c, denote coordinates in the camera frame. z_c = focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
[xn] = Rc_1*x_n;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (focallength)*(numberofmillimeters/pixel) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(2,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, S2, T1, T1:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, percentage error can be calculated, by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible by both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be obtained. The raw data is inaccurate because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very small, so it is important to provide lots of light. The high frequency performance also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time proves impossible as well as unnecessary. The accompanying C program works by selection one or several regions of interest to focus on. The program applies a threshold to these regions. It then calculates an area centroid based off of the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control number of images to be taken in one second, and are measured in microseconds. Frame time is the amount of time required to take on image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A ratio of 5:2, FRAME:EXPOSURE seems to work well. To run the camera at 1000images/s for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so for 1000Hz with two regions, each region would have a frequency for 500images/s, and the camera would alternate between the two regions. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The threshold is the value (0-255) which is the cutoff value for the black/white boundary. The threshold will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiduciary marker is in the white realm while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to one or zero. When equal to one, it displays the region or regions of interest. This is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>\text{distance} = (\text{focal length}) \times (\text{number of millimeters}/\text{pixel}) \, </math><br />
<br />
Once this distance is calculated, the symmetric equations can be obtained for a line in 3D through 2 known points:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + z_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + z_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
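The six-equation system can be sketched as follows. This is an illustrative NumPy version (the example points in the usage note are made up): build the three symmetric-equation rows for each line, stack them into a [6x3] A and a length-6 b, and solve the least-squares problem exactly as Matlab's A\b does.<br />

```python
import numpy as np

def line_rows(p1, p2):
    """Rows of A and b encoding the symmetric equations of the 3D line
    through points p1 and p2 (any point on the line satisfies all three)."""
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    A = np.array([
        [y2 - y1, x1 - x2, 0.0],
        [z2 - z1, 0.0,     x1 - x2],
        [0.0,     z2 - z1, y1 - y2],
    ])
    b = np.array([
        x1*(y2 - y1) + y1*(x1 - x2),
        x1*(z2 - z1) + z1*(x1 - x2),
        y1*(z2 - z1) + z1*(y1 - y2),
    ])
    return A, b

def intersect(p1a, p2a, p1b, p2b):
    """Stack both lines' equations ([6x3] A, length-6 b) and return the
    least-squares intersection, mirroring Matlab's A\\b."""
    Aa, ba = line_rows(p1a, p2a)
    Ab, bb = line_rows(p1b, p2b)
    sol, *_ = np.linalg.lstsq(np.vstack([Aa, Ab]),
                              np.concatenate([ba, bb]), rcond=None)
    return sol
```

For two lines that truly cross, the solver recovers the crossing point; for skew rays from real data it returns the point minimizing the squared residuals of all six equations.<br />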
<br />
<nowiki>% The function "calc_coords" takes as inputs, the raw data for vision in<br />
% the XZ and YZ planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx3] containing the least squares<br />
% approximation of the (x,y,z) coordinates based off of the vectors from<br />
% the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = (FC_xz(1,1) + FC_xz(2,1)) / 2 / conv_xz; % Taking the average of the two focal lengths,<br />
dist_origin_yz = (FC_yz(1,1) + FC_yz(2,1)) / 2 / conv_yz; % then dividing by the conversion factor to get <br />
% the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined (in world coordinates) from the intrinsic and extrinsic calibration parameters using a Matlab expression.<br />
(It is assumed that the object lies in a known plane, so its Z value is a constant known parameter.)<br />
<br />
[World_x; World_y; World_z] = transpose(Rc_ext) * (zdist_pix * [normalize([pix_x; pix_y], fc, cc, kc, alpha_c); 1] - Tc_ext)<br />
<br />
Where:<br />
Rc_ext and Tc_ext are the rotation and translation from the extrinsic calibration<br />
zdist_pix is the distance from the focal point to the object, in pixels. (Measure the distance from the camera to the object with a meter stick, then convert to pixels using the number of pixels per cm in the calibration image at that distance.)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_c are parameters from the intrinsic camera calibration<br />
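The back-projection can be sketched numerically. This NumPy version is a simplification for illustration only: it ignores lens distortion (the Toolbox's normalize also undoes kc), and all numbers in the usage note are placeholders.<br />

```python
import numpy as np

def pixel_to_world(pix, zdist, fc, cc, R, T):
    """Back-project a pixel at known depth zdist (along the optical axis,
    in the same units as T) into world coordinates:
    world = R' * (zdist * [xn; 1] - T), with xn the normalized coords."""
    xn = (np.asarray(pix, float) - cc) / fc   # distortion-free normalization
    ray = np.append(xn, 1.0)                  # homogeneous ray [x; y; 1]
    return R.T @ (zdist * ray - T)
```

A quick sanity check is the round trip: project a world point through R, T, fc, cc to a pixel and depth, then back-project and recover the same point.<br />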
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(:,1) = image number from which data was taken. <br />
% ydisp(:,2) = time at which data was taken.<br />
% ydisp(:,3) = x or y position of object.<br />
% ydisp(:,4) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the image number,<br />
% time, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 1);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 1);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
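The same three-before/three-after test can be cross-checked with a short NumPy equivalent (window size and data are placeholders):<br />

```python
import numpy as np

def local_minima(z, w=3):
    """Indices i where z[i] is strictly below its w previous and
    w following samples, matching the Matlab routine's criterion."""
    z = np.asarray(z, float)
    idx = []
    for i in range(w, len(z) - w):
        if z[i] < z[i - w:i].min() and z[i] < z[i + 1:i + 1 + w].min():
            idx.append(i)
    return idx
```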
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v;<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end<br />
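The vertex computation in the last few lines above (t = -p(2)/(2*p(1)), then evaluating the polynomial) can be sketched compactly with NumPy's polyfit; the sample data in the test is invented for illustration:<br />

```python
import numpy as np

def parabola_vertex(t, z):
    """Fit z(t) ~ a*t^2 + b*t + c and return the vertex (t*, z*),
    where t* = -b / (2a) is the time of the local maximum."""
    a, b, c = np.polyfit(t, z, 2)
    t_star = -b / (2 * a)
    return t_star, np.polyval([a, b, c], t_star)
```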
<br />
<br />
==Error Minimization of Calibration Coefficients ==<br />
<br />
Following the computation of the intrinsic parameters, the relative error can be analyzed using the Calibration Toolbox. To see the error in graphical form, click 'Analyze Error' in the main gui. This plot shows the error in pixel units, with the different point colors representing the different images used for calibration. To begin to reduce this error, click on an outlier point; the image number and location of that point will be displayed in the Matlab command window. Click several outliers of the same color to see whether a specific image accounts for many of the errors.<br />
<br />
You can also view the projected calibration points on a specific image by clicking 'Reproject on Image' and entering the number of the error-prone image at the prompt. This allows you to see the points which are creating the most error. If the calibration points fall significantly off the corners of the grid, it may be a good idea to use 'Add/Suppress Images' to remove the image and then recalibrate. (You can also click all corners manually.)<br />
<br />
However, if the entire image is not error prone, it is possible to reduce the error by changing the dimensions of the "search box" in which the calibration algorithm looks for a corner. This is accomplished by changing the values of wintx and winty, whose default values during calibration are 5. To do this, click 'Recomp Corners' in the calibration gui, then at the prompt for wintx and winty enter a different (typically larger) value, e.g. 8. Then, at the prompt for the numbers of the images for which to recompute the corners, enter the numbers of the images with the error outliers as determined above. <br />
<br />
Repeat this process until the error is within reason. (Typically 3-4 pixels, but dependent on resolution, distance from the object, pixel size and application.)<br />
<br />
Further information and examples can be found in [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html the first calibration example].<br />
<br />
==Example Intrinsic Parameters (LIMS Lab, Winter 2010) ==<br />
<br />
The below example was completed for the high speed vision system in the LIMS lab during Winter Quarter 2010.<br />
<br />
Number of Images = 15<br />
<br />
Error optimization completed by adjusting the values of winty and wintx for various images. <br />
<br />
Calibration results after optimization (with uncertainties):<br />
<br />
Focal Length: fc = [ 1219.49486 1227.09415 ] ± [ 1.98092 1.95916 ]<br />
Principal point: cc = [ 502.28854 524.25824 ] ± [ 1.26522 1.07298 ]<br />
Skew: alpha_c = [ 0.00000 ] ± [ 0.00000 ] => angle of pixel axes = 90.00000 ± 0.00000 degrees<br />
Distortion: kc = [ -0.39504 0.36943 -0.00018 -0.00105 0.00000 ] ± [ 0.00307 0.01033 0.00016 0.00021 0.00000 ]<br />
Pixel error: err = [ 0.27578 0.25400 ]<br />
<br />
Note: The numerical errors are approximately three times the standard deviations (for reference).<br />
<br />
[[Image:CalibImages.jpg|thumb|200px|Mosaic of Calibration Images|left]]<br />
[[Image:ErrorPlot.jpg|thumb|200px|Error Plot following optimization|right]]</div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/High_Speed_Vision_System_and_Object_TrackingHigh Speed Vision System and Object Tracking2010-06-05T21:44:26Z<p>ClaraSmart: /* Example Intrinsic Parameters (LIMS Lab, Winter 2010) */</p>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
<br />
Once downloaded, to open the gui in matlab, go to the toolbox directory then type 'calib_gui' in the command window. (Then select standard memory).<br />
<br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be use to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
<br />
x_c1/z_c & x_c2/z_c &... & x_cn/z_c \\<br />
y_c1/z_c & y_c2/z_c &... & y_cn/z_c \\<br />
z_c/z_c & z_c/z_c &... & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, z_c, denote coordinates in the camera frame. z_c = focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
[xn] = Rc_1*x_n;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (focallength)*(numberofmillimeters/pixel) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(2,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, S2, T1, T1:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, percentage error can be calculated, by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible by both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be obtained. The raw data is inaccurate because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very small, so it is important to provide lots of light. The high frequency performance also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time proves impossible as well as unnecessary. The accompanying C program works by selection one or several regions of interest to focus on. The program applies a threshold to these regions. It then calculates an area centroid based off of the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control number of images to be taken in one second, and are measured in microseconds. Frame time is the amount of time required to take on image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A ratio of 5:2, FRAME:EXPOSURE seems to work well. To run the camera at 1000images/s for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so for 1000Hz with two regions, each region would have a frequency for 500images/s, and the camera would alternate between the two regions. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The threshold is the value (0-255) which is the cutoff value for the black/white boundary. The threshold will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiduciary marker is in the white realm while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to one or zero. When equal to one, it displays the region or regions of interest. This is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (focallength)*(numberofmillimeters/pixel) \, </math><br />
<br />
Once this distance is calculated, the parallel equations can be obtained for a line in 3D with 2 known points:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + y_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + y_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
<br />
<nowiki>% The function "calc_coords" takes as inputs, the raw data for vision in<br />
% the XZ and XY planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx3] containing the least squares<br />
% approximation of the (x,y,z) coordinates based off of the vectors from<br />
% the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = [(FC_xz(1,1) + FC_xz(2,1)) / 2] / [conv_xz]; % Taking an average of the two focal lengths,<br />
dist_origin_yz = [(FC_yz(1,1) + FC_yz(2,1)) / 2] / [conv_yz]; % Multiplying by conversion factor to get <br />
% the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
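The six-equation least-squares solve at the heart of calc_coords can be checked against synthetic data. The sketch below is Python/numpy rather than Matlab, with made-up camera centres; it builds the same A and b blocks for two rays through a known point and confirms that the least-squares solve recovers that point.<br />

```python
import numpy as np

def intersect_rays(p1a, p2a, p1b, p2b):
    """Least-squares intersection of two 3D lines, each given by two
    points, using the same 6x3 system A*c = b as calc_coords."""
    rows_A, rows_b = [], []
    for (x1, y1, z1), (x2, y2, z2) in [(p1a, p2a), (p1b, p2b)]:
        rows_A += [[y2 - y1, x1 - x2, 0],
                   [z2 - z1, 0, x1 - x2],
                   [0, z2 - z1, y1 - y2]]
        rows_b += [x1*(y2 - y1) + y1*(x1 - x2),
                   x1*(z2 - z1) + z1*(x1 - x2),
                   y1*(z2 - z1) + z1*(y1 - y2)]
    A = np.array(rows_A, dtype=float)
    b = np.array(rows_b, dtype=float)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

# Two synthetic rays that pass exactly through (10, 20, 30)
target = np.array([10.0, 20.0, 30.0])
cam_a = np.array([0.0, -500.0, 0.0])    # "real" camera centre (made up)
cam_b = np.array([500.0, 0.0, 0.0])     # "virtual" camera centre (made up)
p = intersect_rays(cam_a, cam_a + 0.4 * (target - cam_a),
                   cam_b, cam_b + 2.0 * (target - cam_b))
```

Because the two synthetic rays intersect exactly, the solve returns the intersection itself; with real (noisy) data it returns the point closest to both rays in the least-squares sense.<br />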
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, the world coordinates of a point can be recovered from its pixel coordinates using the intrinsic and extrinsic parameters from calibration with a single Matlab expression.<br />
(It is assumed that the object lies in a plane at a known, constant Z distance from the camera.)<br />
<br />
World_xyz = Rc_ext' * (zdist_pix * [normalize([pix_x; pix_y], fc, cc, kc, alpha_c); 1] - Tc_ext)<br />
<br />
Where:<br />
World_xyz is a [3x1] vector whose first two entries are the world X and Y coordinates<br />
Rc_ext and Tc_ext are the rotation matrix and translation vector from the extrinsic calibration<br />
zdist_pix is the distance from the focal point to the object plane, in pixels. (Measure the distance from camera to object using a meter stick, then convert to pixels by determining the number of pixels per cm in a calibration image taken at that distance.)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_c are parameters from the intrinsic camera calibration<br />
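As a sanity check of this inversion, the sketch below (Python/numpy rather than Matlab; the pose and the point are made up, units are kept consistent throughout, and the toolbox's normalize() is replaced by a direct computation of the normalized coordinates) projects a world point into the camera and then recovers it:<br />

```python
import numpy as np

def pixel_to_world(xn, z_dist, R, T):
    """Invert the pinhole model for a point at known camera-frame depth:
    world = R' * (z_dist * [xn_x; xn_y; 1] - T).
    xn is the normalized, distortion-free image point."""
    ray = np.array([xn[0], xn[1], 1.0])
    return R.T @ (z_dist * ray - T)

# Round-trip check with a made-up extrinsic pose
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])                  # rotation about the optical axis
T = np.array([5.0, -3.0, 400.0])                 # translation (same units as world)
Xw = np.array([12.0, 7.0, 0.0])                  # point on the Z = 0 world plane
Xc = R @ Xw + T                                  # camera-frame coordinates
xn = Xc[:2] / Xc[2]                              # normalized image point
Xw_back = pixel_to_world(xn, Xc[2], R, T)        # recover the world point
```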
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(:,1) = image number from which data was taken. <br />
% ydisp(:,2) = time at which data was taken.<br />
% ydisp(:,3) = x or y position of object.<br />
% ydisp(:,4) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the row index,<br />
% image number, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 1);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 1);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
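The neighbour test used by calc_minima (a sample counts as a local minimum only when the three samples on either side are all larger) can be reproduced for checking in a few lines of Python on synthetic data:<br />

```python
import numpy as np

def calc_minima_py(z):
    """Indices of samples strictly smaller than their three
    neighbours on each side -- the same test calc_minima uses."""
    mins = []
    for i in range(3, len(z) - 3):
        if all(z[i] < z[i + k] for k in (-3, -2, -1, 1, 2, 3)):
            mins.append(i)
    return mins

# Synthetic "height of a bouncing object": impacts at t = 0.5 and t = 1.5
t = np.linspace(0.0, 2.0, 81)       # 0.025 s sample spacing
z = np.abs(np.cos(np.pi * t))       # minima (impacts) at t = 0.5 and t = 1.5
idx = calc_minima_py(z)             # -> [20, 60], i.e. t = 0.5 and t = 1.5
```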
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v;<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end<br />
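The centring/scaling and vertex arithmetic used in plot_polyfit can be checked independently. This Python/numpy sketch (synthetic data; numpy's polyfit does not return Matlab's mu, so the centring and scaling are done by hand) fits one parabolic arc and recovers the location and value of its maximum:<br />

```python
import numpy as np

def fit_arc(t, z):
    """Quadratic fit with the same centring/scaling as Matlab's
    [p,S,mu] = polyfit(...): x = (t - mean(t)) / std(t)."""
    mu = (t.mean(), t.std(ddof=1))      # Matlab's mu = [mean; sample std]
    x = (t - mu[0]) / mu[1]
    p = np.polyfit(x, z, 2)             # p[0]*x^2 + p[1]*x + p[2]
    x_vertex = -p[1] / (2 * p[0])       # extremum in scaled time
    t_vertex = x_vertex * mu[1] + mu[0] # undo the scaling, as in the code above
    z_vertex = np.polyval(p, x_vertex)
    return t_vertex, z_vertex

# A parabola peaking at t = 0.6 s, z = 2.0 should be recovered exactly
t = np.linspace(0.2, 1.0, 50)
z = 2.0 - 9.81 * (t - 0.6) ** 2
t_v, z_v = fit_arc(t, z)
```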
<br />
<br />
==Error Minimization of Calibration Coefficients ==<br />
<br />
Following the computation of the intrinsic parameters, the relative error can be analyzed using the calibration toolbox. To see the error in graphical form, click 'Analyze Error' in the main gui. The plot shows the reprojection error in pixel units, with a different point color for each image used in the calibration. To begin optimizing this error, click on an outlier point: the image number and point number will be displayed in the matlab command window. Click several outliers of the same color to check whether a specific image accounts for many of the errors.<br />
<br />
You can also view the projected calibration points on a specific image by clicking 'Reproject on Image' and entering the number of the error-prone image at the prompt. This allows you to see the points which are creating the most error. If the calibration points fall significantly off the corners of the grid, it may be a good idea to use 'Add/Suppress Images' to remove the image and then recalibrate. (You can also click all corners manually.)<br />
<br />
However, if the entire image is not error prone, it is possible to minimize the error by changing the dimensions of the "search box" in which the calibration algorithm looks for each corner. This is accomplished by changing the values of wintx and winty, whose default value during calibration is 5. To do this, click 'Recomp Corners' in the calibration gui, then at the prompt for wintx and winty enter a different (typically larger) value, e.g. 8. Then, at the prompt for the numbers of the images for which to recompute the corners, enter the numbers of the images with the error outliers determined above. <br />
<br />
Repeat this process until the error is within reason (typically 3-4 pixels, though the acceptable error depends on resolution, distance from the object, pixel size and the application).<br />
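A quick way to decide which image to recompute is to rank the images by RMS reprojection error. The helper below is hypothetical (the toolbox stores its residuals in its own variables, which would play the role of errors_by_image here); it simply illustrates the ranking on fabricated residuals in which one image contains a badly extracted corner:<br />

```python
import numpy as np

def worst_image(errors_by_image):
    """Given per-image [n x 2] arrays of (dx, dy) reprojection residuals
    in pixels, return (index, rms) of the image with the largest RMS
    error -- the first candidate for corner recomputation or removal."""
    rms = [float(np.sqrt(np.mean(np.sum(e ** 2, axis=1))))
           for e in errors_by_image]
    i = int(np.argmax(rms))
    return i, rms[i]

# Three fabricated images: the third has one badly extracted corner
rng = np.random.default_rng(0)
errs = [rng.normal(0.0, 0.3, (48, 2)) for _ in range(2)]
errs.append(np.vstack([rng.normal(0.0, 0.3, (47, 2)), [[6.0, -5.0]]]))
i, r = worst_image(errs)   # the third image dominates the error
```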
<br />
Further information and examples can be found in [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html the first calibration example].<br />
<br />
==Example Intrinsic Parameters (LIMS Lab, Winter 2010) ==<br />
<br />
The below example was completed for the high speed vision system in the LIMS lab during Winter Quarter 2010.<br />
<br />
Number of Images = 15<br />
<br />
Error optimization was completed by adjusting the values of wintx and winty for various images. <br />
<br />
Calibration results after optimization (with uncertainties):<br />
<br />
Focal Length: fc = [ 1219.49486 1227.09415 ] ± [ 1.98092 1.95916 ]<br />
Principal point: cc = [ 502.28854 524.25824 ] ± [ 1.26522 1.07298 ]<br />
Skew: alpha_c = [ 0.00000 ] ± [ 0.00000 ] => angle of pixel axes = 90.00000 ± 0.00000 degrees<br />
Distortion: kc = [ -0.39504 0.36943 -0.00018 -0.00105 0.00000 ] ± [ 0.00307 0.01033 0.00016 0.00021 0.00000 ]<br />
Pixel error: err = [ 0.27578 0.25400 ]<br />
<br />
Note: The numerical errors are approximately three times the standard deviations (for reference).<br />
<br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
[[Image:CalibImages.jpg|thumb|500px|Mosaic of Calibration Images|left]]<br />
[[Image:ErrorPlot.jpg|thumb|500px|Error Plot following optimization|right]]</div>
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
<br />
Once downloaded, to open the gui in matlab, go to the toolbox directory then type 'calib_gui' in the command window. (Then select standard memory).<br />
<br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be use to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
<br />
x_c1/z_c & x_c2/z_c &... & x_cn/z_c \\<br />
y_c1/z_c & y_c2/z_c &... & y_cn/z_c \\<br />
z_c/z_c & z_c/z_c &... & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, z_c, denote coordinates in the camera frame. z_c = focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
[xn] = Rc_1*x_n;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (focallength)*(numberofmillimeters/pixel) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(2,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, S2, T1, T1:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, percentage error can be calculated, by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible by both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be obtained. The raw data is inaccurate because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very small, so it is important to provide lots of light. The high frequency performance also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time proves impossible as well as unnecessary. The accompanying C program works by selection one or several regions of interest to focus on. The program applies a threshold to these regions. It then calculates an area centroid based off of the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control number of images to be taken in one second, and are measured in microseconds. Frame time is the amount of time required to take on image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A ratio of 5:2, FRAME:EXPOSURE seems to work well. To run the camera at 1000images/s for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so for 1000Hz with two regions, each region would have a frequency for 500images/s, and the camera would alternate between the two regions. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The threshold is the value (0-255) which is the cutoff value for the black/white boundary. The threshold will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiduciary marker is in the white realm while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to one or zero. When equal to one, it displays the region or regions of interest. This is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (focallength)*(numberofmillimeters/pixel) \, </math><br />
<br />
Once this distance is calculated, the parallel equations can be obtained for a line in 3D with 2 known points:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + y_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + y_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
<br />
<nowiki>% The function "calc_coords" takes as inputs, the raw data for vision in<br />
% the XZ and XY planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx3] containing the least squares<br />
% approximation of the (x,y,z) coordinates based off of the vectors from<br />
% the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = [(FC_xz(1,1) + FC_xz(2,1)) / 2] / [conv_xz]; % Taking an average of the two focal lengths,<br />
dist_origin_yz = [(FC_yz(1,1) + FC_yz(2,1)) / 2] / [conv_yz]; % Multiplying by conversion factor to get <br />
% the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined (in world coordinates) based on the intrinsic and extrinsic parameters from calibration with a MatLab function.<br />
(It is assumed the Z value is a plane and is a constant known parameter)<br />
<br />
[World_x, World_y] = Transpose(Rc_ext) * (zdist_pix *(normalize([pix_x, pix_y], fc, cc, kc, alpha_a), 1) - Tc_ext)<br />
<br />
Where:<br />
Rc_Ext and Tc_ext are parameters from the extrinsic calibration<br />
zdist_pix is the distance from the focal point to the object in pixels. (measure distance from camera to object using meter stick then convert to pixels by determining the number of pixels per cm in calibration image at that distance)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_a are parameters from the intrinsic camera calibration<br />
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(1,:) = image number from which data was taken. <br />
% ydisp(2,:) = time at which data was taken.<br />
% ydisp(3,:) = x or y position of object.<br />
% ydisp(4,:) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the image number,<br />
% time, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 1);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 1);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v;<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end<br />
<br />
<br />
==Error Minimization of Calibration Coefficients ==<br />
<br />
Following the computation of the intrinsic parameters, the relative error can be analyzed using the Calibration Toolbox. To see the error in graphical form, click 'Analyze Error' in the main gui. This plot shows the reprojection error in pixel units; the different point colors represent the different images used for calibration. To begin minimizing this error, click on an outlier point. The image number and point number of the selected point will be displayed in the matlab command window. Click several outliers of the same color to check whether a specific image is responsible for many of the errors.<br />
<br />
You can also view the projected calibration points on a specific image by clicking 'Reproject on Image' and entering the number of the error-prone image at the prompt. This will allow you to see which points are creating the most error. If the calibration points are significantly off the corners of the grid, it may be a good idea to use 'Add/Suppress Images' to remove that image and then recalibrate. (You can also click all of the corners manually.)<br />
<br />
However, if the entire image is not error prone, it is possible to minimize the error by changing the dimensions of the "search box" in which the calibration algorithm looks for each corner. This is accomplished by changing the values of wintx and winty, whose default values during calibration are 5. To do this, click 'Recomp Corners' in the calibration gui, then at the prompt for wintx and winty enter a different (typically larger) value, e.g. 8. Then, at the prompt for the numbers of the images for which to recompute the corners, enter the numbers of the images with the error outliers as determined above. <br />
<br />
Repeat this process until the error is within reason. (Typically 3-4 pixels, but dependent on resolution, distance from the object, pixel size and application.)<br />
<br />
Further information and examples can be found in [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html the first calibration example].<br />
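The outlier hunt described above amounts to comparing per-image reprojection error. A minimal NumPy sketch with synthetic corner data (all values below are made up for illustration): compute the RMS pixel error for each calibration image and flag the worst one.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: detected vs. reprojected corner positions (pixels)
# for 3 calibration images with 12 grid corners each
detected = rng.uniform(0, 1024, size=(3, 12, 2))
reprojected = detected + rng.normal(0.0, 0.3, size=(3, 12, 2))
reprojected[1] += 2.0   # simulate one badly calibrated image (index 1)

# RMS reprojection error per image, in pixels
residual = detected - reprojected
rms_per_image = np.sqrt((residual**2).sum(axis=2).mean(axis=1))
worst = int(rms_per_image.argmax())   # candidate for corner recomputation

print(rms_per_image, worst)
```

Images whose RMS error stands well above the rest are the ones worth recomputing with larger wintx/winty values, or suppressing entirely.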
<br />
==Example Intrinsic Parameters (LIMS Lab, Winter 2010) ==<br />
<br />
The example below was completed for the high speed vision system in the LIMS lab during Winter Quarter 2010.<br />
<br />
Number of Images = 15<br />
<br />
Error optimization completed by adjusting the values of winty and wintx for various images. <br />
<br />
Calibration results after optimization (with uncertainties):<br />
<br />
Focal Length: fc = [ 1219.49486 1227.09415 ] ± [ 1.98092 1.95916 ]<br />
Principal point: cc = [ 502.28854 524.25824 ] ± [ 1.26522 1.07298 ]<br />
Skew: alpha_c = [ 0.00000 ] ± [ 0.00000 ] => angle of pixel axes = 90.00000 ± 0.00000 degrees<br />
Distortion: kc = [ -0.39504 0.36943 -0.00018 -0.00105 0.00000 ] ± [ 0.00307 0.01033 0.00016 0.00021 0.00000 ]<br />
Pixel error: err = [ 0.27578 0.25400 ]<br />
<br />
Note: The numerical errors are approximately three times the standard deviations (for reference).<br />
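As a sketch of how these intrinsics are applied (an illustration, not part of the original procedure), the fragment below converts a pixel coordinate to normalized, distortion-free camera coordinates using the standard radial-plus-tangential distortion model that the toolbox's normalize function implements. The pixel value chosen is arbitrary; skew is ignored since alpha_c = 0 in this calibration.

```python
import numpy as np

# Example intrinsics from the calibration results above
fc = np.array([1219.49486, 1227.09415])   # focal lengths (pixels)
cc = np.array([502.28854, 524.25824])     # principal point (pixels)
kc = np.array([-0.39504, 0.36943, -0.00018, -0.00105, 0.0])  # distortion

def distort(xn, kc):
    """Apply radial + tangential distortion to normalized coordinates."""
    x, y = xn
    r2 = x*x + y*y
    radial = 1 + kc[0]*r2 + kc[1]*r2**2 + kc[4]*r2**3
    dx = 2*kc[2]*x*y + kc[3]*(r2 + 2*x*x)
    dy = kc[2]*(r2 + 2*y*y) + 2*kc[3]*x*y
    return np.array([x*radial + dx, y*radial + dy])

def normalize(pix, fc, cc, kc, iters=20):
    """Pixel -> normalized camera coordinates, undistorted iteratively."""
    xd = (pix - cc) / fc          # distorted normalized coordinates
    xn = xd.copy()
    for _ in range(iters):        # fixed-point inversion of the model
        x, y = xn
        r2 = x*x + y*y
        radial = 1 + kc[0]*r2 + kc[1]*r2**2 + kc[4]*r2**3
        delta = np.array([2*kc[2]*x*y + kc[3]*(r2 + 2*x*x),
                          kc[2]*(r2 + 2*y*y) + 2*kc[3]*x*y])
        xn = (xd - delta) / radial
    return xn

print(normalize(np.array([700.0, 300.0]), fc, cc, kc))
```

A quick consistency check is the round trip: re-applying the distortion model to the normalized point and re-projecting through fc and cc should recover the original pixel coordinate.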
<br />
[[Image:CalibImages.jpg]]<br />
[[Image:ErrorPlot.jpg]]</div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/High_Speed_Vision_System_and_Object_TrackingHigh Speed Vision System and Object Tracking2010-06-05T21:14:06Z<p>ClaraSmart: /* Example Intrinsic Parameters (LIMS Lab, Winter 2010) */</p>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be used to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
x_{c1}/z_c & x_{c2}/z_c & \cdots & x_{cn}/z_c \\<br />
y_{c1}/z_c & y_{c2}/z_c & \cdots & y_{cn}/z_c \\<br />
z_c/z_c & z_c/z_c & \cdots & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, and z_c denote coordinates in the camera frame, and z_c = the focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
[xn] = Rc_1*xn;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (focallength)*(numberofmillimeters/pixel) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(1,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, T1, S2, T2:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, the percentage error can be found by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
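For reference, the same scale-and-translation fit can be sketched in Python/NumPy (an illustrative sketch only; the function name fit_scale_translation and the synthetic grid data below are invented for this example, not taken from a real calibration):<br />

```python
import numpy as np

def fit_scale_translation(xn, X1):
    """Least-squares fit of X1 = [S1*xn_x + T1; S2*xn_y + T2].

    xn : (2, n) rotated, normalized grid-corner coordinates
    X1 : (2, n) real-world grid-corner coordinates (mm)
    Returns the parameter vector [S1, T1, S2, T2].
    """
    n = xn.shape[1]
    A = np.zeros((2 * n, 4))
    b = np.zeros(2 * n)
    A[0::2, 0] = xn[0]   # even rows: [xn_x, 1, 0, 0]
    A[0::2, 1] = 1.0
    A[1::2, 2] = xn[1]   # odd rows:  [0, 0, xn_y, 1]
    A[1::2, 3] = 1.0
    b[0::2] = X1[0]
    b[1::2] = X1[1]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

# Synthetic check: corners scaled by S = (1200, 1180) and shifted by T = (15, -30)
xn = np.random.default_rng(0).uniform(-0.5, 0.5, size=(2, 20))
X1 = np.vstack([1200.0 * xn[0] + 15.0, 1180.0 * xn[1] - 30.0])
S1, T1, S2, T2 = fit_scale_translation(xn, X1)
```

Because the synthetic data are exactly linear, the least-squares solve recovers the chosen S and T values.<br />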
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible to both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be computed. The raw data is inaccurate on its own because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very small, so it is important to provide lots of light. The high frequency performance also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time proves impossible as well as unnecessary. The accompanying C program works by selecting one or several regions of interest to focus on. The program applies a threshold to these regions. It then calculates an area centroid based on the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control the number of images to be taken in one second, and are measured in microseconds. Frame time is the amount of time required to take one image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A ratio of 5:2, FRAME:EXPOSURE, seems to work well. To run the camera at 1000 images/s, for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so for 1000Hz with two regions, each region would have a frequency of 500 images/s, and the camera would alternate between the two regions. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The value (0-255) which serves as the cutoff for the black/white boundary. The threshold will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiduciary marker appears white while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to one or zero. When equal to one, it displays the region or regions of interest. This is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (focallength)*(numberofmillimeters/pixel) \, </math><br />
<br />
Once this distance is calculated, the equations for a line in 3D through 2 known points can be obtained:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + z_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + z_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
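Before turning to the Matlab implementation, the construction can be checked with a small Python/NumPy sketch (illustrative only; the two example lines below are invented so that they intersect at a known point):<br />

```python
import numpy as np

def line_rows(p1, p2):
    """Three rows of A and b encoding the 3D line through points p1 and p2."""
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    A = np.array([[y2 - y1, x1 - x2, 0.0],
                  [z2 - z1, 0.0, x1 - x2],
                  [0.0, z2 - z1, y1 - y2]])
    b = np.array([x1 * (y2 - y1) + y1 * (x1 - x2),
                  x1 * (z2 - z1) + z1 * (x1 - x2),
                  y1 * (z2 - z1) + z1 * (y1 - y2)])
    return A, b

def intersect(line1, line2):
    """Least-squares intersection of two 3D lines, each given as (p1, p2)."""
    A1, b1 = line_rows(*line1)
    A2, b2 = line_rows(*line2)
    A = np.vstack([A1, A2])        # [6x3] matrix
    b = np.concatenate([b1, b2])   # vector of length 6
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Two lines chosen to pass through the point (1, 2, 3)
p = intersect(((0, 0, 0), (2, 4, 6)), ((1, 2, 0), (1, 2, 9)))
```

Since the example lines truly intersect, the least-squares solve returns the exact intersection; with noisy camera data it returns the closest point in the least-squares sense.<br />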
<br />
<nowiki>% The function "calc_coords" takes as inputs, the raw data for vision in<br />
% the XZ and YZ planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx4] containing the time stamp and<br />
% the least squares approximation of the (x,y,z) coordinates based on the<br />
% vectors from the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = [(FC_xz(1,1) + FC_xz(2,1)) / 2] / [conv_xz]; % Average the two focal lengths (pixels),<br />
dist_origin_yz = [(FC_yz(1,1) + FC_yz(2,1)) / 2] / [conv_yz]; % then divide by the conversion factor to get <br />
                                                              % the distance to the calibration grid in mm. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined in world coordinates from the intrinsic and extrinsic calibration parameters using a Matlab function.<br />
(It is assumed that the object lies in a plane of constant, known Z)<br />
<br />
xn = normalize([pix_x; pix_y], fc, cc, kc, alpha_c);<br />
World_xyz = Rc_ext' * (zdist_pix * [xn; 1] - Tc_ext);  % World_xyz = [World_x; World_y; World_z]<br />
<br />
Where:<br />
Rc_ext and Tc_ext are the rotation matrix and translation vector from the extrinsic calibration<br />
zdist_pix is the distance from the focal point to the object, in pixels. (Measure the distance from the camera to the object with a meter stick, then convert to pixels using the number of pixels per cm in the calibration image at that distance)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_c are parameters from the intrinsic camera calibration<br />
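A Python/NumPy sketch of this back-projection (illustrative only: it applies just the pinhole part of the normalization, fc and cc, and ignores the distortion coefficients kc that the toolbox's normalize function would also remove; the camera parameters below are invented for a round-trip check):<br />

```python
import numpy as np

def pixel_to_world(pix, fc, cc, R, T, zdist_pix):
    """Back-project a pixel onto a plane at known camera-frame depth.

    pix       : (pix_x, pix_y) pixel coordinates
    fc, cc    : focal lengths and principal point (pixels)
    R, T      : extrinsic rotation matrix and translation vector
    zdist_pix : camera-frame depth of the object plane (pixels)
    """
    xn = np.array([(pix[0] - cc[0]) / fc[0],    # normalized coordinates
                   (pix[1] - cc[1]) / fc[1],    # (distortion ignored here)
                   1.0])
    cam = zdist_pix * xn                        # point in the camera frame
    return R.T @ (cam - T)                      # world = R' * (cam - T)

# Round-trip check: project an invented world point, then recover it
fc, cc = (800.0, 800.0), (512.0, 512.0)
R = np.eye(3)
T = np.array([0.0, 0.0, 1000.0])
world = np.array([40.0, -25.0, 0.0])            # lies in the Z = 0 plane
cam = R @ world + T
pix = (fc[0] * cam[0] / cam[2] + cc[0],
       fc[1] * cam[1] / cam[2] + cc[1])
recovered = pixel_to_world(pix, fc, cc, R, T, zdist_pix=cam[2])
```

With a real lens, the full normalize step (including kc) should be used in place of the two-line pinhole normalization.<br />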
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(:,1) = image number from which data was taken. <br />
% ydisp(:,2) = time at which data was taken.<br />
% ydisp(:,3) = x or y position of object.<br />
% ydisp(:,4) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the row index,<br />
% the image number, and the value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 1);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 1);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
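The same minimum-detection logic can be sketched compactly in Python (illustrative only; it operates on a plain list of heights rather than the [nx4] matrix, using the same 3-points-before, 3-points-after test):<br />

```python
def local_minima(z, w=3):
    """Indices i where z[i] is strictly below its w neighbors on each side."""
    return [i for i in range(w, len(z) - w)
            if all(z[i] < z[i - k] and z[i] < z[i + k]
                   for k in range(1, w + 1))]

# Example: height samples from two bounces of a decaying trajectory
z = [9, 6, 3, 1, 3, 6, 8, 5, 2, 0.5, 2, 5, 7]
mins = local_minima(z)   # indices of the two bounce minima
```

As in the Matlab version, minima within w samples of either end of the data are not reported.<br />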
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v;<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end<br />
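The vertex calculation performed above with p and mu can also be sketched in Python/NumPy (illustrative only; numpy.polyfit is used here without Matlab's centering-and-scaling parameters mu, which is acceptable for well-scaled time values):<br />

```python
import numpy as np

def parabola_peak(t, z):
    """Fit z(t) with a 2nd-order polynomial and return (t_peak, z_peak)."""
    p = np.polyfit(t, z, 2)            # p[0]*t^2 + p[1]*t + p[2]
    t_peak = -p[1] / (2 * p[0])        # vertex: derivative = 0
    return t_peak, np.polyval(p, t_peak)

# Samples of z = -(t - 2)^2 + 5, whose peak is at t = 2, z = 5
t = np.linspace(0.0, 4.0, 9)
z = -(t - 2.0) ** 2 + 5.0
t_peak, z_peak = parabola_peak(t, z)
```

In the Matlab routine the fit is done between each pair of consecutive minima; here a single segment is shown for clarity.<br />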
<br />
<br />
==Error Minimization of Calibration Coefficients ==<br />
<br />
Following the computation of the intrinsic parameters, the relative error can be analyzed using the calibration toolbox. To see the error in graphical form, click 'Analyze Error' in the main gui. This plot shows the error in pixel units, with the different point colors representing the different images used for calibration. To begin to minimize this error, click on an outlier point; the image number and point number will be displayed in the Matlab command window. Click outliers of the same color to see whether a specific image accounts for many of the errors.<br />
<br />
You can also view the projected calibration points on a specific image by clicking 'Reproject on Image' and entering the number of the error-prone image at the prompt. This will allow you to see the points which are creating the most error. If the calibration points are significantly off the corners of the grid, it may be a good idea to use 'Add/Suppress Images' to remove the image and then recalibrate. (You can also click all corners manually)<br />
<br />
However, if the entire image is not error prone, it is possible to minimize the error by changing the dimensions of the "search box" in which the calibration algorithm looks for a corner. This is accomplished by changing the values of wintx and winty, whose default values during calibration are 5. To do this, click 'Recomp Corners' in the calibration gui, then at the prompt for wintx and winty enter a different (typically larger) value, e.g. 8. Then, at the prompt for the numbers of the images for which to recompute the corners, enter the numbers of the images with the error outliers as determined above. <br />
<br />
Repeat this process until the error is within reason. (Typically 3-4 pixels, but dependent on resolution, distance from the object, pixel size, and application)<br />
<br />
Further information and examples can be found at [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html the first calibration example].<br />
<br />
==Example Intrinsic Parameters (LIMS Lab, Winter 2010) ==<br />
<br />
The below example was completed for the high speed vision system in the LIMS lab during Winter Quarter 2010.<br />
<br />
Number of Images = 15</div>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be use to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
<br />
x_c1/z_c & x_c2/z_c &... & x_cn/z_c \\<br />
y_c1/z_c & y_c2/z_c &... & y_cn/z_c \\<br />
z_c/z_c & z_c/z_c &... & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, z_c, denote coordinates in the camera frame. z_c = focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
[xn] = Rc_1*x_n;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (focallength)*(numberofmillimeters/pixel) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(2,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, S2, T1, T1:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, percentage error can be calculated, by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible by both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be obtained. The raw data is inaccurate because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very small, so it is important to provide lots of light. The high frequency performance also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time proves impossible as well as unnecessary. The accompanying C program works by selection one or several regions of interest to focus on. The program applies a threshold to these regions. It then calculates an area centroid based off of the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control number of images to be taken in one second, and are measured in microseconds. Frame time is the amount of time required to take on image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A ratio of 5:2, FRAME:EXPOSURE seems to work well. To run the camera at 1000images/s for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so for 1000Hz with two regions, each region would have a frequency for 500images/s, and the camera would alternate between the two regions. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The threshold is the value (0-255) which is the cutoff value for the black/white boundary. The threshold will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiduciary marker is in the white realm while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to one or zero. When equal to one, the program displays the region or regions of interest, which is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be set to zero in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. To collect data, run the program, wait for it to initialize, and then press 'd'. Press 'q' at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (focallength)*(numberofmillimeters/pixel) \, </math><br />
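Numerically, this is the average of the two Toolbox focal lengths (in pixels) divided by the pixels-per-mm conversion factor, as done later in calc_coords. The focal lengths below are hypothetical; the conversion factor 5.40 px/mm is the one used for the XZ plane in that listing.<br />

```python
# Illustrative numbers: the Toolbox reports two focal lengths per camera, in
# pixels; the conversion factor (pixels per mm) is measured from the
# calibration grid. Their quotient is the camera-to-grid distance in mm.
FC = (3050.0, 3060.0)   # hypothetical focal lengths (pixels)
conv = 5.40             # pixels per mm in the calibration plane
distance_mm = ((FC[0] + FC[1]) / 2) / conv
```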
<br />
Once this distance is calculated, the symmetric equations for a line in 3D through 2 known points can be written:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
These can be rearranged to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + z_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + z_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
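A minimal Python sketch of this stacked least-squares intersection, using numpy's lstsq in place of Matlab's backslash (the row construction mirrors A_xz and b_xz in the calc_coords listing below; note the z_1 in the last right-hand term):<br />

```python
import numpy as np

def line_rows(p1, p2):
    """Three linear equations for the 3D line through p1 and p2,
    arranged as A @ [x, y, z] = b."""
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    A = np.array([[y2 - y1, x1 - x2, 0.0],
                  [z2 - z1, 0.0,     x1 - x2],
                  [0.0,     z2 - z1, y1 - y2]])
    b = np.array([x1*(y2 - y1) + y1*(x1 - x2),
                  x1*(z2 - z1) + z1*(x1 - x2),
                  y1*(z2 - z1) + z1*(y1 - y2)])
    return A, b

# Two lines that intersect at (0.5, 0.5, 0.5):
A1, b1 = line_rows((0, 0, 0), (1, 1, 1))
A2, b2 = line_rows((1, 0, 0), (0, 1, 1))
A = np.vstack([A1, A2])          # [6x3] matrix
b = np.concatenate([b1, b2])     # vector of length 6
xyz, *_ = np.linalg.lstsq(A, b, rcond=None)
```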
<br />
<nowiki>% The function "calc_coords" takes as inputs the raw data for vision in<br />
% the XZ and YZ planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx4] containing the time stamp and<br />
% the least squares approximation of the (x,y,z) coordinates based on the<br />
% vectors from the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = ((FC_xz(1,1) + FC_xz(2,1)) / 2) / conv_xz; % Taking an average of the two focal lengths and<br />
dist_origin_yz = ((FC_yz(1,1) + FC_yz(2,1)) / 2) / conv_yz; % dividing by the conversion factor to get <br />
% the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined (in world coordinates) from the intrinsic and extrinsic calibration parameters with a Matlab expression.<br />
(It is assumed that the object lies in a plane of constant, known Z.)<br />
<br />
World_xyz = Rc_ext' * (zdist_pix * [normalize([pix_x; pix_y], fc, cc, kc, alpha_c); 1] - Tc_ext)<br />
<br />
Where:<br />
Rc_ext and Tc_ext are the rotation and translation from the extrinsic calibration<br />
zdist_pix is the distance from the focal point to the object plane, in pixels (measure the distance from the camera to the object with a meter stick, then convert to pixels using the number of pixels per cm in a calibration image at that distance)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_c are parameters from the intrinsic camera calibration<br />
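A simplified Python sketch of this back-projection, assuming the lens distortion kc and the skew alpha_c are negligible (the function name and all values are illustrative):<br />

```python
import numpy as np

def pixel_to_world(pix, fc, cc, Rc_ext, Tc_ext, zdist_pix):
    """Back-project a pixel onto the known object plane (simplified:
    lens distortion kc and skew alpha_c are assumed negligible)."""
    xn = np.array([(pix[0] - cc[0]) / fc[0],    # normalize: pixels ->
                   (pix[1] - cc[1]) / fc[1],    # dimensionless camera coords
                   1.0])
    # scale by the camera-to-plane distance, subtract the translation,
    # then rotate back into the world frame
    return Rc_ext.T @ (zdist_pix * xn - Tc_ext)

# With an identity rotation, zero translation and hypothetical intrinsics,
# a pixel 100 px right of the principal point maps 100 px along world X:
w = pixel_to_world((612.0, 512.0), (1000.0, 1000.0), (512.0, 512.0),
                   np.eye(3), np.zeros(3), 1000.0)
```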
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(:,1) = image number from which data was taken. <br />
% ydisp(:,2) = time at which data was taken.<br />
% ydisp(:,3) = x or y position of object.<br />
% ydisp(:,4) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the image number,<br />
% time, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 1);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 1);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v;<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end<br />
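The vertex computation at the end of the loop, t = -p(2)/(2*p(1)), can be checked with a small Python example, with numpy's polyfit standing in for Matlab's (both return coefficients highest order first):<br />

```python
import numpy as np

# Fit a parabola with a known peak and recover the peak from the
# fitted coefficients via t = -p(2) / (2*p(1)).
t = np.linspace(0.0, 2.0, 50)
y = -(t - 0.8)**2 + 3.0          # peak at t = 0.8, height 3.0
p = np.polyfit(t, y, 2)
t_max = -p[1] / (2 * p[0])
y_max = np.polyval(p, t_max)
```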
<br />
<br />
==Error Minimization of Calibration Coefficients ==<br />
<br />
Following the computation of the intrinsic parameters, the reprojection error can be analyzed using the Calibration Toolbox. To see the error in graphical form, click 'Analyze Error' in the main gui. This plot shows the error in pixel units; the different point colors represent the different images used for calibration. To begin to reduce this error, click on an outlier point. The image number and point number will be displayed in the Matlab command window. Click several outliers of the same color: is there a specific image that has many errors?<br />
<br />
You can also view the reprojected calibration points on a specific image by clicking 'Reproject on Image' and entering the number of the error-prone image at the prompt. This will allow you to see the points which are creating the most error. If the calibration points are significantly off the corners of the grid, it may be a good idea to use 'Add/Suppress Images' to remove the image and then recalibrate. (You can also click all corners manually.)<br />
<br />
However, if the entire image is not error prone, it is possible to reduce the error by changing the dimensions of the "search box" in which the calibration algorithm looks for a corner. This is accomplished by changing the values of wintx and winty, whose default values during calibration are 5. To do this, click 'Recomp Corners' in the calibration gui, then at the prompt for wintx and winty enter a different (typically larger) value, e.g. 8. Then, at the prompt for the numbers of the images for which to recompute the corners, enter the numbers of the images with the error outliers as determined above. <br />
<br />
Repeat this process until the error is within reason (typically 3-4 pixels, but dependent on resolution, distance from the object, pixel size and application).<br />
<br />
Further information and examples can be found in [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html the first calibration example].<br />
<br />
==Example Intrinsic Parameters (LIMS Lab, Winter 2010) ==<br />
<br />
The below example was completed for the high speed vision system in the LIMS lab during Winter Quarter 2010.</div>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be use to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
<br />
x_c1/z_c & x_c2/z_c &... & x_cn/z_c \\<br />
y_c1/z_c & y_c2/z_c &... & y_cn/z_c \\<br />
z_c/z_c & z_c/z_c &... & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, z_c, denote coordinates in the camera frame. z_c = focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
[xn] = Rc_1*x_n;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (focallength)*(numberofmillimeters/pixel) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(2,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, S2, T1, T1:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, percentage error can be calculated, by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible by both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be obtained. The raw data is inaccurate because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very small, so it is important to provide lots of light. The high frequency performance also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time proves impossible as well as unnecessary. The accompanying C program works by selection one or several regions of interest to focus on. The program applies a threshold to these regions. It then calculates an area centroid based off of the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control number of images to be taken in one second, and are measured in microseconds. Frame time is the amount of time required to take on image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A ratio of 5:2, FRAME:EXPOSURE seems to work well. To run the camera at 1000images/s for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so for 1000Hz with two regions, each region would have a frequency for 500images/s, and the camera would alternate between the two regions. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The threshold is the value (0-255) which is the cutoff value for the black/white boundary. The threshold will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiduciary marker is in the white realm while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to one or zero. When equal to one, it displays the region or regions of interest. This is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (focallength)*(numberofmillimeters/pixel) \, </math><br />
<br />
Once this distance is calculated, the parallel equations can be obtained for a line in 3D with 2 known points:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + y_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + y_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
<br />
<nowiki>% The function "calc_coords" takes as inputs, the raw data for vision in<br />
% the XZ and XY planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx3] containing the least squares<br />
% approximation of the (x,y,z) coordinates based off of the vectors from<br />
% the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = [(FC_xz(1,1) + FC_xz(2,1)) / 2] / [conv_xz]; % Taking an average of the two focal lengths,<br />
dist_origin_yz = [(FC_yz(1,1) + FC_yz(2,1)) / 2] / [conv_yz]; % Multiplying by conversion factor to get <br />
% the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined (in world coordinates) based on the intrinsic and extrinsic parameters from calibration with a MatLab function.<br />
(It is assumed the Z value is a plane and is a constant known parameter)<br />
<br />
[World_x; World_y; World_z] = Rc_ext' * (zdist_pix * [normalize([pix_x; pix_y], fc, cc, kc, alpha_c); 1] - Tc_ext)<br />
<br />
Where:<br />
* Rc_ext and Tc_ext are the rotation matrix and translation vector from the extrinsic calibration.<br />
* zdist_pix is the distance from the focal point to the object, in pixels. (Measure the distance from the camera to the object with a meter stick, then convert it to pixels by determining the number of pixels per cm in a calibration image taken at that distance.)<br />
* pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point.<br />
* fc, cc, kc and alpha_c are parameters from the intrinsic camera calibration.<br />
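The expression above can be sketched in a few lines of pure Python, with the undistortion step factored out: here xn and yn stand for the normalized, distortion-free coordinates that the toolbox's normalize function would return, and the extrinsics R and T below are illustrative placeholders, not real calibration output:<br />

```python
# world = Rc_ext' * (zdist * [xn; yn; 1] - Tc_ext)
# R is the 3x3 extrinsic rotation (list of rows), T the 3-vector translation.

def pixel_to_world(xn, yn, zdist, R, T):
    ray = [xn * zdist, yn * zdist, zdist]       # point on the optical ray
    cam = [ray[i] - T[i] for i in range(3)]     # remove the translation
    # Multiply by R transpose: world[c] = sum_r R[r][c] * cam[r].
    return [sum(R[r][c] * cam[r] for r in range(3)) for c in range(3)]

# With identity extrinsics the world frame coincides with the camera frame:
w = pixel_to_world(0.1, -0.2, 100.0,
                   [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
                   [0.0, 0.0, 0.0])
```

With identity rotation and zero translation the result is simply the scaled ray (10, -20, 100), which makes the role of each factor easy to check before plugging in real Rc_ext and Tc_ext values.<br />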
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix ydisp, where:<br />
% ydisp(:,1) = image number from which the data was taken, <br />
% ydisp(:,2) = time at which the data was taken,<br />
% ydisp(:,3) = x or y position of the object,<br />
% ydisp(:,4) = z position (height) of the object.<br />
% The function returns a matrix of size [nx3] containing the row index,<br />
% time, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 2); % time is stored in column 2<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 2);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
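The same 3-before/3-after criterion ports directly to other languages. A pure-Python sketch on synthetic bounce-like data (the numbers are invented purely to exercise the test):<br />

```python
# Local minima: a point is a minimum if the three samples before it and
# the three after it are all strictly greater (cf. calc_minima above).

def find_minima(rows):
    """rows: list of [image, time, xy, z]; returns [row index, time, z] per minimum."""
    mins = []
    for i in range(3, len(rows) - 3):
        z = rows[i][3]
        if all(rows[i + k][3] > z for k in (-3, -2, -1, 1, 2, 3)):
            mins.append([i, rows[i][1], z])
    return mins

# A bouncing-ball-like height profile with minima at indices 5 and 11:
heights = [9, 7, 5, 3, 1, 0, 1, 3, 5, 3, 1, 0.5, 1, 3, 5, 7, 9]
data = [[i, i * 0.001, 0.0, h] for i, h in enumerate(heights)]
minima = find_minima(data)
```

Unlike the Matlab version, this sketch grows its output list instead of preallocating ten rows, so it neither pads with zeros nor overflows when more than ten minima occur.<br />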
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, a matrix of the calculated maxima,<br />
% and a matrix of the calculated minima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = (time - mu(1,1)) / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% Evaluate the polynomial and its derivative term by term (the<br />
% derivative loop starts at the linear term to avoid a 0*x^(-1) term)<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + p(1, num_poly - poly + 2)*x^(poly-1);<br />
end<br />
for poly = [2 : 1 : num_poly + 1]<br />
v = v + (poly-1)*p(1, num_poly - poly + 2)*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v / mu(2,1); % rescale derivative to original time units<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1)); % vertex of the parabola, in scaled time units<br />
x = t * mu(2,1) + mu(1,1); % convert back to original time units<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + p(1, num_poly - poly + 2)*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end<br />
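The inner loop evaluates the polynomial in the scaled variable x = (t - mu(1))/mu(2) that polyfit used, so by the chain rule the time derivative picks up a factor of 1/mu(2), and the vertex lands at t = mu(1) - mu(2)*p(2)/(2*p(1)). A pure-Python check with hand-picked coefficients (not from a real fit):<br />

```python
# Evaluate y(t), velocity v(t), and the vertex of a quadratic fitted in the
# scaled variable x = (t - mu1) / mu2, with p = [p1, p2, p3] ordered
# highest power first, as Matlab's polyfit(..., 2) returns it.

def eval_scaled_quadratic(p, mu, t):
    p1, p2, p3 = p
    mu1, mu2 = mu
    x = (t - mu1) / mu2
    y = p1 * x**2 + p2 * x + p3
    v = (2 * p1 * x + p2) / mu2      # chain rule: d/dt = (1/mu2) * d/dx
    return y, v

def vertex(p, mu):
    p1, p2, p3 = p
    mu1, mu2 = mu
    x_star = -p2 / (2 * p1)                      # extremum in scaled units
    return mu1 + mu2 * x_star, p3 - p2**2 / (4 * p1)

# y = -(x - 1)^2 + 4 in scaled units, i.e. p = [-1, 2, 3], with mu = [10, 2]:
t_star, y_star = vertex([-1.0, 2.0, 3.0], [10.0, 2.0])
```

Here the scaled vertex sits at x = 1 with height 4, so in original units the maximum falls at t = 10 + 2*1 = 12, and the velocity evaluates to zero there, as expected at an extremum.<br />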
<br />
<br />
==Error Minimization of Calibration Coefficients ==<br />
<br />
Following the computation of the intrinsic parameters, the relative error can be analyzed using the calibration toolbox. To see the error in graphical form, click 'Analyze Error' in the main gui. This plot shows the error in pixel units, with a different point color for each image used in the calibration. To begin reducing this error, click on an outlier point: the image number and point number will be displayed in the matlab command window. Click several outliers of the same color to check whether a specific image accounts for most of the error.<br />
<br />
You can also view the projected calibration points on a specific image by clicking 'Reproject on Image' and entering the number of the error-prone image at the prompt. This will allow you to see which points are creating the most error. If the calibration points are significantly off the corners of the grid, it may be a good idea to use 'Add/Suppress Images' to remove that image and then recalibrate. (You can also click all of the corners manually.)<br />
<br />
However, if the entire image is not error prone, it is possible to reduce the error by changing the dimensions of the "search box" within which the calibration algorithm looks for each corner. This is accomplished by changing the values of wintx and winty, whose default values during calibration are 5. To do this, click 'Recomp Corners' in the calibration gui, then at the prompt for wintx and winty enter a different (typically larger) value, e.g. 8. Then, at the prompt for the numbers of the images for which to recompute the corners, enter the numbers of the images with the error outliers, as determined above. <br />
<br />
Repeat this process until the error is within reason (typically 3-4 pixels, though this depends on the resolution, the distance to the object, the pixel size, and the application).<br />
<br />
Further information and examples can be found at [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html, the first calibration example].</div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/High_Speed_Vision_System_and_Object_TrackingHigh Speed Vision System and Object Tracking2010-06-05T20:43:04Z<p>ClaraSmart: /* Calculating the Position of an Object on a Plane: Using Matlab */</p>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be use to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
<br />
x_c1/z_c & x_c2/z_c &... & x_cn/z_c \\<br />
y_c1/z_c & y_c2/z_c &... & y_cn/z_c \\<br />
z_c/z_c & z_c/z_c &... & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, z_c, denote coordinates in the camera frame. z_c = focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
[xn] = Rc_1*x_n;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (focallength)*(numberofmillimeters/pixel) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(2,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, S2, T1, T1:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, percentage error can be calculated, by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible by both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be obtained. The raw data is inaccurate because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very small, so it is important to provide lots of light. The high frequency performance also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time proves impossible as well as unnecessary. The accompanying C program works by selection one or several regions of interest to focus on. The program applies a threshold to these regions. It then calculates an area centroid based off of the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control number of images to be taken in one second, and are measured in microseconds. Frame time is the amount of time required to take on image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A ratio of 5:2, FRAME:EXPOSURE seems to work well. To run the camera at 1000images/s for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so for 1000Hz with two regions, each region would have a frequency for 500images/s, and the camera would alternate between the two regions. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The threshold is the value (0-255) which is the cutoff value for the black/white boundary. The threshold will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiduciary marker is in the white realm while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to one or zero. When equal to one, it displays the region or regions of interest. This is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (focallength)*(numberofmillimeters/pixel) \, </math><br />
<br />
Once this distance is calculated, the parallel equations can be obtained for a line in 3D with 2 known points:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + y_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + y_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
<br />
<nowiki>% The function "calc_coords" takes as inputs, the raw data for vision in<br />
% the XZ and XY planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx3] containing the least squares<br />
% approximation of the (x,y,z) coordinates based off of the vectors from<br />
% the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = [(FC_xz(1,1) + FC_xz(2,1)) / 2] / [conv_xz]; % Taking an average of the two focal lengths,<br />
dist_origin_yz = [(FC_yz(1,1) + FC_yz(2,1)) / 2] / [conv_yz]; % Multiplying by conversion factor to get <br />
% the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined (in world coordinates) based on the intrinsic and extrinsic parameters from calibration with a MatLab function.<br />
(It is assumed the Z value is a plane and is a constant known parameter)<br />
<br />
[World_x, World_y] = Transpose(Rc_ext) * (zdist_pix *(normalize([pix_x, pix_y], fc, cc, kc, alpha_a), 1) - Tc_ext)<br />
<br />
Where:<br />
Rc_Ext and Tc_ext are parameters from the extrinsic calibration<br />
zdist_pix is the distance from the focal point to the object in pixels. (measure distance from camera to object using meter stick then convert to pixels by determining the number of pixels per cm in calibration image at that distance)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_a are parameters from the intrinsic camera calibration<br />
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(1,:) = image number from which data was taken. <br />
% ydisp(2,:) = time at which data was taken.<br />
% ydisp(3,:) = x or y position of object.<br />
% ydisp(4,:) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the image number,<br />
% time, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 1);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 1);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v;<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end<br />
<br />
<br />
==Error Minimization of Calibration Coefficients ==</div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/High_Speed_Vision_System_and_Object_TrackingHigh Speed Vision System and Object Tracking2010-06-05T20:40:15Z<p>ClaraSmart: /* Post-Processing of Data */</p>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a Matlab error is generated before the calibration is complete, the calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this, use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be used to compute the undistorted images, as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including the focal length, principal point, and distortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
x_{c1}/z_c & x_{c2}/z_c & \cdots & x_{cn}/z_c \\<br />
y_{c1}/z_c & y_{c2}/z_c & \cdots & y_{cn}/z_c \\<br />
z_c/z_c & z_c/z_c & \cdots & z_c/z_c \end{bmatrix}</math> <br />
where x_c, y_c, and z_c denote coordinates in the camera frame; z_c equals the focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
xn = Rc_1*xn;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scaling and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (\mbox{focal length}) \times (\mbox{number of millimeters per pixel}) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(1,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, T1, S2, T2:<br />
x = A\b<br />
<br />
Now that all the parameters have been determined, the percentage error can be calculated by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
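The scale-and-translation solve above can be sketched compactly in NumPy. The xn and X_1 arrays below are fabricated stand-ins for the Calibration Toolbox outputs (built from known S and T values purely to check that the least squares solve recovers them):<br />

```python
import numpy as np

np.random.seed(0)

# Hypothetical stand-ins for the toolbox outputs:
# xn  : [3xn] normalized, rotated grid corners (dimensionless, row of ones)
# X_1 : [2xn] real-world grid corners in mm
S1_true, T1_true, S2_true, T2_true = 5.4, 12.0, 5.4, -7.5
n = 20
xn = np.vstack([np.random.rand(2, n), np.ones(n)])
X_1 = np.vstack([S1_true * xn[0] + T1_true,
                 S2_true * xn[1] + T2_true])

# Build the [2n x 4] system Ax = b described above.
A = np.zeros((2 * n, 4))
b = np.zeros(2 * n)
A[0::2, 0] = xn[0]   # rows of the form [xn(1,i)  1  0  0]
A[0::2, 1] = 1.0
A[1::2, 2] = xn[1]   # rows of the form [0  0  xn(2,i)  1]
A[1::2, 3] = 1.0
b[0::2] = X_1[0]
b[1::2] = X_1[1]

# Least squares solve, the NumPy equivalent of Matlab's x = A\b.
params, *_ = np.linalg.lstsq(A, b, rcond=None)  # [S1, T1, S2, T2]
```

With noise-free synthetic data the recovered parameters match the true values to machine precision; with real grid-corner data they are a least squares fit.<br />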
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of a single flat plane, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible to both the "real" and "virtual" cameras. The purpose is to create a common origin; having a common origin for both cameras is critical for calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be computed. The raw data alone is not sufficient because each measurement represents the projection of the object's position onto its calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken: the resulting camera shutter time is very short, so it is important to provide plenty of light. The high frequency also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time is both impossible and unnecessary. Instead, the accompanying C program works by selecting one or several regions of interest to focus on. The program applies a threshold to these regions, then calculates an area centroid based on the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
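The threshold-and-centroid step can be sketched in a few lines of NumPy. This is a simplified stand-in for the C program, and the 8x8 test image is hypothetical:<br />

```python
import numpy as np

def roi_centroid(image, threshold):
    """Threshold an 8-bit grayscale ROI and return the centroid
    (row, col) of the white pixels, as the tracking program does."""
    mask = image >= threshold
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None  # nothing above threshold in this ROI
    return rows.mean(), cols.mean()

# A hypothetical 8x8 ROI with a bright 2x2 blob at rows 3-4, cols 5-6.
roi = np.zeros((8, 8), dtype=np.uint8)
roi[3:5, 5:7] = 255
r, c = roi_centroid(roi, threshold=128)
```

In the real program, the ROI would then be re-centered on this centroid before the next frame is processed.<br />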
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'.<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control the number of images taken in one second and are measured in microseconds. Frame time is the amount of time required to take one image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A FRAME:EXPOSURE ratio of 5:2 seems to work well. To run the camera at 1000 images/s, for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
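As a quick check of the numbers above, the 5:2 rule can be written as a small helper (a sketch only; the real values are hard-coded constants in constants.h):<br />

```python
def timing_for_fps(fps, exposure_ratio=0.4):
    """Return (FRAME_TIME, EXPOSURE) in microseconds for a target
    frame rate, keeping EXPOSURE at 2/5 of FRAME_TIME."""
    frame_time = 1_000_000 // fps
    exposure = int(frame_time * exposure_ratio)
    return frame_time, exposure

# 1000 images/s -> FRAME_TIME = 1000 us, EXPOSURE = 400 us
```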
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions are addressed. The current setup has the program cycling through the regions in order, so at 1000Hz with two regions, each region would be sampled at 500 images/s, with the camera alternating between the two. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest, and should be set such that the blob encloses the object's starting position. <br />
<br />
THRESHOLD: The threshold is the cutoff value (0-255) for the black/white boundary. It will have to be adjusted depending on ambient light and the exposure time, and should be set such that the object of interest or fiducial marker appears white while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should be set to either one or zero. When equal to one, it displays the region or regions of interest, which is convenient for setting the threshold and positioning the blobs. While taking data, however, it should be set to zero in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (\mbox{focal length}) \times (\mbox{number of millimeters per pixel}) \, </math><br />
<br />
Once this distance is calculated, the symmetric equations can be written for a line in 3D through two known points:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + z_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + z_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
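The six-equation solve can be sketched in NumPy. The helper below builds the [3x3] block from two points on a line, stacks the blocks for both cameras, and solves by least squares; the two lines here are synthetic stand-ins constructed to intersect at a known point:<br />

```python
import numpy as np

def line_block(p1, p2):
    """A (3x3) and b (3,) for the 3D line through p1 and p2,
    following the three rearranged relationships above."""
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    A = np.array([[y2 - y1, x1 - x2, 0.0],
                  [z2 - z1, 0.0,     x1 - x2],
                  [0.0,     z2 - z1, y1 - y2]])
    b = np.array([x1*(y2 - y1) + y1*(x1 - x2),
                  x1*(z2 - z1) + z1*(x1 - x2),
                  y1*(z2 - z1) + z1*(y1 - y2)])
    return A, b

def intersect_lines(p1a, p2a, p1b, p2b):
    """Least squares intersection of two 3D lines (one per camera)."""
    A1, b1 = line_block(p1a, p2a)
    A2, b2 = line_block(p1b, p2b)
    A = np.vstack([A1, A2])   # [6x3]
    b = np.hstack([b1, b2])   # length 6
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)  # Matlab: A\b
    return sol

# Two synthetic lines that both pass through (1, 2, 3):
p = intersect_lines((0, 0, 0), (2, 4, 6), (1, 2, 0), (1, 2, 6))
```

With real data the two rays rarely intersect exactly, and the least squares solution returns the point that best satisfies all six equations.<br />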
<br />
<nowiki>% The function "calc_coords" takes as inputs, the raw data for vision in<br />
% the XZ and XY planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx4] containing the time stamp<br />
% and the least squares approximation of the (x,y,z) coordinates based<br />
% on the vectors from the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = ((FC_xz(1,1) + FC_xz(2,1)) / 2) / conv_xz; % Taking an average of the two focal lengths,<br />
dist_origin_yz = ((FC_yz(1,1) + FC_yz(2,1)) / 2) / conv_yz; % dividing by the conversion factor (pixels/mm)<br />
% to get the distance to the calibration grid in mm. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined in world coordinates from the intrinsic and extrinsic calibration parameters with a single Matlab expression.<br />
(It is assumed that the object lies in a plane of constant, known Z.)<br />
<br />
XYZ_world = Rc_ext' * (zdist * [normalize([pix_x; pix_y], fc, cc, kc, alpha_c); 1] - Tc_ext)<br />
<br />
Where:<br />
Rc_ext and Tc_ext are the rotation matrix and translation vector from the extrinsic calibration<br />
zdist is the distance from the focal point to the object plane in world units (measure the distance from the camera to the object with a meter stick, etc.)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_c are parameters from the intrinsic camera calibration<br />
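The same back-projection can be written in NumPy. The extrinsics below (identity rotation, zero translation) are hypothetical values used purely to check the arithmetic, not real calibration output:<br />

```python
import numpy as np

def pixel_to_world(xn, zdist, Rc_ext, Tc_ext):
    """Back-project a normalized image point onto the plane at depth
    zdist.  xn = (x, y) is the output of the toolbox's normalize();
    returns world coordinates, mirroring the one-line Matlab formula."""
    p = zdist * np.array([xn[0], xn[1], 1.0]) - Tc_ext
    return Rc_ext.T @ p

# Hypothetical check: camera axes aligned with the world frame,
# object plane 500 mm in front of the camera.
Rc = np.eye(3)
Tc = np.zeros(3)
w = pixel_to_world((0.1, -0.2), 500.0, Rc, Tc)
```

With aligned axes the result is simply the normalized point scaled by the depth; a real Rc_ext and Tc_ext would rotate and shift it into the grid's frame.<br />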
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(:,1) = image number from which data was taken. <br />
% ydisp(:,2) = time at which data was taken.<br />
% ydisp(:,3) = x or y position of object.<br />
% ydisp(:,4) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the row index,<br />
% time, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 2);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 2);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v;<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end<br />
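The vertex extraction at the end of plot_polyfit (t = -p(1,2)/(2*p(1,1))) can be checked against a synthetic parabola. The NumPy sketch below omits the centering and scaling (mu) that the Matlab version uses:<br />

```python
import numpy as np

# Synthetic arc between two "minima": z = 5 - (t - 2)^2,
# peaking at t = 2 with height 5.
t = np.linspace(0.0, 4.0, 50)
z = 5.0 - (t - 2.0) ** 2

p = np.polyfit(t, z, 2)            # [a, b, c] for a*t^2 + b*t + c
t_peak = -p[1] / (2.0 * p[0])      # vertex time, as in plot_polyfit
z_peak = np.polyval(p, t_peak)     # peak height
```

On noise-free quadratic data the fit recovers the vertex essentially exactly; on real bounce data it smooths over measurement noise between consecutive minima.<br />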
<br />
<br />
==Error Minimization of Calibration Coefficients ==</div>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be use to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
<br />
x_c1/z_c & x_c2/z_c &... & x_cn/z_c \\<br />
y_c1/z_c & y_c2/z_c &... & y_cn/z_c \\<br />
z_c/z_c & z_c/z_c &... & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, z_c, denote coordinates in the camera frame. z_c = focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
[xn] = Rc_1*x_n;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (focallength)*(numberofmillimeters/pixel) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(2,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, S2, T1, T1:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, percentage error can be calculated, by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible by both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be obtained. The raw data is inaccurate because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very small, so it is important to provide lots of light. The high frequency performance also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time proves impossible as well as unnecessary. The accompanying C program works by selection one or several regions of interest to focus on. The program applies a threshold to these regions. It then calculates an area centroid based off of the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control number of images to be taken in one second, and are measured in microseconds. Frame time is the amount of time required to take on image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A ratio of 5:2, FRAME:EXPOSURE seems to work well. To run the camera at 1000images/s for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so for 1000Hz with two regions, each region would have a frequency for 500images/s, and the camera would alternate between the two regions. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The threshold is the value (0-255) which is the cutoff value for the black/white boundary. The threshold will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiduciary marker is in the white realm while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to one or zero. When equal to one, it displays the region or regions of interest. This is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (focallength)*(numberofmillimeters/pixel) \, </math><br />
<br />
Once this distance is calculated, the symmetric equations can be obtained for a line in 3D through 2 known points:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + z_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + z_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
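As an illustrative sketch of this least squares setup, here is a version in Python with NumPy (hypothetical helper names; the Matlab function below is what was actually used):<br />

```python
import numpy as np

def line_rows(p1, p2):
    """Build the three rows of A and b for the 3D line through p1 and p2."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    A = np.array([[y2 - y1, x1 - x2, 0.0],
                  [z2 - z1, 0.0,     x1 - x2],
                  [0.0,     z2 - z1, y1 - y2]])
    b = np.array([x1*(y2 - y1) + y1*(x1 - x2),
                  x1*(z2 - z1) + z1*(x1 - x2),
                  y1*(z2 - z1) + z1*(y1 - y2)])
    return A, b

def intersect_lines(p1a, p2a, p1b, p2b):
    """Least squares intersection of two 3D lines: stack the two [3x3]
    systems into a [6x3] A and a length-6 b, then solve A x = b."""
    A1, b1 = line_rows(p1a, p2a)
    A2, b2 = line_rows(p1b, p2b)
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Two lines that intersect at (1, 2, 3):
print(intersect_lines((0, 0, 0), (2, 4, 6), (1, 2, 0), (1, 2, 6)))
# prints the least squares intersection, approximately [1. 2. 3.]
```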
<br />
<nowiki>% The function "calc_coords" takes as inputs the raw data for vision in<br />
% the XZ and YZ planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx4] containing the time stamp<br />
% and the least squares approximation of the (x,y,z) coordinates based on<br />
% the vectors from the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = ((FC_xz(1,1) + FC_xz(2,1)) / 2) / conv_xz; % Take the average of the two focal lengths (pixels),<br />
dist_origin_yz = ((FC_yz(1,1) + FC_yz(2,1)) / 2) / conv_yz; % then divide by the conversion factor (pixels/mm) to get<br />
% the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2]; % average adjacent YZ samples (the two ROIs are imaged alternately)<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined (in world coordinates) from the intrinsic and extrinsic calibration parameters using a Matlab expression.<br />
(It is assumed that the object lies in a known plane, so its distance along the optical axis is a known constant.)<br />
<br />
[World_x; World_y; World_z] = transpose(Rc_ext) * (zdist * [normalize([pix_x; pix_y], fc, cc, kc, alpha_c); 1] - Tc_ext)<br />
<br />
Where:<br />
Rc_ext and Tc_ext are the rotation matrix and translation vector from the extrinsic calibration<br />
zdist is the distance from the focal point to the object along the optical axis, in world units (measure the distance from the camera to the object with a meter stick, etc.)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_c are parameters from the intrinsic camera calibration<br />
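A sketch of this back-projection in Python (the pose values are arbitrary stand-ins for the toolbox outputs, and the toolbox's normalize is assumed to have been applied already, so the function takes distortion-free normalized coordinates):<br />

```python
import numpy as np

def pixel_to_world(xn, yn, zdist, R, T):
    """Back-project a normalized image point (xn, yn) onto the plane a known
    distance zdist from the focal point, returning world coordinates.

    (xn, yn) are the distortion-free normalized coordinates the toolbox's
    normalize would return; R and T are the extrinsic rotation and translation.
    """
    cam_pt = zdist * np.array([xn, yn, 1.0])   # point in the camera frame
    return R.T @ (cam_pt - T)                  # rotate/translate into the world frame

# Round-trip check with an arbitrary pose: project a world point into the
# camera, then back-project it.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])   # 90-degree rotation about z
T = np.array([0.1, -0.2, 0.5])
world = np.array([0.3, 0.4, 0.0])
cam = R @ world + T                 # forward pinhole model
xn, yn = cam[0] / cam[2], cam[1] / cam[2]
print(pixel_to_world(xn, yn, cam[2], R, T))   # -> approximately [0.3, 0.4, 0.0]
```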
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(:,1) = image number from which data was taken. <br />
% ydisp(:,2) = time at which data was taken.<br />
% ydisp(:,3) = x or y position of object.<br />
% ydisp(:,4) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the row index,<br />
% time, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 2);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 2);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + p(1, num_poly - poly + 2)*x^(poly-1);<br />
% skip the constant term in the derivative: 0*x^(-1) evaluates to NaN at x = 0<br />
if (poly > 1)<br />
v = v + (poly-1)*p(1, num_poly - poly + 2)*x^(poly-2);<br />
end<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v; % derivative with respect to the scaled time variable; divide by mu(2,1) for height per unit time<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end</div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/High_Speed_Vision_System_and_Object_TrackingHigh Speed Vision System and Object Tracking2010-06-05T20:37:10Z<p>ClaraSmart: /* Post-Processing of Data */</p>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a matlab error is generated before the calibration is complete, calibration will be lost. To prevent this, calibrate a set of images, then add groups of images to this set. To do this use the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be use to compute the undistorted images as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
<br />
x_c1/z_c & x_c2/z_c &... & x_cn/z_c \\<br />
y_c1/z_c & y_c2/z_c &... & y_cn/z_c \\<br />
z_c/z_c & z_c/z_c &... & z_c/z_c \end{bmatrix}</math> <br />
Where x_c, y_c, z_c, denote coordinates in the camera frame. z_c = focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
[xn] = Rc_1*x_n;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (focallength)*(numberofmillimeters/pixel) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(2,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least squares approximation for the parameters S1, S2, T1, T1:<br />
x = A\b<br />
<br />
Now that all the parameters have been calculated, percentage error can be calculated, by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible by both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be obtained. The raw data is inaccurate because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very small, so it is important to provide lots of light. The high frequency performance also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time proves impossible as well as unnecessary. The accompanying C program works by selection one or several regions of interest to focus on. The program applies a threshold to these regions. It then calculates an area centroid based off of the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
I will include a brief description of some of the important constants, which can be found in the header file 'constants.h'<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control number of images to be taken in one second, and are measured in microseconds. Frame time is the amount of time required to take on image. Exposure time is the amount of time the camera shutter is open, and should always be less than the frame time. A ratio of 5:2, FRAME:EXPOSURE seems to work well. To run the camera at 1000images/s for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions of interest are addressed. The current setup has the program cycling through the regions in order, so for 1000Hz with two regions, each region would have a frequency for 500images/s, and the camera would alternate between the two regions. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The threshold is the value (0-255) which is the cutoff value for the black/white boundary. The threshold will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiduciary marker is in the white realm while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This quantity should always be set to one or zero. When equal to one, it displays the region or regions of interest. This is convenient for setting the threshold and positioning the blobs. While taking data, however, this should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D: Mathematical Theory==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>distance = (focallength)*(numberofmillimeters/pixel) \, </math><br />
<br />
Once this distance is calculated, the parallel equations can be obtained for a line in 3D with 2 known points:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + y_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + y_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
<br />
<nowiki>% The function "calc_coords" takes as inputs, the raw data for vision in<br />
% the XZ and XY planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx3] containing the least squares<br />
% approximation of the (x,y,z) coordinates based off of the vectors from<br />
% the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = [(FC_xz(1,1) + FC_xz(2,1)) / 2] / [conv_xz]; % Taking an average of the two focal lengths,<br />
dist_origin_yz = [(FC_yz(1,1) + FC_yz(2,1)) / 2] / [conv_yz]; % Multiplying by conversion factor to get <br />
% the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Calculating the Position of an Object on a Plane: Using Matlab==<br />
<br />
Similar to the above discussion, a point in 3D space can be determined (in world coordinates) based on the intrinsic and extrinsic parameters from calibration with a MatLab function.<br />
(It is assumed the Z value is a plane and is a constant known parameter)<br />
<br />
[World_x, World_y] = Transpose(Rc_ext) * (zdist *(normalize([pix_x, pix_y], fc, cc, kc, alpha_a), 1] - Tc_ext)<br />
<br />
Where:<br />
Rc_Ext and Tc_ext are parameters from the extrinsic calibration<br />
zdist is the distance from the focal point to the object in world units. (measure distance from camera to object using meter stick etc)<br />
pix_x and pix_y are the camera pixel coordinates corresponding to the desired world point<br />
fc, cc, kc and alpha_a are parameters from the intrinsic camera calibration<br />
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(1,:) = image number from which data was taken. <br />
% ydisp(2,:) = time at which data was taken.<br />
% ydisp(3,:) = x or y position of object.<br />
% ydisp(4,:) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the image number,<br />
% time, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 1);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 1);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
% the constant term contributes nothing to the derivative; skipping it also avoids x^(-1) when x = 0<br />
if (poly > 1)<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v; % note: v is dy/dx in the scaled variable x; divide by mu(2,1) for dy/dt<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end</div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/High_Speed_Vision_System_and_Object_TrackingHigh Speed Vision System and Object Tracking2010-06-05T20:22:18Z<p>ClaraSmart: /* For Tracking Objects in 2D */</p>
<hr />
<div>__TOC__<br />
=Calibrating the High Speed Camera=<br />
[[image:Distorted_calibration_grid.jpg|thumb|200px|'''Figure 1:''' Calibration grid with distortion.|right]]<br />
Before data can be collected from the HSV system, it is critical that the high speed camera be properly calibrated. In order to obtain accurate data, there is a series of intrinsic and extrinsic parameters that need to be taken into account. Intrinsic parameters include image distortion due to the camera itself, as shown in Figure 1. Extrinsic parameters account for any factors that are external to the camera. These include the orientation of the camera relative to the calibration grid as well as any scalar and translational factors. <br />
<br />
In order to calibrate the camera, download the [http://www.vision.caltech.edu/bouguetj/calib_doc/ Camera Calibration Toolbox for Matlab]. This resource includes detailed descriptions of how to use the various features of the toolbox as well as descriptions of the calibration parameters. <br />
==For Tracking Objects in 2D==<br />
To calibrate the camera to track objects in a plane, first create a calibration grid. The Calibration Toolbox includes a calibration template of black and white squares of side length = 3cm. Print it off and mount it on a sheet of aluminum, foam core board or PVC to create a stiff backing. This grid must be as flat and uniform as possible to obtain an accurate calibration. <br />
<br />
===Intrinsic Parameters===<br />
Following the model of the [http://www.vision.caltech.edu/bouguetj/calib_doc/ first calibration example] on the Calibration Toolbox website, use the high speed camera to capture 10-20 images, holding the calibration grid at various orientations relative to the camera. 1024x1024 images can be obtained using the "Moments" program. The source code can be checked out [http://code.google.com/p/lims-hsv-system/ here].<br />
<br />
[[image:Undistorted_calibration_grid.jpg|thumb|200px|'''Figure 2:''' Calibration grid undistorted.|right]]<br />
Calibration Tips:<br />
* One of the images must have the calibration grid in the tracking plane. This image is necessary for calculating the extrinsic parameters and will also be used for defining your origin. <br />
* The images must be saved in the same directory as the Calibration Toolbox.<br />
* Images must be saved under the same name, followed by the image number. (image1.jpg, image2.jpg...)<br />
* The first calibration example on the Calibration Toolbox website uses 20 images to calibrate the camera. I have been using 12-15 images because sometimes the program is incapable of optimizing the calibration parameters if there are too many constraints.<br />
* If a Matlab error is generated before the calibration is complete, the calibration data will be lost. To prevent this, calibrate a small set of images first, then add groups of images to this set using the Add/Suppress Image feature. (Described in the 'first example' link above)<br />
<br />
The Calibration Toolbox can also be used to compute the undistorted images, as shown in Figure 2. <br />
<br />
After entering all the calibration images, the Calibration Toolbox will calculate the intrinsic parameters, including focal length, principal point, and undistortion coefficients. There is a complete description of the calibration parameters [http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html here]. <br />
<br />
===Extrinsic Parameters===<br />
In addition, the Toolbox will calculate rotation matrices for each image (saved as Rc_1, Rc_2...). These matrices reflect the orientation of the camera relative to the calibration grid for each image. <br />
[[image:2D_imaging.jpg|thumb|500px|'''Figure 3:''' Projection of calibration grid onto the image plane.|center]]<br />
<br />
Figure 3 is an illustration of the [http://en.wikipedia.org/wiki/Pinhole_camera_model pinhole camera model] that is used for camera calibration. Since the coordinates of an object captured by the camera are reported in terms of pixels (camera frame), these data have to be converted to metric units (world frame).<br />
<br />
In order to solve for the extrinsic parameters, first use the 'normalize' function provided by the calibration toolbox:<br />
[xn] = normalize(x_1, fc, cc, kc, alpha_c);<br />
where x_1 is a matrix containing the coordinates of the extracted grid corners in pixels for image1. <br />
The normalize function will apply any intrinsic parameters and return the (x,y) point coordinates free of lens distortion. These points will be dimensionless as they will all be divided by the focal length of the camera. Next, change the [2xn] matrix of points in (x,y) to a [3xn] matrix of points in (x,y,z) by adding a row of all ones. <br />
xn should now look something like this:<br />
<br />
<math>\mathbf{xn} = \begin{bmatrix}<br />
x_{c1}/z_c & x_{c2}/z_c & \cdots & x_{cn}/z_c \\<br />
y_{c1}/z_c & y_{c2}/z_c & \cdots & y_{cn}/z_c \\<br />
z_c/z_c & z_c/z_c & \cdots & z_c/z_c \end{bmatrix}</math> <br />
where x_c, y_c, z_c denote coordinates in the camera frame, and z_c is the focal length of the camera as calculated by the Calibration Toolbox.<br />
<br />
Then, apply the rotation matrix, Rc_1:<br />
 [xn] = Rc_1*xn;<br />
We now have a matrix of dimensionless coordinates describing the location of the grid corners after accounting for distortion and camera orientation. What remains is to apply scalar and translational factors to convert these coordinates to the world frame. To do so, we have the following equations:<br />
: <math>X_1(1,1) = S1*xn(1,1) + T1 \, </math><br />
: <math>X_1(2,1) = S2*xn(2,1) + T2 \, </math><br />
Where:<br />
: <math>S = (\mbox{focal length})*(\mbox{number of millimeters per pixel}) \, </math><br />
X_1 is the matrix containing the real world coordinates of the grid corners and is provided by the Calibration Toolbox. <br />
Cast these equations into a matrix equation of the form:<br />
: <math>Ax = b \, </math><br />
Where:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
xn(1,1) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,1) & 1 \\<br />
xn(1,2) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,2) & 1 \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
. & . & . & . \\<br />
xn(1,n) & 1 & 0 & 0 \\<br />
0 & 0 & xn(2,n) & 1 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
S_1\\<br />
T_1\\<br />
S_2\\<br />
T_2\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
X_1(1,1)\\<br />
X_1(2,1)\\<br />
X_1(1,2)\\<br />
X_1(2,2)\\<br />
.\\<br />
.\\<br />
.\\<br />
X_1(1,n)\\<br />
X_1(2,n)\end{bmatrix}</math><br />
<br />
Finally, use the Matlab backslash command to compute the least-squares approximation for the parameters S1, T1, S2, T2:<br />
x = A\b<br />
<br />
Now that all the parameters have been determined, the percentage error can be calculated by applying the entire transformation process to the x_1 matrix and comparing the results to the X_1 matrix. If carefully executed, this calibration method can yield percent errors as low as 0.01%.<br />
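As a cross-check on this procedure, the final least-squares step can be prototyped in Python/NumPy. The sketch below is illustrative only; it assumes xn (the normalized, rotated grid corners) and X (the world coordinates from the Toolbox) are already available as 2xn arrays:<br />

```python
import numpy as np

def solve_scale_translation(xn, X):
    """Solve X[0] = S1*xn[0] + T1 and X[1] = S2*xn[1] + T2 in the
    least-squares sense, mirroring the A\\b step in the text.
    xn, X: 2-by-n arrays. Returns [S1, T1, S2, T2]."""
    n = xn.shape[1]
    A = np.zeros((2 * n, 4))
    b = np.zeros(2 * n)
    A[0::2, 0] = xn[0]   # x equations: [xn_x, 1, 0, 0]
    A[0::2, 1] = 1.0
    A[1::2, 2] = xn[1]   # y equations: [0, 0, xn_y, 1]
    A[1::2, 3] = 1.0
    b[0::2] = X[0]
    b[1::2] = X[1]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params
```

Applying the recovered [S1, T1, S2, T2] back to xn and comparing against X gives the percentage error described above.<br />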
<br />
==For Tracking Objects in 3D==<br />
In order to obtain the coordinates of an object in three dimensions, a mirror can be used in order to create a "virtual camera," as shown in Figure 4. <br />
[[image:3D_imaging.jpg|thumb|600px|'''Figure 4:''' Three dimensional imaging with mirror at 45 degrees.|center]]<br />
Calibrating the camera to work in three dimensions is very similar to the two dimensional calibration. The main difference is the calibration grid itself. Instead of using a single flat plane for the calibration grid, an additional grid has to be constructed, consisting of two planes at 90 degrees as shown in the figure. This grid must be visible by both the "real" and "virtual" cameras. The purpose of this is to create a common origin. Having a common origin for both cameras will become critical in calculating an object's position in three dimensions. <br />
<br />
Follow the steps for the 2D calibration and complete a separate calibration for each camera. The two sets of intrinsic parameters that are obtained should be identical in theory. In practice, they are likely to differ slightly. <br />
<br />
Once the real world coordinates of an object in the XZ and YZ planes are obtained, the 3D position of the object still has to be computed. The raw data is inaccurate because it represents the projection of the object's position onto the calibration plane. For the methodology and code required to execute these calculations, refer to the section entitled "Calculating the Position of an Object in 3D."<br />
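As a preview of those calculations, the core operation is a least-squares "intersection" of the two viewing rays, one from each camera through the measured projection. The following Python/NumPy sketch is illustrative only, and the point coordinates in it are hypothetical placeholders:<br />

```python
import numpy as np

def line_rows(p1, p2):
    """Three linear equations satisfied by points (x, y, z) on the line
    through p1 and p2 (the same matrix form used in the text)."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    A = np.array([[y2 - y1, x1 - x2, 0.0],
                  [z2 - z1, 0.0, x1 - x2],
                  [0.0, z2 - z1, y1 - y2]])
    b = np.array([x1 * (y2 - y1) + y1 * (x1 - x2),
                  x1 * (z2 - z1) + z1 * (x1 - x2),
                  y1 * (z2 - z1) + z1 * (y1 - y2)])
    return A, b

def intersect_rays(a1, a2, b1, b2):
    """Stack both rays' equations and solve the 6x3 system for the
    least-squares intersection point."""
    Aa, ba = line_rows(a1, a2)
    Ab, bb = line_rows(b1, b2)
    xyz, *_ = np.linalg.lstsq(np.vstack([Aa, Ab]),
                              np.concatenate([ba, bb]), rcond=None)
    return xyz
```

For each ray, the two known points are the camera's principal point and the measured projection in the calibration plane.<br />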
<br />
=Using the High Speed Vision System=<br />
[[image:VPOD_Setup.jpg|thumb|300px|'''Figure 5:''' VPOD with high speed vision 3D setup.|right]]<br />
Figure 5 shows the current setup for the high speed vision system. The high speed camera is mounted on a tripod. The three lights surrounding the setup are necessary due to the high frequency with which the images are taken. The resulting camera shutter time is very small, so it is important to provide lots of light. The high frequency performance also has to be taken into consideration when processing the images. Processing one thousand 1024x1024 pixel images in real time proves impossible as well as unnecessary. The accompanying C program works by selecting one or several regions of interest to focus on. The program applies a threshold to these regions. It then calculates an area centroid based on the number of white pixels and re-centers the region of interest by locating the boundaries between the white and black pixels. The complete code for this program can be found [http://code.google.com/p/lims-hsv-system/ here].<br />
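The per-frame tracking step described above can be sketched in Python/NumPy. This is a simplified illustration rather than the actual C implementation: it re-centers the region of interest on the centroid instead of on the white/black boundaries, and all names are made up:<br />

```python
import numpy as np

def track_roi(frame, roi, threshold=128):
    """One tracking step: threshold the region of interest, compute the
    centroid of the white pixels, and return it together with a ROI
    re-centered on that centroid. roi = (xmin, ymin, width, height)."""
    x0, y0, w, h = roi
    white = frame[y0:y0 + h, x0:x0 + w] >= threshold
    ys, xs = np.nonzero(white)
    if len(xs) == 0:
        return None, roi                      # object lost: keep old ROI
    cx, cy = x0 + xs.mean(), y0 + ys.mean()   # centroid in image coords
    new_roi = (int(cx - w / 2), int(cy - h / 2), w, h)
    return (cx, cy), new_roi
```

Because only the small ROI window is touched each frame, this kind of loop stays cheap even at very high frame rates.<br />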
<br />
What follows is a brief description of some of the important constants, which can be found in the header file 'constants.h'.<br />
<br />
EXPOSURE & FRAME_TIME: These parameters control the number of images taken in one second, and are measured in microseconds. FRAME_TIME is the amount of time required to take one image. EXPOSURE is the amount of time the camera shutter is open, and should always be less than the frame time. A FRAME_TIME:EXPOSURE ratio of 5:2 seems to work well. To run the camera at 1000 images/s, for example, EXPOSURE and FRAME_TIME should be set to 400 and 1000, respectively. <br />
<br />
SEQ & SEQ_LEN: For multiple regions of interest, the sequence describes the order in which the regions are addressed. The current setup has the program cycling through the regions in order, so at 1000Hz with two regions, each region would be sampled at 500 images/s, with the camera alternating between the two. For such a setup, SEQ would be set to {ROI_0, ROI_1} and SEQ_LEN set to 2.<br />
<br />
INITIAL_BLOB_XMIN, INITIAL_BLOB_YMIN, INITIAL_BLOB_WIDTH, INITIAL_BLOB_HEIGHT: These parameters describe the initial position and size of the blob surrounding your object of interest. These should be set such that the blob surrounds the initial position of the object. <br />
<br />
THRESHOLD: The threshold is the cutoff value (0-255) for the black/white boundary. It will have to be adjusted depending on ambient light and the exposure time. It should be set such that the object of interest or fiducial marker appears white while everything in the background becomes black.<br />
<br />
DISPLAY_TRACKING: This flag should be set to either one or zero. When equal to one, the program displays the region or regions of interest, which is convenient for setting the threshold and positioning the blobs. While taking data, however, it should be disabled in order to speed up the program. <br />
<br />
DTIME: This value is the number of seconds of desired data collection. In order to collect data, run the program, wait for it to initialize, and then press 'd'. 'q' can be pressed at any time to exit the program.<br />
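Putting the descriptions above together, a hypothetical 'constants.h' configured for 1000 images/s with two regions of interest might look like the following. The values are examples only, not the project's defaults, and the blob coordinates in particular depend entirely on your own setup.<br />

```c
/* Example settings only -- tune every value for your own setup. */
#define EXPOSURE    400           /* shutter open time, microseconds          */
#define FRAME_TIME  1000          /* time per image, us (1000 us -> 1000/s)   */

#define SEQ         {ROI_0, ROI_1}  /* alternate between two regions          */
#define SEQ_LEN     2               /* each region is then sampled at 500/s   */

#define INITIAL_BLOB_XMIN   100   /* blob must enclose the object's           */
#define INITIAL_BLOB_YMIN   100   /* starting position                        */
#define INITIAL_BLOB_WIDTH  64
#define INITIAL_BLOB_HEIGHT 64

#define THRESHOLD        128      /* 0-255 black/white cutoff                 */
#define DISPLAY_TRACKING 1        /* 1 while tuning; 0 when taking data       */
#define DTIME            5        /* seconds of data after pressing 'd'       */
```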
<br />
=Post-Processing of Data=<br />
==Calculating the Position of an Object in 3D==<br />
By comparing the data captured by the "real" camera and the "virtual" camera, it is possible to triangulate the three dimensional position of an object. The calibration procedures discussed earlier provide the real world (x,y) coordinates of any object, projected onto the calibration plane. By determining the fixed location of the two cameras, it is possible to calculate two position vectors and then compute a least squares approximation for their intersection. <br />
<br />
The Calibration Toolbox provides the principal point for each camera measured in pixels. The principal point is defined as the (x,y) position where the principal ray (or optical ray) intersects the image plane. This location is marked by a blue dot in Figure 4. The principal point can be converted to metric by applying the intrinsic and extrinsic parameters discussed in the section "Calibrating the High Speed Camera." <br />
<br />
From here, the distance from the cameras to the origin can be determined from the relationship:<br />
: <math>\mbox{distance} = (\mbox{focal length}) \times (\mbox{millimeters per pixel}) \, </math><br />
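As a quick worked example with made-up numbers (a focal length of 5400 pixels and a conversion factor of 5.40 pixels/mm, similar in form to the conversion factors used in the code below):<br />

```latex
% Hypothetical numbers for illustration only:
% focal length = 5400 pixels, conversion factor = 5.40 pixels/mm,
% i.e. 1/5.40 mm per pixel.
\mathrm{distance} = 5400~\mathrm{pixels} \times \frac{1~\mathrm{mm}}{5.40~\mathrm{pixels}} = 1000~\mathrm{mm}
```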
<br />
Once this distance is calculated, the parallel equations can be obtained for a line in 3D with 2 known points:<br />
: <math>(x-x_1)/(x_2-x_1)=(y-y_1)/(y_2-y_1)=(z-z_1)/(z_2-z_1) \, </math><br />
Which can be rearranged in order to obtain 3 relationships:<br />
: <math>x*(y_2-y_1)+y*(x_1-x_2) = x_1*(y_2-y_1) + y_1*(x_1-x_2) \, </math><br />
: <math>x*(z_2-z_1)+z*(x_1-x_2) = x_1*(z_2-z_1) + z_1*(x_1-x_2) \, </math><br />
: <math>y*(z_2-z_1)+z*(y_1-y_2) = y_1*(z_2-z_1) + z_1*(y_1-y_2) \, </math><br />
These equations can be cast into matrix form:<br />
<br />
<math>\mathbf{A} = \begin{bmatrix}<br />
y_2-y_1 & x_1-x_2 & 0 \\<br />
z_2-z_1 & 0 & x_1-x_2 \\<br />
0 & z_2-z_1 & y_1-y_2 \end{bmatrix}</math><br />
<math>\mathbf{x} = \begin{bmatrix}<br />
x\\<br />
y\\<br />
z\end{bmatrix}</math><br />
<math>\mathbf{b} = \begin{bmatrix}<br />
x_1*(y_2-y_1) + y_1*(x_1-x_2)\\<br />
x_1*(z_2-z_1) + z_1*(x_1-x_2)\\<br />
y_1*(z_2-z_1) + z_1*(y_1-y_2)\end{bmatrix}</math><br />
<br />
There should be one set of equations for each vector, for a total of six equations. A should be a [6x3] matrix, and b should be a vector of length 6. Computing A\b in Matlab will yield the least squares approximation for the intersection of the two vectors.<br />
<br />
<nowiki>% The function "calc_coords" takes as inputs the raw data for vision in<br />
% the XZ and YZ planes.<br />
<br />
% CC_xz and CC_yz are vectors of size [2x1] containing the principal points<br />
% for each vision plane. Coordinates for principal point should be computed beforehand and reported in<br />
% metric.<br />
<br />
% FC_xz and FC_yz are vectors of size [2x1] containing the two focal<br />
% lengths of the lens for each vision plane. These are obtained from the<br />
% Calibration Toolbox and should be left in pixels. <br />
<br />
% The function returns a matrix of size [nx4] containing the time stamp and<br />
% the least squares approximation of the (x,y,z) coordinates based on the<br />
% vectors from the two cameras.<br />
<br />
% NOTES:<br />
% - All computations are done in metric. (Based off of calibration grid)<br />
% - The XZ vision plane = "Real Camera"<br />
% - The YZ vision plane = "Virtual Camera"<br />
<br />
function XYZ = calc_coords(XZ, YZ, CC_xz, CC_yz, FC_xz, FC_yz);<br />
matSize_xz = size(XZ);<br />
num_datapoints_xz = matSize_xz(1,1);<br />
matSize_yz = size(YZ);<br />
num_datapoints_yz = matSize_yz(1,1);<br />
num_datapoints = min(num_datapoints_xz, num_datapoints_yz);<br />
XYZ = zeros(num_datapoints-1, 4); % Initialize destination matrix<br />
conv_xz = 5.40; % Conversion factor (number of pixels / mm) for vision in XZ<br />
conv_yz = 3.70; % Conversion factor (number of pixels / mm) for vision in YZ<br />
dist_origin_xz = [(FC_xz(1,1) + FC_xz(2,1)) / 2] / [conv_xz]; % Take the average of the two focal lengths,<br />
dist_origin_yz = [(FC_yz(1,1) + FC_yz(2,1)) / 2] / [conv_yz]; % then divide by the conversion factor (pixels/mm)<br />
% to get the distance to the calibration grid in metric. <br />
prinP_xz = [CC_xz(1,1); -dist_origin_xz; CC_xz(2,1)]; % [3X1] vector containing the coordinates of the principal points<br />
prinP_yz = [dist_origin_yz; -CC_yz(1,1); CC_yz(2,1)]; % for the "Real" and "Virtual" Cameras.<br />
for i = [1 : 1 : num_datapoints - 1]<br />
planeP_xz = [XZ(i,3); 0; XZ(i,4)]; % [3x1] vector containing the points in the plane of the calibration grid<br />
planeP_yz = [0; -(YZ(i+1,3)+YZ(i,3))/2; (YZ(i+1,4)+YZ(i,4))/2];<br />
x1_xz = prinP_xz(1,1); <br />
y1_xz = prinP_xz(2,1); <br />
z1_xz = prinP_xz(3,1); <br />
x1_yz = prinP_yz(1,1); <br />
y1_yz = prinP_yz(2,1); <br />
z1_yz = prinP_yz(3,1);<br />
x2_xz = planeP_xz(1,1);<br />
y2_xz = planeP_xz(2,1);<br />
z2_xz = planeP_xz(3,1);<br />
x2_yz = planeP_yz(1,1);<br />
y2_yz = planeP_yz(2,1);<br />
z2_yz = planeP_yz(3,1);<br />
% Set up matrices to solve matrix equation Ac = b, where c = [x, y, z] <br />
A_xz = [ (y2_xz - y1_xz), (x1_xz - x2_xz), 0; <br />
(z2_xz - z1_xz), 0, (x1_xz - x2_xz); <br />
0, (z2_xz - z1_xz), (y1_xz-y2_xz) ]; <br />
b_xz = [ x1_xz*(y2_xz - y1_xz) + y1_xz*(x1_xz - x2_xz); <br />
x1_xz*(z2_xz - z1_xz) + z1_xz*(x1_xz - x2_xz); <br />
y1_xz*(z2_xz - z1_xz) + z1_xz*(y1_xz - y2_xz) ];<br />
A_yz = [ (y2_yz - y1_yz), (x1_yz - x2_yz), 0; <br />
(z2_yz - z1_yz), 0, (x1_yz - x2_yz); <br />
0, (z2_yz - z1_yz), (y1_yz-y2_yz) ]; <br />
b_yz = [ x1_yz*(y2_yz - y1_yz) + y1_yz*(x1_yz - x2_yz); <br />
x1_yz*(z2_yz - z1_yz) + z1_yz*(x1_yz - x2_yz); <br />
y1_yz*(z2_yz - z1_yz) + z1_yz*(y1_yz - y2_yz) ];<br />
A = [A_xz; A_yz];<br />
b = [b_xz; b_yz];<br />
c = A\b; % Solve for 3D coordinates<br />
XYZ(i, 1) = XZ(i,2);<br />
XYZ(i, 2) = c(1,1);<br />
XYZ(i, 3) = c(2,1);<br />
XYZ(i, 4) = c(3,1);<br />
end<br />
</nowiki><br />
<br />
==Finding Maxes and Mins, Polyfits==<br />
<br />
The following Matlab function was used to calculate the minima of a set of data:<br />
<br />
% Function locates the minima of a function, based on the 3 previous and<br />
% 3 future points. <br />
% Function takes one input, an [nx4] matrix where:<br />
% ydisp(:,1) = image number from which data was taken. <br />
% ydisp(:,2) = time at which data was taken.<br />
% ydisp(:,3) = x or y position of object.<br />
% ydisp(:,4) = z position (height of object).<br />
% The function returns a matrix of size [nx3] containing the image number,<br />
% time, and value of each local minimum. <br />
function Mins = calc_minima(ydisp);<br />
Mins = zeros(10, 3);<br />
mat_size = size(ydisp);<br />
num_points = mat_size(1,1);<br />
local_min = ydisp(1, 4);<br />
time = ydisp(1, 1);<br />
location = 1;<br />
index = 1;<br />
min_found = 0;<br />
for i = [4 : 1 : num_points - 3]<br />
% execute if three previous points are greater in value<br />
if ( ydisp(i, 4) < ydisp(i - 1, 4) && ydisp(i, 4) < ydisp(i - 2, 4) && ydisp(i, 4) < ydisp(i - 3, 4))<br />
local_min = ydisp(i, 4);<br />
time = ydisp(i, 1);<br />
location = i;<br />
% if next three points are also greater in value, must be a min<br />
if ( ydisp(i, 4) < ydisp(i + 1, 4) && ydisp(i, 4) < ydisp(i + 2, 4) && ydisp(i, 4) < ydisp(i + 3, 4))<br />
min_found = 1;<br />
end<br />
end<br />
if (min_found)<br />
Mins(index, 1) = location;<br />
Mins(index, 2) = time;<br />
Mins(index, 3) = local_min;<br />
index = index + 1;<br />
min_found = 0;<br />
end<br />
end<br />
<br />
Once the minima were located, the following was used to compute a second order polyfit between two consecutive minima:<br />
<br />
% plot_polyfit calls on the function calc_minima to locate the local mins<br />
% of the data, and then calculates a parabolic fit in between each of two<br />
% consecutive minima. <br />
% It takes in raw_data in the form of an [nx4] matrix and returns <br />
% a matrix containing the fitted data, the matrix of calculated minima, and<br />
% the matrix of calculated maxima.<br />
function [fittedData, Maxima, Mins] = plot_polyfit(raw_data);<br />
num_poly = 2;<br />
timestep = 1000;<br />
fittedData = zeros(100,3);<br />
Mins = calc_minima(raw_data); % Calculate minima<br />
Maxima = zeros(10,2);<br />
mat_size = size(Mins);<br />
num_mins = mat_size(1,1);<br />
index = 1;<br />
max_index = 1;<br />
% For each min, up until the second to last min, execute the following...<br />
for i = [1 : 1 : num_mins - 1]<br />
% Calculate coefficients, scaling, and translational factors for polyfit<br />
[p,S,mu] = polyfit(raw_data(Mins(i, 1) : Mins(i+1, 1), 2), raw_data(Mins(i, 1) : Mins(i+1, 1), 4), num_poly);<br />
start = index;<br />
% Given the parameters for the polyfit, compute the values of the<br />
% function for (time at first min) -> (time at second min)<br />
for time = [raw_data(Mins(i, 1), 2) : timestep : raw_data(Mins(i + 1, 1), 2)]<br />
fittedData(index, 1) = time;<br />
x = [time - mu(1,1)] / mu(2,1);<br />
y = 0;<br />
v = 0;<br />
% For each term in the polynomial<br />
for poly = [1 : 1 : num_poly + 1]<br />
y = y + [ p(1, num_poly - poly + 2) ]*x^(poly-1);<br />
v = v + (poly-1)*[ p(1, num_poly - poly + 2) ]*x^(poly-2);<br />
end<br />
fittedData(index, 2) = y;<br />
fittedData(index, 3) = v;<br />
index = index + 1;<br />
end<br />
t = -p(1,2)/(2*p(1,1));<br />
x = [t * mu(2,1)] + mu(1,1);<br />
local_max = 0;<br />
for poly = [1 : 1 : num_poly + 1]<br />
local_max = local_max + [ p(1, num_poly - poly + 2) ]*t^(poly-1);<br />
end<br />
Maxima(max_index, 1) = x;<br />
Maxima(max_index, 2) = local_max;<br />
max_index = max_index + 1;<br />
end</div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-21T02:47:37Z<p>ClaraSmart: /* The Head Segment */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
<br />
In this project, we developed and built a robot to mimic the serpentine motion of a snake. The robot is made up of several body segments and a head. Each body segment contains an RC servo, which is controlled by a PIC microcontroller located in the head of the snake robot. This wiki page contains discussions of the motion of a snake, mechanical design, electronic design and PIC code. <br />
<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake]<br />
<br />
[http://www.youtube.com/watch?v=r_GOOFLnI6w Video of the robot snake 2]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments; for instance, they can move across extreme terrain such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement. The most common, exhibited by most snakes, is serpentine motion, in which each section of the body follows a similar path (Ma, 205). In order for a snake to locomote successfully using serpentine motion, the belly of the snake must have anisotropic coefficients of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction. (Saito et al, 66)<br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages for building a robot that mimics the motion of a snake. Several advantages for movement of a snake robot are listed below:<br />
<br />
*Move across uneven terrain, since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand, since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot.<br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages for building a snake like robot, there are several disadvantages which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
(Ma, 206)<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300px|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations (Saito et al, 72-73):<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake-like robot by changing the relative angles between the robot's segments using the following formula, where n is the number of segments:<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, \ldots, n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and xBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
===Parts List===<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 1/2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
*Ball caster: For the head<br />
<br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs and two passive wheels, as can be seen in the picture. <br />
<br />
[http://www.youtube.com/watch?v=wBcJkNHEaAs Video of 3 body segments moving]<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels were set on polished metal dowel pins, which allow them to rotate more freely than when placed on wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. Note that the center of the segment is not the center of the polycarbonate rectangle; the entire segment length runs from the standoff on one chassis to the center of the servo horn on the next. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Chassis Built Showing a Standoff and Batteries]]<br />
[[image:BuiltChasis2_MLS.jpg|thumb|right|Chassis with Batteries Removed]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with a PIC instead of a servo motor. The head segment is the same width but slightly longer than the body segments. A ball caster was added to the front of the segment to help support the extra length and keep the wheels on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mount under the chassis. The housing can be easily removed for debugging and for changing batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
====Parts List (Digikey Part Number)====<br />
<br />
*PIC: PIC18F4520<br />
*Oscillator: 40MHz Oscillator (X225-ND)<br />
*RC Servo (see mechanical design) preferably high-torque <br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G): 1 per segment<br />
*10 pos IDC cable header (A26267-ND): 1 per segment<br />
*3 pos AAA battery holder (BH3AAA-W-ND): 1 per segment<br />
*2 pos AAA battery holder (BH2AAA-W-ND): 1 per segment<br />
*475 Ohm resistors (transmission line termination)<br />
*Various switches to turn power electronics and the motors on/off<br />
*Standard Protoboard, to mount connector from ribbon cable, and switches for each motor<br />
*Xbee radio pair and PC <br />
<br />
====Electronics in Each Body Segment====<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]][[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]][[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has 3 wires: power, ground, and signal. The signal generated by the microcontroller is carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line as shown in the ribbon cable schematic. Each segment of the snake contains a small circuit board (ServoBoard Schematic) which has a connector for the ribbon cable, a switch to control the power, and a power indicator LED. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
====Electronics in The Head Segment====<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
<br />
The PIC18F4520 Prototyping Board designed by Professor Peshkin was used. Schematics of the board can be found here: [[Main_Page#PIC_18F4520_prototyping_board|18F4520_prototyping_board]]. The only change applied to the board was to replace the 20MHz clock with a 40MHz clock. This allowed the microcontroller to perform calculations faster, improving the resolution of the servo signal. The ribbon cable was connected to the ground and port D pins on the PIC.<br />
<br />
An [[XBee_radio_communication_between_PICs|XBee radio]] was used to communicate between the microcontroller and the PC. The wiring diagram shows a schematic for the XBee connection with the PIC. The [[XBee_radio_communication_between_PICs#XBee_Interface_Module|XBee Interface Board]] was used to provide a robust mechanical mount for the radio, as well as supply the 3.3V needed by the XBee. On the PC side, another XBee interface board was plugged into the FTDI USB-Serial converter. Other than this, no special electronics were needed for the XBee radio; it simply acted as a serial cable replacement. The snake was controlled by sending commands with a terminal program. <br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and receives data from a computer via serial communication.<br />
<br />
The main purpose of SnakeServos.c is to calculate the motion profile of the servos, and send a corresponding signal to each of the servos every 20 ms. The code for this is found in the <tt>ISR_20MS</tt> function in the code which is run every 20ms.<br />
<br />
A secondary function is to update the parameters that affect the motion of the snake. The code for this can be found in the <tt>ISR_USART_RX</tt> function, which is run every time a byte is received on the USART's receive buffer.<br />
<br />
====Servo Control Details====<br />
The main function of the PIC microcontroller was to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS (defined as 15536), which will cause Timer1 to overflow 20ms later and re-trigger the interrupt. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS (defined as 15536 + 6250), Timer1 is polled, and the value is compared sequentially to the values in the RCservo array plus 15536 (because Timer1 started counting at 15536, not 0). If the value of Timer1 is greater than (RCservo[x] + 15536), the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20ms interrupt.<br />
<br />
Although polling the timer to control the length of a pulse has a lower resolution than using an interrupt (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it allows one to add and remove servos more easily and not have to decrease the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution for the pulse was about 8us, which was good enough for this purpose.<br />
<br />
====Serial Communication Details====<br />
The PIC communicates serially through an XBee radio with a PC equipped with its own XBee radio. As shown in the code, the serial communication allows the user to change the speed, the amplitude and period of the sine wave, and the direction (forward, reverse, left and right) of the robotic snake. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is put into a switch-case statement, and the corresponding parameters are updated.<br />
<br />
====SnakeServos.c====<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
//use volatile keyword to avoid problems with optimizer<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS); //set timer to trigger an interrupt 20ms later<br />
SET_ALL_SERVOS(0b11111111); //begin pulse for servo signal<br />
time=get_timer1(); //poll timer<br />
while(time < TMR1_2point25MS){ //end this loop after 2.25 ms<br />
if (time > (RCservo[0] + TMR1_20MS)){ <br />
output_low(SERVO_0); //end the pulse when time is up<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1(); //poll timer<br />
}<br />
SET_ALL_SERVOS(0); //set all servos low in case some pins are still high<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed; //increment time, wrap around if necessary to prevent overflow<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
//load default values<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1); //enable Timer1 interrupt<br />
enable_interrupts(INT_RDA); //enable USART receive interrupt<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the snake's motion was very difficult to control because the robot became unstable very easily. As a result, the chassis was rebuilt with two wheels, as discussed in the mechanical design section, which provided stability and made the robot easier to control. <br />
<br />
Wireless control from a laptop made it easy to demonstrate the snake's capabilities and let others control its movement.<br />
<br />
The final robotic snake can be seen in action here. <br />
<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
[http://www.youtube.com/watch?v=r_GOOFLnI6w Video of the robot snake 2]<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. Several options that could be researched and developed to extend this robot are discussed below.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with a combination of encoders on a segment. Most likely, the middle segment should be used since it would be the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from the encoders could also be sent to a computer to observe different snakelike motions with different parameters.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, each servo requires 5 AAA batteries, so the robot as a whole needs many batteries. A different power supply could therefore be investigated.<br />
<br />
====High Torque Servos====<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
<br />
Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol 15, No 2 (2001): 205-6.<br />
<br />
Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66, 72-73.</div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T17:49:28Z<p>ClaraSmart: /* Results */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
<br />
In this project, we developed and built a robot to mimic the serpentine motion of a snake. The robot is made up of several body segments and a head. Each body segment contains an RC servo, which is controlled by a PIC microcontroller located in the head of the snake robot. This wiki page contains discussions of the motion of a snake, mechanical design, electronic design and PIC code. <br />
<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has discovered there are four types of snake motion, as shown in the image. These motions include serpentine movement, rectilinear movement, concertina movement and side-winding movement. The most common motion exhibited by most snakes is serpentine motion where each section follows a similar path (Ma, 205). In order for snakes to successfully locomote using serpentine motion, the belly of the snake must have anisotropic coefficients of friction for the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exhibits a force on the ground, it will move in the tangential direction without slipping in the normal direction. (Saito et al, 66)<br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by their reliance on motorized wheels. Building a robot that mimics the motion of a snake avoids this limitation in several ways, listed below:<br />
<br />
*Move across uneven terrain, since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand, since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot.<br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages to building a snake-like robot, there are several disadvantages, which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
(Ma, 206)<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300px|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations (Saito et al, 72-73):<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve: roughly, ''a'' sets the amplitude of the winding, ''b'' sets the number of phases, and ''c'' sets the direction.<br />
<br />
<br />
The serpentine curve can be reproduced by a snake-like robot by changing the relative angles between the robot's segments according to the following formula, where ''n'' is the number of segments:<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, \ldots, n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>, &alpha;, &beta;, and &gamma; were used in this snake-like robot, as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
===Parts List===<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 1/2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
*Ball caster: For the head<br />
<br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each of the body segments is identical and includes a chassis, a servo, a connector, standoffs and two passive wheels, as can be seen in the picture. <br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with an AAA battery pack on each side, and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are positioned so that the servo sits in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels have been set on polished metal dowel pins, which allow the wheels to rotate more freely than when placed on wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle. Instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Built Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See the Electronics section for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB board with a PIC instead of a servo motor. The head segment is the same width but slightly longer than the body segment. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
====Parts List (Digikey Part Number)====<br />
<br />
*PIC: PIC18F4520<br />
*Oscillator: 40MHz Oscillator (X225-ND)<br />
*RC Servo (see mechanical design) preferably high-torque <br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G): 1 per segment<br />
*10 pos IDC cable header (A26267-ND): 1 per segment<br />
*3 pos AAA battery holder (BH3AAA-W-ND): 1 per segment<br />
*2 pos AAA battery holder (BH2AAA-W-ND): 1 per segment<br />
*475 Ohm resistors (transmission line termination)<br />
*Various switches to turn power electronics and the motors on/off<br />
*Standard Protoboard, to mount connector from ribbon cable, and switches for each motor<br />
*Xbee radio pair and PC <br />
<br />
====Electronics in Each Body Segment====<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]][[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]][[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has 3 wires: power, ground, and signal. The signal generated by the microcontroller is carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line as shown in the ribbon cable schematic. At each motor, a small circuit board (ServoBoard Schematic) contains the connector for the ribbon cable, a switch to control the power and a power indicator LED. This circuit board has a common ground, connecting the signal ground with the battery ground, and receives power from the batteries. The actual circuit board can be seen in the image (A Complete Circuit Board on the Snake). Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
====Electronics in The Head Segment====<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
<br />
The PIC18F4520 Prototyping Board designed by Professor Peshkin was used. Schematics of the board can be found here: [[Main_Page#PIC_18F4520_prototyping_board|18F4520_prototyping_board]].<br />
<br />
The XBee radio is also mounted in the head segment; it receives the serial command bytes from the PC and passes them to the PIC's UART.<br />
<br />
<br clear=all><br />
<br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the pvc pipe. However, the motion of the snake was very difficult to control because the robotic snake became unstable very easily. As a result, the chassis was built to include two wheels, as discussed in the mechanical design section, in order to provide stability which made the robot easier to control. <br />
<br />
Wireless control from a laptop made it easy to demonstrate the snake's capabilities and let others control its movement.<br />
<br />
The final robotic snake can be seen in action in this [http://www.youtube.com/watch?v=Sb8WqaLX1Vo video of the robot snake].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks, and proved to be a very successful demo. Several extensions could be researched and developed for this robot; they are discussed below.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added so the robot knows its position, for example by mounting encoders on one segment. The middle segment is the best candidate since it is approximately at the center of gravity. Knowing the position of the center of gravity would potentially allow the robotic snake to be sent to different locations, or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The encoder data could also be sent to a computer to study how different parameters affect the snakelike motion.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot could sense an obstacle and either override the wireless command to avoid it, or stop completely and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, each servo requires five AAA batteries, so the robot as a whole needs a large number of cells. A different power supply could therefore be investigated.<br />
<br />
====High Torque Servos====<br />
The servos in the snake carry a large load but do not need to move quickly, so high-torque servos should be used instead of standard servos. This would also prolong battery life because the servos would operate in a more efficient range.<br />
<br />
== References ==<br />
<br />
Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol. 15, No. 2 (2001): 205-206.<br />
<br />
Saito, Fukaya, and Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66, 72-73.</div>
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400pix|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has discovered there are four types of snake motion, as shown in the image. These motions include; serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common motion exhibited by most snakes is serpentine motion where section follows a similar path. In order for snakes to successfully locomote using serpentine motion, the belly of the snake must have anisotropic coefficient of friction for the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exhibits a force on the ground, it will move in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.<ref/> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages for building a robot that mimics the motion of a snake. Several advantages for movement of snake robot are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages for building a snake like robot, there are several disadvantages which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
Cite = Page 206<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has proven that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake like robot by changing the relative angles between the snake robot segments using the following formula with the number of segments (n):<br />
<br />
<br />
<math>\phi_i = \alpha sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and xBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs, and two passive wheels, as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with an AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle: four to mount the servo and one for the standoff. The holes are drilled so that the servo sits in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels were set on polished metal dowel pins, which allow the wheels to rotate more freely than wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle; instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Built Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with a PIC instead of a servo motor. The head segment is the same width but slightly longer than the body segments. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
*Wheels come off the ground: add washers to the standoffs to force the chassis to be parallel to the ground.<br />
*Wheels slide, but do not roll: increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
*The segments slip when the servo rotates: tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
====Parts List (Digikey Part Number)====<br />
<br />
*PIC: PIC18F4520<br />
*Oscillator: 40MHz Oscillator (X225-ND)<br />
*RC Servo (see mechanical design) preferably high-torque <br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*Various switches to turn power electronics and the motors on/off<br />
*Standard Protoboard, to mount connector from ribbon cable, and switches for each motor<br />
*Xbee radio pair and PC <br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
====Electronics in Each Body Segment====<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has three wires: power, ground, and signal. The signals generated by the microcontroller are carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. At each motor, a small circuit board contains the connector for the ribbon cable, a switch to control the power, and a power-indicator LED. This circuit board has a common ground, connecting the signal ground with the battery ground, and receives power from the batteries. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
====Electronics in The Head Segment====<br />
<br />
*The PIC and its PIC board, with the board's power supply<br />
*The Xbee radio<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates via serial communication to a computer. These two functions are discussed below. <br />
<br />
====Servo Control====<br />
The main function of the PIC microcontroller was to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which will cause the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and its value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again, and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it allows one to add and remove servos more easily and not have to decrease the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution was about 8us, which was good enough for this purpose.<br />
<br />
====Serial Communication====<br />
The PIC communicates serially with a XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is put into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial communication allows the user to change the speed, the amplitude and phase of the sinewave, and the direction (forward, reverse, left and right) of the robotic snake.<br />
<br />
====SnakeServos.c====<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the snake's motion was very difficult to control because the robot became unstable easily. As a result, the chassis was rebuilt with two wheels, as discussed in the mechanical design section, to provide stability and make the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snake's capabilities, and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action in [http://www.youtube.com/watch?v=Sb8WqaLX1Vo this video].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. There are many options that could be researched and developed to extend this robot, as discussed below.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with encoders on one segment. Most likely, the middle segment should be used since it would be the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from encoders could be sent to a computer to observe different snake-like motions with different parameters.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, 5 AAA batteries are required for each servo, meaning that this robot requires many batteries. As a result, a different power supply could be investigated.<br />
<br />
====High Torque Servos====<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
<references/></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T16:00:46Z<p>ClaraSmart: /* Parts List (Digikey Part number)= */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme terrain such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The motion most commonly exhibited by snakes is serpentine motion, in which each section of the body follows a similar path. In order for snakes to successfully locomote using serpentine motion, the belly of the snake must have anisotropic coefficients of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages to building a robot that mimics the motion of a snake. Several advantages of snake-like movement are listed below:<br />
<br />
*Move across uneven terrain, since the robot does not depend on wheels<br />
*Possibly swim, if water-proofed<br />
*Move across soft ground such as sand, since the robot can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages for building a snake like robot, there are several disadvantages which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
Cite = Page 206<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has proven that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake like robot by changing the relative angles between the snake robot segments using the following formula with the number of segments (n):<br />
<br />
<br />
<math>\phi_i = \alpha sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and xBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chasis Without a Servo]]<br />
<br />
Each of the body segments are identical and includes a chassis, a servo, a connector, standoffs and two passive wheels as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chasis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels have been set on polished metal dowel pins which allow the wheels to rotate more freely than when placed on wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle. Instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the stand off as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Chasis Built Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA batteries packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries.<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB board with a PIC instead of a servo motor. The head segment is the same width but slightly longer than the body segment. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase frictionby either adding weight to the segment or changing the "tires" (the o-ring).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
====Parts List (Digikey Part number)====<br />
<br />
*PIC: PIC18F4520<br />
*Oscillator: 40MHz Oscillator (X225-ND)<br />
*RC Servo (see mechanical design) preferably high-torque <br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*Various switches to turn the power electronics and the motors on/off<br />
*Standard Protoboard, to mount connector from ribbon cable, and switches for each motor<br />
*Xbee radio pair and PC <br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
====Electronics in Each Body Segment====<br />
Each body segment of the snake contains a Futaba standard RC servo. Each servo has 3 wires: power, ground, and signal. The signal generated by the microcontroller is carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. At each motor, a small circuit board contains the connector for the ribbon cable, a switch to control the power, and a power-indicator LED. This circuit board has a common ground, connecting the signal ground with the battery ground, and receives power from the batteries. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
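The voltage sag described above can be estimated with a little arithmetic. The sketch below is a host-side illustration, not part of the snake's firmware, and the per-cell internal resistance is an assumed figure (alkaline AAA cells are commonly quoted in the few-tenths-of-an-ohm range), not a measurement from this robot:<br />

```c
#include <math.h>

/* Loaded pack voltage: per-cell EMF minus the drop across each cell's
   internal resistance, times the number of series cells.
   r_cell is an ASSUMED value, not a measurement from this robot. */
double loaded_voltage(int cells, double emf_per_cell,
                      double r_cell, double current_a) {
    return cells * (emf_per_cell - current_a * r_cell);
}
```

For example, five alkaline cells (1.5 V nominal) at an assumed 0.3 Ω each, supplying 500 mA, would sag from 7.5 V to about 6.75 V, consistent with the drop described above.<br />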
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
====Electronics in The Head Segment====<br />
<br />
The head segment contains:<br />
*The PIC and its board, along with batteries to power the board<br />
*The XBee radio<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC source files used in this robotic snake, SnakeServos.c and main.h, both shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates serially with a computer. These two functions are discussed below. <br />
<br />
====Servo Control====<br />
The main function of the PIC microcontroller was to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which will cause the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and its value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it allows one to add and remove servos more easily and not have to decrease the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution was about 8us, which was good enough for this purpose.<br />
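The timing constants follow directly from the clock settings in the code below. This is a host-side sanity check of that arithmetic, not PIC code:<br />

```c
/* Reproduces the Timer1 preload used in main.h below. With a 40 MHz
   oscillator, the PIC18 instruction clock is Fosc/4 = 10 MHz; the
   1:4 Timer1 prescaler then gives 2.5 MHz ticks (0.4 us each), so a
   20 ms frame is 50000 ticks and the 16-bit timer is preloaded with
   65536 - 50000 = 15536. */
unsigned long timer1_preload(double fosc_hz, double prescale,
                             double period_s) {
    double tick_hz = fosc_hz / 4.0 / prescale;
    return 65536UL - (unsigned long)(tick_hz * period_s + 0.5);
}
```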
<br />
====Serial Communication====<br />
The PIC communicates serially with a XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is put into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial communication allows the user to change the speed, the amplitude and phase of the sinewave, and the direction (forward, reverse, left and right) of the robotic snake.<br />
<br />
====SnakeServos.c====<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT (3*pi)<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the motion of the snake was very difficult to control because the robotic snake became unstable very easily. As a result, the chassis was rebuilt to include two wheels, as discussed in the mechanical design section, to provide stability and make the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snake's capabilities, and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action in [http://www.youtube.com/watch?v=Sb8WqaLX1Vo this video].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. There are many options, discussed below, that could be researched and developed to extend this robot.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with encoders on one segment; most likely, the middle segment should be used since it is the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from the encoders could also be sent to a computer to observe different snakelike motions with different parameters.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, 5 AAA batteries are required for each servo, so the robot as a whole requires many batteries (35 across the seven body segments). As a result, a different power supply could be investigated.<br />
<br />
====High Torque Servos====<br />
The servos in the snake carry a large load but do not need to move very quickly, so high-torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
<references/></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T16:00:23Z<p>ClaraSmart: /* Electronics */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400pix|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme terrain such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The motion exhibited by most snakes is serpentine motion, in which each section of the body follows a similar path. In order for a snake to locomote successfully using serpentine motion, its belly must have an anisotropic coefficient of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages to building a robot that mimics the motion of a snake. Several advantages of snake-robot movement are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages for building a snake like robot, there are several disadvantages which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
(Source: Ma (2001), p. 206.)<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be reproduced by a snake-like robot by changing the relative angles between the robot's segments using the following formula, where ''n'' is the number of segments:<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments: more segments allow it to move more smoothly, while fewer segments are easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each of the body segments is identical and includes a chassis, a servo, a connector, standoffs and two passive wheels, as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels have been set on polished metal dowel pins which allow the wheels to rotate more freely than when placed on wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle. Instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the stand off as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
====SnakeServos.c====<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_default;<br />
c = C_default;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_default;<br />
c = C_default;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS 15536 + 6250<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the pvc pipe. However, the motion of the snake was very difficult to control because the robotic snake became unstable very easily. As a result, the chassis was built to include two wheels, as discussed in the mechanical design section, in order to provide stability which made the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snakes capabilities, and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action here. (insert link for youtube video)<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks, and proved to be a very successful demo. There are many options that could be researched and developed to add to this robot and discussed below.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with a combination of encoders on a segment. Most likely, the middle segment should be used since it would be the approximate center of gravity. Knowledge of the position of the center of gravity would potentially the robotic snake to be sent to different locations or navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from encoders could be sent to a computer to observe different snakelike motions with different parameters.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either overide the wireless command and avoid it, or stop completely, and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, 5 AAA batteries are required for each servo, meaning that this robot requires many batteries. As a result, a different power supply could be investigated.<br />
<br />
====High Torque Servos====<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
<references/></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:50:46Z<p>ClaraSmart: /* SnakeServos.c */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400pix|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The motion most commonly exhibited by snakes is serpentine motion, in which each section of the body follows a similar path. In order for a snake to locomote successfully using serpentine motion, its belly must have anisotropic coefficients of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by their use of motorized wheels. A robot that mimics the motion of a snake avoids this limitation; several advantages of snake-robot movement are listed below:<br />
<br />
*Can move across uneven terrain, since it does not depend on wheels<br />
*Could swim if water-proofed<br />
*Can move across soft ground such as sand, since it distributes its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although a snake-like robot has many advantages, it also has several disadvantages, listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*A high number of degrees of freedom that is difficult to control<br />
<br />
These trade-offs are discussed by Ma.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 206</ref><br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow prescribed equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
A snake-like robot can approximate the serpentine curve by changing the relative angles between its segments, using the following formula where ''n'' is the number of segments:<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t + (i-1)\beta) + \gamma, \left( i = 1, \ldots, n-1 \right)</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>, &alpha;, &beta;, and &gamma; were used in this snake-like robot, as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments house the servo motors and the batteries required to power each motor. Because the snake is designed to be modular, there is no limit to the number of body segments: more segments allow it to move more smoothly, while fewer segments are easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs, and two passive wheels, as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels have been set on polished metal dowel pins which allow the wheels to rotate more freely than when placed on wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle. Instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the stand off as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Built Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See the Electronics section for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with the PIC instead of a servo motor. The head segment is the same width as, but slightly longer than, a body segment. A ball caster was added to the front of the segment to help support the extra length and keep the wheels on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC Servo, preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba Standard RC Servo. Each servo has three wires: power, ground, and signal. The signal generated by the microcontroller is carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below; main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates serially with a computer. These two functions are discussed below. <br />
<br />
====Servo Control====<br />
The main function of the PIC microcontroller was to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which causes the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and its value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again, and the process repeats until the full pulse window has elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
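The timing constants follow from the clock configuration: with a 40 MHz clock, the instruction rate is Fosc/4 = 10 MHz, and the T1_DIV_BY_4 prescaler gives a Timer1 tick of 0.4 &mu;s (this tick rate is inferred from the clock setup, not stated explicitly in the source). A small host-side sketch with a hypothetical function name reproduces the reload constant from main.h:<br />

```c
/* Timer1 tick: 40 MHz clock / 4 (instruction rate) / 4 (T1 prescaler) = 2.5 MHz,
 * i.e. 0.4 microseconds per tick. */
#define TICK_US 0.4

/* Preload value so the 16-bit Timer1 overflows after period_ms milliseconds. */
unsigned int timer1_reload(double period_ms)
{
    unsigned long ticks = (unsigned long)(period_ms * 1000.0 / TICK_US + 0.5);
    return (unsigned int)(65536UL - ticks);
}
```

Here timer1_reload(20.0) gives 15536, matching TMR1_20MS, and the extra 6250 ticks in TMR1_2point25MS correspond to about 2.5 ms of polling window at this tick rate (slightly more than the 2.25 ms the constant's name suggests), assuming the 0.4 &mu;s tick.<br />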
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it allows one to add and remove servos more easily and not have to decrease the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution was about 8us, which was good enough for this purpose.<br />
<br />
====Serial Communication====<br />
The PIC communicates serially with an XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is fed into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial commands let the user change the speed, the amplitude and phase of the sine wave, and the direction (forward, reverse, left and right) of the robotic snake.<br />
<br />
====SnakeServos.c====<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT (3*pi)<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the motion of the snake was very difficult to control because it became unstable easily. As a result, the chassis was rebuilt to include two wheels, as discussed in the mechanical design section, to provide stability, which made the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snake's capabilities and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action in the [http://www.youtube.com/watch?v=Sb8WqaLX1Vo video of the robot snake].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. There are many options that could be researched and developed to extend this robot; several are discussed below.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with encoders on a single segment, most likely the middle segment, since it is the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from the encoders could also be sent to a computer to observe different snake-like motions with different parameters.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, five AAA batteries are required for each servo, so the seven-servo robot requires 35 batteries in total. As a result, a different power supply could be investigated.<br />
<br />
====High Torque Servos====<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
<references/></div>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400pix|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol. 15, No. 2 (2001): 205.</ref> The most common of these is serpentine motion, in which each section of the body follows a similar path. In order for snakes to successfully locomote using serpentine motion, the belly of the snake must have an anisotropic coefficient of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it will move in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes." ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages to building a robot that mimics the motion of a snake. Several advantages of snake-robot movement are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages to building a snake-like robot, there are several disadvantages, which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficulty controlling a high number of degrees of freedom<br />
<br />
(Source: Ma 2001, p. 206.)<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow any prescribed equations exactly. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how these parameters influence the serpentine curve: ''a'' scales the amplitude of the curve, ''b'' sets the number of phases, and ''c'' sets the turning direction.<br />
<br />
<br />
The serpentine curve can be reproduced by a snake-like robot by changing the relative angles between the robot's segments according to the following formula, where ''n'' is the number of segments:<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, \ldots, n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>, &alpha;, &beta;, and &gamma; were used in this snake-like robot, as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments: more segments allow smoother motion, while fewer segments are easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs and two passive wheels, as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring; the o-ring was used to increase friction with the ground. The wheels were set on polished metal dowel pins, which allow them to rotate more freely than wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. Note that the center of the segment is not the center of the polycarbonate rectangle: the full segment length runs from the standoff on one chassis to the center of the servo horn on the next. In this project, the length of the connector was made to be about half the length of the segment, so the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Built Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with a PIC microcontroller instead of a servo motor. The head segment is the same width but slightly longer than the body segments. A ball caster was added to the front of the segment to help support the extra length and keep the wheels on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC Servo, preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has 3 wires: power, ground, and signal. The signal generated by the microcontroller is carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates via serial communication to a computer. These two functions are discussed below. <br />
<br />
====Servo Control====<br />
The main function of the PIC microcontroller is to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which will cause the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and its value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it allows one to add and remove servos more easily without having to decrease the frequency of the servo signal pulse train. With a 40 MHz clock and seven servos, the resolution was about 8 &mu;s, which was good enough for this purpose.<br />
<br />
====Serial Communication====<br />
The PIC communicates serially with an XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is fed into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial communication allows the user to change the speed, the amplitude and phase of the sine wave, and the direction (forward, reverse, left and right) of the robotic snake.<br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT (3*pi)<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the motion of the snake was very difficult to control because the robotic snake became unstable very easily. As a result, the chassis was rebuilt to include two wheels, as discussed in the mechanical design section, in order to provide stability, which made the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snake's capabilities and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action in the [http://www.youtube.com/watch?v=Sb8WqaLX1Vo video] linked in the overview.<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. There are many options that could be researched and developed to extend this robot, as discussed below.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with a combination of encoders on a segment. Most likely, the middle segment should be used since it would be the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from the encoders could also be sent to a computer to observe different snake-like motions with different parameters.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command to avoid it, or stop completely and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, 5 AAA batteries are required for each servo, meaning that this robot requires many batteries. As a result, a different power supply could be investigated.<br />
<br />
====High Torque Servos====<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
<references/></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:49:59Z<p>ClaraSmart: /* Servo Control */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400pix|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol. 15, No. 2 (2001): 205.</ref> The most common of these is serpentine motion, in which each section of the body follows a similar path. In order for snakes to successfully locomote using serpentine motion, the belly of the snake must have an anisotropic coefficient of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it will move in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes." ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages to building a robot that mimics the motion of a snake. Several advantages of snake-robot movement are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages to building a snake-like robot, there are several disadvantages, which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficulty controlling a high number of degrees of freedom<br />
<br />
(Source: Ma 2001, p. 206.)<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow any prescribed equations exactly. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how these parameters influence the serpentine curve: ''a'' scales the amplitude of the curve, ''b'' sets the number of phases, and ''c'' sets the turning direction.<br />
<br />
<br />
The serpentine curve can be reproduced by a snake-like robot by changing the relative angles between the robot's segments according to the following formula, where ''n'' is the number of segments:<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, \ldots, n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>, &alpha;, &beta;, and &gamma; were used in this snake-like robot, as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments: more segments allow smoother motion, while fewer segments are easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs and two passive wheels, as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels were set on polished metal dowel pins, which allow the wheels to rotate more freely than when placed on wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle. Instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Built Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with a PIC instead of a servo motor. The head segment is the same width as, but slightly longer than, the body segments. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC Servo, preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has three wires: power, ground, and signal. The signals generated by the microcontroller are carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates serially with a computer. These two functions are discussed below. <br />
<br />
===Servo Control===<br />
The main function of the PIC microcontroller was to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which will cause the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and the value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it allows one to add and remove servos more easily and not have to decrease the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution was about 8us, which was good enough for this purpose.<br />
<br />
===Serial Communication===<br />
The PIC communicates serially with a XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is put into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial communication allows the user to change the speed, the amplitude and phase of the sinewave, and the direction (forward, reverse, left and right) of the robotic snake. <br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the motion of the snake was very difficult to control because the robot became unstable easily. As a result, the chassis was rebuilt with two wheels, as discussed in the mechanical design section, to provide stability and make the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snake's capabilities and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action [http://www.youtube.com/watch?v=Sb8WqaLX1Vo here].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. Many options could be researched and developed to extend this robot; several are discussed below.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added to the robot to allow it to determine its position. This could be accomplished with encoders on one segment, most likely the middle segment, since it is the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from the encoders could also be sent to a computer to observe different snakelike motions with different parameters.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, each servo requires 5 AAA batteries, so the robot as a whole consumes many batteries. A different power supply could be investigated.<br />
<br />
====High Torque Servos====<br />
The servos in the snake carry a large load but do not need to move quickly, so high-torque servos should be used instead of standard servos. This would also prolong battery life because the servos would operate in a more efficient range.<br />
<br />
== References ==<br />
<references/></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:48:31Z<p>ClaraSmart: /* High Torque Servos */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme terrain such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The motion most commonly exhibited by snakes is serpentine motion, in which each section of the body follows a similar path. In order for a snake to locomote successfully using serpentine motion, its belly must have anisotropic coefficients of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. There are, however, many advantages to building a robot that mimics the motion of a snake. Several advantages of snake-robot movement are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages to building a snake-like robot, there are also several disadvantages, listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 206.</ref><br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300px|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow prescribed equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve: ''a'' controls the amplitude of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake like robot by changing the relative angles between the snake robot segments using the following formula with the number of segments (n):<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chasis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs, and two passive wheels, as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8 inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with an AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle: four to mount the servo and one for the standoff. The holes are positioned so that the servo sits in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels were set on polished metal dowel pins, which allow the wheels to rotate more freely than when placed on wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle. Instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Built Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with a PIC instead of a servo motor. The head segment is the same width as, but slightly longer than, the body segments. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC Servo, preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has three wires: power, ground, and signal. The signals generated by the microcontroller are carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates serially with a computer. These two functions are discussed below. <br />
<br />
===Servo Control===<br />
The main function of the PIC microcontroller was to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which will cause the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and the value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it allows one to add and remove servos more easily and not have to decrease the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution was about 8us, which was good enough for this purpose.<br />
<br />
===Serial Communication===<br />
The PIC communicates serially with a XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is put into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial communication allows the user to change the speed, the amplitude and phase of the sinewave, and the direction (forward, reverse, left and right) of the robotic snake. <br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add the value of the sine wave with phase offset (alpha*sin(t + X*beta)),<br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the snake's motion was very difficult to control because it became unstable very easily. As a result, the chassis was rebuilt to include two wheels, as discussed in the mechanical design section, to provide stability, which made the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snake's capabilities and let others easily control its movement.<br />
<br />
The final robotic snake can be seen in action in [http://www.youtube.com/watch?v=Sb8WqaLX1Vo this video].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. Several options that could be researched and developed to extend the robot are discussed below.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added to the robot to allow it to track its own position. This could be accomplished with encoders on one segment; the middle segment is the best candidate, since it is near the robot's approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The encoder data could also be sent to a computer to study snakelike motions under different parameters.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, each servo requires its own pack of five AAA batteries, so the robot as a whole requires many batteries. A different power supply could be investigated.<br />
<br />
====High Torque Servos====<br />
The servos in the snake carry a large load but do not need to move very quickly, so high-torque servos should be used instead of standard servos. This would also prolong battery life because the servos would operate in a more efficient range.<br />
<br />
== References ==<br />
</references></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:48:18Z<p>ClaraSmart: /* Power Supply */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400pix|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has discovered there are four types of snake motion, as shown in the image. These motions include; serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common motion exhibited by most snakes is serpentine motion where section follows a similar path. In order for snakes to successfully locomote using serpentine motion, the belly of the snake must have anisotropic coefficient of friction for the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exhibits a force on the ground, it will move in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.<ref/> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages for building a robot that mimics the motion of a snake. Several advantages for movement of snake robot are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages for building a snake like robot, there are several disadvantages which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
Cite = Page 206<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has proven that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake like robot by changing the relative angles between the snake robot segments using the following formula with the number of segments (n):<br />
<br />
<br />
<math>\phi_i = \alpha sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and xBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chasis Without a Servo]]<br />
<br />
Each of the body segments are identical and includes a chassis, a servo, a connector, standoffs and two passive wheels as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chasis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels have been set on polished metal dowel pins which allow the wheels to rotate more freely than when placed on wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle. Instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the stand off as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Chasis Built Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA batteries packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries.<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB board with a PIC instead of a servo motor. The head segment is the same width but slightly longer than the body segment. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase frictionby either adding weight to the segment or changing the "tires" (the o-ring).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC Servo,preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
The each segment of the snake contains a Futaba Standard RC Servo. Each servo has 3 wires: power, ground, and signal. The signal generated by the microcontroller is carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates via serial communication to a computer. These two functions are discussed below. <br />
<br />
===Servo Control===<br />
The main function of the PIC microcontroller was to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1'scounter is set to the value held by TMR1_20MS, which will cause the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and the value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servos signals have been sent, the values in the RCServo array are updated to prepare it for the next 20ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it allows one to add and remove servos more easily and not have to decrease the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution was about 8us, which was good enough for this purpose.<br />
<br />
===Serial Communication===<br />
The PIC communicates serially with a XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is put into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial communication allows the user to change the speed, the amplitude and phase of the sinewave, and the direction (forward, reverse, left and right) of the robotic snake. <br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_default;<br />
c = C_default;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_default;<br />
c = C_default;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS 15536 + 6250<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the pvc pipe. However, the motion of the snake was very difficult to control because the robotic snake became unstable very easily. As a result, the chassis was built to include two wheels, as discussed in the mechanical design section, in order to provide stability which made the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snakes capabilities, and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action here. (insert link for youtube video)<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks, and proved to be a very successful demo. There are many options that could be researched and developed to add to this robot and discussed below.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with a combination of encoders on a segment. Most likely, the middle segment should be used since it would be the approximate center of gravity. Knowledge of the position of the center of gravity would potentially the robotic snake to be sent to different locations or navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from encoders could be sent to a computer to observe different snakelike motions with different parameters.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, 5 AAA batteries are required for each servo, meaning that this robot requires many batteries. As a result, a different power supply could be investigated.<br />
<br />
==== High Torque Servos ====<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
<references/></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:47:50Z<p>ClaraSmart: /* Obstacle Avoidance */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common of these is serpentine motion, in which each section of the body follows a similar path. In order for a snake to locomote using serpentine motion, its belly must have anisotropic coefficients of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by their reliance on motorized wheels, and there are many advantages to building a robot that mimics the motion of a snake. Several advantages of snake-robot movement are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages to building a snake-like robot, there are several disadvantages, listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control a high number of degrees of freedom<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 206</ref><br />
<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300px|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow prescribed equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve: ''a'' changes the amplitude of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake-like robot by changing the relative angles between the robot's segments using the following formula, where ''n'' is the number of segments:<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>, &alpha;, &beta;, and &gamma; were used in this snake-like robot, as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs and two passive wheels, as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8 inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with an AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels were set on polished metal dowel pins, which allow the wheels to rotate more freely than wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle; instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Chassis Built Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See the [[Robot Snake#Electronics|Electronics section]] for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with a PIC instead of a servo motor. The head segment is the same width but slightly longer than a body segment. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
*Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
*Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
*The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digi-Key part numbers in parentheses)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC servo, preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has 3 wires: power, ground, and signal. The signals generated by the microcontroller are carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates via serial communication to a computer. These two functions are discussed below. <br />
<br />
===Servo Control===<br />
The main function of the PIC microcontroller is to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which will cause the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and the value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it allows one to add and remove servos more easily without having to decrease the frequency of the servo signal pulse train. With a 40 MHz clock and seven servos, the resolution was about 8 &mu;s, which was good enough for this purpose.<br />
<br />
===Serial Communication===<br />
The PIC communicates serially with an XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is put into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial communication allows the user to change the speed, the amplitude and phase of the sine wave, and the direction (forward, reverse, left and right) of the robotic snake. <br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the motion of the snake was very difficult to control because it became unstable very easily. As a result, the chassis was rebuilt to include two wheels, as discussed in the mechanical design section, which provided stability and made the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snake's capabilities and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action in [http://www.youtube.com/watch?v=Sb8WqaLX1Vo this video].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. There are many options that could be researched and developed to extend this robot; several are discussed below.<br />
<br />
==== Position Sensors ====<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with encoders on a segment; the middle segment is the best candidate since it lies near the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from the encoders could also be sent to a computer to observe different snakelike motions with different parameters.<br />
<br />
==== Obstacle Avoidance ====<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
==== Power Supply ====<br />
Currently, 5 AAA batteries are required for each servo, meaning that this robot requires many batteries. As a result, a different power supply could be investigated.<br />
<br />
==== High Torque Servos ====<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
<references/></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:47:31Z<p>ClaraSmart: /* Position Sensors */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400pix|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has discovered there are four types of snake motion, as shown in the image. These motions include; serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common motion exhibited by most snakes is serpentine motion where section follows a similar path. In order for snakes to successfully locomote using serpentine motion, the belly of the snake must have anisotropic coefficient of friction for the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exhibits a force on the ground, it will move in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.<ref/> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages for building a robot that mimics the motion of a snake. Several advantages for movement of snake robot are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages for building a snake like robot, there are several disadvantages which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
Cite = Page 206<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has proven that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake like robot by changing the relative angles between the snake robot segments using the following formula with the number of segments (n):<br />
<br />
<br />
<math>\phi_i = \alpha sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and xBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chasis Without a Servo]]<br />
<br />
Each of the body segments are identical and includes a chassis, a servo, a connector, standoffs and two passive wheels as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with an AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels were set on polished metal dowel pins, which allow the wheels to rotate more freely than when placed on wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle. Instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Built Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with a PIC instead of a servo motor. The head segment is the same width as, but slightly longer than, the body segments. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC Servo, preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has 3 wires: power, ground, and signal. The signal generated by the microcontroller is carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates via serial communication to a computer. These two functions are discussed below. <br />
<br />
===Servo Control===<br />
The main function of the PIC microcontroller is to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which causes the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and its value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it allows one to add and remove servos more easily without decreasing the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution was about 8 &mu;s, which was good enough for this purpose.<br />
<br />
===Serial Communication===<br />
The PIC communicates serially with an XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is fed into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial communication allows the user to change the speed, the amplitude and phase of the sine wave, and the direction (forward, reverse, left and right) of the robotic snake. <br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of each segment. However, the snake was very difficult to control because it became unstable very easily. As a result, the chassis was rebuilt with two wheels, as discussed in the mechanical design section, in order to provide stability, which made the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snake's capabilities, and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action in [http://www.youtube.com/watch?v=Sb8WqaLX1Vo this video].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. There are many options that could be researched and developed to extend this robot, as discussed below.<br />
<br />
=== Position Sensors ===<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with a combination of encoders on a segment. Most likely, the middle segment should be used since it is the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from the encoders could be sent to a computer to observe different snakelike motions with different parameters.<br />
<br />
=== Obstacle Avoidance ===<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
=== Power Supply ===<br />
Currently, 5 AAA batteries are required for each servo, meaning that this robot requires many batteries. As a result, a different power supply could be investigated.<br />
<br />
===High Torque Servos===<br />
The servos in the snake carry a large load but do not need to move very quickly, so high-torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
<references/></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:47:14Z<p>ClaraSmart: /* Position Sensors */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common motion is serpentine motion, in which each section of the body follows a similar path. In order for snakes to successfully locomote using serpentine motion, the belly of the snake must have anisotropic coefficients of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels, but there are many advantages to building a robot that mimics the motion of a snake. Several advantages of snake-robot movement are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand, since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages to building a snake-like robot, there are several disadvantages, listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control a high number of degrees of freedom<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 206</ref><br />
<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake like robot by changing the relative angles between the snake robot segments using the following formula with the number of segments (n):<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_default;<br />
c = C_default;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_default;<br />
c = C_default;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS 15536 + 6250<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the pvc pipe. However, the motion of the snake was very difficult to control because the robotic snake became unstable very easily. As a result, the chassis was built to include two wheels, as discussed in the mechanical design section, in order to provide stability which made the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snakes capabilities, and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action here. (insert link for youtube video)<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks, and proved to be a very successful demo. There are many options that could be researched and developed to add to this robot and discussed below.<br />
<br />
=== Position Sensors ===<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with a combination of encoders on a segment. Most likely, the middle segment should be used since it would be the approximate center of gravity. Knowledge of the position of the center of gravity would potentially the robotic snake to be sent to different locations or navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from encoders could be sent to a computer to observe different snakelike motions with different parameters.<br />
<br />
=== Obstacle Avoidance ===<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either overide the wireless command and avoid it, or stop completely, and wait for further commands.<br />
<br />
=== Power Supply ===<br />
Currently, 5 AAA batteries are required for each servo, meaning that this robot requires many batteries. As a result, a different power supply could be investigated.<br />
<br />
===High Torque Servos===<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
</references></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:46:03Z<p>ClaraSmart: /* Results */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme terrain such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common of these is serpentine motion, in which each section of the body follows a similar path. In order for a snake to locomote successfully using serpentine motion, its belly must have anisotropic coefficients of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels, and there are many advantages to building a robot that mimics the motion of a snake instead. Several advantages of snake-robot movement are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although a snake-like robot has many advantages, it also has several disadvantages:<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 206</ref><br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control a high number of degrees of freedom<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300px|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow exact equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve: ''a'' scales the amplitude of the curve, ''b'' sets the number of phases, and ''c'' biases the direction.<br />
<br />
<br />
The serpentine curve can be approximated by a snake-like robot by changing the relative angles between segments according to the following formula, where n is the number of segments:<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments: more segments allow it to move more smoothly, while fewer segments are easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs and two passive wheels, as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8 inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with an AAA battery pack on each side, and long enough for the servo and a standoff (the connection to the previous segment). The polycarbonate was cut into a rectangle to meet the specifications of our servo motor. Five holes were then drilled in the rectangle: four to mount the servo and one for the standoff. The holes are positioned so that the servo sits in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to the next segment's standoff. The connector is about 3 inches long, just long enough to prevent collisions between segments; a shorter beam allows for greater torque. This connection needs to be as tight as possible, and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring; the o-ring was used to increase friction with the ground. The wheels were set on polished metal dowel pins, which allow them to rotate more freely than wooden dowels. The dowel-pin axles were mounted in the center of the segment (hot glue works but is not very strong). Note that the center of the segment is not the center of the polycarbonate rectangle: the full segment length runs from the standoff on one chassis to the center of the servo horn on the next. In this project, the connector was made about half the length of the segment, so the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Built Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See the Electronics section for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB board with a PIC instead of a servo motor. The head segment is the same width but slightly longer than the body segment. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis, and the bottom of each piece was cut off so that it sits flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. It is attached with velcro straps that mount under the chassis and can be easily removed for debugging and battery changes.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC servo, preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has 3 wires: power, ground, and signal. The signals generated by the microcontroller are carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current draw (up to 500mA) caused the voltage across the cells to sag due to the high internal resistance of the alkaline cells. NiMH rechargeable cells handle high current draw much better, but they are also more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates via serial communication to a computer. These two functions are discussed below. <br />
<br />
===Servo Control===<br />
The main function of the PIC microcontroller is to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which causes the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and its value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again, and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it makes it easier to add and remove servos without decreasing the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution was about 8us, which was sufficient for this purpose.<br />
<br />
===Serial Communication===<br />
The PIC communicates serially with an XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is fed into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial interface lets the user change the speed, the amplitude and phase of the sine wave, and the direction (forward, reverse, left and right) of the robotic snake. <br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset (alpha*sin(t + X*beta)),<br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT (3*pi)<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the snake was very difficult to control because it easily became unstable. As a result, the chassis was rebuilt with two wheels, as discussed in the mechanical design section, which provided stability and made the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snakes capabilities, and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action here. (insert link for youtube video)<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks, and proved to be a very successful demo. There are many options that could be researched and developed to add to this robot and discussed below.<br />
<br />
=== Position Sensors ===<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with a combination of encoders on a segment. Most likely, the middle segment should be used since it would be the approximate center of gravity. Knowledge of the position of the center of gravity would potentially the robotic snake to be sent to different locations or navigate (using dead reckoning) through a pre-determined obstacle course of maze. The information from encoders could be sent to a computer to observe different snakelike motions with different parameters.<br />
<br />
=== Obstacle Avoidance ===<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either overide the wireless command and avoid it, or stop completely, and wait for further commands.<br />
<br />
=== Power Supply ===<br />
Currently, 5 AAA batteries are required for each servo, meaning that this robot requires many batteries. As a result, a different power supply could be investigated.<br />
<br />
===High Torque Servos===<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
</references></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:43:25Z<p>ClaraSmart: /* Protection and Visual Appeal */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to a wide range of environments; for instance, they can move across sand, mud, and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement, and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205.</ref> The most common is serpentine motion, in which each section of the body follows roughly the same path. In order for a snake to locomote using serpentine motion, its belly must have anisotropic coefficients of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by their reliance on motorized wheels. A robot that mimics the motion of a snake avoids some of these limits. Several advantages of snake-robot movement are listed below:<br />
<br />
*Can move across uneven terrain, since it does not depend on wheels<br />
*Could potentially swim if water-proofed<br />
*Can move across soft ground such as sand, since it distributes its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages for building a snake like robot, there are several disadvantages which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control the high number of degrees of freedom<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 206.</ref><br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300px|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow a prescribed set of equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve: ''a'' scales the amplitude of the curve, ''b'' sets the number of phases along the body, and ''c'' biases the direction of travel, producing a turn.<br />
<br />
<br />
The serpentine curve can be reproduced by a snake-like robot by changing the relative angles between the robot's segments using the following formula, where ''n'' is the number of segments:<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, \ldots, n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>, &alpha;, &beta;, and &gamma; are used in this snake-like robot, as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. Each body segment houses a servo motor and the batteries required to power it. As the snake is designed to be modular, there is no hard limit on the number of body segments: more segments allow it to move more smoothly, while fewer segments are easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs, and two passive wheels, as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach the servo horn of one body segment to the next segment's standoff. The connector is about 3 inches long, just long enough to prevent collisions between segments; keeping the beam short reduces the moment arm, so the servo needs less torque. This connection needs to be as tight as possible, and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made from a 3/4 inch pulley and an o-ring; the o-ring increases friction with the ground. The wheels were set on polished metal dowel pins, which let them rotate more freely than wooden dowels. The dowel-pin axles were mounted (hot glue works but is not very strong) at the center of the segment. Note that the center of the segment is not the center of the polycarbonate rectangle: the segment length runs from the standoff on one chassis to the center of the servo horn on the next. In this project, the connector was made about half the length of the segment, so the wheels sit at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Chasis Built Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small circuit board for each segment was mounted on the front of the motor to allow easy access to the switch (see the Electronics section for more information on the circuit board and batteries).<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it carries the PCB with the PIC instead of a servo motor. The head segment is the same width but slightly longer than a body segment. A ball caster was added to the front of the segment to support the extra length and help keep the wheels on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC servo, preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has three wires: power, ground, and signal. The signals generated by the microcontroller are carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates via serial communication to a computer. These two functions are discussed below. <br />
<br />
===Servo Control===<br />
The main job of the PIC microcontroller is to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt fires, Timer1's counter is reloaded with the value TMR1_20MS, which causes the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to servos are set high. While Timer1 is less than TMR1_2point25MS, Timer1 is polled and its value is compared sequentially to the values in the RCservo array; whenever the value of Timer1 exceeds a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until the window has elapsed (when Timer1 > TMR1_2point25MS). After all the servo pulses have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than a fully interrupt-driven approach (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it makes it easier to add and remove servos without decreasing the frequency of the servo pulse train. With a 40 MHz clock and seven servos, the resolution was about 8 &mu;s, which was good enough for this purpose.<br />
<br />
===Serial Communication===<br />
The PIC communicates serially with an XBee radio. When a byte arrives in the UART receive buffer, a high-priority interrupt is triggered. The received byte is fed into a switch-case statement and the corresponding parameters are updated. As shown in the code, the serial commands let the user change the speed, the amplitude and phase of the sine wave, and the direction (forward, reverse, left, and right) of the robotic snake. <br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of each PVC pipe segment. However, the motion of the snake was very difficult to control because it became unstable easily. As a result, the chassis was rebuilt with two wheels, as discussed in the mechanical design section, to provide stability and make the robot easier to control. <br />
<br />
Wireless control from a laptop allowed easy demonstration of the snake's capabilities, and allowed others to easily control its movement.<br />
<br />
The final robotic snake can be seen in action in [http://www.youtube.com/watch?v=Sb8WqaLX1Vo this video].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. Several options that could be researched and added to this robot are discussed below.<br />
<br />
=== Position Sensors ===<br />
Sensors could be added to give the robot knowledge of its own position. This could be accomplished with encoders on one segment; the middle segment is the best candidate, since it sits near the approximate center of gravity. Knowing the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The encoder data could also be sent to a computer to observe how different parameters change the snake-like motion.<br />
<br />
=== Obstacle Avoidance ===<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
=== Power Supply ===<br />
Currently, five AAA batteries are required for each servo, so the robot as a whole requires many batteries. A different power supply could be investigated.<br />
<br />
===High Torque Servos===<br />
The servos in the snake carry a large load but do not need to move quickly, so high-torque servos should be used instead of standard servos. This would also prolong battery life, because the servos would operate in a more efficient range.<br />
<br />
== References ==<br />
</references></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:42:25Z<p>ClaraSmart: /* Parts List */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400pix|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has discovered there are four types of snake motion, as shown in the image. These motions include; serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common motion exhibited by most snakes is serpentine motion where section follows a similar path. In order for snakes to successfully locomote using serpentine motion, the belly of the snake must have anisotropic coefficient of friction for the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exhibits a force on the ground, it will move in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.<ref/> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages for building a robot that mimics the motion of a snake. Several advantages for movement of snake robot are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages for building a snake like robot, there are several disadvantages which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
Cite = Page 206<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has proven that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake like robot by changing the relative angles between the snake robot segments using the following formula with the number of segments (n):<br />
<br />
<br />
<math>\phi_i = \alpha sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and xBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs and two passive wheels, as can be seen in the picture. <br />
<br />
====Parts List====<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8 inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with an AAA battery pack on each side, and long enough for the servo and a standoff (the connection to the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle: four to mount the servo and one for the standoff. The holes are positioned so that the servo sits in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels were set on polished metal dowel pins, which allow the wheels to rotate more freely than wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle; instead, the segment length runs from the standoff on one chassis to the center of the servo horn on the next. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
<br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Built Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with a PIC instead of a servo motor. The head segment is the same width but slightly longer than the body segments. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC Servo, preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has 3 wires: power, ground, and signal. The signals generated by the microcontroller are carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates via serial communication to a computer. These two functions are discussed below. <br />
<br />
===Servo Control===<br />
The main function of the PIC microcontroller is to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which will cause the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and its value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it makes it easier to add and remove servos without decreasing the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution was about 8 &mu;s, which was sufficient for this purpose.<br />
<br />
===Serial Communication===<br />
The PIC communicates serially with an XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is fed into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial communication allows the user to change the speed, the amplitude and phase of the sine wave, and the direction (forward, reverse, left and right) of the robotic snake. <br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset (alpha*sin(t + X*beta)),<br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
        gamma = 0;<br />
        break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
        b = B_DEFAULT;<br />
        c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
    default:<br />
        break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
    b = B_DEFAULT;<br />
    c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT 3*pi<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the motion of the snake was very difficult to control because the robot became unstable very easily. As a result, the chassis was rebuilt with two wheels, as discussed in the mechanical design section, to provide stability and make the robot easier to control. <br />
<br />
The final robotic snake can be seen in action in [http://www.youtube.com/watch?v=Sb8WqaLX1Vo this video].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. There are many options that could be researched and developed to extend this robot, as discussed below.<br />
<br />
=== Position Sensors ===<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with a combination of encoders on a segment. Most likely, the middle segment should be used since it is the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from the encoders could be sent to a computer to observe different snake-like motions with different parameters.<br />
<br />
=== Obstacle Avoidance ===<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command to avoid it, or stop completely and wait for further commands.<br />
<br />
=== Power Supply ===<br />
Currently, 5 AAA batteries are required for each servo, meaning that this robot requires many batteries. As a result, a different power supply could be investigated.<br />
<br />
===High Torque Servos===<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
<references/></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:41:27Z<p>ClaraSmart: /* Standoffs */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common is serpentine motion, in which each section of the body follows a similar path. In order for snakes to locomote successfully using serpentine motion, the belly of the snake must have anisotropic coefficients of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages to building a robot that mimics the motion of a snake. Several advantages of snake-robot movement are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages to building a snake-like robot, there are several disadvantages, listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control the high number of degrees of freedom<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 206</ref><br />
<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300px|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake like robot by changing the relative angles between the snake robot segments using the following formula with the number of segments (n):<br />
<br />
<br />
<math>\phi_i = \alpha sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and xBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chasis Without a Servo]]<br />
<br />
Each of the body segments are identical and includes a chassis, a servo, a connector, standoffs and two passive wheels as can be seen in the picture. <br />
<br />
===Parts List===<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chasis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
====Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring. The o-ring was used to increase friction with the ground. The wheels have been set on polished metal dowel pins which allow the wheels to rotate more freely than when placed on wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. The center of the segment is not the center of the polycarbonate rectangle. Instead, the entire segment length is the distance from the standoff on one chassis to the center of the servo horn on the other. In this project, the length of the connector was made to be about half the length of the segment. Therefore, the wheels were placed at the same location as the stand off as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Chasis Built Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA batteries packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries.<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB board with a PIC instead of a servo motor. The head segment is the same width but slightly longer than the body segment. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
==== Protection and Visual Appeal ====<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase frictionby either adding weight to the segment or changing the "tires" (the o-ring).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC Servo,preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
The each segment of the snake contains a Futaba Standard RC Servo. Each servo has 3 wires: power, ground, and signal. The signal generated by the microcontroller is carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates with a computer over a serial link. These two functions are discussed below. <br />
<br />
===Servo Control===<br />
The main function of the PIC microcontroller is to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which causes the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled, and its value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until the pulse window has elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it makes it easier to add and remove servos without having to decrease the frequency of the servo signal pulse train. With a 40 MHz clock and seven servos, the resolution was about 8 us, which was sufficient for this purpose.<br />
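The preload constants in main.h follow from the clock setup: a PIC18 instruction cycle is Fosc/4, and with the T1_DIV_BY_4 prescaler Timer1 ticks at 40 MHz / 4 / 4 = 2.5 MHz, or 0.4 us per tick. A minimal sketch of the derivation (the clock and prescaler values are taken from the code below):<br />

```c
#include <assert.h>

#define FOSC         40000000UL   /* crystal frequency, from #use delay */
#define T1_PRESCALE  4UL          /* T1_DIV_BY_4 */

/* Timer1 ticks elapsed in "us" microseconds (2500 ticks per millisecond here). */
unsigned long t1_ticks(unsigned long us)
{
    return us * (FOSC / 4UL / T1_PRESCALE / 1000UL) / 1000UL;
}

/* Preload value so the 16-bit Timer1 overflows after "us" microseconds. */
unsigned long t1_preload(unsigned long us)
{
    return 65536UL - t1_ticks(us);
}
```

t1_ticks(20000) is 50,000 ticks, so the 20 ms preload is 65536 - 50000 = 15536, matching TMR1_20MS in main.h.<br />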
<br />
===Serial Communication===<br />
The PIC communicates serially with an XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is fed into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial commands allow the user to change the speed, the amplitude and phase of the sine wave, and the direction (forward, reverse, left and right) of the robotic snake. <br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*abs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT (3*pi)<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the motion of the snake was very difficult to control because the robot became unstable easily. As a result, the chassis was rebuilt with two wheels, as discussed in the mechanical design section, which provided stability and made the robot easier to control. <br />
<br />
The final robotic snake can be seen in action in [http://www.youtube.com/watch?v=Sb8WqaLX1Vo this video].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. Several options that could be researched and developed to extend this robot are discussed below.<br />
<br />
=== Position Sensors ===<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with a combination of encoders on a segment. Most likely, the middle segment should be used since it is at the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from the encoders could also be sent to a computer to observe snakelike motions with different parameters.<br />
<br />
=== Obstacle Avoidance ===<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
=== Power Supply ===<br />
Currently, each servo requires its own pack of five AAA batteries, so the seven servos together use 35 cells. As a result, a different power supply could be investigated.<br />
<br />
===High Torque Servos===<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
</references></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:40:51Z<p>ClaraSmart: /* Standoffs */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme terrain such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common of these is serpentine motion, in which each section of the body follows a similar path. In order for snakes to locomote successfully using serpentine motion, the belly of the snake must have anisotropic coefficients of friction in the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages to building a robot that mimics the motion of a snake. Several advantages of snake-robot movement are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages to building a snake-like robot, there are several disadvantages, which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300px|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake like robot by changing the relative angles between the snake robot segments using the following formula with the number of segments (n):<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>,&alpha;,&beta;, and &gamma; were used in this snake like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each of the body segments is identical and includes a chassis, a servo, a connector, standoffs and two passive wheels, as can be seen in the picture. <br />
<br />
===Parts List===<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chasis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to attach to the next segment's standoff. The length of this connector is about 3 inches and is just long enough to prevent collision between segments. A shorter beam allows for greater torque. This connection needs to be as tight as possible and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
==== Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring; the o-ring increases friction with the ground. The wheels were set on polished metal dowel pins, which allow them to rotate more freely than wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. Note that the center of the segment is not the center of the polycarbonate rectangle: the full segment length runs from the standoff on one chassis to the center of the servo horn on the next. In this project, the connector was made about half the length of the segment, so the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Chasis Built Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See the Electronics section for more information on the circuit board and batteries.)<br />
<br clear=all><br />
<br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB board with a PIC instead of a servo motor. The head segment is the same width but slightly longer than the body segment. A ball caster was added to the front of the segment to help support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
=== Protection and Visual Appeal ===<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of each pipe section was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps that mount under the chassis, so the housing can be easily removed for debugging and battery changes.<br />
<br clear=all><br />
<br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
<br />
=== Obstacle Avoidance ===<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either overide the wireless command and avoid it, or stop completely, and wait for further commands.<br />
<br />
=== Power Supply ===<br />
Currently, 5 AAA batteries are required for each servo, meaning that this robot requires many batteries. As a result, a different power supply could be investigated.<br />
<br />
===High Torque Servos===<br />
The servos in the snake have a large load but do not need to move very quickly, so high torque servos should be used instead of standard servos. This would also prolong the battery life because the servos would be operating in a more efficient range.<br />
<br />
== References ==<br />
</references></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-20T15:40:11Z<p>ClaraSmart: /* Mechanical Design */</p>
<hr />
<div>[[image:Snake_Robot_1.jpg|right]]<br />
<br />
== Overview ==<br />
[http://www.youtube.com/watch?v=Sb8WqaLX1Vo Video of the robot snake.]<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400px|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to a wide range of environments, including extreme surfaces such as sand, mud and water. Research has identified four types of snake motion, as shown in the image: serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common is serpentine motion, in which each section of the body follows a similar path. In order for a snake to locomote successfully using serpentine motion, its belly must have anisotropic coefficients of friction in the normal and tangential directions; specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exerts a force on the ground, it moves in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.</ref> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages to building a robot that mimics the motion of a snake. Several advantages of snake-robot movement are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand, since it can distribute its weight across a wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages to building a snake-like robot, there are also several disadvantages, listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control the high number of degrees of freedom<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 206</ref><br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300px|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has shown that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve: ''a'' controls the amplitude of the winding, ''b'' the number of phases, and ''c'' the turning direction.<br />
<br />
<br />
The serpentine curve can be approximated by a snake-like robot by changing the relative angles between the robot's segments according to the following formula, where n is the number of segments:<br />
<br />
<br />
<math>\phi_i = \alpha \sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, \ldots, n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>, &alpha;, &beta;, and &gamma; were used in this snake-like robot, as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments house the servo motors and the batteries required to power each motor. As the snake is designed to be modular, there is no limit to the number of body segments. More segments will allow it to move more smoothly, while fewer segments will be easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each body segment is identical and includes a chassis, a servo, a connector, standoffs and two passive wheels, as can be seen in the picture. <br />
<br />
===Parts List===<br />
<br />
*Motors: Futaba S3004 standard ball bearing RC servo motor, Tower Hobbies LXZV41 $12.99<br />
*Wheels: McMasterCarr Acetal Pulley for Fibrous Rope for 1/4" Rope Diameter, 3/4" OD McMasterCarr 8901T11 $1.66<br />
*O-Rings (Tires): McMasterCarr Silicone O-Ring AS568A Dash Number 207, Packs of 50 McMasterCarr 9396K209 $7.60/50<br />
*PVC Pipe: McMasterCarr Sewer & Drain Thin-Wall PVC Pipe Non-Perforated, 3" X 4-1/2' L, Light Green McMasterCarr 2426K24 $7.06<br />
*1/8th inch plastic for chassis: (Shop Stock) or McMasterCarr Polycarbonate Sheet 1/8" Thick, 12" X 12", Clear, McMasterCarr, 8574K26 $6.32<br />
*Dowel Pins: 1" long, 1/4" diameter <br />
*Sheet Metal: For the connecting segments<br />
*Fasteners: Screws for the servos and chassis, washers for the standoffs<br />
*Standoffs: Used 1" and 2" to achieve a level snake<br />
*Velcro: To attach battery packs and housing to the chassis<br />
<br />
<br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8th inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with a AAA battery pack on each side and long enough for the servo and a standoff (the connection for the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle, four to mount the servo and one for the standoff. The holes are drilled to allow the servo to be located in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to the next segment's standoff. The connector is about 3 inches long, just long enough to prevent collisions between segments; a shorter beam reduces the torque demanded of the servo. This connection needs to be as tight as possible, and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
=== Standoffs ===<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring; the o-ring increases friction with the ground. The wheels ride on polished metal dowel pins, which let them rotate more freely than wooden dowels would. The dowel-pin axles were mounted (hot glue works but is not very strong) at the center of the segment. Note that the center of the segment is not the center of the polycarbonate rectangle: the segment length runs from the standoff on one chassis to the center of the servo horn on the next. In this project the connector was made about half the length of the segment, so the wheels were placed at the same location as the standoff, as can be seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|An Assembled Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small electronic circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See the Electronics section for more information on the circuit board and batteries.)<br />
<br clear=all><br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with the PIC microcontroller instead of a servo motor. The head segment is the same width as, but slightly longer than, a body segment. A ball caster was added to the front of the segment to help support the extra length and keep the wheels on the ground.<br />
<br clear=all><br />
<br />
=== Protection and Visual Appeal ===<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis, and the bottom of each piece was cut off so that it sits flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. It is attached with velcro straps that mount under the chassis, and can be easily removed for debugging and battery changes.<br />
<br clear=all><br />
<br />
<br />
=== Mechanical Debugging ===<br />
<br />
Wheels come off the ground: Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
<br />
Wheels slide, but do not roll: Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
<br />
The segments slip when the servo rotates: Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
Parts (Digikey Part number)<br />
<br />
*PIC18F4520<br />
*40MHz Oscillator (X225-ND)<br />
*RC Servo, preferably high-torque<br />
*10 wire IDC ribbon cable<br />
*10 pos IDC cable socket (ASC10G)<br />
*10 pos IDC cable header (A26267-ND)<br />
*3 pos AAA battery holder (BH3AAA-W-ND)<br />
*2 pos AAA battery holder (BH2AAA-W-ND)<br />
*475 Ohm resistors (transmission line termination)<br />
*various switches<br />
<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba standard RC servo. Each servo has 3 wires: power, ground, and signal. The signal generated by the microcontroller is carried by the IDC ribbon cable, and each servo board taps into a single signal line and the reference ground line. Because of the length of the ribbon cable, each signal line must be terminated with a 475 ohm resistor to prevent reflected "ghost" signals from interfering with the original signal.<br />
<br />
Each servo board also has its own power supply of 5 AAA cells, which gives each servo 7.5V. Although the servos are only rated for 6V, 7.5V was used because more torque was needed. The current drain (up to 500mA) caused the voltage across the cells to drop due to the high internal resistance of the alkaline cells. NiMH rechargeable cells are more capable of handling high current draw applications, but are also much more expensive and can take several hours to charge.<br />
<br />
The robot snake can run for about 1 hour on the alkaline cells, after which the servos no longer have enough torque to generate the serpentine motion.<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
<br />
There are two PIC files used in this robotic snake, SnakeServos.c and main.h, which are shown below. main.h sets up the default parameters used in SnakeServos.c. The microcontroller controls the RC servos and communicates via serial communication to a computer. These two functions are discussed below. <br />
<br />
===Servo Control===<br />
The main function of the PIC microcontroller is to control multiple servos (seven in this case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which causes the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled and its value compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than a value in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again, and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it makes it easier to add and remove servos without decreasing the frequency of the servo signal pulse train. With a 40MHz clock and seven servos, the resolution was about 8 &micro;s, which was sufficient for this purpose.<br />
<br />
===Serial Communication===<br />
The PIC communicates serially with an XBee radio. When a byte is received in the UART receive buffer, a high-priority interrupt is triggered. The received byte is fed into a switch-case statement, and the corresponding parameters are updated. As shown in the code, the serial commands let the user change the speed, the amplitude and phase of the sine wave, and the direction (forward, reverse, left and right) of the robotic snake. <br />
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9600, UART1) <br />
<br />
#include <main.h><br />
#include <math.h><br />
<br />
/*<br />
Put your desired high duration here; <br />
3200 is center <br />
1000 is 90 deg right <br />
5400 is 90 deg left<br />
*/<br />
int16 RCservo[7]; <br />
<br />
volatile float a = A_DEFAULT;<br />
volatile float b = B_DEFAULT;<br />
volatile float c = C_DEFAULT;<br />
<br />
volatile float alpha;<br />
volatile float gamma;<br />
volatile float beta;<br />
volatile float speed = 0;<br />
volatile float prev_speed = SPEED_DEFAULT;<br />
float t = 0; <br />
<br />
#INT_TIMER1 // designates that this is the routine to call when timer1 overflows<br />
//generates servo signals<br />
void ISR_20MS(){<br />
volatile unsigned int16 time;<br />
set_timer1(TMR1_20MS);<br />
SET_ALL_SERVOS(0b11111111);<br />
time=get_timer1();<br />
while(time < TMR1_2point25MS){<br />
if (time > (RCservo[0] + TMR1_20MS)){<br />
output_low(SERVO_0);<br />
}<br />
if (time > (RCservo[1] + TMR1_20MS)){<br />
output_low(SERVO_1);<br />
}<br />
if (time > (RCservo[2] + TMR1_20MS)){<br />
output_low(SERVO_2);<br />
}<br />
if (time > (RCservo[3] + TMR1_20MS)){<br />
output_low(SERVO_3);<br />
}<br />
if (time > (RCservo[4] + TMR1_20MS)){<br />
output_low(SERVO_4);<br />
}<br />
if (time > (RCservo[5] + TMR1_20MS)){<br />
output_low(SERVO_5);<br />
}<br />
if (time > (RCservo[6] + TMR1_20MS)){<br />
output_low(SERVO_6);<br />
}<br />
time=get_timer1();<br />
}<br />
SET_ALL_SERVOS(0);<br />
<br />
//3200 is center //1000 is 90 deg right // 5400 is 90 deg left<br />
/*<br />
add value of sine wave with phase offset ((alpha*sin(t + X*beta), <br />
3200 for servo center position,<br />
an adjustment value to compensate for offsets when mounting servo horn (SERVO_X_ADJ),<br />
and bias (gamma) for turning.<br />
*/<br />
<br />
RCservo[0]=(int16)(alpha*sin(t) + 3200 + SERVO_3_ADJ + gamma); <br />
RCservo[1]=(int16)(alpha*sin(t + 1*beta) + 3200 + SERVO_4_ADJ + gamma);<br />
RCservo[2]=(int16)(alpha*sin(t + 2*beta) + 3200 + gamma + SERVO_5_ADJ);<br />
RCservo[3]=(int16)(alpha*sin(t + 3*beta) + 3200 + gamma + SERVO_6_ADJ);<br />
RCservo[4]=(int16)(alpha*sin(t + 4*beta) + 3200 + gamma + SERVO_7_ADJ);<br />
RCservo[5]=(int16)(alpha*sin(t + 5*beta) + 3200 + gamma + SERVO_8_ADJ);<br />
RCservo[6]=(int16)(alpha*sin(t + 6*beta) + 3200 + gamma + SERVO_9_ADJ);<br />
<br />
t+= speed;<br />
if (t > 2*pi){<br />
t = 0;<br />
}<br />
else if (t < 0){<br />
t = 2*pi;<br />
}<br />
}<br />
<br />
<br />
#INT_RDA HIGH //High-Priority Interrupt triggered by USART Rx<br />
//parameter update<br />
void ISR_USART_RX(){<br />
char input;<br />
if (kbhit()){<br />
input = getc();<br />
switch(input){<br />
case 'w': //accelerate<br />
speed += 0.002;<br />
break;<br />
case 's': //decelerate<br />
speed -= 0.002;<br />
break;<br />
case 'x': //pause motion<br />
prev_speed = speed;<br />
speed = 0;<br />
break;<br />
case 'z': //resume motion<br />
speed = prev_speed;<br />
break;<br />
case 'c': //reverse speed<br />
speed = -speed;<br />
break;<br />
case 'a': //increase left turn rate<br />
c -= 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'd': //increase right turn rate<br />
c += 1000;<br />
gamma=-c/num_segments;<br />
break;<br />
case 'f': //set turn rate to 0<br />
c = C_DEFAULT;<br />
gamma = 0;<br />
break;<br />
case 't': //increase amplitude<br />
a += 10; <br />
alpha=a*fabs(sin(beta));<br />
case 'g': //decrease amplitude<br />
a -= 10;<br />
alpha=a*fabs(sin(beta));<br />
break;<br />
case 'y': //increase phases in body<br />
b += 0.1;<br />
beta=b/num_segments;<br />
alpha=a*fabs(sin(beta));<br />
break;<br />
case 'h': //decrease phases in body<br />
b -= 0.1;<br />
beta=b/num_segments;<br />
alpha=a*fabs(sin(beta));<br />
break;<br />
case '1': //preset 1<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*fabs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
case '2': //preset 2<br />
a = 1400;<br />
b = 2*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*fabs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break;<br />
case '3': //preset 3<br />
a = 1000;<br />
b = 5*pi;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*fabs(sin(beta));<br />
speed=SPEED_DEFAULT;<br />
break; <br />
default:<br />
break;<br />
}<br />
}<br />
return;<br />
}<br />
<br />
void main() {<br />
a = A_DEFAULT;<br />
b = B_DEFAULT;<br />
c = C_DEFAULT;<br />
gamma=-c/num_segments;<br />
beta=b/num_segments;<br />
alpha=a*fabs(sin(beta));<br />
speed=0;<br />
<br />
setup_timer_1(T1_INTERNAL | T1_DIV_BY_4 ); <br />
set_timer1(0);<br />
<br />
enable_interrupts(INT_TIMER1);<br />
enable_interrupts(INT_RDA);<br />
enable_interrupts(GLOBAL);<br />
<br />
while (TRUE) { <br />
<br />
}<br />
}<br />
</pre><br />
<br />
===main.h===<br />
<pre><br />
#ifndef __MAIN_H__<br />
#define __MAIN_H__<br />
<br />
#define SET_ALL_SERVOS(x) output_d(x)<br />
<br />
/*<br />
This chart matches the pin on the PIC to the wire on the ribbon cable<br />
PIN WIRE IN USE<br />
--- ---- -------<br />
RD0 2<br />
RD1 3 *<br />
RD2 4 *<br />
RD3 5 *<br />
RD4 6 *<br />
RD5 7 *<br />
RD6 8 *<br />
RD7 9 *<br />
<br />
*/<br />
#define SERVO_3_ADJ 0<br />
#define SERVO_4_ADJ 300<br />
#define SERVO_5_ADJ (-150)<br />
#define SERVO_6_ADJ 75<br />
#define SERVO_7_ADJ (-200)<br />
#define SERVO_8_ADJ 100<br />
#define SERVO_9_ADJ (-150)<br />
<br />
#define SERVO_0 PIN_D1<br />
#define SERVO_1 PIN_D2<br />
#define SERVO_2 PIN_D3<br />
#define SERVO_3 PIN_D4<br />
#define SERVO_4 PIN_D5<br />
#define SERVO_5 PIN_D6<br />
#define SERVO_6 PIN_D7<br />
<br />
#define A_DEFAULT 1300<br />
#define B_DEFAULT (3*pi)<br />
#define C_DEFAULT 0<br />
<br />
#define SPEED_DEFAULT 0.05<br />
#define OMEGA_DEFAULT 1<br />
#define num_segments 8<br />
<br />
#define TMR1_20MS 15536<br />
#define TMR1_2point25MS (15536 + 6250)<br />
#endif<br />
</pre><br />
<br />
== Results ==<br />
<br />
Overall, the robotic snake was successful. <br />
<br />
Initially, the mechanical design included a single wheel mounted in the center of the PVC pipe. However, the snake's motion was very difficult to control because it became unstable very easily. As a result, the chassis was rebuilt with two wheels, as discussed in the Mechanical Design section, which stabilized the robot and made it easier to control. <br />
<br />
The final robotic snake can be seen in action in [http://www.youtube.com/watch?v=Sb8WqaLX1Vo this video].<br />
<br />
== Next Steps ==<br />
<br />
The robotic snake was developed within five weeks and proved to be a very successful demo. There are many options that could be researched and developed to extend this robot; several are discussed below.<br />
<br />
=== Position Sensors ===<br />
Sensors could be added to the robot to allow it to know its position. This could be accomplished with encoders on a segment, most likely the middle segment, since it is the approximate center of gravity. Knowledge of the position of the center of gravity would potentially allow the robotic snake to be sent to different locations or to navigate (using dead reckoning) through a pre-determined obstacle course or maze. The information from the encoders could also be sent to a computer to observe different snake-like motions with different parameters.<br />
<br />
=== Obstacle Avoidance ===<br />
With optical sensors on the head of the snake, the robot would be able to sense an obstacle and either override the wireless command and avoid it, or stop completely and wait for further commands.<br />
<br />
=== Power Supply ===<br />
Currently, 5 AAA batteries are required for each servo, so the robot needs a large number of batteries. A different power supply could be investigated.<br />
<br />
===High Torque Servos===<br />
The servos in the snake carry a large load but do not need to move quickly, so high-torque servos should be used instead of standard servos. This would also prolong battery life because the servos would operate in a more efficient range.<br />
<br />
== References ==<br />
</references></div>ClaraSmarthttp://hades.mech.northwestern.edu/index.php/Robot_SnakeRobot Snake2008-03-19T15:17:36Z<p>ClaraSmart: /* Team Members */</p>
<hr />
<div>[[image:Snake_Robot.jpg|center]]<br />
<br />
<center> Snake Robot </center><br />
<br />
== Overview ==<br />
<br />
==Team Members==<br />
<br />
[[image:Team23_Members.jpg|thumb|400pix|right|Hwang-Long-Smart]]<br />
<br />
*Michael Hwang - Electrical Engineer - Class 2008<br />
*Andrew Long - Mechanical Engineer - Class 2009<br />
*Clara Smart - Electrical Engineer - Class 2009<br />
<br />
<br />
<br clear=all><br />
<br />
== Snake Motion ==<br />
[[image:Snake_Motion.jpg|thumb|right|Source: [http://science.howstuffworks.com/snake3.htm How Stuff Works]]]<br />
Snakes are able to adapt their movement to various environments. For instance, snakes can move across extreme environments such as sand, mud and water. Research has discovered there are four types of snake motion, as shown in the image. These motions include; serpentine movement, rectilinear movement, concertina movement and side-winding movement.<ref>Ma, Shugen. "Analysis of creeping locomotion of a snake-like robot." ''Advanced Robotics'' Vol.15, No.2 (2001): 205</ref> The most common motion exhibited by most snakes is serpentine motion where section follows a similar path. In order for snakes to successfully locomote using serpentine motion, the belly of the snake must have anisotropic coefficient of friction for the normal and tangential directions. Specifically, the normal friction must be greater than the tangential friction. As a result, when the snake exhibits a force on the ground, it will move in the tangential direction without slipping in the normal direction.<ref>Saito, Fukaya, Iwasaki. "Serpentine Locomotion with Robotic Snakes". ''IEEE Control Systems Magazine'' (Feb 2002): 66.<ref/> <br />
<br />
<br clear=all><br />
<br />
== Advantages / Disadvantages of Robotic Snake Motion ==<br />
<br />
===Advantages===<br />
<br />
Many robots are limited by the use of motorized wheels. However, there are many advantages for building a robot that mimics the motion of a snake. Several advantages for movement of snake robot are listed below:<br />
<br />
*Move across uneven terrain since it is not dependent on wheels<br />
*Possibly swim if water-proofed<br />
*Move across soft ground such as sand since it can distribute its weight across wider area<br />
<br />
Also, from a systems standpoint, the snake robot can be very modular with many redundant segments. As a result, it is very easy to replace broken segments as well as shorten or lengthen the robot. <br />
<br />
===Disadvantages===<br />
<br />
Although there are many advantages for building a snake like robot, there are several disadvantages which are listed below:<br />
<br />
*Low power and movement efficiency<br />
*High cost of actuators (servos or motors)<br />
*Difficult to control high number of degrees of freedom<br />
<br />
Cite = Page 206<br />
<br />
== Robot Snake Motion ==<br />
[[image:Serpentine_curves.jpg|thumb|300pix|right|Serpentine Curves]]<br />
<br />
Real snake motion does not follow specified equations. However, research has proven that the serpentine motion of a snake can be modeled with the following equations:<br />
<br />
<math>x(s)= \int_{0}^{s} \cos (\zeta_\sigma) d\sigma</math><br />
<br />
<math>y(s)= \int_{0}^{s} \sin (\zeta_\sigma) d\sigma </math><br />
<br />
<math>\zeta_\sigma= a \cos (b\sigma) +c\sigma </math><br />
<br />
where the parameters ''a'', ''b'', and ''c'' determine the shape of the serpentine motion. The graph shows how the parameters influence the serpentine curve. Basically, ''a'' changes the appearance of the curve, ''b'' changes the number of phases, and ''c'' changes the direction.<br />
<br />
<br />
The serpentine curve can be modeled with a snake like robot by changing the relative angles between the snake robot segments using the following formula with the number of segments (n):<br />
<br />
<br />
<math>\phi_i = \alpha sin(\omega t +(i-1)\beta ) + \gamma, \left ( i=1, ..., n-1 \right )</math><br />
<br />
where &alpha; , &beta; , and &gamma; are parameters used to characterize the serpentine curve and are dependent on ''a'', ''b'', and ''c'' as shown below:<br />
<br />
<br />
<math>\alpha = a \left | \sin \left ( \frac{\beta}{2} \right ) \right | </math><br />
<br />
<math>\beta = \frac{b}{n} </math><br />
<br />
<math>\gamma = -\frac{c}{n} </math><br />
<br />
<br />
The equations above for &phi;<sub>i</sub>, &alpha;, &beta;, and &gamma; were used in this snake-like robot as shown in the [[Robot Snake#PIC Code|code section]].<br />
<br />
<br clear=all><br />
<br />
== Mechanical Design ==<br />
[[image:FullSnake.jpg|thumb|right|The Snake]]<br />
The robotic snake consists of a head segment and several body segments. The head segment houses the onboard microcontroller and XBee radio. The body segments each house a servo motor and the batteries required to power it. Because the snake is designed to be modular, there is no limit to the number of body segments: more segments allow smoother motion, while fewer segments are easier to control. For this design, seven body segments were used due to material limitations.<br />
<br />
Mechanically, the snake is designed to move in a serpentine motion, imitating the motion of a real snake. As discussed above, real snakes move with anisotropic coefficients of friction. It is difficult to locate materials with this property, but passive wheels satisfy the friction requirements. The friction will be lower in the direction of rolling, thus providing the required difference in friction. The only problem with this approach is that the wheel may slide in the normal direction if the weight applied to the wheel is not sufficient. <br />
<br />
=== The Body Segments ===<br />
[[image:Chasis.jpg|thumb|right|A Single Chassis Without a Servo]]<br />
<br />
Each of the body segments is identical and includes a chassis, a servo, a connector, standoffs, and two passive wheels, as seen in the picture. <br />
<br />
==== Chassis ====<br />
<br />
The base of the chassis is made from a thin (approx. 1/8 inch) piece of polycarbonate. The chassis must be wide enough to hold a servo motor with an AAA battery pack on each side, and long enough for the servo and a standoff (the connection to the previous segment). The polycarbonate was cut into a rectangle to meet the specifications for our servo motor. Five holes were then drilled in the rectangle: four to mount the servo and one for the standoff. The holes are positioned so that the servo sits in the center of the chassis. <br />
<br />
==== Connector ====<br />
<br />
A connector was machined to attach to the servo horn of one body segment and to the next segment's standoff. The connector is about 3 inches long, just long enough to prevent collisions between segments; keeping the beam short reduces the torque the servo must supply. This connection needs to be as tight as possible, and the beam must be mounted perpendicular to the chassis. <br />
<br />
[[image:ChasisUnderside.jpg|thumb|right|The Underside of a Chassis]]<br />
<br />
==== Standoffs ====<br />
<br />
Standoffs were used to attach the servo to the chassis and to attach the connector to the chassis. Two standoffs (1 in and 1/2 in) and several washers were used to make the connector parallel to the ground.<br />
<br />
==== Passive Wheels ====<br />
[[image:Wheel.jpg|thumb|left|A Passive Wheel on the Dowel Pin]]<br />
Passive wheels were mounted to the bottom of the chassis. Each wheel was made of a 3/4 inch pulley and an o-ring; the o-ring increases friction with the ground. The wheels were set on polished metal dowel pins, which allow them to rotate more freely than wooden dowels. The dowel pin axles were mounted (hot glue works but is not very strong) in the center of the segment. Note that the center of the segment is not the center of the polycarbonate rectangle: the full segment length runs from the standoff on one chassis to the center of the servo horn on the next. In this project, the connector was made about half the length of the segment, so the wheels were placed at the same location as the standoff, as seen in the image. The wheels are held in place with zip ties. <br />
<br clear=all><br />
==== Fully Assembled Body Segment ====<br />
[[image:BuiltChasis.jpg|thumb|right|A Built Chassis Showing a Standoff and Batteries]]<br />
A fully assembled chassis has a mounted servo and is connected to a segment on either side. AAA battery packs were attached to the sides of the motor with velcro to allow easy removal. The small circuit board for each segment was mounted on the front of the motor to allow easy access to the switch. (See Electronic Design for more information on the circuit board and batteries.)<br />
<br clear=all><br />
=== The Head Segment ===<br />
[[image:BallCaster.jpg|thumb|left|The Ball Caster Under the Front Segment]]<br />
<br />
The head segment is similar to the body segments except that it contains a PCB with a PIC microcontroller instead of a servo motor. The head segment is the same width but slightly longer than a body segment. A ball caster was added to the front of the segment to support the extra length and help the wheels stay on the ground.<br />
<br clear=all><br />
<br />
=== Protection and Visual Appeal ===<br />
[[image:Housing.jpg|thumb|right|One Segment of the Housing]]<br />
<br />
As a final step, housing for each segment was created from 3" PVC pipe. The pipe was cut into segments the same length as the chassis. The bottom of the pipe was cut off, allowing it to sit flat on the chassis. The housing provides a protective covering for the servo, batteries and electronics. The pipe was attached with velcro straps which mounted under the chassis. This housing can be easily removed to debug and to change batteries.<br />
<br clear=all><br />
<br />
<br />
=== Mechanical Debugging ===<br />
<br />
*'''Wheels come off the ground:''' Add washers to the standoffs to force the chassis to be parallel to the ground.<br />
*'''Wheels slide, but do not roll:''' Increase friction by either adding weight to the segment or changing the "tires" (the o-rings).<br />
*'''The segments slip when the servo rotates:''' Tighten the screws for the connector standoffs, both above the beam and below the chassis.<br />
<br clear=all><br />
<br />
== Electronics ==<br />
<br />
[[image:PICBoard_schematic_HLS.jpg|thumb|right|The Mainboard Schematic]]<br />
[[image:RibbonCable_schematic_HLS.jpg|thumb|right|Ribbon Cable Schematic]]<br />
[[image:ServoBoard_schematic_HLS.jpg|thumb|right|ServoBoard Schematic]]<br />
<br />
[[image:PICBoard_HLS.jpg|thumb|right|The Electronics in the Head]]<br />
[[image:ServoBoard_Hooked_up_HLS.jpg|thumb|right|A Complete Circuit Board on the Snake]]<br />
<br />
Each segment of the snake contains a Futaba Standard RC Servo. Each servo has three wires: power, ground, and signal.<br />
<br />
<br />
Needs description of servo - include discussion of how we aligned them (through code)<br />
<br />
Needs discussion on batteries. Rechargeable vs Alkaline, 4 vs 5 etc.<br />
<br />
Needs circuit diagram for servo boards and/or picture<br />
<br />
Needs description of ribbon cable? (ie be careful with it)<br />
<br />
Needs reason for terminator<br />
<br />
<br clear=all><br />
<br />
== PIC Code ==<br />
===Servo Control===<br />
The main function of the PIC microcontroller is to control multiple servos (seven in our case). Timer1 is set to overflow every 20 milliseconds and trigger an interrupt. When the interrupt is triggered, Timer1's counter is set to the value held by TMR1_20MS, which causes the interrupt to trigger again 20 ms later. At the beginning of the interrupt, all the pins connected to the servos are set high. While Timer1 is less than the value held by TMR1_2point25MS, Timer1 is polled and its value is compared sequentially to the values in the RCservo array. If the value of Timer1 is greater than an entry in RCservo, the corresponding pin is set low. After all the values have been compared, Timer1 is polled again and the process repeats until 2.25 ms have elapsed (when Timer1 > TMR1_2point25MS). After all the servo signals have been sent, the values in the RCservo array are updated to prepare for the next 20 ms interrupt.<br />
<br />
Although this method of timing the pulse trains has a lower resolution than using interrupts (see [http://peshkin.mech.northwestern.edu/pic/code/RCservoSoft/RCservoSoft.c RCservoSoft.c]), it makes it easier to add and remove servos without decreasing the frequency of the servo signal pulse train. With a 40 MHz clock and seven servos, the resolution was about 8 &mu;s, which was sufficient for this purpose.<br />
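The polling scheme can be illustrated with a host-side simulation: every pin goes high at the start of the frame, the timer is polled in a loop, and each pin is dropped once the timer count passes that servo's entry in the array. The code below mimics that logic with plain integers; it is a sketch for clarity, not the PIC interrupt routine itself, and the names are illustrative.<br />

```c
#define NSERVO 7

/* Simulate one pulse frame of the polling scheme: all pins start high,
   a software "timer" counts up, and pin i goes low on the first count
   greater than rcservo[i].  widths[i] records that count. */
void simulate_frame(const unsigned int rcservo[NSERVO],
                    unsigned int end_count,
                    unsigned int widths[NSERVO])
{
    int high[NSERVO];
    int i;
    unsigned int timer;
    for (i = 0; i < NSERVO; i++) {
        high[i] = 1;                 /* set every servo pin high */
        widths[i] = end_count;       /* default: pin held to frame end */
    }
    for (timer = 0; timer <= end_count; timer++) {
        for (i = 0; i < NSERVO; i++) {
            if (high[i] && timer > rcservo[i]) {
                high[i] = 0;         /* this servo's pulse ends here */
                widths[i] = timer;
            }
        }
    }
}
```

The resolution of the real version is limited by how long one pass over the seven comparisons takes, which is the source of the roughly 8 &mu;s figure quoted above.<br />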
<br />
===Serial Communication===<br />
The PIC communicates serially with an XBee radio. When a byte arrives in the UART receive buffer, a high-priority interrupt is triggered. The received byte is fed into a switch-case statement, and the corresponding parameters are updated.<br />
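A minimal sketch of such a dispatch is shown below. The command bytes and the parameters they adjust are invented for illustration; the robot's actual command set lives in its PIC code, not here.<br />

```c
/* Illustrative gait parameters updated by received bytes. */
typedef struct { double speed; double turn; } gait_params;

/* Dispatch one received byte, switch-case style.  The byte values
   'f', 'l', 'r', and 'x' are invented commands for this sketch. */
void handle_byte(unsigned char b, gait_params *g)
{
    switch (b) {
    case 'f': g->speed += 0.1; break;   /* speed up */
    case 'l': g->turn  += 0.1; break;   /* steer left */
    case 'r': g->turn  -= 0.1; break;   /* steer right */
    case 'x': g->speed  = 0.0; break;   /* stop */
    default:  break;                    /* ignore unknown bytes */
    }
}
```

Keeping the handler to a single table-like switch means the interrupt returns quickly, which matters because the servo pulse timing is also interrupt-driven.<br />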
<br />
===SnakeServos.c===<br />
<pre><br />
/*<br />
Andy Long, Clara Smart, and Michael Hwang's snake robot code.<br />
*/<br />
<br />
<br />
#include <18f4520.h><br />
#device high_ints=TRUE // this allows raised priority interrupts, which we need<br />
#fuses HS,NOLVP,NOWDT,NOPROTECT<br />
#use delay(clock=40000000)<br />
#use rs232(baud=9