https://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18943 RGB Swarm Robot Project Documentation 2010-07-22T18:54:12Z<p>Ryan Cook: /* Projector */</p>
<hr />
<div>== Overview ==<br />
The swarm robot project has gone through several phases, each focusing on different aspects of swarm robotics and of the project's implementation. This entry focuses on the most recent phase of the project, covering topics such as the '''Xbee Interface Extension Boards''', the '''LED light boards''', '''changes made to the Machine Vision Localization System''', and the overall conversion to LED boards and a controlled light environment. These entries provide insight into the setup and enough specific detail to let others replicate or reproduce our results, and provide additional information for those working on similar projects or on this project at a later time. Other articles in the '''Swarm Robot Project''' category cover topics such as the swarm theory and algorithms implemented, as well as previous phases of the project, such as motion control and consensus estimation. You may reach these articles and others by following the category link at the bottom of every page, or through this link - [[:Category:SwarmRobotProject|'''Swarm Robot Project''']].<br />
<br />
==RGB Swarm Quickstart Guide==<br />
<br />
Refer to [[RGB Swarm Robot Quickstart Guide|'''RGB Swarm Robot Quickstart Guide''']] for information on how to start and use the RGB Swarm system, and how to set it up.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MATLAB 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control for ease of maintenance. The version control system used is Git, http://git-scm.com/. <br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs, or any PC which is set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open Git (Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MATLAB files for receiving data from the e-pucks. These files use the SerialCommands DLL to send and receive commands with the e-pucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by Visual Studio and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which run on the e-puck. To access these files, simply open the workspace rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded onto the e-puck by following the instructions in [[RGB_Swarm_Robot_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Robot Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e. the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands DLL for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the e-pucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
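The project's exact packet layout lives in this directory, but for orientation, standard XBee API frames share a simple envelope: a 0x7E start byte, a two-byte length, the frame data, and a checksum equal to 0xFF minus the low byte of the frame-data sum. A minimal Python sketch of that envelope (the frame bytes below are the standard "NI" AT-command example from the XBee manual, not taken from this project's code):<br />

```python
def xbee_checksum(frame_data: bytes) -> int:
    """Checksum of an XBee API frame: 0xFF minus the low byte of the data sum."""
    return 0xFF - (sum(frame_data) & 0xFF)

def build_frame(frame_data: bytes) -> bytes:
    """Wrap frame data in the standard XBee API envelope."""
    length = len(frame_data)
    return (bytes([0x7E, (length >> 8) & 0xFF, length & 0xFF])
            + frame_data
            + bytes([xbee_checksum(frame_data)]))

# AT command frame: frame type 0x08, frame ID 0x01, command "NI"
frame = build_frame(bytes([0x08, 0x01]) + b"NI")
```

The same checksum logic is what lets one source file serve both the PC side and the embedded side: it has no platform dependencies.<br />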
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Extension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its layout is shown in the figure on the left, with an image of the board mounted on an e-puck in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the board, such as the parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, the XBee Interface Extension Board Version 2, designed by Michael Hwang to accommodate later phases of the Swarm Robot Project. For this reason, the XBee Interface Extension Board Version 2 has a color sensor circuit built in; the details of the color sensor circuit can be found in the Color Sensor Circuit section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and installed on e-puck #3. To check the design for problems, it was first tested against the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, showed no problems communicating with the other e-pucks: given a defined goal, all e-pucks, including e-puck #3, drove themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current starts to flow through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths, and the current through each photodiode is determined by the amount of the corresponding light falling on it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that on the e-puck had been used for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20; thus the input voltage V<sub>i</sub> is amplified to roughly 20V<sub>i</sub> at the output V<sub>o</sub>. As mentioned above, the gain can be adjusted by changing the resistance of the potentiometer.<br />
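The gain relationship above can be checked numerically. This is a minimal sketch, assuming the potentiometer is set to 5 kΩ (an assumption; the boards are tuned by hand, and only R<sub>2</sub> = 100K and the gain of roughly 20 come from the text):<br />

```python
# |Vo| = |Vi * R2 / Rpot| for the op-amp stage described above.
R2 = 100e3    # ohms, from the schematic
R_POT = 5e3   # ohms, assumed potentiometer setting giving a gain of ~20

def amplified_output(v_i: float, r_pot: float = R_POT, r2: float = R2) -> float:
    """Output voltage of the op-amp stage for input voltage v_i."""
    return abs(v_i * r2 / r_pot)

gain = amplified_output(1.0)  # gain for a 1 V input
```

Turning the potentiometer toward a smaller resistance raises the gain, which is why the ADC can saturate if the pot is set too low.<br />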
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1" header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins (Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometers<br />
*3x 5pF capacitors<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedance op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns on the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern on each e-puck was used by the machine vision system to recognize it. However, the black dot pattern requires a white background for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of the proper brightness instead of the black dot pattern, so the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below, along with the code modification made to apply the LED pattern to the machine vision system. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker.<br />
Although we replaced the black dots with LEDs, we kept each pattern of dots: the horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce the power drawn from the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with a switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. This LED was chosen because it has appropriate brightness and power consumption, and the diffusion lets the machine vision system capture the LED from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to see how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED was increased by 0.25 inch each time, from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output of each photodiode, V<sub>o</sub> as shown in the circuit diagram above, was measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED, and most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. When the color sensor is most affected by the LED with the room light on, the outputs increase by up to 25.9%, 22.6%, and 43.6% of the original output. Once the LED is 2.5 inches away from the color sensor, the output becomes very close to the original value. <br />
This experiment shows that light from the LEDs can affect the color sensor. However, the LED used in this experiment is much brighter (about 4 times) than the ones used on the LED pattern board, so further experiments with the LED pattern board's own LEDs are required.<br />
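The percentages quoted above can be recomputed from the with-ambient-light table. For the red and green channels they are the change relative to the no-LED baseline; note that for blue, (0.832 - 0.469)/0.469 is about 77%, while the quoted 43.6% matches the change expressed as a fraction of the 1-inch reading, so the blue figure appears to use the illuminated value as its denominator (an observation from the numbers, not stated in the original text):<br />

```python
# Readings (in volts) taken from the "With Ambient light" table above.
baseline = {"R": 1.0,   "G": 1.4,   "B": 0.469}  # no-LED readings
at_1in   = {"R": 1.259, "G": 1.716, "B": 0.832}  # readings at 1 inch

def pct(delta: float, denom: float) -> float:
    """Percentage change, rounded to one decimal place."""
    return round(100.0 * delta / denom, 1)

# Change relative to the no-LED baseline.
rel_baseline = {c: pct(at_1in[c] - baseline[c], baseline[c]) for c in baseline}
# Change relative to the illuminated (1-inch) reading.
rel_reading = {c: pct(at_1in[c] - baseline[c], at_1in[c]) for c in baseline}
```

Here `rel_baseline` reproduces the 25.9% and 22.6% figures for red and green, while `rel_reading["B"]` reproduces the 43.6% figure for blue.<br />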
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike in the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason as in the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased by 15º each time, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor lie in the same horizontal plane: the LED is parallel to the plane with its head facing the color sensor, and the color sensor faces upward. As the angle is increased in 15º steps, more and more light from the LED shines onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing straight down at it; the LED and the color sensor are then on the same vertical line.<br />
<br />
5. The voltage before amplification of each photodiode, V<sub>i</sub> as shown in the circuit diagram above, is measured.<br />
* The voltage is measured before amplification because the amplified output would be too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks at 45º, and then decreases as the angle goes to 90º. When the color sensor is most affected with the room light on, the outputs increase by up to 15.6%, 19.1%, and 28.7% over V<sub>i</sub> at 0º. As the angle approaches 90º, the output becomes very close to its value at 0º. The reason the interference is reduced as the angle reaches 90º is that the ambient light is blocked by the LED board: in this experiment the LEDs were mounted on the LED plane, which blocks the light and casts a shadow on the color sensor, so the amount of light the color sensor receives decreases. That is why the output returns close to its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs the following modifications:<br />
*The hole size for the LEDs has to be increased to accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to be increased so that the switch can be fully inserted through the hole.<br />
*Currently, a 10 pos. 2 mm pitch socket connects the color sensor to the circuit using wires. A proper header for the color sensor should be found instead, to connect the color sensor to the circuit more conveniently.<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the e-pucks pick up light from a projector. The projector projects onto the floor so that the top-mounted light sensors can pick up the light. The floor the e-pucks roll on must be completely enclosed, so that the only light which reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of the light and are sewn together so that no light comes through between them. Covering the whole enclosure above the projector mount are 7 more curtains, sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is 3 sheets of MDF (Medium Density Fiberboard) screwed to a frame of 2x4s. There are 5 2x4s arranged parallel to the longest side, with smaller 2x4s placed where the sheets meet. The floor is spackled and painted to give a smooth, even surface. It should not be stepped on with shoes, to prevent scuffing, and should be swept before each experiment.<br />
<br />
There are 25 calibration markings on the floor. Their centers form a 5 x 5 grid: 5 points ranging from -1300mm to 1300mm in the x direction and 5 points from -880mm to 880mm in the y direction. Each calibration marking consists of two perpendicular lines, with a dot and a "C" marking on one of the lines. An e-puck should be placed so that its color sensor side lines up with the line bearing the dot and "C" marking, and its left wheel lines up with the other line. This orientation places the e-puck's center LED on the center of the calibration marking, i.e. on one of the grid points above. This configuration is used to calibrate the cameras, see <insert camera calibration procedure>.<br />
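The 25 marking centers are evenly spaced, so they can be generated directly from the ranges given. A short sketch (the floor markings themselves remain the authoritative positions):<br />

```python
# 5 evenly spaced x values from -1300 mm to 1300 mm: spacing 2600/4 = 650 mm.
xs = [-1300 + i * 650 for i in range(5)]
# 5 evenly spaced y values from -880 mm to 880 mm: spacing 1760/4 = 440 mm.
ys = [-880 + j * 440 for j in range(5)]
# All 25 calibration points (x, y) in millimetres.
calibration_points = [(x, y) for y in ys for x in xs]
```

A list like this is handy as the ground-truth input when scripting a camera calibration or accuracy check against measured e-puck positions.<br />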
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors that reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array. The frequency with which the mirrors turn on and off (reflect light or not) within a fixed interval determines the intensity of the light. For example, if a dark red were being projected, the mirrors would be off more than on during that interval. In the case of our projector, the interval is 8.2 milliseconds. See the pulse width modulation section below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
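The metric size is left blank above; converting the quoted inch dimensions at 25.4 mm per inch, and dividing by the 1024 x 768 resolution, gives the approximate size of one projected pixel (approximate, since the keystone correction makes the pixels slightly non-uniform):<br />

```python
MM_PER_INCH = 25.4

width_mm = 113.25 * MM_PER_INCH   # projected width in mm
height_mm = 76.5 * MM_PER_INCH    # projected height in mm

px_w = width_mm / 1024   # horizontal size of one projected pixel, in mm
px_h = height_mm / 768   # vertical size of one projected pixel, in mm
```

So a single projected pixel covers roughly 2.8 mm by 2.5 mm of floor, which sets the spatial resolution of the light fields the e-pucks can sense.<br />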
<br />
'''Before performing any experiments, allow the projector to warm up for at least 20 minutes.'''<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so you need to average the measured intensity over the projector's period to measure the color. The period of the projector is 8.2ms.<br />
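Why averaging over the period works can be illustrated with a toy model. In this sketch, everything except the 8.2 ms period is an assumption for illustration: the light is modeled as fully on for a fraction of each period (the duty cycle, standing in for the commanded color value) and off for the rest, and averaging samples spread over one full period recovers that fraction:<br />

```python
PERIOD = 8.2e-3  # projector PWM period in seconds, from the text

def intensity(t: float, duty: float) -> float:
    """Toy PWM light level at time t: on for duty*PERIOD of each period."""
    return 1.0 if (t % PERIOD) < duty * PERIOD else 0.0

def measure(duty: float, n_samples: int = 1000) -> float:
    """Average n_samples spread evenly over one full projector period."""
    dt = PERIOD / n_samples
    return sum(intensity(i * dt, duty) for i in range(n_samples)) / n_samples

dim = measure(0.3)     # recovers the 30% duty cycle
bright = measure(0.8)  # recovers the 80% duty cycle
```

Sampling at an instant would return only 0 or 1; only an average over (at least) one full period reflects the projected color value.<br />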
<br />
Pulse width modulation can lead to problems when recording data. For instance, when data recording over the XBee radios was first set up, the RGB values fluctuated over a period of several minutes, skewing the data. After more investigation of the projector, including with the digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, resulting in a period slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32ms of data, since each e-puck has 400ms (0.4s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are smoothed out, leaving a clean and stable trace without low-frequency modulations.<br />
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area; adjacent camera views overlap by one object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and performed without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located; the color of the background also does not affect the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. Thus, we must now consider light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by light from an LED. However, since the LED used in those tests was much brighter than the LEDs on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show or broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into a computer (FireWire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide-angle lens, and FireWire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to be controlled over a FireWire cable (necessary for capturing data from the tape)<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x scales the FOV by 1/x; thus a 0.5X magnification lens gives 2X the FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), then apply the magnification: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site] and use the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm converted to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update the vision system to accommodate the change from black/white pattern recognition to LED/light-intensity recognition (reducing the number of calculations and eliminating the use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>
<hr />
<div>==Software==<br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open Git (the Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this folder will be generated when you open a project in Visual Studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MatLab files for receiving data from the epucks. These files make use of the dll to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded on to the epuck by following the instructions in [[RGB_Swarm_Robot_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Robot Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, and an actual image of the board mounted on an e-puck is shown in the center figure. This version of the XBee Interface Board does not include a color sensor. Details about this version, such as the parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, the XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project; for this reason, it has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. To check for any problems with the board design, it was first tested against the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, showed no problems communicating with the other e-pucks. Given a defined goal, all e-pucks, including e-puck #3, drive themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you can see from the circuit diagrams above, when each photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that on the e-puck had been used for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedance op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, the black dot patterns require a white background. The new LED pattern board uses LEDs of appropriate brightness instead of black dots, so the machine vision system can now recognize the e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below, as is the code modification made to apply the LED patterns to the machine vision system. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files are linked above.<br />
Although we replaced the black dots with LEDs, each dot pattern is maintained. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce the drain on the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with a switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. This LED was chosen because it offers appropriate brightness and power consumption, and its diffusion lets the machine vision system capture it from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to the LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to show how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed at the same plane, and both are facing upward. <br />
<br />
4. Distance between the color sensor and the LED is increased by 0.25 inch each time from 1 inch to 2.5 inch.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED. The color sensor is most affected when the LED is closest to it; as the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the outputs increase by up to 25.9%, 22.6%, and 43.6% relative to the original outputs. Once the LED is 2.5 inches away from the color sensor, the output is very close to the original value. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used here is much brighter (about four times brighter) than the ones used on the LED pattern board, so further experiments with the LEDs actually used on the LED pattern board are required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to show how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily saturates at its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between LED and color sensor is kept constant, 1 inch. <br />
<br />
4. Angle between LED and color sensor is increased by 15º each time from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are on the same horizontal plane: the LED lies parallel to the plane with its head facing the color sensor, and the color sensor faces upward. We increased the angle by 15º each time, so that increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor and faces it directly; that is, the LED and the color sensor are on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, is measured for each photodiode.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the point of greatest interference under room light, the outputs increase by up to 15.6%, 19.1%, and 28.7% over V<sub>i</sub>. As the angle approaches 90º, the output returns very close to its value at 0º. The reason the interference drops near 90º is that the ambient light is blocked by the LED board: in this experiment the LED is mounted on a plane, and that plane casts a shadow on the color sensor, reducing the amount of ambient light it receives. That is why the output returns toward its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10-position 2 mm pitch socket is used to connect the color sensor to the circuit with wires. A proper header for the color sensor should be found instead, to connect the color sensor to the circuit more conveniently.<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the e-pucks pick up light from a projector. The projector projects onto the floor so that the top-mounted light sensors can pick up the light. The floor the e-pucks roll on must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is 3 sheets of MDF (Medium Density Fiberboard) screwed to a frame of 2x4s. Five 2x4s are arranged parallel to the longest side, with smaller 2x4s placed where the sheets meet. The floor is spackled and painted to give a smooth, even surface. It should not be stepped on with shoes, to prevent scuffing, and should be swept before each experiment.<br />
<br />
There are 25 calibration markings on the floor. The centers of these markings form a 5 x 5 grid: 5 points ranging from -1300mm to 1300mm in the x direction and 5 points from -880mm to 880mm in the y direction. Each calibration marking consists of two perpendicular lines, with a dot and a "C" marking on one of the lines. An e-puck should be placed so that its color sensor side lines up with the line bearing the dot and "C" marking, and its left wheel lines up with the other line. This orientation places the center LED on the center of the calibration marking, i.e., on one of the grid points above. This configuration is used to calibrate the cameras; see <insert camera calibration procedure>.<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb is shone through a color wheel, which casts red, green, and blue onto the mirror array. The frequency with which the mirrors switch on and off (reflect light or not) within a certain interval determines the intensity of the light. For example, if a dark red were being projected, the mirrors would be off more than on during that interval. In the case of our projector, that interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystoned projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (2876.6 mm x 1943.1 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so the measured intensity must be averaged over the projector's period to recover the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording for the XBee radios, it was discovered that the RGB values would fluctuate over a period of several minutes, skewing the data. After further investigation of the projector, including with digital oscilloscopes, the problem was traced to the fact that the projector does not refresh at exactly 120 Hz, resulting in a period slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are smoothed out, yielding a clean and stable trace without low-frequency modulations.<br />
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their fields of view overlap by one object width, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found that it works without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located; the color of the background also does not affect the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. Thus, we now need to consider light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by light from an LED. However, since the LED used in those tests was much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show or broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to be controlled via firewire cable because data must be captured from the tape<br />
**A 0.6X magnification lens accommodates 9' x 6' floor, allowing for the camera to be only 5.4' off the ground<br />
***A magnification of x scales the field of view by roughly 1/x; thus a 0.5X magnification lens gives about 2X the field of view<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), and then apply the magnification: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º using a focal length of 36mm converted to 21.6mm by 0.6X magnification, then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update the vision system to accommodate the change from black/white pattern recognition to LED/light intensity recognition (going through the code to reduce the number of calculations and eliminate the use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>
Ryan Cook, https://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18941 RGB Swarm Robot Project Documentation, 2010-07-22T18:33:22Z<p>Ryan Cook: /* Floor */</p>
<hr />
<div><br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed in to version control for ease. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next, you will need to have access to the LIMS server. Go to one of the swarm PCs, or any PC which is set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open GIT (Git Bash Shell) and type the following in order to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MatLab files for receiving data from the epucks. These files make use of the dll to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by Visual Studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files that run on the e-puck. To access them, simply open the workspace rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded onto the e-puck by following the instructions in [[RGB_Swarm_Robot_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Robot Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also compiled in MPLAB and run on the XBees. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, and an image of the board mounted on an e-Puck is shown in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, the XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the XBee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. To check for any problems with the board design, it was first tested against the other e-pucks, which use the previous XBee boards. <br />
<br />
The e-puck #3 upgraded with the new XBee board showed no problems communicating with the other e-pucks: all e-pucks, including e-puck #3, moved themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you may gather from the circuit diagrams above, as each photodiode receives light, a current starts to flow through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light it receives. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal, amplifying it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used for the X, Y, and Z axes of the e-Puck's accelerometer. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage V<sub>i</sub> is amplified to 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted by changing the resistance of the potentiometer.<br />
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by implementing an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing will block it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the rest of the circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, the black dot patterns require a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of the proper brightness instead of black dots. By doing so, the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below, along with the modification made to the machine vision code to support it. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files can be downloaded from the link above.<br />
Although we replaced the black dots with LEDs, each pattern of dots is maintained. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce the drain on the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with a switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. This LED was chosen because it has appropriate brightness and power consumption, and its diffusion allows the machine vision system to capture it from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done in order to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to see how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed at the same plane, and both are facing upward. <br />
<br />
4. Distance between the color sensor and the LED is increased by 0.25 inch each time from 1 inch to 2.5 inch.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED, most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the R, G, and B outputs increase by 25.9%, 22.6%, and 77.4% respectively over their no-LED values. When the LED is 2.5 inches away from the color sensor, the output becomes very close to the original value. <br />
This experiment shows that light from the LEDs can affect the color sensor. However, the LED used in this experiment is much brighter (about 4 times) than the ones used on the LED pattern board, so further experiments with the LEDs actually used on the board are required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given in the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased by 15º each time, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are in the same horizontal plane, with the LED lying parallel to the plane and its head facing the color sensor; the color sensor faces upward. As the angle is increased by 15º each time, increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it: the LED and the color sensor are on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the peak, under room light, the outputs increase by 15.6%, 19.1%, and 28.7% over their values at 0º. As the angle approaches 90º, the output becomes very close to its value at 0º. The reason the interference is reduced near 90º is that the ambient light is blocked by the LED board: in this experiment the LEDs were mounted on an LED plane, which casts a shadow on the color sensor, so the amount of ambient light the color sensor receives decreases. That is why the output returns close to its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified as follows:<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be fully inserted through the hole.<br />
*Currently, a 10 pos. 2 mm pitch socket is used to connect the color sensor to the circuit using wires. Instead, a proper header for the color sensor should be found to connect the color sensor to the circuit more conveniently.<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. The projector projects onto the floor so that the top-mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is 3 sheets of MDF(Medium Density Fiberboard) screwed to a frame of 2x4s. There are 5 2x4s arranged parallel to the longest side and smaller 2x4s arranged where the sheets meet. It is spackled and painted to give a smooth even surface. This floor should not be stepped on with shoes to prevent scuffing and should be swept before each experiment.<br />
<br />
There are 25 calibration markings on the floor. There are 5 points ranging from -1336 to 1336 in the x direction and 5 points from -916 to 916 in the y direction. The calibration markings consist of two perpendicular lines with a dot and "C" marking on one of the lines. An epuck should be placed so that the color sensor side lines up with the line with the dot and "C" marking and the left wheel should line up with the other line. This orientation of the epuck is used to calibrate the cameras, see <insert camera calibration procedure>.<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. It has a digital user manual on a CD in the projection computer. <br />
<br />
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb is shone through a color wheel, which shines red, green, and blue onto the mirror array. The fraction of time the mirrors are on versus off (reflecting light or not) within a certain interval determines the intensity of light. For example, if a bright red were being projected, the mirrors would be on more than off during that interval. In the case of our projector, that interval is 8.2 milliseconds. See the pulse width modulation below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse width modulates the color output. So you need to average the measured intensity over the period of the projector to measure the color. The period of the projector is 8.2ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording for the XBee radios, it was discovered that the RGB values would fluctuate across a period of several minutes, skewing the data. After more research into the projector, including measurements with the digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, resulting in a period slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average the samples, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, as each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are phased out, resulting in a clean and stable trace without low-frequency modulations.<br />
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area. The camera views overlap by one object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found no problems with it. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located; the color of the background also does not affect the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if the sensor remained on the XBee Interface Extension Board. Thus, we now have to consider the light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by the light from the LEDs. However, since we used a much brighter LED in the interference tests than the LEDs used for the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera and check its fit with the existing physical setup (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into a computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to accommodate control via a firewire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x gives 1/x times the focal length, and hence a wider FOV; thus a 0.5X magnification lens gives roughly 2X the FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), and then apply the magnification (so 0.5X magnification = 0.5X focal length)<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site] and use the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm converted to 21.6mm by 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete the vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and back<br />
*Update the vision system to accommodate the change from black/white pattern recognition to LED/light-intensity recognition (reducing the number of calculations and eliminating the use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>
<hr />
<div>
<br />
To access the current files, first download GIT for windows at http://code.google.com/p/msysgit/. Next you will need to have access to the LIMS server. Go to one of the swarm PCs or any PC which is set up to access the server and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open GIT (Git Bash Shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MatLab files for receiving data from the epucks. These files make use of the dll to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which run on the epuck. To access these files, simply open the workspace rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded onto the epuck by following the instructions in [[RGB_Swarm_Robot_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Robot Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs at a directory containing Vision System configuration information<br />
(i.e. the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands DLL for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the XBees. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-Puck seen in the figure in the center. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was actually built and implemented on e-puck #3. To check for any problems with this board design, it was first tested with the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, did not show any problem communicating with the other e-pucks. In accordance with the defined goal, all e-pucks, including e-puck #3, moved themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you may gather from the circuit diagrams above, as each photodiode receives light, a certain amount of current starts to flow through the photodiode and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as the input signal, amplifying it by a ratio particular to the circuit. This ratio is also known as the gain, and is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels which on the e-Puck had been used for the X, Y, and Z axis accelerometers. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by implementing an LED pattern board at the top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing will block the color sensor. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. We can use a jumper to connect the color sensor on the LED pattern board to the color sensor circuit on the XBee Interface Extension Board. The details of this LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, this black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of the proper brightness instead of the black dot pattern. By doing so, the machine vision system can now recognize e-pucks on any background. The reason why this LED pattern is recognized on any background will be presented briefly in the Code section below. In addition, in order to apply this LED pattern to the machine vision system, we made a modification to the code. This modification will also be presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files can be downloaded above.<br />
Although we replaced the black dots with LEDs, we maintained each pattern of dots. The horizontal distance and the vertical distance between two adjacent LEDs are both 0.8 inch. In order to reduce power consumption from the e-puck battery, we implemented a separate pair of AAA batteries to supply power to the LEDs. This LED board can be turned on and off by the switch.<br />
The millicandela rating of the LEDs used is 4850 mcd. In addition, this LED has a diffused lens style. The reason for choosing this LED is that it has a proper brightness and power consumption, and it is diffused so that the machine vision system can capture the LED from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the Figure on the left, the color sensor is place at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests needed to be done to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to see how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a certain photodiode and a certain color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased by 0.25 inch each time, from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED. The color sensor is most affected by the LED when the LED is closest to it; as the distance between the LED and the color sensor increases, the interference decreases. When the color sensor is most affected by the LED under room light, the outputs increase by up to 25.9%, 22.6%, and 43.6% of the original output. Once the LED is 2.5 inches away from the color sensor, the output becomes very close to the original value. <br />
In this experiment, we see that the light from LEDs can affect the color sensor. However, we used a much brighter LED in this experiment than the ones on the LED pattern board: the LED used in the experiment is about 4 times brighter than those on the LED pattern board (18950 mcd versus 4850 mcd). Thus, more experiments with the LEDs actually used on the LED pattern board are required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output, V<sub>o</sub>, easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased by 15º each time, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are placed in the same horizontal plane. The LED faces toward the color sensor (that is, the LED is parallel to the horizontal plane with its head facing the color sensor, which lies in the same horizontal plane), and the color sensor faces upward. We increased the angle by 15º each time, so that increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it directly. This means that the LED and the color sensor are on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* The reason to measure the voltage before amplification is that the output becomes too large after amplification.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by light from the LED. The interference increases as the angle approaches 45º, peaks at 45º, and then decreases as the angle approaches 90º; the sensor is thus most affected when the angle between the two is 45º. At that peak, with room light present, the outputs increase by up to 15.6%, 19.1%, and 28.7% of V<sub>i</sub> for the R, G, and B channels respectively. As the angle approaches 90º, the output returns very nearly to its 0º value. The interference drops near 90º because the ambient light is blocked by the LED board: in this experiment the LEDs are mounted on the LED plane, which casts a shadow on the color sensor, so the total amount of light the sensor receives decreases. That is why the output returns close to its original value as the angle increases.<br />
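The quoted percentages can be checked directly against the with-ambient-light table; a quick sanity check in Python (values copied from the table above):<br />

```python
# Peak interference at 45º relative to the 0º baseline.
# Voltages (V) are taken from the "with ambient light" table above.
baseline = {"R": 0.437, "G": 0.425, "B": 0.404}   # outputs at 0º
peak     = {"R": 0.505, "G": 0.506, "B": 0.520}   # outputs at 45º

# Percent increase of Vi for each color channel at the 45º peak.
increase = {c: 100.0 * (peak[c] - baseline[c]) / baseline[c]
            for c in baseline}
# increase is approximately {"R": 15.6, "G": 19.1, "B": 28.7}
```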
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified as follows.<br />
*The hole size for the LEDs has to be increased so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to be increased so that the switch can be inserted completely through the hole.<br />
*Currently, a 10-position 2 mm-pitch socket connects the color sensor to the circuit using wires. Instead, the proper header for the color sensor should be found so that the sensor and the circuit can be connected more conveniently.<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector, which must project onto the floor so that the top-mounted light sensors can detect the light. The floor the epucks roll on must be completely enclosed so that the only light reaching it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint, with 3 Eclipse absolute-zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light passes between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is 3 sheets of MDF (medium-density fiberboard) screwed to a frame of 2x4s. Five 2x4s run parallel to the longest side, and smaller 2x4s are placed where the sheets meet. The floor is spackled and painted to give a smooth, even surface. It should not be stepped on with shoes, to prevent scuffing, and should be swept before each experiment.<br />
<br />
===Projector===<br />
The projector is a Benq MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors that reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array. The fraction of time each mirror spends on versus off (reflecting light toward the lens or away from it) within a fixed interval determines the intensity of light. For example, if a dim red were being projected, the mirrors would be off more than on during each interval. In the case of our projector that interval is 8.2 milliseconds. See the pulse-width-modulation waveforms below.<br />
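The duty-cycle relationship described above can be sketched as follows (an illustrative model only, not the projector's actual firmware; the 8.2 ms interval is the measured value quoted here, and the 8-bit pixel scale is an assumption):<br />

```python
# Illustrative duty-cycle model of DLP intensity control.
INTERVAL_MS = 8.2  # measured PWM interval of the Benq MP771

def on_time_ms(pixel_value, max_value=255):
    """Time a mirror spends reflecting light ("on") in one interval."""
    duty_cycle = pixel_value / max_value  # fraction of interval spent on
    return INTERVAL_MS * duty_cycle

# A dim red (low pixel value) keeps the mirrors off more than on:
# on_time_ms(64) is about 2.06 ms of the 8.2 ms interval.
```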
<br />
Each mirror corresponds to one projected pixel. The projector has a resolution of 1024 x 768, so in order to get a 1-to-1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (approximately 2877 mm x 1943 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so the measured intensity must be averaged over one full projector period to recover the color value. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when data recording over the XBee radios was first set up, the RGB values fluctuated over a period of several minutes, skewing the data. Investigating the projector further, using the digital oscilloscopes, showed that the problem stemmed from the fact that the projector does not run at exactly 120 Hz, giving a period slightly different from the 8 ms window that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the corrected value. There is enough time to record 4 samples, or 32 ms of data, since each puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. With this averaging, the irregularities due to PWM are smoothed out, resulting in a clean, stable trace without low-frequency modulation.<br />
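The timing budget described above can be sketched as follows (numbers taken from the text: roughly one 8 ms projector period per sample, 4 samples per measurement, and a 400 ms slot per puck; the helper name is ours, not the e-puck firmware's):<br />

```python
# Timing-budget sketch for the 4-sample averaging scheme.
SAMPLE_MS = 8.0    # one raw reading spans ~one projector period
N_SAMPLES = 4      # samples averaged per transmitted value
SLOT_MS = 400.0    # time each puck has to sample, packetize, and send

sampling_time_ms = N_SAMPLES * SAMPLE_MS  # 32 ms of data per measurement
assert sampling_time_ms < SLOT_MS         # plenty of slot time remains

def averaged_reading(samples):
    """Average the raw per-period readings into one stable value."""
    return sum(samples) / len(samples)
```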
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that together they cover the entire projected area, and their views overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the color of the background no longer affects the vision system. However, the color sensor had to be moved to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. We therefore now have to consider light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by light from the LEDs. However, since the LED used in those tests was much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show or broadcast to interested parties (such as via online video streaming) and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide-angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed for control via a firewire cable owing to the need to capture data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x multiplies the field of view (FOV) by 1/x; thus a 0.5X magnification lens gives 2X the FOV<br />
***Find the camera's focal length at its widest view (the smallest number, in mm), then apply the magnification: 0.5X magnification = 0.5X the effective focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles, to calculate the height need for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm converted to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
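The worked example above can be reproduced in a few lines (assuming, as the linked calculator does, that the focal length is quoted in 35 mm-equivalent terms, i.e. a 36 mm horizontal frame):<br />

```python
import math

frame_width_mm = 36.0        # 35 mm-film horizontal frame width
focal_mm = 36.0 * 0.6        # 0.6X adapter: 36 mm -> 21.6 mm effective

# Horizontal field of view from the pinhole-camera relation.
h_fov_deg = 2 * math.degrees(math.atan(frame_width_mm / (2 * focal_mm)))

# Height needed for a camera over the center of a 9 ft-wide floor.
floor_width_ft = 9.0
height_ft = (floor_width_ft / 2) / math.tan(math.radians(h_fov_deg / 2))
# h_fov_deg is about 79.6 and height_ft is about 5.4, matching the text
```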
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update the vision system to accommodate the change from black/white pattern recognition to LED/light-intensity recognition (reducing the number of calculations and eliminating the use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18939RGB Swarm Robot Project Documentation2010-07-19T16:58:56Z<p>Ryan Cook: /* Future Work and To Do */</p>
<hr />
<div>== Overview ==<br />
The swarm robot project has gone through several phases, with each phase focusing on different aspects of swarm robotics and the implementation of the project. This entry focuses on the most recent phase of the project, covering topics such as, but not limited to, '''Xbee Interface Extension Boards''', '''LED light boards''', and '''changes made to the Machine Vision Localization System''', and the overall conversion to LED boards and a controlled light environment. These entries help provide insight into setup and specific details to allow others to replicate or reproduce our results, and to provide additional information for those working on similar projects or this project at a later time. Other articles in the '''Swarm Robot Project''' category focus on topics such as the swarm theory and algorithms implemented, as well as previous phases of the project, such as motion control and consensus estimation. You may reach these articles and others by following the category link at the bottom of every page, or through this link - [[:Category:SwarmRobotProject|'''Swarm Robot Project''']].<br />
<br />
==RGB Swarm Quickstart Guide==<br />
<br />
Refer to [[RGB Swarm Robot Quickstart Guide|'''RGB Swarm Robot Quickstart Guide''']] for information on how start and use the RGB Swarm system and its setup.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed in to version control for ease. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for windows at http://code.google.com/p/msysgit/. Next you will need to have access to the LIMS server. Go to one of the swarm PCs or any PC which is set up to access the server and paste the following in to Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the Lims server. Now you can open GIT (Git Bash Shell) and type the following in order to get a copy of the current files on to your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MatLab files for receiving data from the epucks. These files make use of the dll to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded on to the epuck by following the instructions in [[RGB_Swarm_Robot_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Robot Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also combiled in MPLAB and run on the XBees. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of XBee Interface Extension Board, designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-Puck seen in the figure in the center. This version of the XBee Interface Board does not contain a color sensor in it. Details about this version of XBee Interface Extension Board, such as parts used and Traxmaker files can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was actually built and implemented on the e-puck #3. In order to see if there is any working problem in this board design, it is first tested with the other e-puck which uses the previous XBee Boards. <br />
<br />
The e-puck #3 upgraded with the new XBee board did not show any problem in communicating with other e-pucks. According to the goal defined, all e-pucks, including e-puck #3, locate themselves to the desired location.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you may draw from the circuit diagrams above, as each photodiode receives light, a certain amount of current start to flow through the photodiodes and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect the certain range of wavelength of the light, and the amount of current flowing through the photodiodes is determined according to the amount of the corresponding light to each photodiode. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as the input signal, amplifying it by a ratio particular to the circuit. This ratio is also known as gain, and is defined by resistance of the potentiometer. The now amplified output is then sent to the analog digital converter, which on the e-Puck had been used as the X,Y, and Z axis accelerometers. This convenient, as each accelerometer axis can be used as a channel for the color sensors three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above are as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
<br />
As shown in the circuit diagram on the left, the siganl from the red photodiode goes into the pin #5, and the amplified signal is sent out through the pin # 7. Similarly, the signal from the green photodiode goes into the pin #3 and it is sent out from pin #1 while the signal from the blue photodiode goes into the pin #12, and it is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of e-pucks are replaced with new LED patterns by implementing LED pattern board at the top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing will block the color sensor. All other components for the color sensor circuit remains in the XBee Interface Extension Board and only the color sensor will be place in the LED pattern board. We can use a jumper to connect the color sensor placed at the LED pattern board to the color sensor circuit place in the XBee Interface Extension Board. The datails of this LED pattern Board will be presented at the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Currently, the unique black dot pattern of each e-puck was used for the machine vision system to recognize each e-puck. However, this black dot pattern requires a white background in order for the machine vision system to recognize e-pucks. The new LED pattern board uses LEDs with the proper brightness, instead of the black dot pattern. By doing so, the machine vision system can now recognize e-pucks on any background. The reason why this LED pattern is recognized on any background will be presented briefly in the Code section below. In addition, in order to apply this LED pattern to the machine vision system, we made a modification in code. This modification will also be presented in the Code Section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board is created using Traxmaker. This LED Board design can be downloaded here:<br />
Although we replaced the black dots with LEDs, we maintain each pattern of dots. The horizontal distance and the vertical distance between the two adjacent LEDs are both 0.8 inch. In order to reduce power consumption of the e-puck battery, we implement a seperate pair of AAA batteries to supply power to the LEDs. This LED board can be turned on and off by the switch.<br />
The millicandela rating of the LEDs used is 4850 mcd. In addition, this LED has diffused lens style. The reason to choose this LED is that it has a proper brightness and power consumption, and it is diffused so that the machine vision system can capture this LED in any places.The resistor used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the Figure on the left, the color sensor is place at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need be done in order to note the affect of the LED light on the color sensor due to potential interference. The first experiment performed is designed to see how much interference will be caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because the white LED will cover the entire wavelengh ranges of the visible light. The experiment with the white LED can yield a general result, while the experiment with the colored LEDs will yield more specific result focused on the interference between the certain photodiode and the certain color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under the two conditions; with the ambient light and without the ambient light. <br />
<br />
3. The LED and the color sensor were placed at the same plane, and both are facing upward. <br />
<br />
4. Distance between the color sensor and the LED is increased by 0.25 inch each time from 1 inch to 2.5 inch.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED. The color sensor is most affectd by the LED when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. When the color sensor is most affected by the LED under the presence of the room light, the output increases up to 25.9%, 22.6%, and 43.6 % of the original output. As the LED is 2.5 inch away from the color sensor, the output becomes very close to the original value. <br />
In this experiment, we see that the lights from LEDs can affect the color sensor. However, we used much brighter LED in this experiment than the ones we use in the LED pattern board. The brightness of the LED used in the experiment is 4 times larger than the ones in the LED pattern board. Thus, more experiment with the LEDs used in the LED pattern board is required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike in the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output, V<sub>o</sub>, easily saturates at its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with the ambient light and without the ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º steps from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor lie in the same horizontal plane: the LED is parallel to that plane with its head facing the color sensor, and the color sensor faces upward. As the angle is increased in 15º steps, an increasing amount of light from the LED shines onto the color sensor. When the angle is 90º, the LED is directly above the color sensor on the same vertical line, facing straight down at it.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the point of greatest interference under room light, the output increases by up to 15.6%, 19.1%, and 28.7% over V<sub>i</sub>. As the angle reaches 90º, the output returns very close to its value at 0º. The interference is reduced near 90º because the ambient light is blocked by the LED board: in this experiment the LEDs are mounted on the LED plane, which casts a shadow on the color sensor and reduces the amount of ambient light it receives. That is why the output approaches its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the LED chosen.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10 pos 2 mm pitch socket is used to connect the color sensor to the circuit with wires. Instead, the proper header for the color sensor has to be found to connect the sensor to the circuit more conveniently.<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. The projector projects onto the floor so that the top-mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light reaching it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is 3 sheets of MDF (medium-density fiberboard) screwed to a frame of 2x4s: 5 2x4s run parallel to the longest side, with smaller 2x4s placed where the sheets meet. The floor is spackled and painted to give a smooth, even surface. It should not be stepped on with shoes, to prevent scuffing, and should be swept before each experiment.<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors that reflect the light from the bulb. The light from the bulb is shone through a color wheel, which casts red, green, and blue onto the mirror array. The frequency with which the mirrors turn on and off (reflect light and don't) determines the intensity of the light. For example, when a low-value (dark) red is projected, the mirrors are off more than on over a certain interval. In the case of our projector that interval is 8.2 milliseconds. See the pulse width modulation section below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystoned projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so you need to average the measured intensity over the projector's period, 8.2 ms, to measure the color.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when data recording was first set up for the XBee radios, the RGB values were found to fluctuate over a period of several minutes, skewing the data. After further investigation with the digital oscilloscopes, the problem was traced to the fact that the projector does not refresh at exactly 120 Hz, so its period is slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. This averaging phases out the irregularities due to PWM, resulting in a clean, stable trace without low-frequency modulations.<br />
<br />
===Projector Mount===<br />
The projector mount was ordered online using 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their views overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the color of the background no longer affects the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it stayed on the XBee Interface Extension Board. We therefore now have to consider light interference between the LEDs and the color sensor. In the light interference test, we found that the color sensor is affected by light from the LED. However, since the LEDs used in that test were much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show or broadcast to interested parties (such as online video streaming), and possibly for use in presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed for control via a firewire cable, owing to the need to capture data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x multiplies the focal length by x, giving 1/x times the original field of view; thus a 0.5X magnification lens gives 2X the field of view<br />
***Find the camera's focal length at its widest view (this is the smallest number, in mm), then apply the magnification: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm reduced to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18938RGB Swarm Robot Project Documentation2010-07-19T16:58:42Z<p>Ryan Cook: /* e-puck Modifications */</p>
<hr />
<div>==Software==<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control for ease of maintenance. The version control system used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next you will need access to the LIMS server. Go to one of the swarm PCs or any PC which is set up to access the server and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now open GIT (Git Bash Shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MatLab files for receiving data from the epucks. These files make use of the dll to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded on to the epuck by following the instructions in [[RGB_Swarm_Robot_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Robot Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-puck in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the XBee board, the XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project; for this reason, it has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. To check for any problems in the board design, it was first tested with the other e-pucks, which use the previous XBee boards. <br />
<br />
e-puck #3, upgraded with the new XBee board, showed no problems communicating with the other e-pucks: in line with the defined goal, all e-pucks, including e-puck #3, moved themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths, and the current through each photodiode is determined by the amount of the corresponding light it receives. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used on the e-puck for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by adding an LED pattern board on top of each e-puck. Thus, for the color sensor to collect data properly, it is necessary to move the sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the sensor itself is placed on the LED pattern board. A jumper connects the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, the black dot pattern requires a white background in order for the machine vision system to recognize e-pucks. The new LED pattern board uses LEDs of the proper brightness instead of the black dot pattern; with it, the machine vision system can recognize e-pucks on any background. The reason this LED pattern is recognized on any background, and the modification made to the machine vision code to use it, are presented briefly in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker. Although we replaced the black dots with LEDs, each dot pattern is preserved: the horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce drain on the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with its switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. These LEDs were chosen for their suitable brightness and power consumption, and because the diffusion lets the machine vision system capture the LED from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as shown in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done to characterize the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to measure how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased in 0.25 inch steps from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output of each photodiode, V<sub>o</sub> as shown in the circuit diagram above, is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED. The color sensor is most affected when the LED is closest to it; as the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the R, G, and B outputs increase by 25.9%, 22.6%, and 77.4% over their no-LED values. Once the LED is 2.5 inches away from the color sensor, the output becomes very close to the original value. <br />
In this experiment, we see that light from LEDs can affect the color sensor. However, the LED used here is much brighter than the ones used on the LED pattern board (roughly 4 times the millicandela rating), so further experiments with the actual LED pattern board LEDs are required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to measure how much interference is caused as the angle between the LED and the color sensor changes. Unlike in the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily saturates at its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason as in the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º steps from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are placed in the same horizontal plane: the LED lies parallel to that plane with its head facing the color sensor, and the color sensor faces upward. As the angle is increased in 15º steps, an increasing amount of light from the LED shines onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it directly; that is, the LED and the color sensor lie on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification of each photodiode, V<sub>i</sub> as shown in the circuit diagram above, is measured.<br />
* The reason to measure the voltage before amplification is that the output becomes too large after amplification.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the peak under room light, the R, G, and B outputs increase by up to 15.6%, 19.1%, and 28.7% of V<sub>i</sub>. As the angle approaches 90º, the output returns close to its value at 0º. The reason the interference is reduced near 90º is that the ambient light is blocked by the LED board: in this experiment the LEDs were mounted on the LED plane, and this plane blocks the room light and casts a shadow on the color sensor. The amount of ambient light reaching the color sensor therefore decreases, which is why the output returns toward its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified as follows:<br />
*The hole size for the LEDs has to be increased so the holes can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to be increased so the switch can be fully inserted through the hole.<br />
*Currently, a 10-position 2 mm pitch socket is used to connect the color sensor to the circuit using wires. Instead, the proper header for the color sensor should be found so the sensor and circuit can be connected more conveniently.<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the e-pucks pick up light from a projector. The projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the e-pucks roll on must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is 3 sheets of MDF (Medium Density Fiberboard) screwed to a frame of 2x4s. There are 5 2x4s arranged parallel to the longest side, with smaller 2x4s placed where the sheets meet. The floor is spackled and painted to give a smooth, even surface. It should not be stepped on with shoes, to prevent scuffing, and should be swept before each experiment.<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. Its digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors that reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array. The fraction of time the mirrors spend on versus off (reflecting light or not) within each interval determines the intensity of the light. For example, if a bright red were being projected, the mirrors would be on more than off during each interval. In the case of our projector that interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (2876.55 mm x 1943.1 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates the color output, so you need to average the measured intensity over the period of the projector to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when data recording over the XBee radios was first set up, the RGB values were found to fluctuate over a period of several minutes, skewing the data. After further investigation of the projector, including with digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, giving a period slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are phased out, giving a clean and stable trace without low-frequency modulations.<br />
<br />
===Projector Mount===<br />
The projector mount was ordered online using 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their fields of view overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found no problems with it. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located, and the color of the background no longer affects the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. Thus, we now have to consider light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by light from the LEDs. However, since the LED used in these tests was much brighter than the LEDs on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
<br />
<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to accommodate control via firewire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x gives roughly 1/x times the FOV; thus a 0.5X magnification lens gives about 2X the FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), then apply the magnification: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site] and use the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm converted to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update the vision system to accommodate the change between black/white pattern recognition and LED/light intensity recognition (eliminating the need to go through the code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement the 3-part turn from the NUtest.c code so the e-puck can control both position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of the sin/cos calculations, which take more cycles than desired<br />
**Calculate the discrete number of possible angles per wheel (using ticks/revolution), create a lookup table, check the angle from the vision system packets, and use the standardized table to calculate the appropriate vector (reducing the total number of calculations and eliminating use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>
<hr />
<div>==Software==<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control. The version control system used is GIT, http://git-scm.com/. <br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next you will need to have access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the Lims server. Now you can open GIT (Git Bash Shell) and type the following in order to get a copy of the current files on to your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MatLab files for receiving data from the epucks. These files make use of the dll to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic-Link Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded on to the epuck by following the instructions in [[RGB_Swarm_Robot_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Robot Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the e-pucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-puck in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. To check for any problems in the board design, it was first tested alongside the other e-pucks, which use the previous XBee boards. <br />
<br />
e-puck #3, upgraded with the new XBee board, showed no problems communicating with the other e-pucks. Given a defined goal, all e-pucks, including e-puck #3, moved themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you can see from the circuit diagrams above, when each photodiode receives light, a current flows through the photodiode and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal, amplifying it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used on the e-puck for the X, Y, and Z axis accelerometers. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage V<sub>i</sub> is amplified to 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted by changing the resistance of the potentiometer.<br />
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can be used to connect the color sensor on the LED pattern board to the color sensor circuit on the XBee Interface Extension Board. The details of this LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize that e-puck. However, the black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of the proper brightness instead of the black dot pattern, so the machine vision system can now recognize e-pucks on any background. The reason why this LED pattern is recognized on any background is presented briefly in the Code section below. In addition, the code modification needed to apply this LED pattern to the machine vision system is also presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files are available from the download link above.<br />
Although we replaced the black dots with LEDs, we maintained each pattern of dots. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. In order to reduce power consumption from the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs. The LED board can be turned on and off with the switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. This LED was chosen because it has appropriate brightness and power consumption, and because its diffused lens lets the machine vision system capture it from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved from the XBee Interface Extension Board to this LED pattern board so that nothing blocks the sensor. Thus, as you can see in the figure, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests needed to be done in order to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to show how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased in 0.25 inch steps from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient Light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED, and most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the outputs increase by 25.9%, 22.6%, and 77.4% over the no-LED baseline for the R, G, and B channels respectively. Once the LED is 2.5 inches away from the color sensor, the output becomes very close to the baseline value. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used in this experiment is roughly 4 times brighter than the ones used on the LED pattern board. Thus, further experiments with the LEDs actually used on the LED pattern board are required.<br />
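As a quick cross-check, the percentage increases at the closest distance can be recomputed from the "with ambient light" table above. Values here are normalized to the no-LED baseline; note that the blue channel's figure comes out differently if it is instead normalized to the with-LED reading.

```python
# Relative increase of each channel's amplified output at the closest LED
# distance (1 inch), using the "with ambient light" table above.
baseline = {"R": 1.000, "G": 1.400, "B": 0.469}   # no LED
at_1_inch = {"R": 1.259, "G": 1.716, "B": 0.832}  # LED at 1 inch

increase = {ch: 100.0 * (at_1_inch[ch] - baseline[ch]) / baseline[ch]
            for ch in "RGB"}
for ch, pct in increase.items():
    print(f"{ch}: +{pct:.1f}% over baseline")
# R: +25.9% over baseline
# G: +22.6% over baseline
# B: +77.4% over baseline
```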
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to show how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output, V<sub>o</sub>, easily saturates at its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º steps from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are placed in the same horizontal plane. The LED faces toward the color sensor (that is, the LED is parallel to the horizontal plane with its head facing the color sensor, which lies in the same plane), and the color sensor faces upward. As the angle increases in 15º steps, an increasing amount of light from the LED shines onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it; the LED and the color sensor are then on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes on to 90º. At the peak under room light, the outputs increase by up to 15.6%, 19.1%, and 28.7% of V<sub>i</sub> for the R, G, and B channels respectively. As the angle approaches 90º, the output returns very close to its value at 0º. The reason the interference drops near 90º is that the ambient light is blocked by the LED board: in this experiment the LEDs are mounted on an LED plane, and this plane blocks the room light and casts a shadow on the color sensor, reducing the total amount of light it receives. That is why the output approaches its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following ways.<br />
*The hole size for the LEDs has to be increased so that it can accommodate the standoff of the LED chosen.<br />
*The hole size for the switch has to be increased so that the switch can be fully inserted through the hole.<br />
*Currently, a 10 pos. 2 mm pitch socket connects the color sensor to the circuit using wires. Instead, a proper header for the color sensor should be found to connect the sensor to the circuit more conveniently.<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the e-pucks pick up light from a projector. The projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor on which the e-pucks roll must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is 3 sheets of MDF (Medium Density Fiberboard) screwed to a frame of 2x4s. There are five 2x4s arranged parallel to the longest side, with smaller 2x4s placed where the sheets meet. The floor is spackled and painted to give a smooth, even surface. It should not be stepped on with shoes, to prevent scuffing, and should be swept before each experiment.<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. Its digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors that reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array. The fraction of time within a fixed interval that the mirrors are on (reflecting light) versus off determines the intensity of the light. For example, if a strong red were being projected, the mirrors would be on more than off during that interval. In the case of our projector the interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
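The mapping from mirror on/off time to apparent brightness can be written as a one-line duty-cycle model. This is an illustrative sketch only, not a description of the projector's actual firmware.

```python
# Minimal duty-cycle model of a DLP pixel: apparent intensity is the
# fraction of the 8.2 ms refresh interval during which the mirror is "on".
PERIOD_MS = 8.2  # projector refresh interval, from the text

def apparent_intensity(on_time_ms):
    """Fraction of full brightness for a mirror on for on_time_ms per period."""
    return on_time_ms / PERIOD_MS

# A mirror on for ~75% of the interval gives ~75% brightness.
print(round(apparent_intensity(6.15), 2))  # -> 0.75
```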
<br />
Each mirror corresponds to one projected pixel. This projector has a resolution of 1024 x 768, so in order to get a 1-to-1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystoned projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (approximately 2877 mm x 1943 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates the color output, so you need to average the measured intensity over the projector's period to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when data recording was first set up for the XBee radios, it was discovered that the RGB values fluctuated over a period of several minutes, skewing the data. After more investigation of the projector, including with the digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, so its period is slightly different from the 8 ms interval that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are averaged out, leaving a clean and stable trace without low-frequency modulations.<br />
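The sampling fix described above can be sketched as follows. This is an illustrative simulation, not the e-puck firmware: the waveform is an idealized square wave, and only the timing constants (8.2 ms true period, 8 ms assumed sampling period, 4 samples) are taken from the text.

```python
import statistics

PERIOD_MS = 8.2   # true projector PWM period (from the text)
SAMPLE_MS = 8.0   # assumed period used for sampling (from the text)

def pwm(t_ms, duty=0.5):
    """Idealized projector channel: 1 while the mirror reflects, else 0."""
    return 1.0 if (t_ms % PERIOD_MS) < duty * PERIOD_MS else 0.0

def one_sample(t0_ms, dt=0.05):
    """One ADC 'sample': mean of the waveform over an 8 ms window."""
    n = int(SAMPLE_MS / dt)
    return sum(pwm(t0_ms + k * dt) for k in range(n)) / n

def averaged(t0_ms, n_samples=4):
    """What the e-puck reports: the mean of 4 consecutive samples (~32 ms)."""
    return sum(one_sample(t0_ms + k * SAMPLE_MS) for k in range(n_samples)) / n_samples

# Readings started at different phases of the beat between 8.0 and 8.2 ms:
phases = [i * 0.41 for i in range(20)]
raw = [pwm(t) for t in phases]        # instantaneous: jumps between 0 and 1
avg = [averaged(t) for t in phases]   # averaged: stays near the true 0.5 duty
assert statistics.pstdev(avg) < statistics.pstdev(raw)
print(round(statistics.pstdev(raw), 3), round(statistics.pstdev(avg), 3))
```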
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their fields of view overlap by one object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found that it works without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the color of the background no longer affects the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it stayed on the XBee Interface Extension Board. Thus, we now have to consider light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by light from the LED. However, since the LED used in those tests was much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possible for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to be controlled via firewire cable (a necessity for capturing data from the tape)<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x scales the focal length by x, giving roughly 1/x the field of view; thus a 0.5X magnification lens gives roughly 2X the FOV<br />
***Find the camera's focal length at its widest view (the smallest number, in mm), then apply the magnification: 0.5X magnification means 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles, to calculate the height need for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm reduced to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
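The worked example in the list above can be checked numerically. Assumed here, as in the linked calculator, is a 36 mm frame width for the 35 mm-equivalent format.

```python
import math

# Reproduce the camera-height calculation: a 0.6X wide-angle adapter on a
# lens whose widest 35 mm-equivalent focal length is 36 mm, over a
# 9-foot-wide floor. FRAME_WIDTH_MM = 36 is an assumed 35 mm-format width.
FRAME_WIDTH_MM = 36.0
focal_mm = 36.0 * 0.6                                  # 21.6 mm effective

half_fov = math.atan((FRAME_WIDTH_MM / 2) / focal_mm)  # half-angle, radians
fov_deg = 2 * math.degrees(half_fov)                   # horizontal FOV

floor_width_ft = 9.0
height_ft = (floor_width_ft / 2) / math.tan(half_fov)  # mounting height

print(round(fov_deg, 1), round(height_ft, 1))  # -> 79.6 5.4
```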
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate switching between black/white pattern recognition and LED/light intensity recognition (eliminating the need to go through the code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing a program to 'lawnmower' the e-puck, allowing it to collect data from all positions and orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations, which take more cycles than desired<br />
**Calculate the discrete number of possible angles per wheel (using ticks/revolution), create a lookup table, check the angle from the vision system packets, and use the standardized table to calculate the appropriate vector (reducing the total number of calculations and eliminating use of sin/cos)<br />
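The proposed lookup-table replacement for sin/cos might be sketched as below. On the e-puck's dsPIC this would be a precomputed fixed-point C array; Python is used here only to show the idea, and the 1000 ticks/revolution figure is an assumed placeholder, not the e-puck's actual encoder resolution.

```python
import math

# Precomputed sine table indexed by wheel-tick heading. TICKS_PER_REV is an
# ASSUMED placeholder; on the real robot it would match the encoder.
TICKS_PER_REV = 1000
SIN_TABLE = [math.sin(2 * math.pi * k / TICKS_PER_REV)
             for k in range(TICKS_PER_REV)]

def sin_from_ticks(ticks):
    """Table lookup instead of a runtime trig call."""
    return SIN_TABLE[ticks % TICKS_PER_REV]

def cos_from_ticks(ticks):
    """cos(x) = sin(x + quarter turn), so the same table serves both."""
    return SIN_TABLE[(ticks + TICKS_PER_REV // 4) % TICKS_PER_REV]

# A heading of 250 ticks is a quarter revolution: sin -> 1, cos -> 0.
print(round(sin_from_ticks(250), 6), round(cos_from_ticks(250), 6))  # -> 1.0 0.0
```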
<br />
[[Category:SwarmRobotProject]]</div>
<hr />
<div>== Overview ==<br />
The swarm robot project has gone through several phases, with each phase focusing on different aspects of swarm robotics and the implementation of the project. This entry focuses on the most recent phase of the project, covering topics such as, but not limited to, '''Xbee Interface Extension Boards''', '''LED light boards''', and '''changes made to the Machine Vision Localization System''', and the overall conversion to LED boards and a controlled light environment. These entries help provide insight into setup and specific details to allow others to replicate or reproduce our results, and to provide additional information for those working on similar projects or this project at a later time. Other articles in the '''Swarm Robot Project''' category focus on topics such as the swarm theory and algorithms implemented, as well as previous phases of the project, such as motion control and consensus estimation. You may reach these articles and others by following the category link at the bottom of every page, or through this link - [[:Category:SwarmRobotProject|'''Swarm Robot Project''']].<br />
<br />
==RGB Swarm Quickstart Guide==<br />
<br />
Refer to [[RGB Swarm Robot Quickstart Guide|'''RGB Swarm Robot Quickstart Guide''']] for information on how start and use the RGB Swarm system and its setup.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed in to version control for ease. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for windows at http://code.google.com/p/msysgit/. Next you will need to have access to the LIMS server. Go to one of the swarm PCs or any PC which is set up to access the server and paste the following in to Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the Lims server. Now you can open GIT (Git Bash Shell) and type the following in order to get a copy of the current files on to your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MatLab files for receiving data from the epucks. These files make use of the dll to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded on to the epuck by following the instructions in [[RGB_Swarm_Robot_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Robot Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also combiled in MPLAB and run on the XBees. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of XBee Interface Extension Board, designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-Puck seen in the figure in the center. This version of the XBee Interface Board does not contain a color sensor in it. Details about this version of XBee Interface Extension Board, such as parts used and Traxmaker files can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was actually built and implemented on the e-puck #3. In order to see if there is any working problem in this board design, it is first tested with the other e-puck which uses the previous XBee Boards. <br />
<br />
The e-puck #3 upgraded with the new XBee board did not show any problem in communicating with other e-pucks. According to the goal defined, all e-pucks, including e-puck #3, locate themselves to the desired location.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you may draw from the circuit diagrams above, as each photodiode receives light, a certain amount of current start to flow through the photodiodes and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect the certain range of wavelength of the light, and the amount of current flowing through the photodiodes is determined according to the amount of the corresponding light to each photodiode. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as the input signal, amplifying it by a ratio particular to the circuit. This ratio is also known as gain, and is defined by resistance of the potentiometer. The now amplified output is then sent to the analog digital converter, which on the e-Puck had been used as the X,Y, and Z axis accelerometers. This convenient, as each accelerometer axis can be used as a channel for the color sensors three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above are as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
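As a sanity check of the gain relation above, the following C sketch (the helper name is ours, not from the project code; resistor values are from the diagram) computes V<sub>o</sub> for a given V<sub>i</sub> and potentiometer setting. A 5K potentiometer setting reproduces the gain of roughly 20 quoted above.

```c
#include <math.h>

/* Gain relation from the circuit above: |Vo| = |Vi * R2 / Rpot|.
 * R2 is the fixed 100K resistor; Rpot is the potentiometer setting.
 * The function name is illustrative, not from the project firmware. */
#define R2_OHMS 100000.0

double amplified_output(double vi_volts, double rpot_ohms)
{
    return fabs(vi_volts * (R2_OHMS / rpot_ohms));
}

/* Example: with Rpot = 5K the gain is 100K/5K = 20, so a 0.1 V input
 * across R1 produces a 2.0 V output. */
```

With the 10K potentiometer listed in the parts section, the gain at full resistance is 100K/10K = 10, and it rises as the potentiometer resistance is turned down.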
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1" header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins (Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedance op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks the sensor. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper connects the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the machine vision system recognized each e-puck by its unique black dot pattern. However, the black dot patterns require a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of an appropriate brightness in place of the black dots, so the machine vision system can now recognize e-pucks on any background. The reason the LED patterns are recognized on any background is presented briefly in the Code section below, along with the code modification made to the machine vision system to support the LED patterns. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files are in the download above.<br />
Although we replaced the black dots with LEDs, each dot pattern is preserved. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To avoid draining the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with a switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. This LED was chosen because it has appropriate brightness and power consumption, and its diffused lens lets the machine vision system see it from any position. The current-limiting resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved from the XBee Interface Extension Board to this LED pattern board so that nothing blocks the sensor. As shown in the board layout above, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests were needed to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to measure how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased in 0.25 inch steps from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As the two graphs above show, the color sensor is affected by the light from the LED, most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the red, green, and blue outputs increase by 25.9%, 22.6%, and 77.4% over their no-LED baselines (the blue increase is 43.6% when expressed as a fraction of the LED-on reading). With the LED 2.5 inches away from the color sensor, the output becomes very close to the original value. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used here is much brighter than the ones used on the LED pattern board — about 4 times brighter (18950 mcd versus 4850 mcd). Thus, further experiments with the LEDs actually used on the LED pattern board are required.<br />
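The percentage figures can be reproduced directly from the with-ambient-light table above. The C sketch below (the helper name is ours) does the arithmetic; note that relative to the no-LED baseline the blue channel rises by about 77.4%, while expressing the same 0.363 V rise as a fraction of the 0.832 V LED-on reading gives 43.6%.

```c
/* Percentage increase of a channel's output over its no-LED baseline,
 * using values copied from the with-ambient-light table above.
 * The function name is illustrative, not from the project code. */
double percent_increase(double baseline_v, double with_led_v)
{
    return 100.0 * (with_led_v - baseline_v) / baseline_v;
}

/* percent_increase(1.000, 1.259) -> 25.9  (red)
 * percent_increase(1.400, 1.716) -> ~22.6 (green)
 * percent_increase(0.469, 0.832) -> ~77.4 (blue, relative to baseline) */
```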
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to measure how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output, V<sub>o</sub>, easily saturates.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is held constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º steps from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor lie in the same horizontal plane: the LED is parallel to the plane with its head facing the color sensor, and the color sensor faces upward. As the angle increases in 15º steps, more of the LED's light shines onto the color sensor. When the angle is 90º, the LED is directly above the color sensor on the same vertical line, facing straight down at it.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* The pre-amplification voltage is measured because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The interference increases with angle up to a peak at 45º, then decreases as the angle approaches 90º. At 45º under room light, the red, green, and blue outputs increase by up to 15.6%, 19.1%, and 28.7% of V<sub>i</sub>. As the angle approaches 90º, the output returns close to its value at 0º. The reason the interference drops near 90º is that the ambient light is blocked: in this experiment the LEDs were mounted on a plane, and that plane casts a shadow on the color sensor, reducing the amount of ambient light the sensor receives. That is why the output approaches its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to be increased so that it can accommodate the standoff of the LED chosen.<br />
*The hole size for the switch has to be increased so that the switch can be fully inserted through the hole.<br />
*Currently, a 10-position 2 mm pitch socket connects the color sensor to the circuit using wires. A proper header for the color sensor should be found instead, to connect the color sensor to the circuit more conveniently.<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the e-pucks pick up light from a projector. The projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the e-pucks roll on must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but it holds up as long as no one steps on it wearing shoes. In the future, a more durable light-colored floor may be worth using. The floor is taped down with packing tape; the tape should not be removed, as the floor has markings that show where the camera calibration dots go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors that reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array. The frequency with which the mirrors switch on and off (reflect light or not) determines the intensity of the light. For example, if a bright red were being projected, the mirrors would be on more than off during a given interval. In the case of our projector that interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystoned projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so the measured intensity must be averaged over the period of the projector to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when data recording over the XBee radios was first set up, the recorded RGB values fluctuated over a period of several minutes, skewing the data. After further investigation of the projector with a digital oscilloscope, the problem was traced to the fact that the projector does not refresh at exactly 120 Hz, so its period is slightly off from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the corrected value. There is time to record 4 samples, or 32 ms of data, because each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. This averaging phases out the irregularities due to PWM, resulting in a clean, stable trace without low-frequency modulation.<br />
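A minimal sketch of this averaging scheme (names and constants are ours, not the actual e-puck firmware): several raw readings, taken roughly one projector period apart, are averaged so that the slow beat between the sampling window and the true projector period cancels out.

```c
#define NUM_SAMPLES 4   /* samples averaged per reported color value */

/* Average NUM_SAMPLES raw ADC readings into one stable color value.
 * In the firmware each sample would be taken roughly one projector
 * period (~8.2 ms) apart, ~32 ms total -- well inside the 400 ms
 * window the e-puck has to sample, packetize, and transmit. */
unsigned int average_samples(const unsigned int samples[NUM_SAMPLES])
{
    unsigned long sum = 0;
    for (int i = 0; i < NUM_SAMPLES; i++)
        sum += samples[i];
    return (unsigned int)(sum / NUM_SAMPLES);
}
```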
<br />
===Projector Mount===<br />
The projector mount was built from 80/20® parts ordered online. The order form with the complete parts list for the mount is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, overlapping by one calibration object as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the color of the background does not affect the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. We therefore now have to consider light interference between the LEDs and the color sensor. In the interference tests we found that the color sensor is affected by the light from an LED; however, since the LED used in those tests is much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough for showing/broadcasting to interested parties (such as online video streaming) and possibly for presentations.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet as they are designed to accommodate for control via firewire cable due to necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x multiplies the field of view by roughly 1/x; thus a 0.5X magnification lens gives roughly 2X the FOV<br />
***Find the camera's focal length at its widest view (this is the smallest number, in mm), and then apply the magnification to it, so 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (from a focal length of 36mm converted to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate switching between black/white pattern recognition and LED/light intensity recognition (without digging through the code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program that 'lawnmowers' the e-puck across the floor, allowing it to collect data from all positions and all orientations<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=Swarm_Robot_Project_Overview&diff=18902Swarm Robot Project Overview2010-07-13T16:01:47Z<p>Ryan Cook: /* Different Projects */</p>
<hr />
<div>==Overview==<br />
The '''Swarm Robot Project''' is an ongoing research project in the Laboratory for Intelligent Mechanical Systems led by Drs. Kevin Lynch and Randy Freeman. The project has had several phases throughout its years, starting with initial research in decentralized motion control, as well as in the application of semi-autonomous robots. In addition to these topics, the swarm robot project has applied research to swarm theory and consensus estimation, with numerous applications for environmental sensing. Most recently, the project has focused on adapting several aspects of the robots and related support structure from its original phase to the new direction of research. This has been done through updating the physical e-puck hardware, altering the e-puck motion control code and the vision system code, and physically setting up a new arena. The project has drawn on work done in several fields, such as theoretical research done to develop an efficient consensus estimator for the robots, writing simulation programs to test the consensus theory, as well as continuing development of an effective indoor machine/computer vision system to locate and track the robots through their formation moves, to name some of the work done.<br />
<br />
Physically, the Swarm Robot Project uses a group of eight (8) e-puck robots equipped with XBee radios, an overhanging home-made computer vision system, and a controlling computer with an XBee radio. Add-ons to this swarm robot system differ from project to project, and include both hardware and software additions. Specific information about the different hardware and software options can be found in each project's respective wiki entry.<br />
<br />
:''List of all swarm related pages: [[:Category:SwarmRobotProject|SwarmRobotProject]]''<br />
<br />
==Different Projects==<br />
Over the life of the Swarm Robot Project, there have been several phases of work, each focused on a different aspect of the project. The initial phase, the Swarm Consensus Estimation (Michael Hwang) Project, centered on developing an effective consensus estimation algorithm for a decentralized robot network. The next phase, the RGB Swarm Project, extended that research toward an autonomous robot network capable of receiving environmental information and, using consensus estimation, building a picture of the local environment.<br />
<br />
===RGB Swarm Project (Current)===<br />
:''Main article:'' [[RGB_Swarm_Robot_Project_Documentation|''RGB Swarm Robot Project Documentation'']]<br />
The RGB swarm robot project is the second phase of the swarm experiment. With the addition of color sensors, a projector, and a light-blocking tent, the RGB swarm robot project moves beyond the first phase and looks to link data sensing and swarm consensus theory. The project currently uses the same e-pucks and XBee radios, but has new vision and e-puck code. Many of the operating procedures are similar and can be found in the main article linked above.<br />
<br />
===Swarm Consensus Estimation Project (Old)===<br />
:''Main article:'' [[Swarm_Robot_Project_Documentation|''Swarm Robot Project Documentation'']]<br />
<br />
The Swarm Consensus Estimation Project was the first phase of the Swarm Robot Project. The main focus of this project was to test swarm theory and consensus estimation theory, both of which are the basis for research in the Swarm Robot Project. This project marked the first implementation of e-pucks, computer vision system, and consensus estimation software for the e-pucks. In addition to the main project, there were several projects related to providing additional data for analysis, such as a robot simulator program for Matlab. Much of the original code, robot and circuit design, theory, and operating procedures from the Swarm Consensus Estimation Project would be adopted in the RGB Swarm Project.<br />
<br />
==Related Links==<br />
These are other, separate, projects that are used by the Swarm Project or projects to which the Swarm Project has contributed.<br />
<br />
* [[Indoor Localization System]]<br />
* [[Machine Vision Localization System]]<br />
* [[E-puck Mobile Robot]]<br />
<br />
<br />
<br />
<br />
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18901RGB Swarm Robot Quickstart Guide2010-07-13T15:59:55Z<p>Ryan Cook: /* Machine Vision Localization System */</p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project_Documentation|'''RGB Swarm Robot Project Documentation''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project_Documentation#Physical_Setup|'''RGB Swarm Robot Project Documentation: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as to a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio, more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple setup may require only two (2) radios: one for the Vision System program and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions here [[RGB_Swarm_Robot_Project_Documentation#Software|'''RGB Swarm Robot Project Documentation: Software''']] to download the epucks code. Then build this code using MPLab and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck, skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM (serial) ports for that specific e-puck, one '''Incoming''' and the other '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_Robot_Quickstart_Guide#Programming_the_e-puck|''Swarm Robot Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', to (re)program the microcontroller. The bootloader makes use of the bluetooth/serial port connection, allowing for wireless programming instead of connecting a serial programmer directly to the microcontroller, which expedites the reprogramming process. The bootloader is comprised of two parts: one runs on the e-puck and writes code to the e-puck microcontroller; the other runs on the computer as the user interface. On the e-puck side, the bootloader quickly checks whether something is trying to re-program the PIC. If so, the bootloader writes the new software over the old and then runs the new software; if not, it simply allows the PIC to run the program currently loaded. The computer side lets the user select which COM port to program over (and therefore which e-puck to program) and which .hex file to program onto the e-puck.<br />
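The e-puck-side check described above can be sketched as a simple timeout loop. This is an illustrative model only, not the actual Tiny Bootloader protocol: the attention byte value, the timeout, and the <tt>read_byte</tt> transport are all hypothetical.<br />

```python
import time

ATTENTION = 0xC1  # hypothetical "please reprogram" byte from the PC side


def bootloader_entry(read_byte, timeout_s=1.0):
    """Model of the e-puck-side decision: if the PC sends the attention
    byte before the timeout expires, stay in the bootloader and accept
    new code; otherwise jump to the application already in flash."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_byte() == ATTENTION:
            return "reprogram"       # overwrite the old program, then run it
    return "run_application"         # timed out: run the existing program
```

This also illustrates why the reset button must be pressed before the Tiny Bootloader status bar reaches zero: the e-puck only listens for the programming request during a short window after reset.<br />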
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code from the Microchip C compiler, as the .hex file is needed to install onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, an orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], go to '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''' and look for the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file.<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time than programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming the XBee radio, whether it belongs to an e-puck or to the data logging or vision computers; certain steps may be skipped depending on the kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, Select the COM Port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every baud rate listed. X-CTU does not automatically detect the radio's baud rate; while the radios currently used are all set to 115200, they may have been reset, and it will then be necessary to manually test each baud rate via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and 'Modem firmware version = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''' and press <enter>. This signals to the XBee radio that you wish to program it.<br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''', followed with the ID number, e.g. your command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note that e-puck IDs run from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to use the API language with interrupts (to ensure packet quality)<br />
##Type ''''atd6 1''''. This is related to the RTS pin, and allows the radio to hold packets in buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the baud rate to 115200, the rate used for all radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
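The terminal session above can also be scripted. The sketch below only assembles the command strings from the steps above for a given radio; actually sending them over the serial port (e.g. with a library such as pySerial, after the ''''+++'''' attention sequence) is left out, since it depends on your adapter.<br />

```python
def xbee_config_commands(radio_id, is_epuck_radio):
    """Build the AT command sequence described above for one XBee radio.
    e-puck radios take IDs 1-31; computer radios (vision system, data
    logger) have ID 0 and skip the 'atd6 1' RTS step."""
    if is_epuck_radio and not 1 <= radio_id <= 31:
        raise ValueError("e-puck radio IDs must be 1-31")
    if not is_epuck_radio and radio_id != 0:
        raise ValueError("computer radios must have ID 0")
    cmds = ["atre",                # reset the radio's settings
            "atmy %d" % radio_id,  # set the radio ID
            "atap 2"]              # API mode, per the steps above
    if is_epuck_radio:
        cmds.append("atd6 1")      # RTS pin / packet buffering (e-puck only)
    cmds += ["atbd 7",             # baud rate 115200
             "atwr"]               # save to non-volatile memory
    return cmds
```

For example, <tt>xbee_config_commands(3, True)</tt> reproduces the ''atmy 3'' session used as the example above.<br />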
<br />
===Troubleshooting the XBee Radios===<br />
*XBee radios can only be used by one program at a time, and if an XBee radio is being called by two different programs, this may lead to problems. This is due to the fact that each XBee radio is connected to one 'virtual' COM port, and said COM port can be used by only one program ''at a time''.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios will and must be ID'd as '''0''' even though they are assigned to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board, near the plug, there are two columns of three (3) LED lights. The leftmost column of lights should be blinking. The yellow light signifies that a packet has been received, and a green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (just try one radio first to see if that fixes the problem; rather than doing them all and realizing the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU on a connected serial adapter board and XBee radio and see whether you are getting any signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), you should see some signal (it will look like gibberish because it is in hex)<br />
**#Use the Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']], to check whether your XBee radios are working<br />
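Because the radios run in API mode (''atap 2''), the 'gibberish' seen in the terminal is framed binary data, not text. If you write your own packet viewer, a Series 1 XBee API frame consists of a 0x7E start delimiter, a two-byte big-endian length, the frame data, and a checksum equal to 0xFF minus the low byte of the sum of the frame data. The sketch below builds and verifies such frames; the example payload in the test is arbitrary.<br />

```python
def build_api_frame(frame_data):
    """Wrap raw frame data in an XBee API frame: 0x7E start delimiter,
    16-bit big-endian length, payload, then an 8-bit checksum."""
    length = len(frame_data)
    checksum = 0xFF - (sum(frame_data) & 0xFF)
    return (bytes([0x7E, length >> 8, length & 0xFF])
            + bytes(frame_data) + bytes([checksum]))


def verify_api_frame(frame):
    """Return the frame data if the delimiter, length, and checksum are
    all consistent; otherwise return None (a corrupt or partial frame)."""
    if len(frame) < 5 or frame[0] != 0x7E:
        return None
    length = (frame[1] << 8) | frame[2]
    if len(frame) != length + 4:
        return None
    data, checksum = frame[3:3 + length], frame[3 + length]
    if (sum(data) + checksum) & 0xFF != 0xFF:  # valid frames sum to 0xFF
        return None
    return data
```

A frame that fails <tt>verify_api_frame</tt> is exactly the kind of packet the interrupt/escaping setting above is meant to guard against.<br />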
<br />
==Machine Vision Localization System==<br />
Download the files [[RGB_Swarm_Robot_Project#Software|here]].<br />
Open the solution in the main folder SwarmSystem in Visual Studio 2010. Make sure the COM port in UserInterface.h is up to date and hit run.<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab files by following the instructions at [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] and opening the DataAquasition folder. Inside this folder you will find the following files:<br />
<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file used to look at the data the e-pucks are sending back. The color sensor data from the selected e-puck is shown on the left, and the position of each e-puck is shown on the right. On the bottom is a table of all the data sent from the e-pucks. There is also a toggle to display the coordinates with respect to the center of the e-puck or with respect to the color sensor on each e-puck.<br />
**'''Note:''' In order to access the log of data which has been received by the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received by each epuck. To change this, simply change the RECDUR variable in the puckUI_OpeningFcn. Also, currently the timer periods of .4 seconds for plot updates and .05 seconds to receive data work efficiently and do not take up too much CPU, but these values can be changed in puckUI_OpeningFcn as well.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll in SerialCommands.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
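The rolling <tt>data_log</tt> behavior described above (the last 100 packets per e-puck, with the depth set by RECDUR) can be sketched outside Matlab. The Python version below is only an illustration of the buffering scheme, not the actual puckUI code.<br />

```python
from collections import defaultdict, deque

RECDUR = 100  # packets retained per e-puck, mirroring puckUI_OpeningFcn


class DataLog:
    """Keep only the most recent RECDUR packets for each e-puck ID,
    silently discarding the oldest as new packets arrive."""

    def __init__(self, depth=RECDUR):
        # one bounded buffer per e-puck ID, created on first use
        self.log = defaultdict(lambda: deque(maxlen=depth))

    def record(self, epuck_id, packet):
        self.log[epuck_id].append(packet)

    def latest(self, epuck_id):
        return list(self.log[epuck_id])
```

Because each buffer is bounded, memory use stays constant no matter how long the receive timer runs, which is why changing RECDUR is the only adjustment needed to keep more history.<br />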
<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>Ryan Cook
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project_Documentation|'''RGB Swarm Robot Project Documentation''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project_Documentation#Physical_Setup|'''RGB Swarm Robot Project Documentation: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as to a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio, more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple set up may requires only two (2) radios, one for the Vision System program, and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions here [[RGB_Swarm_Robot_Project_Documentation#Software|'''RGB Swarm Robot Project Documentation: Software''']] to download the epucks code. Then build this code using MPLab and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck, skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM(serial) port for that specific e-puck, one as '''Incoming''' and the other as '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged it into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_Robot_Quickstart_Guide#Programming_the_e-puck|''Swarm Robot Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', in order to (re)program the microcontroller. The bootloader makes use of the bluetooth/serial port connection, allowing for wireless programming, as opposed to connecting a specific serial port connector directly to the microcontroller. This expedites the reprogramming process. The bootloader is comprised of two parts, one which operates on the e-puck, writing code to the e-puck microcontroller, and the other, which operates on the computer as user interface. On the e-puck side, the bootloader quickly checks to see if something is trying to re-program the PIC. If so, the bootloader will simply write the new software over the old, and then run the new software. If not, the bootloader simply allows the PIC to run the program currently loaded. The computer side allows a user to select which COM port to program over (allowing one to select which e-puck to program), and selecting what HEX file to program to the e-puck.<br />
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code from the Microchip C compiler, as the .hex file is needed to install onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, a orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on, and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB.<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], goto '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''', look for <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> hex file.<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time that programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming the Xbee Radio, whether for the e-puck or the data logging or vision computers. Certain steps may be skipped however for each kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, Select the COM Port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every frequency listed. X-CTU does not automatically detect what the Baud on the radio is, and while the radios currently used are all set to have a Baud of 115200, they may be reset, and it will be necessary to manually test each Baud freqency via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and Modem firmware vision = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''', and press <enter>. this signals to the XBee radio that you wish to program it. <br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''', followed with the ID number, e.g. your command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note, that e-puck ID's are from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to use the API language with interrupts (to ensure packet quality)<br />
##Type ''''atd6 1''''. This is related to the RTS pin, and allows the radio to hold packets in buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the Baud to 115200, the frequency used for radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
<br />
===Troubleshooting the XBee Radios===<br />
*XBee radios can only be used by one program at a time, and if an XBee radio is being called by two different programs, this may lead to problems. This is due to the fact that each XBee radio is connected to one 'virtual' COM port, and said COM port can be used by only one program ''at a time''.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios will and must be ID'd as '''0''' even though they are to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board near where the plug is, there are two columns of three (3) LED lights. The left most column of lights should be blinking. The yellow light signifies that a packet is has been received, and a green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (just try one radio first to see if that fixes the problem; rather than doing them all and realizing the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU to a connected serial adapter board and XBee radio, and see if you are getting any sort of signal in the terminal box. If the radios are configured to work with one another (e.g. with a e-puck XBee radio and a data logger XBee radio), one should be able to see some signal (it will look like gibberish, this is because it is in hex)<br />
**#Use a Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']] to check if your XBee radios are working<br />
<br />
==Machine Vision Localization System==<br />
Download the files [[RGB_Swarm_Robot_Project#Software|here]].<br />
Open the solution in the main folder SwarmSystem in Visual studio 2010. Make sure the COM PORT in UserInterface.h is upto date and hit run.<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab files by following these instructions [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] and opening the DataAquasition folder. Inside this folder you will find the following files:<br />
<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file which is used to look at the data the epucks are sending back. The color sensor data from the selected epuck is shown on the left and the positions of each epuck is shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each eapuck.<br />
**'''Note:''' In order to access the log of data which has been received by the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received by each epuck. To change this, simply change the RECDUR variable in the puckUI_OpeningFcn. Also, currently the timer periods of .4 seconds for plot updates and .05 seconds to receive data work efficiently and do not take up too much CPU, but these values can be changed in puckUI_OpeningFcn as well.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll in SerialCommands.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18897RGB Swarm Robot Quickstart Guide2010-07-13T15:53:00Z<p>Ryan Cook: /* Programming the e-puck */</p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project_Documentation|'''RGB Swarm Robot Project Documentation''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project_Documentation#Physical_Setup|'''RGB Swarm Robot Project Documentation: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as to a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio, more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple set up may requires only two (2) radios, one for the Vision System program, and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions here [[RGB_Swarm_Robot_Project_Documentation#Software|'''RGB Swarm Robot Project Documentation: Software''']] to download the epucks code. Then build this code using MPLab and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck, skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM(serial) port for that specific e-puck, one as '''Incoming''' and the other as '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, and the dongle is unplugged and plugged into a different USB port, or a different dongle is used altogether, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_Robot_Quickstart_Guide#Programming_the_e-puck|''Swarm Robot Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', in order to (re)program the microcontroller. The bootloader uses the bluetooth/serial port connection, allowing for wireless programming rather than connecting a serial programmer directly to the microcontroller, which expedites reprogramming. The bootloader consists of two parts: one runs on the e-puck and writes code to the e-puck microcontroller, while the other runs on the computer as the user interface. On the e-puck side, the bootloader briefly checks whether something is trying to re-program the PIC. If so, it writes the new software over the old and then runs the new software; if not, it simply lets the PIC run the program currently loaded. The computer side lets the user select which COM port to program over (that is, which e-puck to program) and which HEX file to program to the e-puck.<br />
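The PIC-side decision flow just described can be sketched as follows. This is an illustrative sketch in Python, not the actual Tiny Bootloader source (which runs in C on the dsPIC); the three callables are hypothetical stand-ins for the real routines.

```python
# Illustrative sketch of the bootloader's startup decision -- not the
# actual Tiny Bootloader code. wait_for_request, write_to_flash, and
# run_user_program are hypothetical stand-ins for the PIC-side routines.

def bootloader_start(wait_for_request, write_to_flash, run_user_program):
    # Briefly listen for a programming request after reset.
    request = wait_for_request(timeout_s=1.0)
    if request is not None:
        # Something is trying to re-program the PIC: overwrite the old
        # program with the new one.
        write_to_flash(request)
    # In either case, hand control to the (possibly new) user program.
    run_user_program()
```

This is why the blue reset button must be pressed before the bootloader times out: the programming request is only accepted during the brief window after reset.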
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code from the Microchip C compiler, as the .hex file is needed to install onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, an orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], go to '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''' and look for the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> hex file.<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time than programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming the XBee radio, whether for an e-puck or for the data logging or vision computers; however, certain steps are skipped for some kinds of radio, as noted below.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, Select the COM Port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every baud rate listed. X-CTU does not automatically detect the radio's baud rate, and while the radios currently used are all set to 115200, they may have been reset, in which case each baud rate must be tested manually via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and 'Modem firmware version = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''', and press <enter>. This signals to the XBee radio that you wish to program it.<br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''' followed by the ID number; e.g., for radio 3 the command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note that e-puck IDs run from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to use the API language with interrupts (to ensure packet quality)<br />
##Type ''''atd6 1''''. This is related to the RTS pin, and allows the radio to hold packets in buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the baud rate to 115200, the rate used for all radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, so they persist when the radio is powered off and on.<br />
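The command sequence above can also be assembled programmatically. The sketch below is not part of the project code; it simply builds the same command list for a given radio, so the settings can be pasted into the X-CTU terminal or scripted later. The function name and the 0-31 ID range check reflect this project's conventions.

```python
# Sketch: build the X-CTU terminal command sequence for one XBee radio.
# Not part of the project code -- mirrors the manual steps listed above.

def xbee_config_commands(radio_id, is_epuck):
    """Return the AT commands for one radio.

    radio_id: 0 for computer radios, 1-31 for e-puck radios.
    is_epuck: e-puck radios additionally get 'atd6 1' (RTS buffering);
              computer radios (vision system, data logger) skip that step.
    """
    if not 0 <= radio_id <= 31:
        raise ValueError("radio IDs used in this project are 0-31")
    cmds = ["atre",                # reset settings
            "atmy %d" % radio_id,  # set radio ID
            "atap 2"]              # API mode (for packet integrity)
    if is_epuck:
        cmds.append("atd6 1")      # RTS pin / packet buffering, e-pucks only
    cmds += ["atbd 7",             # 115200 baud
             "atwr"]               # save to non-volatile memory
    return cmds
```

For example, an e-puck radio with ID 3 gets the full six-command sequence, while a computer radio with ID 0 gets the same sequence without 'atd6 1'.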
<br />
===Troubleshooting the XBee Radios===<br />
*XBee radios can only be used by one program at a time, and if an XBee radio is being called by two different programs, this may lead to problems. This is due to the fact that each XBee radio is connected to one 'virtual' COM port, and said COM port can be used by only one program ''at a time''.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios will and must be ID'd as '''0''', even though they are assigned to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board, near the plug, there are two columns of three (3) LED lights. The leftmost column of lights should be blinking: the yellow light signifies that a packet has been received, and the green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (just try one radio first to see if that fixes the problem; rather than doing them all and realizing the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU with a connected serial adapter board and XBee radio, and see if you are getting any signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), you should see some traffic (it will look like gibberish because it is raw hex)<br />
**#Use the Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']], to check whether your XBee radios are working<br />
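When reading raw API-mode traffic (the hex 'gibberish' in the X-CTU terminal), individual frames can be sanity-checked by hand. The sketch below validates the standard XBee API frame layout (0x7E start byte, two length bytes, payload, then a checksum chosen so that the payload bytes plus the checksum sum to 0xFF modulo 256); it assumes any API escape bytes have already been removed, and is an aid for debugging, not part of the project code.

```python
# Sketch: validate one unescaped XBee API frame.
# Layout: 0x7E, length MSB, length LSB, payload (length bytes), checksum.

def xbee_frame_ok(frame):
    if len(frame) < 5 or frame[0] != 0x7E:
        return False
    length = (frame[1] << 8) | frame[2]
    if len(frame) != length + 4:  # start byte + 2 length bytes + checksum
        return False
    payload, checksum = frame[3:3 + length], frame[3 + length]
    # Payload bytes plus checksum must sum to 0xFF (lowest 8 bits).
    return (sum(payload) + checksum) & 0xFF == 0xFF
```

A frame that fails this check was corrupted in transit or mis-framed; seeing many such frames is a hint to re-check the baud rate and the 'atap 2' setting on both radios.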
<br />
==Machine Vision Localization System==<br />
*[[media:RGB_Machine_Vision_Localization_System.zip|'''RGB_Machine_Vision_Localization_System.zip''']]<br />
FILL IN HOW TO QUICK START VISION SYSTEM<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab files by following these instructions [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] and opening the DataAquasition folder. Inside this folder you will find the following files:<br />
<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file used to view the data the e-pucks are sending back. The color sensor data from the selected e-puck is shown on the left, and the positions of all the e-pucks are shown on the right. On the bottom is a table of all the data sent from the e-pucks. There is also a toggle to display the coordinates with respect to the center of each e-puck or with respect to its color sensor.<br />
**'''Note:''' To access the log of data received from the e-pucks, simply call <tt>global data_log</tt>. Currently the data_log variable holds the last 100 packets of data received from each e-puck; to change this, change the RECDUR variable in puckUI_OpeningFcn. The timer periods of 0.4 seconds for plot updates and 0.05 seconds for receiving data work efficiently without taking up too much CPU, but these values can also be changed in puckUI_OpeningFcn.<br />
*'''serialdllsample.m''' - This file contains all of the Matlab functions needed to open the port, send commands, etc. from Matlab using the dll in SerialCommands.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
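ProjectorLight.m itself is Matlab; the Python sketch below illustrates the same idea under assumed geometry: a projector mounted at height h above the floor, with the incidence angle measured from the vertical. A 9 x 9 grid of floor points would give the 81 angles mentioned above; the grid coordinates and projector position here are illustrative, not the lab's actual values.

```python
# Sketch (assumed geometry, not the ProjectorLight.m source): incidence
# angle at a floor point is atan2 of its horizontal distance from the
# point directly under the projector over the projector height h.
import math

def incident_angles(h, xs, ys, px=0.0, py=0.0):
    """Angle of incidence (radians, from vertical) at each (x, y) floor point.

    h: projector height above the floor; (px, py): floor point directly
    beneath the projector. Returns a row per y value, a column per x value.
    """
    return [[math.atan2(math.hypot(x - px, y - py), h) for x in xs]
            for y in ys]
```

Directly beneath the projector the angle is 0, and it grows toward the edges of the floor, which is why the light intensity seen by the color sensor varies with position and needs calibrating.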
<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18896RGB Swarm Robot Quickstart Guide2010-07-13T15:52:19Z<p>Ryan Cook: /* e-puck and e-puck Code */</p>
Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18895RGB Swarm Robot Quickstart Guide2010-07-13T15:51:58Z<p>Ryan Cook: /* Checklist */</p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project_Documentation|'''RGB Swarm Robot Project Documentation''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project_Documentation#Physical_Setup|'''RGB Swarm Robot Project Documentation: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as to a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio, more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple set up may requires only two (2) radios, one for the Vision System program, and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions here [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] to download the epucks code. Then build this code using MPLab and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck, skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM(serial) port for that specific e-puck, one as '''Incoming''' and the other as '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged it into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_E-puck_Quickstart_Guide#Programming_the_e-puck|''Swarm E-puck Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', in order to (re)program the microcontroller. The bootloader makes use of the bluetooth/serial port connection, allowing for wireless programming, as opposed to connecting a specific serial port connector directly to the microcontroller. This expedites the reprogramming process. The bootloader is comprised of two parts, one which operates on the e-puck, writing code to the e-puck microcontroller, and the other, which operates on the computer as user interface. On the e-puck side, the bootloader quickly checks to see if something is trying to re-program the PIC. If so, the bootloader will simply write the new software over the old, and then run the new software. If not, the bootloader simply allows the PIC to run the program currently loaded. The computer side allows a user to select which COM port to program over (allowing one to select which e-puck to program), and selecting what HEX file to program to the e-puck.<br />
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code with the Microchip C compiler, as the resulting .hex file is what is installed onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, an orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on, and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], go to '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''' and look for the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file.<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time than programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming the XBee radio, whether for the e-puck or for the data logging or vision computers; certain steps are skipped depending on the kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, select the COM port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every baud rate listed. X-CTU does not automatically detect the radio's baud rate, and while the radios currently used are all set to 115200, they may have been reset, in which case it will be necessary to manually test each baud rate via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and 'Modem firmware version = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''', and press <enter>. This signals to the XBee radio that you wish to program it.<br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''', followed with the ID number, e.g. your command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note that e-puck IDs range from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to use API mode with escaped characters (to ensure packet integrity)<br />
##Type ''''atd6 1''''. This is related to the RTS pin, and allows the radio to hold packets in buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the baud rate to 115200, the rate used for all radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
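The terminal sequence above can also be composed programmatically. The sketch below only builds the list of command strings, including the conditional ''atd6 1'' step; the function name is hypothetical, and actually sending the strings over a serial link is left out.<br />

```python
def xbee_config_commands(radio_id, is_epuck):
    """Compose the X-CTU terminal commands listed above.

    radio_id: 0 for computer radios, 1-31 for e-puck radios.
    is_epuck: computer radios (vision system, data logger) skip
    the 'atd6 1' RTS flow-control step, as noted above.
    """
    if not 0 <= radio_id <= 31:
        raise ValueError("radio ID must be 0 (computer) or 1-31 (e-puck)")
    cmds = ["atre",                 # reset radio settings
            f"atmy {radio_id}",     # set radio ID
            "atap 2"]               # API mode with escaped characters
    if is_epuck:
        cmds.append("atd6 1")       # RTS flow control (e-pucks only)
    cmds += ["atbd 7",              # 115200 baud
             "atwr"]                # save to non-volatile memory
    return cmds
```

For example, <tt>xbee_config_commands(3, True)</tt> reproduces the sequence you would type for e-puck 3, while <tt>xbee_config_commands(0, False)</tt> omits the ''atd6 1'' step for a computer radio.<br />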
<br />
===Troubleshooting the XBee Radios===<br />
*XBee radios can only be used by one program at a time, and if an XBee radio is being called by two different programs, this may lead to problems. This is due to the fact that each XBee radio is connected to one 'virtual' COM port, and said COM port can be used by only one program ''at a time''.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios will and must be ID'd as '''0''' even though they are assigned to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board near where the plug is, there are two columns of three (3) LED lights. The leftmost column of lights should be blinking. The yellow light signifies that a packet has been received, and a green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (try one radio first to see if that fixes the problem, rather than doing them all and realizing the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU to a connected serial adapter board and XBee radio, and see if you are getting any sort of signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), one should be able to see some signal (it will look like gibberish because it is raw hex)<br />
**#Use the Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']], to check if your XBee radios are working<br />
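When reading raw API-mode traffic in a terminal or packet viewer, it helps to know the XBee API frame layout: a 0x7E start byte, a two-byte length, the frame data, and a checksum equal to 0xFF minus the low byte of the frame-data sum. The minimal validity check below follows that rule; the example frame bytes in the usage note are made up.<br />

```python
def xbee_checksum(frame_data):
    """Checksum for an XBee API frame: 0xFF minus the low byte of
    the sum of the frame-data bytes (everything between the length
    field and the checksum byte)."""
    return 0xFF - (sum(frame_data) & 0xFF)

def frame_is_valid(frame_data, checksum):
    """A received frame verifies if the frame-data sum plus the
    received checksum has 0xFF as its low byte."""
    return (sum(frame_data) + checksum) & 0xFF == 0xFF
```

For instance, for the (made-up) frame data <tt>[0x08, 0x01, 0x4D, 0x59]</tt> the checksum works out to 0x50, and flipping any bit makes <tt>frame_is_valid</tt> fail.<br />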
<br />
==Machine Vision Localization System==<br />
*[[media:RGB_Machine_Vision_Localization_System.zip|'''RGB_Machine_Vision_Localization_System.zip''']]<br />
FILL IN HOW TO QUICK START VISION SYSTEM<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab files by following the instructions at [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] and opening the DataAquasition folder. Inside this folder you will find the following files:<br />
<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file used to look at the data the epucks are sending back. The color sensor data from the selected epuck is shown on the left, and the position of each epuck is shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each epuck.<br />
**'''Note:''' In order to access the log of data which has been received by the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received by each epuck. To change this, simply change the RECDUR variable in the puckUI_OpeningFcn. Also, currently the timer periods of .4 seconds for plot updates and .05 seconds to receive data work efficiently and do not take up too much CPU, but these values can be changed in puckUI_OpeningFcn as well.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll in SerialCommands.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
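'''ProjectorLight.m''' itself is not reproduced here; the sketch below only illustrates the kind of geometry involved in tabulating incident angles over a grid of floor points. The projector position, height, and floor dimensions are made-up placeholders — the real constants live in the MATLAB file.<br />

```python
import math

def incident_angles(grid=9, floor_size=2.0, proj_height=1.5,
                    proj_x=1.0, proj_y=1.0):
    """Illustrative only: angle of incidence (measured from the
    floor normal, in degrees) at each point of a grid x grid
    lattice on a square floor, for a point-source projector at
    (proj_x, proj_y, proj_height). All dimensions hypothetical."""
    angles = []
    for i in range(grid):
        for j in range(grid):
            x = floor_size * i / (grid - 1)
            y = floor_size * j / (grid - 1)
            horiz = math.hypot(x - proj_x, y - proj_y)  # horizontal offset
            angles.append(math.degrees(math.atan2(horiz, proj_height)))
    return angles
```

With the defaults this yields 81 angles (a 9x9 grid, matching the 81 values mentioned above): zero directly under the projector and growing toward the floor's edges.<br />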
<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18894RGB Swarm Robot Quickstart Guide2010-07-13T15:51:24Z<p>Ryan Cook: </p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project_Documentation|'''RGB Swarm Robot Project Documentation''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project#Physical_Setup|'''RGB Swarm Robot Project: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as to a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio; more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple setup may require only two (2) radios, one for the Vision System program and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions at [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] to download the e-puck code. Then build this code using MPLAB and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file which is ready to be uploaded onto the e-puck; skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM (serial) ports for that specific e-puck, one '''Incoming''' and the other '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for Windows XP Professional Edition. The exact menu paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_E-puck_Quickstart_Guide#Programming_the_e-puck|''Swarm E-puck Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', in order to (re)program the microcontroller. The bootloader makes use of the bluetooth/serial port connection, allowing for wireless programming, as opposed to connecting a serial port connector directly to the microcontroller. This expedites the reprogramming process. The bootloader consists of two parts: one operates on the e-puck, writing code to the e-puck microcontroller, and the other operates on the computer as the user interface. On the e-puck side, the bootloader quickly checks to see if something is trying to re-program the PIC. If so, the bootloader writes the new software over the old, and then runs the new software. If not, the bootloader simply allows the PIC to run the program currently loaded. The computer side allows a user to select which COM port to program over (that is, which e-puck to program), and which HEX file to program onto the e-puck.<br />
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code with the Microchip C compiler, as the resulting .hex file is what is installed onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, an orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on, and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], go to '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''' and look for the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file.<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time than programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming the XBee radio, whether for the e-puck or for the data logging or vision computers; certain steps are skipped depending on the kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, select the COM port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every baud rate listed. X-CTU does not automatically detect the radio's baud rate, and while the radios currently used are all set to 115200, they may have been reset, in which case it will be necessary to manually test each baud rate via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and 'Modem firmware version = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''', and press <enter>. This signals to the XBee radio that you wish to program it.<br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''', followed with the ID number, e.g. your command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note that e-puck IDs range from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to use API mode with escaped characters (to ensure packet integrity)<br />
##Type ''''atd6 1''''. This is related to the RTS pin, and allows the radio to hold packets in buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the baud rate to 115200, the rate used for all radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
<br />
===Troubleshooting the XBee Radios===<br />
*XBee radios can only be used by one program at a time, and if an XBee radio is being called by two different programs, this may lead to problems. This is due to the fact that each XBee radio is connected to one 'virtual' COM port, and said COM port can be used by only one program ''at a time''.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios will and must be ID'd as '''0''' even though they are assigned to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board near where the plug is, there are two columns of three (3) LED lights. The leftmost column of lights should be blinking. The yellow light signifies that a packet has been received, and a green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (try one radio first to see if that fixes the problem, rather than doing them all and realizing the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU to a connected serial adapter board and XBee radio, and see if you are getting any sort of signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), one should be able to see some signal (it will look like gibberish because it is raw hex)<br />
**#Use the Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']], to check if your XBee radios are working<br />
<br />
==Machine Vision Localization System==<br />
*[[media:RGB_Machine_Vision_Localization_System.zip|'''RGB_Machine_Vision_Localization_System.zip''']]<br />
FILL IN HOW TO QUICK START VISION SYSTEM<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab files by following the instructions at [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] and opening the DataAquasition folder. Inside this folder you will find the following files:<br />
<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file used to look at the data the epucks are sending back. The color sensor data from the selected epuck is shown on the left, and the position of each epuck is shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each epuck.<br />
**'''Note:''' In order to access the log of data which has been received by the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received by each epuck. To change this, simply change the RECDUR variable in the puckUI_OpeningFcn. Also, currently the timer periods of .4 seconds for plot updates and .05 seconds to receive data work efficiently and do not take up too much CPU, but these values can be changed in puckUI_OpeningFcn as well.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll in SerialCommands.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=Swarm_Robot_Project_Overview&diff=18893Swarm Robot Project Overview2010-07-13T15:50:12Z<p>Ryan Cook: /* RGB Swarm Project */</p>
<hr />
<div>==Overview==<br />
The '''Swarm Robot Project''' is an ongoing research project in the Laboratory for Intelligent Mechanical Systems led by Drs. Kevin Lynch and Randy Freeman. The project has had several phases throughout its years, starting with initial research in decentralized motion control, as well as in the application of semi-autonomous robots. In addition to these topics, the swarm robot project has applied research to swarm theory and consensus estimation, with numerous applications for environmental sensing. Most recently, the project has focused on adapting several aspects of the robots and related support structure from its original phase to the new direction of research. This has been done through updating the physical e-puck hardware, altering the e-puck motion control code and the vision system code, and physically setting up a new arena. The project has drawn on work done in several fields, such as theoretical research done to develop an efficient consensus estimator for the robots, writing simulation programs to test the consensus theory, as well as continuing development of an effective indoor machine/computer vision system to locate and track the robots through their formation moves, to name some of the work done.<br />
<br />
Physically, the Swarm Robot Project uses a group of eight (8) e-puck robots equipped with XBee radios, an overhead home-made computer vision system, and a controlling computer with an XBee radio. Add-ons to this swarm robot system differ from project to project, but include both hardware and software additions. Specific information about different hardware and software options can be found in each project's respective wiki entry.<br />
<br />
:''List of all swarm related pages: [[:Category:SwarmRobotProject|SwarmRobotProject]]''<br />
<br />
==Different Projects==<br />
Throughout the existence of the Swarm Robot Project, there have been several phases of work, with each phase focused on a different aspect of the project. The initial phase, the Swarm Consensus Estimation (The Michael Hwang) Project, focused on developing an effective consensus estimation algorithm for a decentralized robot network. The next phase, the RGB Swarm Project, furthered the initial research to develop an autonomous robot network capable of receiving environmental information and, using consensus estimation, building a picture of the local environment.<br />
<br />
===Swarm Consensus Estimation (The Michael Hwang) Project===<br />
:''Main article:'' [[Swarm_Robot_Project_Documentation|''Swarm Robot Project Documentation'']]<br />
<br />
The Swarm Consensus Estimation Project was the first phase of the Swarm Robot Project. The main focus of this project was to test swarm theory and consensus estimation theory, both of which are the basis for research in the Swarm Robot Project. This project marked the first implementation of e-pucks, computer vision system, and consensus estimation software for the e-pucks. In addition to the main project, there were several projects related to providing additional data for analysis, such as a robot simulator program for Matlab. Much of the original code, robot and circuit design, theory, and operating procedures from the Swarm Consensus Estimation Project would be adopted in the RGB Swarm Project.<br />
<br />
===RGB Swarm Project===<br />
:''Main article:'' [[RGB_Swarm_Robot_Project_Documentation|''RGB Swarm Robot Project Documentation'']]<br />
The RGB swarm robot project is the second phase of the swarm experiment. With the addition of color sensors, a projector, and a light-blocking tent, the RGB swarm robot project moves beyond the first phase and looks to link data sensing and swarm consensus theory. Currently the project uses the same e-pucks and XBee radios, but has implemented new vision and e-puck code. Many of the operating procedures are similar and can be found in the main article linked to above.<br />
<br />
==Related Links==<br />
These are other, separate, projects that are used by the Swarm Project or projects to which the Swarm Project has contributed.<br />
<br />
* [[Indoor Localization System]]<br />
* [[Machine Vision Localization System]]<br />
* [[E-puck Mobile Robot]]<br />
<br />
<br />
<br />
<br />
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_E-puck_Code&diff=18892RGB Swarm Robot Project E-puck Code2010-07-13T15:49:25Z<p>Ryan Cook: RGB Swarm Robot Project E-puck Code moved to RGB Swarm Robot Project E-puck Code (outdated): To remove the old code from the current wiki files</p>
<hr />
<div>#REDIRECT [[RGB Swarm Robot Project E-puck Code (outdated)]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_E-puck_Code_(outdated)&diff=18891RGB Swarm Robot Project E-puck Code (outdated)2010-07-13T15:49:25Z<p>Ryan Cook: RGB Swarm Robot Project E-puck Code moved to RGB Swarm Robot Project E-puck Code (outdated): To remove the old code from the current wiki files</p>
<hr />
<div>This page documents the e-puck code for the RGB Sensing Swarm Robotics project. The code on the e-puck was written in C and compiled using Microchip's MPLAB C Compiler for dsPIC DSCs (student version). <br />
<br />
This code is a branch of the [[Swarm Project E-puck Code]].<br />
<br />
For a high level overview, review the [[Media:Swarm-robotics-flow-chart-epuck-all.pdf|flow chart]].<br />
<br />
==Tasks==<br />
<br />
===Complete===<br />
* Got ADC working for color sensor<br />
** Set up filtering to address projector PWM<br />
* Restructured code to make more modular.<br />
** Split dsPIC_XBeePackets and wheel_speed_coordinator into h and c files<br />
** Pulled packet assembling code out of main and created send_packet() function in send_packet.h/c.<br />
** Pulled a bunch of variables and defines (''NUM_DATA_SETS, NUMBERS_PER_SET, DATATYPE_BYTELENGTH , DATA_ARRAY_LENGTH , ADDITIONAL_NUMS, notRTS, T1_INT_FLAG, x_i, u_i, w_i, x_sum, w_sum, MAX_WHEEL_V_TICKS, deadband, COMMR, SAFEDIST, MINDIST, u_x_ideal, u_y_ideal, x_motion_integral, y_motion_integral, SQUARE'') that were scattered across h files into global_vars.h/c. Makes it easy to include them in a particular file with the ''extern'' keyword.<br />
* Added color_cal() function in color_cal.h/c<br />
* Added wheelSpeedSingleBot to wheel_speed_coordinator<br />
<br />
===To Do===<br />
* Finish the color calibration (color_cal) program, to facilitate simple calibration for each e-puck that does not require a large amount of data to be collected for each run and needs minimal user input/control (the goal is automation and simplicity)<br />
* Improve the vision system position information updater<br />
** So that the e-puck does not automatically assume every packet from the Vision System is true, rather 'checks' for accuracy<br />
* Replace wheelSpeedSingleBot with the three step move controller from NUtest.c<br />
**Implement this so that the e-puck can be commanded to a position and orientation (ideal, since data from every orientation of the e-puck can then be recorded while calibrating or collecting data)<br />
* Implement new algorithm from paper<br />
<br />
==Project Package==<br />
The source code for the project is available here:<br />
*[[Media:RGB_Swarm_Puck_Code_working_version.zip|'''RGB Swarm Puck Code working version.zip''']]<br />
**Open swarm_epucks.mcw and you should be good to go.<br />
<br />
==Description of the files and functions==<br />
<br />
===global_vars(.c/.h)===<br />
* .c/.h: declare and define global variables and macros<br />
====Packet Length Constants====<br />
These variables determine the length of the XBee packets. See [[Swarm_Robot_Project_Documentation#Data_Frame|Data Frame]] and the section on XBee API packets in the XBee manual for further clarification.<br />
<br />
Much of this still sends integral consensus estimator data. This can be removed or replaced with the data needed for decentralized color sensing. <br />
<br />
=====#define NUM_DATA_SETS 5 ===== <br />
Number of statistics on which you are running the consensus estimator. In this particular case, 5 (Ix, Iy, Ixx, Ixy, Iyy).<br />
<br />
=====#define NUMBERS_PER_SET 2=====<br />
Number of variables in each data set (see above) that the consensus estimator needs to transmit to other agents. In this case, 2 because there is <tt>x_i</tt> and <tt>w_i</tt> for each statistic.<br />
<br />
=====#define DATATYPE_BYTELENGTH 4 =====<br />
Number of bytes in the data type used by the consensus estimator (a float is 4 bytes long). This is important because we need to split the numbers into individual bytes to send them out the serial port.<br />
<br />
=====#define DATA_ARRAY_LENGTH (NUM_DATA_SETS*NUMBERS_PER_SET)=====<br />
Total number of data variables needed for the consensus estimator. In this case, it is 5*2=10.<br />
<br />
=====#define ADDITIONAL_NUMS 8=====<br />
Additional numbers appended to the data array. It is 8 in this case, so that we can append:<br />
# Robot X coordinate<br />
# Robot Y coordinate<br />
# Robot Theta orientation<br />
# Robot left wheel speed<br />
# Robot right wheel speed<br />
# ''Sensor Red Value''<br />
# ''Sensor Green Value''<br />
# ''Sensor Blue Value''<br />
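Because DATATYPE_BYTELENGTH is 4, the packet-assembly code must split each float into individual bytes before they can go out the UART. A minimal sketch of that byte-splitting, assuming a union-based helper (the names here are illustrative, not the project's actual code):<br />

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper (not the project's actual code): split a 4-byte
   float into individual bytes for the UART, as DATATYPE_BYTELENGTH implies. */
typedef union {
    float   value;
    uint8_t bytes[4];        /* DATATYPE_BYTELENGTH bytes */
} float_bytes;

/* Copy the 4 bytes of f into out; returns the number of bytes written. */
int float_to_bytes(float f, uint8_t *out) {
    float_bytes fb;
    fb.value = f;
    memcpy(out, fb.bytes, sizeof fb.bytes);
    return (int)sizeof fb.bytes;
}

/* Reassemble the float on the receiving side. */
float bytes_to_float(const uint8_t *in) {
    float_bytes fb;
    memcpy(fb.bytes, in, sizeof fb.bytes);
    return fb.value;
}
```

The receiver must use the same byte order and float representation as the dsPIC for the round trip to work.<br />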
<br />
===main(.c/.h)===<br />
* .c: This contains the entry point of the code and contains the initialization routines, main loop, and interrupt service routines. <br />
* .h: Contains variables, function prototypes, and delay function needed for main.<br />
<br />
=====void __attribute__((__interrupt__,auto_psv)) _T1Interrupt(void)=====<br />
Timer1 ISR. Sets T1_INT_FLAG which provides timing for the main loop.<br />
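The flag-and-poll pattern this ISR implements keeps the periodic work out of interrupt context. A minimal sketch (the flag name comes from global_vars.h; the surrounding functions are illustrative):<br />

```c
/* Sketch of the ISR-flag timing pattern: the ISR only sets a flag, and
   the main loop polls it, so periodic work runs outside interrupt context. */
volatile int T1_INT_FLAG = 0;

/* Stands in for the real _T1Interrupt ISR. */
void t1_isr(void) { T1_INT_FLAG = 1; }

/* One pass of the main loop; returns 1 when the periodic work ran. */
int main_loop_tick(void) {
    if (!T1_INT_FLAG)
        return 0;        /* timer has not fired yet */
    T1_INT_FLAG = 0;     /* consume the flag */
    /* ... periodic work: estimator update, motion control, send_packet ... */
    return 1;
}
```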
<br />
=====void __attribute__((__interrupt__,auto_psv)) _U2RXInterrupt(void)=====<br />
UART2 receive ISR. Runs when the XBee receives a packet. The switch/case structure handles the packet types.<br />
<br />
The current handling of coordinate data from the vision system: <br />
<br />
<code><pre><br />
case 0: //coordinate data<br />
e_set_configuration(<br />
InPacket.data[1].dataFloat,<br />
InPacket.data[2].dataFloat, <br />
InPacket.data[3].dataFloat);<br />
break;<br />
</pre></code><br />
<br />
=====TO DO=====<br />
Needs to be improved. Suggested new function:<br />
* Get vision system data<br />
* Check if it's wildly off from current puck estimate<br />
** If not, update, clear log of rejected packets<br />
** If so, reject and log<br />
*** If we have rejected enough packets (some threshold) and they've all been in similar place (within tolerance), we can assume the puck is wrong and vision system is right. In this case, update with vision system data and clear rejected data log.<br />
<br />
=====int main(void)=====<br />
Setup functions and main loop.<br />
<br />
===color_cal(.c/.h)===<br />
* .c: Contains void calibrate_color(void) function to run the calibration routine.<br />
* .h: Contains function prototype and constant definitions for calibrate_color.<br />
<br />
=====void calibrate_color(void)=====<br />
This function runs the color calibration routine. Eventually this should become a separate e-puck command issued from the vision system. Currently it just runs the puck through a zig-zag pattern on the floor and sends packets. You can collect these with the data logger and process them in MATLAB. From this, you should be able to derive a calibration function.<br />
<br />
=====TO DO=====<br />
* Add to the pattern. Need to collect more data.<br />
* Possibly have the puck store data, find a best fit, and create the calibration function on board.<br />
* Store the calibration function in the EEPROM (flash memory) so it is non-volatile. The coefficients of the function will be specific to each puck, so it would be nice not to have to reprogram each puck every time you change the battery.<br />
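Whatever form the fit takes, applying it on the puck is cheap. A sketch assuming a per-channel linear fit, where the struct and the coefficient values are hypothetical and would come from the MATLAB fit (and eventually from EEPROM):<br />

```c
/* Hypothetical per-puck calibration: map a raw ADC reading to a
   calibrated intensity. gain/offset come from the best-fit line. */
typedef struct {
    float gain;      /* slope of the best-fit line */
    float offset;    /* intercept of the best-fit line */
} color_cal_t;

float apply_cal(color_cal_t c, float raw) {
    return c.gain * raw + c.offset;
}
```

One color_cal_t per channel (red, green, blue) would be stored per puck.<br />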
<br />
===dsPIC_XBeePackets(.c/.h)===<br />
* .c/.h: Contains functions and data structures for assembling and receiving XBee packets.<br />
<br />
=====int readPacket( void)=====<br />
Parses XBee data from UART2. Makes it accessible in the ''InPacket'' struct.<br />
<br />
The UART2 receive ISR takes error codes from this function and flashes the corresponding puck LED (absolute value of the error code).<br />
<br />
Error codes are:<br />
<br />
<code><pre><br />
//////////////ERROR CODES/////////////////<br />
// <br />
// -1: Timeout waiting for UART2<br />
// -2: Start delimiter wrong<br />
// -3: Checksum Error<br />
// -4: UART2 not ready at beginning<br />
// -5: API_ID unidentified<br />
// -6: Send Packet CCA failure<br />
// -7: Modem Status packet failure<br />
//////////////////////////////////////////<br />
</pre></code><br />
<br />
===send_packet(.c/.h)===<br />
* .c: Contains the void send_packet(void) function which fills an array with data and calls the needed XBee functions to send a packet.<br />
* .h: Contains function prototype.<br />
<br />
=====void send_packet(void)=====<br />
* Creates ''packet'' array.<br />
* Adds consensus estimator data to the array.<br />
* Adds robot statistics to the array.<br />
* Adds color sensor values to the array.<br />
* Asserts flow control line to stop XBee from sending<br />
* Calls assemblePacket to send the packet.<br />
* Deasserts flow control; XBee can send again.<br />
<br />
<br />
The function e_get_acc_filtered returns a running average of the specified accelerometer (color sensor) channel. Syntax is:<br />
<br />
return value = e_get_acc_filtered(acc_channel, number of samples to average)<br />
<br />
The number of samples averaged must be less than ACC_SAMP_NB as defined in e_ad_conv.h.<br />
<br />
<code><pre><br />
packet[i]=e_get_acc_filtered(2, 136); // red<br />
i++;<br />
packet[i]=e_get_acc_filtered(1, 136); // green<br />
i++;<br />
packet[i]=e_get_acc_filtered(0, 136); // blue<br />
</pre></code><br />
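e_get_acc_filtered itself is library code, but its effect is a boxcar average over the newest samples in the channel's ring buffer. A sketch of that behavior (the names here are illustrative, not the library's internals):<br />

```c
#define BUF_LEN 140   /* ACC_SAMP_NB */

/* Average the n newest entries of a ring buffer, walking backwards from
   index `newest` and wrapping around. Requires n <= BUF_LEN. */
int ring_average(const int *buf, int newest, int n) {
    long sum = 0;
    int i;
    for (i = 0; i < n; i++)
        sum += buf[(newest - i + BUF_LEN) % BUF_LEN];
    return (int)(sum / n);
}
```

Averaging 136 samples makes the window an integer number of projector periods, which is what suppresses the projector PWM flicker.<br />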
<br />
===PI_consensus_estimator(.h)===<br />
* .h: Contains functions and data structures for the PI consensus estimator. <br />
* This will probably be replaced by the algorithm for sensor consensus. <br />
<br />
===wheel_speed_coordinator(.c/.h)===<br />
* .c: Contains functions for robot motion control<br />
* .h: Function prototypes and variables.<br />
<br />
=====void wheelSpeed(int *vL, int *vR)=====<br />
Returns the wheel speeds needed for the integral consensus estimator based on the group goal.<br />
<br />
=====void wheelSpeedSingleBot(float gotox, float gotoy, int *vL, int *vR)=====<br />
Returns the wheel speeds needed to drive this individual bot to (gotox, gotoy). It is a quick fix and should be replaced with the three-step motion controller from NUtest.c.<br />
<br />
===e_acc(.c/.h)===<br />
* .c: Functions for reading the accelerometer channels (now used by the color sensor).<br />
* .h: Function prototypes.<br />
<br />
This is original e-puck library code with the following modifications:<br />
<br />
<code><pre><br />
//changed by Sam, July 10, default offset is 2000. we want 0 for RGB sensor.<br />
static int centre_z = 0; //zero value for z axis<br />
</pre></code><br />
<br />
=====int e_get_acc_filtered(unsigned int captor, unsigned int filter_size)=====<br />
<br />
===e_ad_conv(.c/.h)===<br />
Set up the ADCs on the puck. Original e-puck library code, with the following modifications<br />
* .h: Define constants and functional prototypes<br />
<br />
MIC_SAMP_FREQ sets the baseline sampling frequency for the ADC; every other rate must be a fraction of it. 16384 Hz is the highest possible.<br />
<code><pre><br />
#define MIC_SAMP_FREQ 16384.0 <br />
</pre></code><br />
<br />
ACC_PROX_SAMP_FREQ sets the sampling frequency for the accelerometers (color sensor). We found in testing that the puck becomes non-responsive with this set to 8192 Hz or 16384 Hz.<br />
<code><pre><br />
// sampling frequency for the accelerometres and proximetres<br />
//#define ACC_PROX_SAMP_FREQ 256.0 // WARNING: should be a fraction of MIC_SAMP_FREQ<br />
#define ACC_PROX_SAMP_FREQ 4096 // to ensure a good timing precision<br />
// So your options are: 1 2 4 8 16 32 64 128 <br />
// 256 512 1024 2048 4096 8192 16384<br />
</pre></code><br />
<br />
ACC_SAMP_NB is the number of samples to store. We can do an average of ''up to'' this many samples. This is set to 140 so we can average 136 samples, which is 4 projector periods.<br />
<code><pre><br />
#define ACC_SAMP_NB 140 // number of accelerometer samples to store<br />
</pre></code><br />
<br />
* .c: Functions and interrupt service routines for ADCs. Original e-puck library, no modifications.<br />
<br />
=====e_init_ad_scan(ALL_ADC)=====<br />
Call this to set up the ADC and have it run in the background. Use the e_acc functions to access the data.<br />
<br />
===e_init_port(.c/.h)===<br />
* .c/.h: Initializes the ports on the e-puck. File is from the standard e-puck library. <br />
<br />
=====e_init_ports(void)=====<br />
This function sets up ports on the e-puck. Call before using any ports.<br />
<br />
===e_led(.c/.h)===<br />
* .c/.h: This is a standard e-puck library file that contains functions for manipulating LEDs.<br />
<br />
=====void e_set_led(unsigned int led_number, unsigned int value)=====<br />
Set led_number (0-7) to value (0=off 1=on higher=inverse).<br />
<br />
[[Image:e-puck_LED_numbering.png|thumb|left]]<br />
<br />
<br clear='all'><br />
<br />
===e_motors_swarm(.c/.h)===<br />
* .c/.h: This file is a modified version of the e_motors.h e-puck library file. This version keeps track of the robot's position and orientation, and the motor stepping function contains code to update the robot's position when the wheels turn. <br />
* The functions like e_rotate and e_translate have been removed and this version is not dependent on e_agenda. <br />
<br />
=====#define POINT_OFFSET -31.75 =====<br />
In addition to constants like the wheel radius and wheel base, e_motors_swarm.h sets the offset between the center point of the bot and the reference point being driven. -31.75 mm is the distance between the center and the color sensor, so technically the robot is driving backwards.<br />
<br />
=====void __attribute__((interrupt, auto_psv, shadow)) _T5Interrupt(void)=====<br />
Timer5 ISR, interrupt for left motor. Controls the stepper motors.<br />
<br />
Updates the puck's x, y, theta estimate here: <br />
<br />
<code><pre><br />
// update robot's configuration estimate<br />
thetapos = thetapos - ANGSTEP;<br />
if (thetapos<-PI) thetapos += 2*PI;<br />
xpos = xpos + cval*LINSTEP;<br />
ypos = ypos + sval*LINSTEP;<br />
}<br />
</pre></code><br />
<br />
=====void __attribute__((interrupt, auto_psv, shadow)) _T4Interrupt(void)=====<br />
Timer4 ISR, interrupt for right motor. Controls the stepper motors.<br />
<br />
Updates the puck's x, y, theta estimate here: <br />
<br />
<code><pre><br />
// update robot's configuration estimate<br />
thetapos = thetapos - ANGSTEP;<br />
if (thetapos<-PI) thetapos += 2*PI;<br />
xpos = xpos - cval*LINSTEP;<br />
ypos = ypos - sval*LINSTEP;<br />
}<br />
</pre></code><br />
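The LINSTEP and ANGSTEP constants used in these ISRs follow from differential-drive geometry: one step of a single wheel advances that wheel by (pi x wheel diameter)/steps-per-revolution, which moves the midpoint by half that distance and rotates the body by that distance divided by the wheelbase. A sketch with nominal e-puck dimensions (illustrative values; the constants the firmware actually uses live in e_motors_swarm.h):<br />

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Nominal e-puck geometry -- illustrative values only; check
   e_motors_swarm.h for the firmware's actual constants. */
#define WHEEL_DIAMETER_MM 41.0
#define WHEELBASE_MM      53.0
#define STEPS_PER_REV     1000.0

/* Distance one wheel travels per motor step. */
double step_dist(void) { return M_PI * WHEEL_DIAMETER_MM / STEPS_PER_REV; }

/* One wheel stepping alone moves the midpoint half a step distance... */
double linstep(void) { return step_dist() / 2.0; }

/* ...and rotates the body by one step distance over the wheelbase. */
double angstep(void) { return step_dist() / WHEELBASE_MM; }
```

With these numbers a step is about 0.129 mm of wheel travel and about 0.0024 rad of body rotation.<br />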
<br />
=====void e_init_motors(void)=====<br />
Call this function before other motor functions to initialize the motors.<br />
<br />
=====void e_set_speed_left(int motor_speed)/void e_set_speed_right(int motor_speed)=====<br />
Set the motor speed in steps/second.<br />
<br />
=====void e_get_configuration(float *xptr, float *yptr, float *thetaptr)=====<br />
Updates the variables with the current x, y, and theta (position and orientation) of the center reference point.<br />
<br />
=====void e_set_configuration(float x, float y, float theta)=====<br />
Sets x, y, theta to values. This is used to overwrite the puck position/orientation estimates with data from the vision system.<br />
<br />
=====void e_get_configuration_front(float *xptr, float *yptr, float *thetaptr)=====<br />
Updates the variables with the current x, y, and theta (position and orientation) of the front reference point (used for motor control).<br />
<br />
<br />
[[Category:SwarmRobotProject]]<br />
[[Category:e-puck]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project&diff=18890RGB Swarm Robot Project2010-07-13T15:47:27Z<p>Ryan Cook: RGB Swarm Robot Project moved to RGB Swarm Robot Project Documentation: To keep with the consistency of names in the Swarm Category</p>
<hr />
<div>#REDIRECT [[RGB Swarm Robot Project Documentation]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18889RGB Swarm Robot Project Documentation2010-07-13T15:47:27Z<p>Ryan Cook: RGB Swarm Robot Project moved to RGB Swarm Robot Project Documentation: To keep with the consistency of names in the Swarm Category</p>
<hr />
<div>== Overview ==<br />
The swarm robot project has gone through several phases, with each phase focusing on different aspects of swarm robotics and of the project's implementation. This entry focuses on the most recent phase of the project, covering topics including '''Xbee Interface Extension Boards''', '''LED light boards''', '''changes made to the Machine Vision Localization System''', and the overall conversion to LED boards and a controlled light environment. These entries provide the setup details needed for others to reproduce our results, and additional information for those working on similar projects or on this project at a later time. Other articles in the '''Swarm Robot Project''' category cover the swarm theory and algorithms implemented, as well as previous phases of the project such as motion control and consensus estimation. You may reach these articles and others by following the category link at the bottom of every page, or through this link - [[:Category:SwarmRobotProject|'''Swarm Robot Project''']].<br />
<br />
==RGB Swarm Quickstart Guide==<br />
<br />
Refer to the [[RGB Swarm Robot Quickstart Guide|'''RGB Swarm Robot Quickstart Guide''']] for information on how to start and use the RGB Swarm system, and on its setup.<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an image of the board mounted on an e-Puck in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version, such as the parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and installed on e-puck #3. To check the design for problems, it was first tested alongside the other e-pucks, which use the previous XBee board. <br />
<br />
e-puck #3, upgraded with the new XBee board, showed no problems communicating with the other e-pucks: all e-pucks, including e-puck #3, drove themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current starts to flow through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode detects a certain range of wavelengths, and the current through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter that on the e-Puck had served the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
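As a numeric check of the amplifier relation above: with R<sub>2</sub> = 100K, a gain of 20 corresponds to setting the 10K potentiometer to about 5K. A one-line sketch of the relation:<br />

```c
/* |Vo| = |Vi * R2 / Rpot| -- the color-sensor amplifier relation above. */
double amp_out(double vi, double r2, double rpot) {
    return vi * r2 / rpot;
}
```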
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. For the color sensor to collect data properly, it must therefore move from the XBee Interface Extension Board to the LED pattern board, so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the sensor itself moves to the LED pattern board, connected back to the circuit with a jumper. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, introduced for the RGB Swarm Robot Project. Previously, the machine vision system recognized each e-puck by its unique black dot pattern, but the black dot patterns require a white background. The new LED pattern board replaces the black dots with LEDs of an appropriate brightness, so the machine vision system can now recognize the e-pucks on any background. The reason the LED patterns are recognizable on any background, along with the code modification required to use them with the machine vision system, is presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED pattern board was created in Traxmaker; the design files are linked above.<br />
Although we replaced the black dots with LEDs, each dot pattern is preserved. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce the drain on the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be switched on and off.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens. This LED was chosen because it offers suitable brightness and power consumption, and the diffusion lets the machine vision system see it from any position. The resistors used are 68.7 ohm. <br />
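The 68.7 ohm resistor value is consistent with the 3 V (2x AAA) supply. Using the series-resistor relation I = (V<sub>supply</sub> - V<sub>f</sub>)/R with an ''assumed'' forward voltage of about 2 V (check the LED datasheet), roughly 15 mA flows per LED:<br />

```c
/* LED current through a series resistor: I = (Vsupply - Vf) / R.
   The 2.0 V forward voltage used in the test is an assumption, not a
   datasheet value for this particular LED. */
double led_current_mA(double vsupply, double vf, double r_ohm) {
    return (vsupply - vf) / r_ohm * 1000.0;
}
```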
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done to characterize the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to measure how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire visible wavelength range. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under the two conditions; with the ambient light and without the ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. Distance between the color sensor and the LED is increased by 0.25 inch each time from 1 inch to 2.5 inch.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, is measured for each photodiode.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED, most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the outputs rise by 25.9%, 22.6%, and 77.4% over their no-LED baselines for the red, green, and blue channels respectively. With the LED 2.5 inches away from the color sensor, the outputs return very close to their original values. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used here is much brighter than the ones on the LED pattern board: its millicandela rating is about four times larger. More experiments with the LEDs actually used on the LED pattern board are therefore required.<br />
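The percentage figures can be recomputed directly from the first table as the rise of each reading over its no-LED baseline (for the blue channel, 0.469 V to 0.832 V works out to a 77.4% rise). A small helper makes the arithmetic explicit:<br />

```c
/* Percent increase of a reading over its baseline value. */
double pct_increase(double baseline, double reading) {
    return (reading - baseline) / baseline * 100.0;
}
```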
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to measure how much interference is caused as the angle between the LED and the color sensor changes. Unlike in the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily saturates.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under the two conditions; with the ambient light and without the ambient light. <br />
<br />
3. In this experiment, the distance between LED and color sensor is kept constant, 1 inch. <br />
<br />
4. Angle between LED and color sensor is increased by 15º each time from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor lie in the same horizontal plane: the color sensor faces upward, while the LED lies parallel to the plane with its head pointed at the sensor. The angle is then increased by 15º at a time, so that progressively more of the LED's light shines onto the color sensor. At 90º the LED is directly above the color sensor, on the same vertical line, facing straight down at it.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, is measured for each photodiode.<br />
* The reason for measuring the voltage before amplification is that the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle continues toward 90º. At the peak, with the room light present, the outputs increase by up to 15.6%, 19.1%, and 28.7% of V<sub>i</sub>. As the angle reaches 90º, the output returns very close to its value at 0º. The interference drops near 90º because the LED board blocks the ambient light: in this experiment the LEDs are mounted on the LED plane, which casts a shadow on the color sensor and reduces the amount of light the sensor receives. That is why the output returns close to its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10-position, 2 mm pitch socket connects the color sensor to the circuit using wires. Instead, the proper header for the color sensor should be found so that the sensor can be connected to the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MATLAB 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control for convenience. The version control system used is Git, http://git-scm.com/. <br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now open Git (Git Bash shell) and type the following to clone a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MATLAB files for receiving data from the epucks. These files use the SerialCommands DLL to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by Visual Studio and is used for its code-completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded on to the epuck by following the instructions in [[RGB_Swarm_Robot_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Robot Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. This projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but it holds up as long as no one steps on it with shoes. In the future, a more durable, light-colored floor should be considered. The floor is taped down with packing tape; the tape should not be removed, as the floor has markings showing where the camera calibration dots go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. It has a digital user manual on a CD in the projection computer. <br />
<br />
Since it is DLP, the projector has an array of tiny mirrors that reflect the light from the bulb. The bulb's light shines through a color wheel, which casts red, green, and blue onto the mirror array. The fraction of time within a fixed interval that the mirrors are on (reflecting light) versus off determines the intensity of the light. For example, if a dark red were being projected, the mirrors would be on more than off during that interval. For our projector that interval is 8.2 milliseconds. See the pulse-width modulation waveforms below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystone-shaped projection on the floor could not be avoided. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so the measured intensity must be averaged over the projector's period to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse-width modulation can cause problems when recording data. For instance, when data recording over the XBee radios was first set up, the RGB values fluctuated over a period of several minutes, skewing the data. After further investigation into the projector, using the digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, so its period is slightly off from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average the samples, and use the average as the corrected value. There is time to record 4 samples, or 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. This averaging phases out the irregularities due to PWM, resulting in a clean, stable trace without low-frequency modulations.<br />
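The averaging step described above can be sketched as a small pure function (a minimal sketch; the function name and fixed-point types are illustrative, not the actual e-puck source):<br />

```c
#include <stdint.h>

#define NUM_SAMPLES 4  /* samples averaged per channel, as in the text */

/* Average NUM_SAMPLES raw ADC readings so that the projector's ~8.2 ms
   PWM period, which drifts relative to an 8 ms sampling window, is
   smoothed out instead of aliasing into the recorded color data. */
uint16_t average_samples(const uint16_t samples[NUM_SAMPLES])
{
    uint32_t sum = 0;  /* 32-bit accumulator avoids overflow */
    int i;
    for (i = 0; i < NUM_SAMPLES; i++)
        sum += samples[i];
    return (uint16_t)(sum / NUM_SAMPLES);
}
```

Averaging 4 samples covers 32 ms, nearly four full projector periods, so the residual error from the fractional period is small.<br />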
<br />
===Projector Mount===<br />
The projector mount was ordered online using 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their fields of view overlap by one object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black-dot patterns on the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the color of the background no longer affects the vision system. However, the color sensor had to be moved to the LED pattern board, since the pattern board would block the sensor if it remained on the XBee Interface Extension Board. We therefore now have to consider light interference between the LEDs and the color sensor. In the light interference test, we found that the color sensor is affected by light from the LED. However, since the LEDs used in that test were much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possible for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, fireware card<br />
**Mini DV cameras seem to be the best bet as they are designed to accommodate for control via firewire cable due to necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates 9' x 6' floor, allowing for the camera to be only 5.4' off the ground<br />
***The magnification factor scales the field of view inversely: a magnification of x gives 1/x the focal length, so a 0.5X magnification lens gives 2X the FOV<br />
***Find the camera's focal length at its widest view (the smallest number, in mm), then apply the magnification to it: a 0.5X magnification lens halves the effective focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm converted to 21.6mm by 0.6X magnification), then the math to get the height is: 9'/2 = 4.5'; 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the 'lawnmower' program for the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement the three-part turn from the NUtest.c code so that the e-puck's position and orientation can both be controlled<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate the discrete number of possible angles per wheel (using ticks/revolution), create a look-up table, check the angle from the vision-system packets, and use the table to calculate the appropriate vector (reducing the total number of calculations and eliminating the use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=Swarm_E-puck_Quickstart_Guide&diff=18888Swarm E-puck Quickstart Guide2010-07-13T15:46:45Z<p>Ryan Cook: Swarm E-puck Quickstart Guide moved to Swarm Robot Quickstart Guide: To keep with the constancy of names in the Swarm Category</p>
<hr />
<div>#REDIRECT [[Swarm Robot Quickstart Guide]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=Swarm_Robot_Quickstart_Guide&diff=18887Swarm Robot Quickstart Guide2010-07-13T15:46:45Z<p>Ryan Cook: Swarm E-puck Quickstart Guide moved to Swarm Robot Quickstart Guide: To keep with the constancy of names in the Swarm Category</p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the Swarm Robots Project, but contains general information about programming e-pucks.<br />
=Checklist=<br />
In order to run the system, you will need:<br />
# e-puck robots, with XBee extension boards and [[Swarm_Robot_Project_Documentation#XBee_Radios|configured radios]].<br />
# Bluetooth adapter on your computer, either internal or an external one such as the D-link DBT-120.<br />
# [http://hades.mech.northwestern.edu/wiki/index.php/Swarm_Project_E-puck_Code Compiled .hex file found in e-puck code project]<br />
# [http://www.etc.ugal.ro/cchiculita/software/picbootloader.htm Tiny Bootloader]<br />
# [[Machine_Vision_Localization_System|Compiled program for Machine Vision Localization System]].<br />
# You may also want to use one or more of the [[#Analysis_Tools|Analysis Tools]] such as the real-time display.<br />
<br />
==Items needed to edit the e-puck code==<br />
If you want to edit the e-puck code, you will need the following:<br />
# Microchip [http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en019469&part=SW007002 MPLAB] (for editing code, not needed to run)<br />
# Microchip [http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en535363 C compiler for dsPICs] (for editing code, not needed to run)<br />
# [http://hades.mech.northwestern.edu/wiki/index.php/Swarm_Project_E-puck_Code Code for e-pucks]<br />
<br />
===Compiling the project===<br />
To open the project, extract the directory and double-click on the '''<tt>.mcp</tt>''' file; this is the project file. Because the directory it is in is probably different from the original directory on the computer where the code was written, you may need to remove and re-add the source, library, and/or linker files to the project. The source files are in the '''source''' folder. The library and linker files are in the Microchip compiler's install directory (try searching for <tt>libp30F6014A-coff.a</tt> and <tt>p30f6014A.gld</tt>). Press F10 to compile the program. This should generate a .hex file, which is the binary file that you will load onto the e-puck.<br />
<br />
=Programming the e-pucks=<br />
You will need to get the compiled .hex file for the e-puck, and then load it onto the PIC with Tiny Bootloader over the bluetooth radio. The compiled .hex file can be found here: [[Image:Swarm_epucks_code.zip]].<br />
==Connecting to the e-puck==<br />
To program the e-puck, you first need to connect to it. Go to the Control Panel, open Bluetooth Devices, and go to '''Devices>Add...'''. Turn on the e-puck, check '''My device is set up and ready to be found''', then click Next. When the computer has found the e-puck, select it (the ID of the e-puck is a four-digit number found on a sticker on top of the bluetooth chip on the e-puck). When the prompt asks you for a passkey, enter the 4-digit ID number. The computer should then reserve a COM (serial) port to communicate with this particular e-puck. The computer will set up different COM ports for different e-pucks; this is normal. Go back to Bluetooth Devices in the Control Panel, select the COM Ports tab, and see which port is assigned as '''Outgoing''' to the e-puck. This is the COM port you will use with Tiny Bootloader to program this particular e-puck. '''If you're using a USB bluetooth dongle and you unplug it and replug it into a different USB port, or use a different dongle, you may need to reconnect to the e-pucks.'''<br />
<br />
[[Image:BT_COM_ports.png]]<br />
<br />
==Programming the e-puck==<br />
A bootloader is a type of program loaded onto a microcontroller that allows the microcontroller to be re-programmed in the field without a special flash programmer (the programmer is still needed to load the bootloader onto the microcontroller in the first place). The bootloader has two components: a programmer that runs on your computer, and a program that runs on the e-puck's PIC. When the PIC is turned on, the bootloader checks whether someone is attempting to re-program the PIC; if so, the bootloader overwrites its old program with the new program; if not, or if it times out while waiting for the new data stream, it simply runs the current program.<br />
<br />
To program the e-puck, start the Tiny Bootloader program on your computer. (Although you can also use a flash programmer such as the ICD2 to program the e-puck, it will take much longer and can't be done wirelessly.) Click on '''Browse''' and select the .hex file that you want to load. The .hex file is the compiled code from Microchip's C compiler. Under '''Comm''', use 115200 for the baud rate, and type in the outgoing COM port assigned to this e-puck (e.g. COM10, COM11, etc.). Turn on the e-puck, and click on the Write Flash button. The blue bar underneath the button should start counting down. Now, hit the blue reset button on the e-puck, before the blue bar reaches zero and times out. If the e-puck is successfully connected, an orange LED on the e-puck will turn on, and the bootloader will start to program the PIC on the e-puck. <br />
<br />
===Troubleshooting===<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. See the section below [[#Re-loading the Bootloader|Re-loading the Bootloader]] to see how to restore the bootloader with the ICD2 programmer and MPLAB.<br />
<br />
====Re-loading the Bootloader====<br />
The e-pucks require that a special bootloader program be loaded on it if you want to program the e-puck over a bluetooth radio. <br />
<br />
<br />
You can download the .hex file you need at [http://www.e-puck.org http://www.e-puck.org]. On the website, go to Download>Software>Library and download the zip file. Extract the archive, and navigate to '''e-puck-lib\tool\bootloader\epuck_side'''. The file <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> is the one we will use.<br />
<br />
<br />
You will need Microchip's [http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en019469&part=SW007002 MPLAB] and ICD2 to program in the bootloader on the PIC. After opening MPLAB, select to '''Programmer>Select Programmer>ICD2'''. Then, go to '''File>Import>''' and navigate to <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt>. Then, connect to the ICD and program the PIC.<br />
<br />
=Starting the Vision System=<br />
The vision system is needed to track the position of the robots and send out commands. You can get the compiled program here:[[Image:vision_system_localization_project.zip]]<br />
# Connect an XBee radio with ID 0 to the computer. The ID must be 0 so that the robots can tell that the packet is from the computer and not another robot.<br />
# Follow the directions on the [[Machine_Vision_Localization_System#Operation | Machine Vision Localization]] page to set up and calibrate the system.<br />
# Make sure the robots are in the field of view of the vision system.<br />
# Let the system run for a few seconds so that the robot's positions will be updated.<br />
# Select the GUI window and hit 'c' to enter the command mode.<br />
# Select the console window and enter the goal statistics that you want using the '''goal''' command. You can skip this step if you want to use the default [100 300 160000 40000 40000] goal statistics. Send the command two or three times in case one or more of the robots doesn't get the message.<br />
# Use the '''wake''' command (<tt>wake 0</tt>) to start the robots.<br />
# Exit the console by typing <tt>exit</tt>. The system should now be running.<br />
<br />
Try moving the swarm to another configuration:<br />
#press 'c' at the GUI window.<br />
#enter the command <tt>goal 0 -100 -200 120000 0 120000 </tt><br />
#type <tt>exit</tt> and hit enter.<br />
<br />
=Using the Real-Time Display=<br />
See [[Swarm_Robot_Project_Documentation#Real-time_Display]]<br />
<br />
=Using the Packet Parser with Timestamp=<br />
See [[Swarm_Robot_Project_Documentation#Using_the_Data_Logger_with_Timestamp]]<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Project_E-puck_Code&diff=18886RGB Swarm Project E-puck Code2010-07-13T15:45:41Z<p>Ryan Cook: RGB Swarm Project E-puck Code moved to RGB Swarm Robot Project E-puck Code: To keep with the constancy of names in the Swarm Category</p>
<hr />
<div>#REDIRECT [[RGB Swarm Robot Project E-puck Code]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_E-puck_Code_(outdated)&diff=18885RGB Swarm Robot Project E-puck Code (outdated)2010-07-13T15:45:41Z<p>Ryan Cook: RGB Swarm Project E-puck Code moved to RGB Swarm Robot Project E-puck Code: To keep with the constancy of names in the Swarm Category</p>
<hr />
<div>This page documents the e-puck code for the RGB Sensing Swarm Robotics project. The code on the e-puck was written in C and compiled using Microchip's MPLAB C Compiler for dsPIC DSCs (student version). <br />
<br />
This code is a branch of the [[Swarm Project E-puck Code]].<br />
<br />
For a high level overview, review the [[Media:Swarm-robotics-flow-chart-epuck-all.pdf|flow chart]].<br />
<br />
==Tasks==<br />
<br />
===Complete===<br />
* Got ADC working for color sensor<br />
** Set up filtering to address projector PWM<br />
* Restructured code to make more modular.<br />
** Split dsPIC_XBeePackets and wheel_speed_coordinator into h and c files<br />
** Pulled packet assembling code out of main and created send_packet() function in send_packet.h/c.<br />
** Pulled a bunch of variables and defines (''NUM_DATA_SETS, NUMBERS_PER_SET, DATATYPE_BYTELENGTH , DATA_ARRAY_LENGTH , ADDITIONAL_NUMS, notRTS, T1_INT_FLAG, x_i, u_i, w_i, x_sum, w_sum, MAX_WHEEL_V_TICKS, deadband, COMMR, SAFEDIST, MINDIST, u_x_ideal, u_y_ideal, x_motion_integral, y_motion_integral, SQUARE'') that were scattered across h files into global_vars.h/c. Makes it easy to include them in a particular file with the ''extern'' keyword.<br />
* Added color_cal() function in color_cal.h/c<br />
* Added wheelSpeedSingleBot to wheel_speed_coordinator<br />
<br />
===To Do===<br />
* Finish the color calibration (color_cal) program, to facilitate simple calibration for each e-puck that does not require a large amount of data collected per run, and with minimal user input/control (automation and simplification are desired)<br />
* Improve the vision system position information updater<br />
** So that the e-puck does not automatically assume every packet from the Vision System is true, but instead checks it for accuracy<br />
* Replace wheelSpeedSingleBot with the three step move controller from NUtest.c<br />
**Implement this so that the e-puck can be moved with position and orientation (this is ideal so that data from every orientation of the e-puck can be recorded while calibrating or collecting data)<br />
* Implement new algorithm from paper<br />
<br />
==Project Package==<br />
The source code for the project is available here:<br />
*[[Media:RGB_Swarm_Puck_Code_working_version.zip|'''RGB Swarm Puck Code working version.zip''']]<br />
**Open swarm_epucks.mcw and you should be good to go.<br />
<br />
==Description of the files and functions==<br />
<br />
===global_vars(.c/.h)===<br />
* .c/.h: declare and define global variables and macros<br />
====Packet Length Constants====<br />
These variables determine the length of the XBee packets. See [[Swarm_Robot_Project_Documentation#Data_Frame|Data Frame]] and the section on XBee API packets in the XBee manual for further clarification.<br />
<br />
Much of this is still sending integral consensus estimator data. This can be removed or replaced with data needed for decentralized color sensing. <br />
<br />
=====#define NUM_DATA_SETS 5 ===== <br />
Number of statistics on which the consensus estimator runs. In this particular case, 5 (Ix, Iy, Ixx, Ixy, Iyy).<br />
<br />
=====#define NUMBERS_PER_SET 2=====<br />
Number of variables in each data set (see above) that the consensus estimator needs to transmit to other agents. In this case, 2 because there is <tt>x_i</tt> and <tt>w_i</tt> for each statistic.<br />
<br />
=====#define DATATYPE_BYTELENGTH 4 =====<br />
Number of bytes in the data type (used in the consensus estimator (float = 4 bytes long). This is important because we need to split the numbers into individual bytes to be able to send them out the serial port.<br />
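One common way to do the byte splitting in C is a union that overlays a float with its raw bytes (a sketch only; the helper names are illustrative, and `dataFloat` merely echoes the `InPacket.data[].dataFloat` field seen later — this is not the actual project code):<br />

```c
#include <stdint.h>

/* A float and its 4 raw bytes share the same storage, so writing the
   float and reading the bytes splits the value for serial transmission. */
typedef union {
    float   dataFloat;
    uint8_t bytes[4];   /* DATATYPE_BYTELENGTH == 4 */
} FloatBytes;

void float_to_bytes(float f, uint8_t out[4])
{
    FloatBytes fb;
    int i;
    fb.dataFloat = f;
    for (i = 0; i < 4; i++) out[i] = fb.bytes[i];
}

float bytes_to_float(const uint8_t in[4])
{
    FloatBytes fb;
    int i;
    for (i = 0; i < 4; i++) fb.bytes[i] = in[i];
    return fb.dataFloat;
}
```

Sender and receiver must of course agree on byte order; here both sides use the same union, so the round trip is consistent.<br />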
<br />
=====#define DATA_ARRAY_LENGTH (NUM_DATA_SETS*NUMBERS_PER_SET)=====<br />
Total number of data variables needed for the consensus estimator. In this case, it is 5*2=10.<br />
<br />
=====#define ADDITIONAL_NUMS 8=====<br />
Additional number of data values appended to the data array. It is 8 in this case, so that we can append:<br />
# Robot X coordinate<br />
# Robot Y coordinate<br />
# Robot Theta orientation<br />
# Robot left wheel speed<br />
# Robot right wheel speed<br />
# ''Sensor Red Value''<br />
# ''Sensor Green Value''<br />
# ''Sensor Blue Value''<br />
<br />
===main(.c/.h)===<br />
* .c: This contains the entry point of the code and contains the initialization routines, main loop, and interrupt service routines. <br />
* .h: Contains variables, function prototypes, and delay function needed for main.<br />
<br />
=====void __attribute__((__interrupt__,auto_psv)) _T1Interrupt(void)=====<br />
Timer1 ISR. Sets T1_INT_FLAG which provides timing for the main loop.<br />
<br />
=====void __attribute__((__interrupt__,auto_psv)) _U2RXInterrupt(void)=====<br />
UART2 receive ISR. Runs when the XBee receives packets. The switch/case structure handles each packet type.<br />
<br />
The current handling of coordinate data from the vision system: <br />
<br />
<code><pre><br />
case 0: //coordinate data<br />
e_set_configuration(<br />
InPacket.data[1].dataFloat,<br />
InPacket.data[2].dataFloat, <br />
InPacket.data[3].dataFloat);<br />
break;<br />
</pre></code><br />
<br />
=====TO DO=====<br />
Needs to be improved. Suggested new function:<br />
* Get vision system data<br />
* Check if it's wildly off from current puck estimate<br />
** If not, update, clear log of rejected packets<br />
** If so, reject and log<br />
*** If we have rejected enough packets (some threshold) and they've all been in similar place (within tolerance), we can assume the puck is wrong and vision system is right. In this case, update with vision system data and clear rejected data log.<br />
<br />
=====int main(void)=====<br />
Setup functions and main loop.<br />
<br />
===color_cal(.c/.h)===<br />
* .c: Contains void calibrate_color(void) function to run the calibration routine.<br />
* .h: Contains function prototype and constant definitions for calibrate_color.<br />
<br />
=====void calibrate_color(void)=====<br />
This function runs the color calibration routine. Eventually this should be turned into a separate e-puck command from the vision system. Currently it just runs the puck through a zigzag pattern on the floor and sends packets. You can collect these with the data logger and process them in MATLAB. From this, you should be able to derive a calibration function.<br />
<br />
=====TO DO=====<br />
* Add to the pattern. Need to collect more data.<br />
* Possibly have the puck store data, find a best fit, and create the calibration function on board.<br />
* Store the calibration function in the EEPROM (non-volatile memory). The coefficients of the function will be specific to each puck, so it would be nice not to have to reprogram each puck every time you change the battery.<br />
<br />
===dsPIC_XBeePackets(.c/.h)===<br />
* .c/.h: Contains functions and data structures for assembling and receiving XBee packets.<br />
<br />
=====int readPacket( void)=====<br />
Parses XBee data from UART2. Makes it accessible in the ''InPacket'' struct.<br />
<br />
The UART2 receive ISR takes error codes from this function and flashes the corresponding puck LED (absolute value of the error code).<br />
<br />
Error codes are:<br />
<br />
<code><pre><br />
//////////////ERROR CODES/////////////////<br />
// <br />
// -1: Timeout waiting for UART2<br />
// -2: Start delimiter wrong<br />
// -3: Checksum Error<br />
// -4: UART2 not ready at beginning<br />
// -5: API_ID unidentified<br />
// -6: Send Packet CCA failure<br />
// -7: Modem Status packet failure<br />
//////////////////////////////////////////<br />
</pre></code><br />
<br />
===send_packet(.c/.h)===<br />
* .c: Contains the void send_packet(void) function which fills an array with data and calls the needed XBee functions to send a packet.<br />
* .h: Contains function prototype.<br />
<br />
=====void send_packet(void)=====<br />
* Creates ''packet'' array.<br />
* Adds consensus estimator data to the array.<br />
* Adds robot statistics to the array.<br />
* Adds color sensor values to the array.<br />
* Asserts flow control line to stop XBee from sending<br />
* Calls assemblePacket to send the packet.<br />
* Deasserts flow control; Xbee can send again.<br />
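The order of those steps matters: the flow-control line is asserted before the packet is assembled and released only afterwards, so the XBee cannot push data at the puck mid-transmission. A skeletal sketch of that flow (the stub assemblePacket and the flag variable are illustrative stand-ins for the real hardware line):<br />

```c
#define PACKET_FLOATS 18   /* consensus data + robot state + RGB values */

/* Illustrative stand-ins for the flow-control line and the XBee routine */
static int rts_asserted;
static int packets_sent;
static void assemblePacket(const float *data, int n) { (void)data; (void)n; packets_sent++; }

void send_packet(void)
{
    float packet[PACKET_FLOATS];
    int i;
    for (i = 0; i < PACKET_FLOATS; i++)
        packet[i] = 0.0f;   /* consensus data, x/y/theta, wheel speeds, R/G/B */

    rts_asserted = 1;       /* hold off incoming XBee traffic during the send */
    assemblePacket(packet, PACKET_FLOATS);
    rts_asserted = 0;       /* the XBee may transmit to the puck again */
}
```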
<br />
<br />
The function e_get_acc_filtered returns a running average of the specified accelerometer (color sensor) channel. The syntax is:<br />
<br />
<code><pre><br />
value = e_get_acc_filtered(acc_channel, number_of_samples_to_average);<br />
</pre></code><br />
<br />
The number of samples averaged must be less than ACC_SAMP_NB as defined in e_ad_conv.h.<br />
<br />
<code><pre><br />
packet[i]=e_get_acc_filtered(2, 136); // red<br />
i++;<br />
packet[i]=e_get_acc_filtered(1, 136); // green<br />
i++;<br />
packet[i]=e_get_acc_filtered(0, 136); // blue<br />
</pre></code><br />
<br />
===PI_consensus_estimator(.h)===<br />
* .h: Contains functions and data structures for the PI consensus estimator. <br />
* This will probably be replaced by the algorithm for sensor consensus. <br />
<br />
===wheel_speed_coordinator(.c/.h)===<br />
* .c: Contains functions for robot motion control<br />
* .h: Function prototypes and variables.<br />
<br />
=====void wheelSpeed(int *vL, int *vR)=====<br />
Returns the wheel speeds needed for the integral consensus estimator based on the group goal.<br />
<br />
=====void wheelSpeedSingleBot(float gotox, float gotoy, int *vL, int *vR)=====<br />
Returns the wheel speeds needed to drive this individual bot to (gotox, gotoy). It is a quick hack and should be replaced with the 3-step motion controller from NUtest.c.<br />
<br />
===e_acc(.c/.h)===<br />
* .c: Functions for reading the accelerometer channels (which now carry the color sensor signals).<br />
* .h: Function prototypes.<br />
<br />
This is original e-puck library code with the following modifications:<br />
<br />
<code><pre><br />
//changed by Sam, July 10, default offset is 2000. we want 0 for RGB sensor.<br />
static int centre_z = 0; //zero value for z axis<br />
</pre></code><br />
<br />
=====int e_get_acc_filtered(unsigned int captor, unsigned int filter_size)=====<br />
<br />
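The header does not document this function, but conceptually it averages the most recent filter_size entries of the circular sample buffer that the ADC interrupt keeps for the given channel. A rough sketch of that behavior (the buffer layout here is an illustrative stand-in, not the real e_ad_conv internals):<br />

```c
#define ACC_SAMP_NB 140                 /* as defined in e_ad_conv.h */

/* Illustrative stand-ins for the circular buffer the ADC ISR fills */
static int acc_buffer[3][ACC_SAMP_NB];
static int acc_head;                    /* index of the newest sample */

int e_get_acc_filtered(unsigned int captor, unsigned int filter_size)
{
    long sum = 0;
    unsigned int i, idx = acc_head;
    if (filter_size > ACC_SAMP_NB)      /* must stay within the stored window */
        filter_size = ACC_SAMP_NB;
    for (i = 0; i < filter_size; i++) {
        sum += acc_buffer[captor][idx];
        idx = (idx == 0) ? ACC_SAMP_NB - 1 : idx - 1;  /* walk backwards */
    }
    return (int)(sum / filter_size);
}
```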
===e_ad_conv(.c/.h)===<br />
Set up the ADCs on the puck. Original e-puck library code, with the following modifications<br />
* .h: Define constants and functional prototypes<br />
<br />
MIC_SAMP_FREQ sets the baseline sampling frequency for the ADC; everything else must be a fraction of it. 16384 Hz is the highest possible value.<br />
<code><pre><br />
#define MIC_SAMP_FREQ 16384.0 <br />
</pre></code><br />
<br />
ACC_PROX_SAMP_FREQ sets the sampling frequency for the accelerometers (color sensor). We found in testing that the puck becomes unresponsive with this set to 8192 Hz or 16384 Hz.<br />
<code><pre><br />
// sampling frequency for the accelerometres and proximetres<br />
//#define ACC_PROX_SAMP_FREQ 256.0 // WARNING: should be a fraction of MIC_SAMP_FREQ<br />
#define ACC_PROX_SAMP_FREQ 4096 // to ensure a good timing precision<br />
// So your options are: 1 2 4 8 16 32 64 128 <br />
// 256 512 1024 2048 4096 8192 16384<br />
</pre></code><br />
<br />
ACC_SAMP_NB is the number of samples to store. We can do an average of ''up to'' this many samples. This is set to 140 so we can average 136 samples, which is 4 projector periods.<br />
<code><pre><br />
#define ACC_SAMP_NB 140 // number of accelerometer samples to store<br />
</pre></code><br />
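The 136-sample figure follows directly from the sampling rate. Assuming the projector refreshes at roughly 120 Hz (an inference consistent with these constants, not something stated in the code), one period at 4096 Hz is about 34 samples, so four periods is about 136 samples, which fits inside the 140-sample buffer:<br />

```c
#define ACC_PROX_SAMP_FREQ 4096
#define ACC_SAMP_NB        140

/* Assumed projector refresh rate; 4096 / 120 is roughly 34 samples per period */
#define PROJECTOR_HZ       120

enum {
    SAMPLES_PER_PERIOD = (ACC_PROX_SAMP_FREQ + PROJECTOR_HZ / 2) / PROJECTOR_HZ,
    SAMPLES_4_PERIODS  = 4 * SAMPLES_PER_PERIOD   /* the 136 used in send_packet */
};
```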
<br />
* .c: Functions and interrupt service routines for ADCs. Original e-puck library, no modifications.<br />
<br />
=====e_init_ad_scan(ALL_ADC)=====<br />
Call to setup ADC and have it work in the background. Use e_acc functions to access data.<br />
<br />
===e_init_port(.c/.h)===<br />
* .c/.h: Initializes the ports on the e-puck. File is from the standard e-puck library. <br />
<br />
=====e_init_ports(void)=====<br />
This function sets up ports on the e-puck. Call before using any ports.<br />
<br />
===e_led(.c/.h)===<br />
* .c/.h: This is a standard e-puck library file that contains functions for manipulating LEDs.<br />
<br />
=====void e_set_led(unsigned int led_number, unsigned int value)=====<br />
Set led_number (0-7) to value (0 = off, 1 = on, any higher value inverts the current state).<br />
<br />
[[Image:e-puck_LED_numbering.png|thumb|left]]<br />
<br />
<br clear='all'><br />
<br />
===e_motors_swarm(.c/.h)===<br />
* .c/.h: This file is a modified version of the e_motors.h e-puck library file. This version keeps track of the robot's position and orientation, and the motor stepping function contains code to update the robot's position when the wheels turn. <br />
* Functions like e_rotate and e_translate have been removed, and this version does not depend on e_agenda. <br />
<br />
=====#define POINT_OFFSET -31.75 =====<br />
In addition to constants like wheel radius and wheel base, the offset between the center point of the bot and the point we are driving is set in e_motors_swarm.h. -31.75 mm is the distance between the center and the color sensor, so technically we are driving backwards.<br />
<br />
=====void __attribute__((interrupt, auto_psv, shadow)) _T5Interrupt(void)=====<br />
Timer5 ISR, interrupt for left motor. Controls the stepper motors.<br />
<br />
Updates the puck's x, y, theta estimate here: <br />
<br />
<code><pre><br />
// update robot's configuration estimate<br />
thetapos = thetapos - ANGSTEP;<br />
if (thetapos<-PI) thetapos += 2*PI;<br />
xpos = xpos + cval*LINSTEP;<br />
ypos = ypos + sval*LINSTEP;<br />
}<br />
</pre></code><br />
<br />
=====void __attribute__((interrupt, auto_psv, shadow)) _T4Interrupt(void)=====<br />
Timer4 ISR, interrupt for right motor. Controls the stepper motors.<br />
<br />
Updates the puck's x, y, theta estimate here: <br />
<br />
<code><pre><br />
// update robot's configuration estimate<br />
thetapos = thetapos - ANGSTEP;<br />
if (thetapos<-PI) thetapos += 2*PI;<br />
xpos = xpos - cval*LINSTEP;<br />
ypos = ypos - sval*LINSTEP;<br />
}<br />
</pre></code><br />
<br />
=====void e_init_motors(void)=====<br />
Call this function before other motor functions to initialize the motors.<br />
<br />
=====void e_set_speed_left(int motor_speed)/void e_set_speed_right(int motor_speed)=====<br />
Set the motor speed in steps/second.<br />
<br />
=====void e_get_configuration(float *xptr, float *yptr, float *thetaptr)=====<br />
Updates variables with the current x, y, and theta (position and orientation) of the center reference point.<br />
<br />
=====void e_set_configuration(float x, float y, float theta)=====<br />
Sets x, y, theta to values. This is used to overwrite the puck position/orientation estimates with data from the vision system.<br />
<br />
=====void e_get_configuration_front(float *xptr, float *yptr, float *thetaptr)=====<br />
Updates variables with the current x, y, and theta (position and orientation) of the front reference point (used for motor control).<br />
<br />
<br />
[[Category:SwarmRobotProject]]<br />
[[Category:e-puck]]<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, and an actual image of the board mounted on an e-Puck is shown in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version, such as the parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the XBee board, XBee Interface Extension Board Version 2, designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the board can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and installed on e-puck #3. To check for any problems in the board design, it was first tested alongside the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, had no problem communicating with the other e-pucks: given a defined goal, all e-pucks, including #3, drove themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths, and the current through each photodiode is determined by the amount of the corresponding light it receives. The op-amp (LMC6484) takes the voltage across R<sub>1</sub> as its input and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that on the e-Puck had been used for the X, Y, and Z axis accelerometers. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = \left| V_i \cdot \frac{R_2}{R_{pot}} \right|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20; thus the input voltage V<sub>i</sub> is amplified to 20V<sub>i</sub> = V<sub>o</sub>. As mentioned above, the gain can be adjusted by changing the resistance of the potentiometer.<br />
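To make the numbers concrete: with R<sub>2</sub> = 100K, a gain of 20 corresponds to the potentiometer being set near 5K. A small check of the formula (the 5K setting is illustrative, inferred from the stated gain, not a measured value):<br />

```c
/* |Vo| = |Vi * R2 / Rpot| for the amplifier stage described above */
float sensor_output(float vi, float r2, float rpot)
{
    float vo = vi * (r2 / rpot);
    return vo < 0.0f ? -vo : vo;
}
```

In practice V<sub>o</sub> also clips at the op-amp supply rails, which is why the angle experiment below measures V<sub>i</sub> instead.<br />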
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. For the color sensor to collect data properly, it must therefore move from the XBee Interface Extension Board to the LED pattern board, where nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the sensor itself is placed on the LED pattern board, connected back to the circuit with a jumper. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, the black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of suitable brightness instead of the black dot pattern, so the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below, along with the code modification made to apply the LED pattern to the machine vision system. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files are linked above.<br />
Although we replaced the black dots with LEDs, each dot pattern is maintained. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To avoid draining the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with a switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style; they were chosen for their suitable brightness and power consumption, and the diffusion lets the machine vision system capture the LED from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to see how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because it covers the entire wavelength range of visible light. The white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed at the same plane, and both are facing upward. <br />
<br />
4. Distance between the color sensor and the LED is increased by 0.25 inch each time from 1 inch to 2.5 inch.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As the two graphs above show, the color sensor is affected by the light from the LED, most strongly when the LED is closest to it; as the distance between them increases, the interference decreases. At the closest distance under room light, the red, green, and blue outputs increase by as much as 25.9%, 22.6%, and 43.6%. With the LED 2.5 inches away, the output returns very close to its original value. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used here is about four times brighter than the ones on the LED pattern board, so further experiments with the actual LED pattern board LEDs are required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily saturates.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment with the same reason above for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between LED and color sensor is kept constant, 1 inch. <br />
<br />
4. Angle between LED and color sensor is increased by 15º each time from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor lie in the same horizontal plane: the LED is parallel to the plane with its head facing the color sensor, which faces upward. The angle was increased by 15º each time, so increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it; the LED and the color sensor are then on the same vertical line, with the LED facing downward.<br />
<br />
5. The pre-amplification voltage of each photodiode, V<sub>i</sub> as shown in the circuit diagram above, is measured.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The interference increases as the angle approaches 45º, peaks at 45º, and then decreases as the angle goes to 90º. At the peak, under room light, the outputs increase by up to 15.6%, 19.1%, and 28.7% of V<sub>i</sub>. As the angle reaches 90º, the output becomes very close to the value at 0º. The reason the interference drops near 90º is that the ambient light is blocked by the LED board: the LEDs were mounted on a plane for this experiment, and that plane casts a shadow on the color sensor, reducing the amount of ambient light it receives. That is why the output returns close to its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the LED chosen.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10-position 2 mm pitch socket connects the color sensor to the circuit with wires. A proper header for the color sensor should be found to make this connection more convenient.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off the wiki and placed into version control. The version control system used is Git, http://git-scm.com/. <br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next you will need access to the LIMS server. Go to one of the swarm PCs, or any PC which is set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open Git (Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MATLAB files for receiving data from the epucks. These files use the SerialCommands DLL to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by Visual Studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic-Link Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the e-puck. To access these files, simply open the workspace rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded onto the e-puck by following the instructions in [[RGB_Swarm_Robot_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Robot Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e. the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. This projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor which the epucks roll on must be completely enclosed so that the only light which reaches it is the light from the projector. Also, this floor must be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable, light-colored floor may be worth using. Also, the floor is taped down with packing tape. The tape should not be removed, as the floor has markings which show where the camera calibration dots should go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb shines through a color wheel which casts red, green, and blue onto the mirror array. The fraction of time the mirrors spend on versus off (reflecting light or not) within a fixed interval determines the intensity of light. For example, the brighter the red being projected, the more of each interval the mirrors spend on. In the case of our projector, that interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse width modulates the color output. So you need to average the measured intensity over the period of the projector to measure the color. The period of the projector is 8.2ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording over the XBee radios, it was discovered that the RGB values would fluctuate across a period of several minutes, skewing the data. After more investigation of the projector, including measurements with the digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, resulting in a period that is slightly off from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average the samples, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are phased out, resulting in a clean and stable trace without low-frequency modulations.<br />
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area. The camera views overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found that it works without any problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located. The color of the background also does not affect the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if the sensor remained on the XBee Interface Extension Board. Thus, we must now consider the light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by the light from an LED. However, since the LED used in our interference tests was much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain more accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and FireWire card<br />
**Mini DV cameras seem to be the best bet, as they are designed for control via FireWire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x multiplies the focal length by x, and therefore scales the FOV by roughly 1/x; thus a 0.5X magnification lens gives about 2X the FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), and then apply the magnification; so 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º (using a focal length of 36mm converted to 21.6mm by 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18883RGB Swarm Robot Project Documentation2010-07-13T15:44:04Z<p>Ryan Cook: /* DataAquisition */</p>
<hr />
<div><br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-puck seen in the figure in the center. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the XBee board, the XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the XBee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. In order to check for any problems with the board design, it was first tested against the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, did not show any problem communicating with the other e-pucks. Consistent with the defined goal, all e-pucks, including e-puck #3, moved themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you may gather from the circuit diagrams above, as each photodiode receives light, a certain amount of current starts to flow through the photodiode and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as the input signal, amplifying it by a ratio particular to the circuit. This ratio is also known as gain, and is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels which, on the e-puck, had been used for the X, Y, and Z axes of the accelerometer. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by implementing an LED pattern board at the top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing will block it. All other components of the color sensor circuit remain on the XBee Interface Extension Board, and only the color sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of this LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, this black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of the proper brightness instead of the black dot pattern. By doing so, the machine vision system can now recognize e-pucks on any background. The reason why this LED pattern is recognized on any background is presented briefly in the Code section below. In addition, in order to apply this LED pattern to the machine vision system, we made a modification in code. This modification is also presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files are linked above.<br />
Although we replaced the black dots with LEDs, we maintained each pattern of dots. The horizontal distance and the vertical distance between two adjacent LEDs are both 0.8 inch. In order to reduce power consumption from the e-puck battery, we added a separate pair of AAA batteries to supply power to the LEDs. This LED board can be turned on and off with the switch.<br />
The millicandela rating of the LEDs used is 4850 mcd. In addition, these LEDs have a diffused lens style. This LED was chosen because it has suitable brightness and power consumption, and it is diffused so that the machine vision system can capture the LED from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done in order to note the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to see how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a certain photodiode and a certain color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed on the same plane, both facing upward. <br />
<br />
4. Distance between the color sensor and the LED is increased by 0.25 inch each time from 1 inch to 2.5 inch.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED. The color sensor is most affected when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. When the color sensor is most affected by the LED in the presence of the room light, the outputs increase by up to 25.9%, 22.6%, and 43.6% relative to the original outputs. When the LED is 2.5 inches away from the color sensor, the output becomes very close to the original value. <br />
In this experiment, we see that the light from LEDs can affect the color sensor. However, we used a much brighter LED in this experiment than the ones we use on the LED pattern board. The brightness of the LED used in the experiment is about 4 times that of the ones on the LED pattern board. Thus, more experiments with the LEDs used on the LED pattern board are required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike in the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output, V<sub>o</sub>, easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. Angle between LED and color sensor is increased by 15º each time from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are on the same horizontal plane. The LED faces toward the color sensor (that is, the LED lies parallel to the horizontal plane with its head facing the color sensor on that plane), and the color sensor faces upward. We increased the angle by 15º each time, so that increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it. This means that the LED and the color sensor are on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* The reason to measure the voltage before amplification is that the output becomes too large after amplification.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the point of greatest interference under room light, the outputs increase by up to 15.6%, 19.1%, and 28.7% of their 0º values for R, G, and B respectively. As the angle approaches 90º, the output returns very close to its value at 0º. The interference is reduced near 90º because the ambient light is blocked by the LED board: in this experiment the LEDs are mounted on the LED plane, which casts a shadow on the color sensor and reduces the amount of ambient light it receives. That is why the output returns close to its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10-position 2 mm pitch socket is used to connect the color sensor to the circuit with wires. Instead, the proper header for the color sensor should be found so that the color sensor can be connected to the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control. The version control system used is Git, http://git-scm.com/. <br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open Git (Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MATLAB files for receiving data from the epucks. These files use the SerialCommands DLL to send commands to and receive data from the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Robot_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Robot Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which run on the epuck. To access these files, simply open the workspace rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded onto the epuck by following the instructions in [[RGB_Swarm_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands DLL for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. This projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor on which the epucks roll must be completely enclosed so that the only light which reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but it holds up as long as no shoes are worn when stepping on it. In the future, a more durable, light-colored floor may be desirable. The floor is taped down with packing tape. The tape should not be removed, as the floor has markings which show where the camera calibration dots should go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb is shone through a color wheel, which shines red, green, and blue onto the mirror array. The fraction of time the mirrors are on (reflecting light) versus off within a fixed interval determines the intensity of light. For example, if a bright red were being projected, the mirrors would be on more than off during that interval. In the case of our projector, that interval is 8.2 milliseconds. See the pulse width modulation below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (approximately 2877 mm x 1943 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates the color output, so you need to average the measured intensity over the period of the projector, 8.2 ms, in order to measure the color.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording over the XBee radios, it was discovered that the RGB values would fluctuate across a period of several minutes, skewing the data. After further investigation of the projector, including with digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, resulting in a period slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are phased out, yielding a clean and stable trace without low-frequency modulation.<br />
<br />
===Projector Mount===<br />
The projector mount was ordered online using 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their views overlap by one object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found that it works without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located, and the color of the background no longer affects the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if the sensor remained on the XBee Interface Extension Board. Thus, we must now consider the light interference between the LEDs and the color sensor. In the light interference test, we found that the color sensor is affected by the light from the LEDs. However, since the LEDs used in the interference test were much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain more accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show or broadcast to interested parties (such as online video streaming), and possibly for presentations.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to accommodate control via firewire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x gives 1/x times the field of view; thus a 0.5X magnification lens gives 2X the field of view<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), and then apply the magnification appropriately: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º using a focal length of 36mm converted to 21.6mm by 0.6X magnification, then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete the calibration scheme (color_cal) to calibrate e-pucks for the projector setup<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement the 3-part turn from the NUtest.c code for the e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate the discrete number of possible angles per wheel (using ticks/revolution), create a lookup table, check the angle from the vision system packets, and use the standardized table to calculate the appropriate vector (reducing the total number of calculations and eliminating the use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>
<hr />
<div>
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, and an image of the board mounted on an e-puck is shown in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as the parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. To check for any problems in the board design, it was first tested with the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, did not show any problem communicating with the other e-pucks. As defined by the goal, all e-pucks, including e-puck #3, located themselves at the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light it receives. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels which, on the e-puck, had been used for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
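For example, with the potentiometer trimmed to R<sub>pot</sub> = 5K (an assumed setting chosen here only to illustrate the stated gain; the actual trim value may differ):<br />

<math>\frac{R_2}{R_{pot}} = \frac{100\text{K}}{5\text{K}} = 20, \qquad |V_o| = 20\,|V_i|</math><br />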
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by implementing an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing will block it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can be used to connect the color sensor on the LED pattern board to the color sensor circuit on the XBee Interface Extension Board. The details of this LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, this black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of the proper brightness instead of the black dot pattern. By doing so, the machine vision system can now recognize e-pucks on any background. The reason why this LED pattern is recognized on any background is presented briefly in the Code section below. In addition, in order to apply this LED pattern to the machine vision system, we made a modification in code; this modification is also presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED pattern board was created using Traxmaker; the design files can be downloaded from the link above.<br />
Although we replaced the black dots with LEDs, each pattern of dots is maintained. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inches. To reduce the power consumption of the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with a switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. They were chosen for their suitable brightness and power consumption, and because they are diffused, the machine vision system can capture them from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests needed to be done to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to measure how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed on the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased in 0.25 inch increments from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, is measured for each photodiode.<br />
<br />
With the Ambient Light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As the two graphs above show, the color sensor is affected by the light from the LED. The interference is greatest when the LED is closest to the sensor, and decreases as the distance between them increases. At the closest distance, under room light, the outputs increase by up to 25.9%, 22.6%, and 43.6% over the no-LED values. With the LED 2.5 inches away from the color sensor, the outputs are very close to their original values. <br />
This experiment shows that light from the LEDs can affect the color sensor. However, the LED used here is much brighter than the ones on the LED pattern board: roughly 4 times the brightness. More experiments with the LEDs actually used on the LED pattern board are therefore required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily saturates.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º steps, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor lie in the same horizontal plane: the LED is parallel to the plane with its head facing the color sensor, and the color sensor faces upward. As the angle is increased in 15º steps, an increasing amount of light from the LED shines onto the color sensor. When the angle is 90º, the LED is directly above the color sensor and faces it, so the two are on the same vertical line with the LED pointing downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* The pre-amplification voltage is measured because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases up to 45º, peaks there, and then decreases as the angle approaches 90º. At the peak, under room light, the outputs increase by up to 15.6%, 19.1%, and 28.7% over V<sub>i</sub> at 0º. As the angle reaches 90º, the output returns close to its 0º value. The interference drops off near 90º because the LED board itself blocks the ambient light: the LEDs in this experiment were mounted on the LED plane, which casts a shadow on the color sensor. The amount of ambient light reaching the sensor therefore decreases, which is why the output approaches its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10-position 2 mm pitch socket connects the color sensor to the circuit with wires. Instead, a proper header for the color sensor should be found to make the connection more convenient.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control. The version control system used is Git, http://git-scm.com/. <br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open Git (Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MATLAB files for receiving data from the epucks. These files use the SerialCommands DLL to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by Visual Studio and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. To access these files, simply open the workspace rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded onto the epuck by following the instructions in [[RGB_Swarm_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e., the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands DLL for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. This projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light reaching it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable light-colored floor should be considered. The floor is taped down with packing tape. The tape should not be removed, as the floor has markings showing where the camera calibration dots should go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb is shone through a color wheel, which shines red, green, and blue onto the mirror array. How long the mirrors are on versus off (reflecting light or not) within a fixed interval determines the intensity of the light. For example, if a dark red were being projected, the mirrors would be on more than off in that interval. In the case of our projector, that interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so you need to average the measured intensity over the projector's period to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording for the XBee radios, it was discovered that the RGB values would fluctuate over a period of several minutes, skewing the data. After more investigation into the projector, such as with the digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, producing a period slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, since each puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are smoothed out, giving a clean and stable trace without low-frequency modulations.<br />
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, with their fields of view overlapping by one calibration object as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found that it does not have any problems. In addition, the black dot patterns on the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the color of the background no longer affects the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. Thus, we now have to consider light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by light from the LED. However, since the LEDs used in the interference tests were much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and FireWire card<br />
**Mini DV cameras seem to be the best bet, as they are designed for control via FireWire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x gives 1/x times the focal length, and therefore more FOV; thus a 0.5X magnification lens = 2X the amount of FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), then apply the magnification (so 0.5X magnification = 0.5X focal length)<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles, to calculate the height need for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm reduced to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing program to 'lawnmower' e-puck, allowing to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>
RGB Swarm Robot Quickstart Guide (2010-07-13)
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project|'''RGB Swarm Robot Project''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project#Physical_Setup|'''RGB Swarm Robot Project: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as to a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio, more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple setup may require only two (2) radios: one for the Vision System program, and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions here [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] to download the epucks code. Then build this code using MPLab and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck, skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM(serial) port for that specific e-puck, one as '''Incoming''' and the other as '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged it into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_E-puck_Quickstart_Guide#Programming_the_e-puck|''Swarm E-puck Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', to (re)program the microcontroller. The bootloader makes use of the Bluetooth/serial port connection, allowing for wireless programming instead of connecting a serial port connector directly to the microcontroller, which expedites reprogramming. The bootloader is comprised of two parts: one operates on the e-puck, writing code to the e-puck microcontroller; the other operates on the computer as the user interface. On the e-puck side, the bootloader quickly checks whether something is trying to re-program the PIC. If so, the bootloader writes the new software over the old and then runs the new software. If not, the bootloader simply lets the PIC run the currently loaded program. The computer side lets the user select which COM port to program over (that is, which e-puck to program) and which HEX file to program to the e-puck.<br />
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code from the Microchip C compiler, as the .hex file is needed to install onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, a orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on, and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], go to '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''', look for <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> hex file.<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time than programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming the Xbee Radio, whether for the e-puck or the data logging or vision computers. Certain steps may be skipped however for each kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, Select the COM Port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every baud rate listed. X-CTU does not automatically detect the radio's baud rate; while the radios currently used are all set to 115200 baud, they may have been reset, making it necessary to manually test each baud rate via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = &lt;something&gt;', and 'Modem firmware version = &lt;other thing&gt;'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''', and press &lt;enter&gt;. This signals to the XBee radio that you wish to program it.<br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''' followed by the ID number; e.g., for ID 3 the command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note that e-puck IDs range from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to use the API language with interrupts (to ensure packet quality)<br />
##Type ''''atd6 1''''. This is related to the RTS pin, and allows the radio to hold packets in buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the Baud to 115200, the frequency used for radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
<br />
===Troubleshooting the XBee Radios===<br />
*XBee radios can only be used by one program at a time; if an XBee radio is called by two different programs, problems can arise. This is because each XBee radio is connected to one 'virtual' COM port, and a COM port can be opened by only one program ''at a time''.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios must be given the ID '''0''', even though they are attached to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board near the plug, there are two columns of three (3) LED lights. The leftmost column of lights should be blinking: the yellow light signifies that a packet has been received, and a green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (try one radio first to see if that fixes the problem, rather than reprogramming them all and discovering the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU with a connected serial adapter board and XBee radio, and see if you are getting any signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), you should see some traffic (it will look like gibberish because it is displayed in hex)<br />
**#Use the Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']], to check if your XBee radios are working<br />
<br />
==Machine Vision Localization System==<br />
*[[media:RGB_Machine_Vision_Localization_System.zip|'''RGB_Machine_Vision_Localization_System.zip''']]<br />
FILL IN HOW TO QUICK START VISION SYSTEM<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab files by following these instructions [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] and opening the DataAquasition folder. Inside this folder you will find the following files:<br />
<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file which is used to look at the data the epucks are sending back. The color sensor data from the selected epuck is shown on the left and the position of each epuck is shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each epuck.<br />
**'''Note:''' In order to access the log of data which has been received by the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received by each epuck. To change this, simply change the RECDUR variable in the puckUI_OpeningFcn. Also, currently the timer periods of .4 seconds for plot updates and .05 seconds to receive data work efficiently and do not take up too much CPU, but these values can be changed in puckUI_OpeningFcn as well.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll in SerialCommands.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div><br />
Ryan Cook, https://hades.mech.northwestern.edu//index.php?title=Swarm_Project_E-puck_Code&diff=18879<br />
Swarm Project E-puck Code, 2010-07-13T15:41:26Z<br />
<p>Ryan Cook: Swarm Project E-puck Code moved to Swarm Robot Project E-puck Code: To keep with the consistency of naming pages in the Swarm Category</p>
<hr />
<div>#REDIRECT [[Swarm Robot Project E-puck Code]]</div><br />
Ryan Cook, https://hades.mech.northwestern.edu//index.php?title=Swarm_Robot_Project_E-puck_Code&diff=18878<br />
Swarm Robot Project E-puck Code, 2010-07-13T15:41:26Z<br />
<p>Ryan Cook: Swarm Project E-puck Code moved to Swarm Robot Project E-puck Code: To keep with the consistency of naming pages in the Swarm Category</p>
<hr />
<div>__TOC__<br />
This page documents the e-puck code for the Swarm Robotics project. The code on the e-puck was written in C and compiled using Microchip's MPLAB C Compiler for dsPIC DSCs (student version).<br />
<br />
Go back to [[Swarm_Robot_Project_Documentation|Swarm_Robot_Project_Documentation]]<br />
<br />
=Project Files=<br />
Project Files: [[Image:swarm_epucks_code.zip]]<br />
<br />
=Description of the files:=<br />
*main.c: This contains the entry point of the code and contains the initialization routines, main loop, and interrupt service routines.<br />
*e_epuck_ports.h: Contains I/O pin definitions.<br />
*main.h: Contains global variables, macros, and the delay function.<br />
*e_init_port.h/.c: Initializes the ports on the e-puck. File is from the standard e-puck library.<br />
*e_led.h/.c: Handes LED manipulation functions. File is from the standard e-puck library.<br />
*e_motors_swarm.h/.c: Motor control and dead reckoning functions. This is a modified version of the standard e_motors.h/.c library file, with dead reckoning functions added.<br />
*dsPIC_XBeePackets.h: Contains functions and data structures for assembling and receiving XBee packets.<br />
*PI_consensus_estimator: Contains functions and data structures for the PI consensus estimator.<br />
<br />
<br />
<br />
=main.c=<br />
This contains the entry point of the code and contains the initialization routines, main loop, and interrupt service routines.<br />
<br />
==main(void)==<br />
This is the entry point of the code, and contains initialization routines and the main (infinite) loop.<br />
<br />
==void __attribute__((__interrupt__,auto_psv)) _T1Interrupt(void)==<br />
This is the interrupt service routine (ISR) for Timer1. This is the "system tick" and triggers every 0.2 seconds. It sets a flag to indicate that the interrupt has been triggered. This is a low priority interrupt.<br />
<br />
==void __attribute__((__interrupt__,auto_psv)) _U2RXInterrupt(void)==<br />
This is the interrupt service routine (ISR) for the UART receiver buffer. This interrupt will trigger whenever a byte comes in on the serial port. This is a high priority interrupt; the PIC will suspend other tasks to run this ISR.<br />
<br />
=main.h=<br />
==#define notRTS SELECTOR3==<br />
Defines the SELECTOR3 pin (this used to be connected to the selector switch on the original extension module that we have replaced with the XBee board) as the notRTS pin.<br />
<br />
==#define notCTS SELECTOR2==<br />
Defines the SELECTOR2 pin (this used to be connected to the selector switch on the original extension module that we have replaced with the XBee board) as the notCTS pin.<br />
<br />
==#define SQUARE(x) ((x) * (x))==<br />
This macro squares a number.<br />
<br />
==#define AGENT_ID 1...#if (AGENT_ID == 1)...==<br />
This allows you to give the robots an ID number for setting parameters for different robots (change the AGENT_ID and recompile for each robot). This is sometimes useful for testing purposes if you wish to give each agent a different set of initial parameters.<br />
<br />
==float robotX robotY robotTheta==<br />
These global variables hold the (X,Y) coordinates and the orientation (in radians) of the robot (-PI to PI).<br />
<br />
==Packet Length Variables==<br />
These variables determine the length of the XBee packets. See [[Swarm_Robot_Project_Documentation#Data_Frame|Data Frame]] and the section on XBee API packets in the XBee manual for further clarification.<br />
<br />
===#define NUM_DATA_SETS 5 === <br />
Number of statistics on which you are running the consensus estimator. In this particular case, 5 (Ix, Iy, Ixx, Ixy, Iyy).<br />
<br />
===#define NUMBERS_PER_SET 2===<br />
Number of variables in each data set (see above) that the consensus estimator needs to transmit to other agents. In this case, 2 because there is <tt>x_i</tt> and <tt>w_i</tt> for each statistic.<br />
<br />
===#define DATATYPE_BYTELENGTH 4 ===<br />
Number of bytes in the data type used in the consensus estimator (a float is 4 bytes long). This is important because we need to split the numbers into individual bytes to be able to send them out the serial port.<br />
<br />
===#define DATA_ARRAY_LENGTH (NUM_DATA_SETS*NUMBERS_PER_SET)===<br />
Total number of data variables needed for the consensus estimator. In this case, it is 5*2=10.<br />
<br />
===#define ADDITIONAL_NUMS 5===<br />
Additional number of data to be appended to data array. It is 5 in this case, so that we can append <br />
# Robot X coordinate<br />
# Robot Y coordinate<br />
# Robot Theta orientation<br />
# Robot left wheel speed<br />
# Robot right wheel speed <br />
<br />
==void delay(long time)==<br />
General purpose delay function.<br />
<br />
=e_init_port.h/.c=<br />
This is from the standard e-puck library. <br />
==e_init_ports(void)==<br />
This function sets up the ports on the e-puck.<br />
<br />
=e_led.h/.c=<br />
This is a standard e-puck library file that contains functions for manipulating LEDs.<br />
==void e_set_led(unsigned int led_number, unsigned int value)==<br />
Set led_number (0-7) to value (0=off 1=on higher=inverse).<br />
<br />
[[Image:e-puck_LED_numbering.png|thumb|left]]<br />
<br />
<br clear='all'><br />
<br />
=e_motors_swarm.h/.c=<br />
This file is a modified version of the e_motors.h e-puck library file. This version keeps track of the robot's position and orientation, and the motor stepping function contains code to update the robot's position when the wheels turn.<br />
<br />
==void e_init_motors(void)==<br />
Call this function before other motor functions to initialize the motors.<br />
<br />
==void e_set_speed_left(int motor_speed)/void e_set_speed_right(int motor_speed)==<br />
Set the motor speed in steps/second.<br />
<br />
==void e_get_configuration(float *xptr, float *yptr, float *thetaptr)==<br />
Updates variables with the current x, y, and theta (position and orientation) of the center reference point.<br />
<br />
==void e_set_configuration(float x, float y, float theta)==<br />
Sets x, y, theta to values.<br />
<br />
==void e_get_configuration_front(float *xptr, float *yptr, float *thetaptr)==<br />
Updates variables with the current x, y, and theta (position and orientation) of the front reference point (used for motor control).<br />
<br />
=wheel_speed_coordinator.h=<br />
==float MAX_WHEEL_V_TICKS = 500.0==<br />
Max velocity in motor steps per second; the larger this is, the less time you have for other calculations. The limit is probably around 750.<br />
<br />
<br />
==#define XLIMIT 1000.0 /#define YLIMIT 850.0==<br />
Test area boundaries. The robot will stop if it reaches +/-(XLIMIT) or +/-(YLIMIT).<br />
<br />
==float deadband = 0==<br />
Deadband value for motor velocity.<br />
<br />
==float COMMR = 500000.0==<br />
Maximum communication radius.<br />
<br />
==static float k[5] = {2.0,2.0,0.00001,0.00001,0.00001}==<br />
Control-law gains, one per statistic.<br />
<br />
==static float fgoal[5] = {100.0, 300.0, 160000.0, 40000.0, 40000.0}==<br />
Goal values for the five statistics.<br />
<br />
==Variables and functions for obstacle avoidance==<br />
===float SAFEDIST = 200.0===<br />
Distance in mm at which obstacle avoidance routine will start running.<br />
<br />
===float MINDIST = 100.0===<br />
Distance in mm at which obstacle avoidance will dominate the robot's motion.<br />
<br />
===static volatile float nx=10000.0/static volatile float ny=10000.0===<br />
Nearest neighbor's center point position; initialized to a large number so that nx and ny will be overwritten by the first packet received.<br />
<br />
===static volatile int nid = 100=== <br />
ID of the nearest neighbor; initialized to 100 on the assumption that no robot will ever have that ID.<br />
<br />
===static volatile int nflag =0===<br />
Flag used just for initialization so we know if it has run yet.<br />
<br />
<br />
==void wheelSpeed(int *vL, int *vR)==<br />
This function uses the control law to calculate the desired wheel speeds.<br />
<br />
==int abs(int input)==<br />
Absolute value function for integer data type.<br />
<br />
==float fabs(float input)==<br />
Absolute value function for floating point data type.<br />
<br />
=PI_consensus_estimator.h=<br />
<br />
==Estimator Gains==<br />
===float num_agents===<br />
Number of agents you have received data from; used to scale the KP and KI gains.<br />
<br />
===float GAMMA=0.05=== <br />
Forgetting Factor.<br />
===float KP=0.7=== <br />
Proportional gain. KP=5 will make the system unstable.<br />
===float KI=0.1===<br />
Integral gain.<br />
<br />
<br />
==Estimator and Controller Data Arrays==<br />
===volatile float x_i[NUM_DATA_SETS]===<br />
Decision variables.<br />
<br />
===volatile float u_i[NUM_DATA_SETS]===<br />
Controller inputs.<br />
<br />
===volatile float w_i[NUM_DATA_SETS]===<br />
Internal estimator state.<br />
<br />
===volatile float x_sum[NUM_DATA_SETS]===<br />
Running sum of x_i data received from other agents.<br />
<br />
===volatile float w_sum[NUM_DATA_SETS]===<br />
Running sum of w_i data received from other agents. <br />
<br />
==Message Flags==<br />
===volatile union MSG_RX_FLAGS===<br />
Flags for received messages, so we can keep track of whom we have received messages from. 4-byte long bitfield (32-bits). 31 robots maximum (bit 0 is reserved for base station).<br />
<br />
===unsigned char getMsgRxFlag(unsigned int flagNum)===<br />
Returns value of the particular message flag.<br />
<br />
===unsigned char setMsgRxFlag(unsigned int flagNum, unsigned int value)===<br />
Sets value of particular message flag.<br />
<br />
===void resetMsgRxFlags(void)===<br />
Reset all message flags to 0.<br />
<br />
==Consensus Estimator Functions==<br />
<br />
<br />
===void addConsensusData(void)===<br />
Adds new data from incoming packet to running sum x_sum.<br />
<br />
===void calcNewEstimate(void)===<br />
Calculates a new estimate and resets the x_sum and w_sum running sums.<br />
<br />
[[Category:SwarmRobotProject]]<br />
[[Category:e-puck]]</div><br />
Ryan Cook, https://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18877<br />
RGB Swarm Robot Project Documentation, 2010-07-13T15:34:47Z<br />
<p>Ryan Cook: /* SwarmRobot */</p>
<hr />
<div>== Overview ==<br />
The swarm robot project has gone through several phases, with each phase focusing on different aspects of swarm robotics and the implementation of the project. This entry focuses on the most recent phase of the project, covering topics such as, but not limited to, '''Xbee Interface Extension Boards''', '''LED light boards''', and '''changes made to the Machine Vision Localization System''', and the overall conversion to LED boards and a controlled light environment. These entries help provide insight into setup and specific details to allow others to replicate or reproduce our results, and to provide additional information for those working on similar projects or this project at a later time. Other articles in the '''Swarm Robot Project''' category focus on topics such as the swarm theory and algorithms implemented, as well as previous phases of the project, such as motion control and consensus estimation. You may reach these articles and others by following the category link at the bottom of every page, or through this link - [[:Category:SwarmRobotProject|'''Swarm Robot Project''']].<br />
<br />
==RGB Swarm Quickstart Guide==<br />
<br />
Refer to [[RGB Swarm Quickstart Guide|'''RGB Swarm Quickstart Guide''']] for information on how to start and use the RGB Swarm system and its setup.<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-Puck in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and installed on e-puck #3. To check for problems in the board design, it was first tested against the other e-pucks, which use the previous XBee boards.<br />
<br />
E-puck #3, upgraded with the new XBee board, showed no problems communicating with the other e-pucks: all e-pucks, including e-puck #3, moved to the desired locations defined by the goal.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you can see from the circuit diagrams above, as each photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the current through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used on the e-Puck for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper connects the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern on each e-puck was used by the machine vision system to recognize each e-puck. However, the black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of a suitable brightness instead of the black dot pattern. By doing so, the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below. In addition, in order to apply this LED pattern to the machine vision system, we made a modification in the code. This modification will also be presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files are linked above.<br />
Although we replaced the black dots with LEDs, we maintained each pattern of dots. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce power consumption from the e-puck battery, we added a separate pair of AAA batteries to supply power to the LEDs. The LED board can be turned on and off with the switch.<br />
The millicandela rating of the LEDs used is 4850 mcd, and they have a diffused lens style. This LED was chosen because it has suitable brightness and power consumption, and the diffusion allows the machine vision system to capture the LED from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done in order to characterize the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to determine how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light.<br />
<br />
3. The LED and the color sensor were placed at the same plane, and both are facing upward. <br />
<br />
4. Distance between the color sensor and the LED is increased by 0.25 inch each time from 1 inch to 2.5 inch.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by light from the LED, and it is most affected when the LED is closest. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the outputs increase by about 25.9% (red) and 22.6% (green) over the no-LED baseline, and the blue channel rises from 0.469 V to 0.832 V. When the LED is 2.5 inches away, the outputs return very close to their original values. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used here is much brighter than the ones used on the LED pattern board; its rated brightness is roughly 4 times larger. Thus, further experiments with the LEDs actually used on the LED pattern board are required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º steps from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor lie in the same horizontal plane: the LED is parallel to the plane with its head facing the color sensor, and the color sensor faces upward. As the angle increases in 15º steps, more and more of the LED's light shines onto the color sensor. When the angle is 90º, the LED is directly above the color sensor and faces it directly, so the two lie on the same vertical line with the LED facing downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* V<sub>i</sub> is measured rather than the amplified output because the output becomes too large after amplification.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by light from the LED. The interference increases with angle, peaks at 45º, and then decreases as the angle approaches 90º. Under room light, the outputs at the 45º peak rise by 15.6%, 19.1%, and 28.7% above their values at 0º. As the angle reaches 90º, the output returns very close to its 0º value. The reason the interference falls off near 90º is that the ambient light is blocked by the LED board: in this experiment the LEDs are mounted on the LED plane, which casts a shadow on the color sensor, so the amount of ambient light reaching the sensor decreases as the angle increases. That is why the output approaches its original value as the angle grows.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10-position 2 mm pitch socket connects the color sensor to the circuit with wires. A proper header for the color sensor should be found instead, so that the sensor and the circuit can be connected more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed in to version control for ease. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next you will need access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now open GIT (Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MATLAB files for receiving data from the epucks. These files use the SerialCommands DLL to send and receive commands from the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded on to the epuck by following the instructions in [[RGB_Swarm_Quickstart_Guide#e-puck_and_e-puck_Code|'''RGB Swarm Quickstart Guide: e-puck and e-puck Code''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e. the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the positions of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also compiled in MPLAB and run on the XBees. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. The projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light reaching it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of incoming light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but it holds up as long as no one steps on it with shoes. In the future, a more durable light-colored floor may be worth using. The floor is taped down with packing tape; the tape should not be removed, as the floor has markings that show where the camera calibration dots go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. It has a digital user manual on a CD in the projection computer. <br />
<br />
Since it is DLP, the projector has an array of tiny mirrors that reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array. The fraction of a fixed interval during which the mirrors are on (reflecting light) rather than off determines the intensity of the light; the dimmer the projected color, the less time the mirrors spend on. For our projector that interval is 8.2 milliseconds. See the pulse width modulation section below.<br />
<br />
Each mirror corresponds to one pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1-to-1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystoned projection shape on the floor could not be avoided. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (approximately 2877 mm x 1943 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so you need to average the measured intensity over the projector's period to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when data recording over the XBee radios was first set up, the RGB values fluctuated over a period of several minutes, skewing the data. Further investigation with the digital oscilloscopes traced the problem to the fact that the projector does not refresh at exactly 120 Hz, so its true period differs slightly from the 8 ms that was being used to sample data. The solution is to record several samples (currently 4), average them, and use the average as the measured value. There is time to record 4 samples (32 ms of data), since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. With this averaging, the irregularities due to PWM are averaged out, leaving a clean, stable trace without low-frequency modulation.<br />
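The beat between the assumed 8 ms window and the true 8.2 ms period, and why averaging across samples helps, can be illustrated with a toy model (a sketch only; the 30% duty cycle and the sampling phases are invented for illustration, and only the 8.2 ms period comes from the measurements above):<br />

```python
# Toy model: a square wave with the projector's true 8.2 ms period,
# "measured" by averaging over a slightly-wrong 8 ms window.
def window_mean(t0, window=8.0, duty=0.3, period=8.2, dt=0.001):
    n = round(window / dt)
    on = sum(1 for i in range(n) if ((t0 + i * dt) % period) < duty * period)
    return on / n

# A single mismatched window gives a phase-dependent reading ...
phases = [k * 0.41 for k in range(20)]      # 20 evenly spread starting phases
single = [window_mean(t0) for t0 in phases]
spread = max(single) - min(single)          # the slow wobble seen in the data

# ... but averaging readings taken at many phases converges back to the
# true 30% duty cycle, which is why several samples are averaged.
avg = sum(single) / len(single)
```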
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The completed order form with the parts for the mount is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, overlapping by one object as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the color of the background no longer affects the vision system. However, the color sensor had to be moved to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. We therefore now have to consider light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by light from the LED; however, since the LEDs used in the tests are much brighter than those used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough for showing or broadcasting to interested parties (such as online video streaming) and possibly for presentations.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and FireWire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to be controlled via a FireWire cable out of the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***Magnification scales the field of view inversely: a magnification of x gives 1/x times the FOV, so a 0.5X magnification lens gives 2X the FOV<br />
***Find the camera's focal length at its widest view (the smallest number, in mm), then apply the magnification to it: a 0.5X magnification gives 0.5X the focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles, to calculate the height need for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (from a focal length of 36mm reduced to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
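The height calculation in the steps above can be scripted (a sketch; the 36 mm figure is the usual 35 mm-equivalent frame width assumed by the field-of-view formula):<br />

```python
import math

FRAME_WIDTH_MM = 36.0        # 35 mm-equivalent frame width (assumption)
focal_mm = 36.0 * 0.6        # widest focal length after the 0.6X lens

# Horizontal field of view: FOV = 2 * atan((frame width / 2) / focal length)
half_fov = math.atan((FRAME_WIDTH_MM / 2) / focal_mm)
fov_deg = 2 * math.degrees(half_fov)                    # ~79.6 degrees

# Mount height needed to cover a 9 ft wide floor from above its center.
floor_width_ft = 9.0
height_ft = (floor_width_ft / 2) / math.tan(half_fov)   # ~5.4 ft
```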
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update the vision system to accommodate switching between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
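The lookup-table idea in the last bullet might look like the following sketch (Python for clarity; the 1000 ticks-per-revolution figure is an assumption, and on the e-puck this would be written in C):<br />

```python
import math

TICKS_PER_REV = 1000   # assumed encoder resolution; adjust for the robot

# Precompute sin/cos for every discrete heading the odometry can report,
# replacing runtime sin/cos calls with a table lookup.
SIN = [math.sin(2 * math.pi * k / TICKS_PER_REV) for k in range(TICKS_PER_REV)]
COS = [math.cos(2 * math.pi * k / TICKS_PER_REV) for k in range(TICKS_PER_REV)]

def heading_vector(tick):
    """Unit vector for a heading given in wheel ticks."""
    k = tick % TICKS_PER_REV
    return COS[k], SIN[k]
```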
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18876RGB Swarm Robot Project Documentation2010-07-13T15:33:12Z<p>Ryan Cook: /* SwarmRobot */</p>
<hr />
<div>== Overview ==<br />
The swarm robot project has gone through several phases, with each phase focusing on different aspects of swarm robotics and the implementation of the project. This entry focuses on the most recent phase of the project, covering topics such as, but not limited to, '''Xbee Interface Extension Boards''', '''LED light boards''', and '''changes made to the Machine Vision Localization System''', and the overall conversion to LED boards and a controlled light environment. These entries help provide insight into setup and specific details to allow others to replicate or reproduce our results, and to provide additional information for those working on similar projects or this project at a later time. Other articles in the '''Swarm Robot Project''' category focus on topics such as the swarm theory and algorithms implemented, as well as previous phases of the project, such as motion control and consensus estimation. You may reach these articles and others by following the category link at the bottom of every page, or through this link - [[:Category:SwarmRobotProject|'''Swarm Robot Project''']].<br />
<br />
==RGB Swarm Quickstart Guide==<br />
<br />
Refer to [[RGB Swarm Quickstart Guide|'''RGB Swarm Quickstart Guide''']] for information on how to start and use the RGB Swarm system and its setup.<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-Puck in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version, such as the parts used and the Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the XBee board: XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project, and for this reason it has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. To check the board design for problems, it was first tested with the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, showed no problem communicating with the other e-pucks. All e-pucks, including e-puck #3, located themselves at the desired positions defined by the goal.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths, and the current through each photodiode is determined by the amount of the corresponding light it receives. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used on the e-Puck for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage V<sub>i</sub> is amplified to V<sub>o</sub> = 20V<sub>i</sub>. As mentioned above, the gain can be adjusted by setting the resistance of the potentiometer.<br />
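Plugging numbers into the gain equation gives a feel for the signal levels (a sketch; the 5K potentiometer setting is inferred from the stated gain of about 20, not a measured value):<br />

```python
R2 = 100e3         # fixed resistor from the diagram
R_pot = 5e3        # potentiometer setting that yields the stated gain of ~20
gain = R2 / R_pot  # |Vo / Vi| = R2 / R_pot

v_i = 0.15         # example photodiode voltage across R1
v_o = gain * v_i   # amplified voltage seen at the ADC input
```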
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5 and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper connects the color sensor on the LED pattern board to the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize it; however, the black dot pattern requires a white background. The new LED pattern board uses LEDs of the proper brightness instead of the black dot pattern, so the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background, along with the corresponding modification made to the machine vision code, is presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files are linked above.<br />
Although the black dots are replaced with LEDs, each pattern of dots is preserved: the horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce the drain on the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with the switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. This LED was chosen because it has the proper brightness and power consumption, and the diffusion lets the machine vision system capture the LED from any position. The resistors used are 68.7 ohm. <br />
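For reference, the LED current set by the 68.7 ohm resistor can be estimated with Ohm's law. The 3 V supply (two AAA cells in series) and the 2.0 V forward voltage are assumptions for illustration, so verify them against the battery pack and the LED datasheet:<br />

```python
V_BATT = 3.0    # two AAA cells in series (assumed fresh alkaline)
V_F = 2.0       # assumed LED forward voltage; check the LED datasheet
R = 68.7        # current-limiting resistor from the board design, ohms

i_led_ma = 1000 * (V_BATT - V_F) / R   # roughly 14.6 mA per LED
```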
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as shown in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests needed to be done to measure the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to see how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased in 0.25 inch increments from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient Light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED, most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under the room light, the outputs increase by 25.9%, 22.6%, and 77.4% of the original (no-LED) output for the R, G, and B channels, respectively. Once the LED is 2.5 inches away from the color sensor, the outputs become very close to their original values. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used in this experiment is much brighter than the ones used on the LED pattern board: about 4 times brighter. Thus, more experiments with the LEDs actually used on the LED pattern board are required.<br />
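As a quick sanity check, the interference percentages can be recomputed from the with-ambient-light table above (a Python sketch using only the table values; each increase is taken relative to the no-LED baseline):<br />

```python
# "No LED" row and "1 inch" row from the with-ambient-light table above.
baseline = {"R": 1.0, "G": 1.4, "B": 0.469}
at_1_inch = {"R": 1.259, "G": 1.716, "B": 0.832}

# Percent increase of each channel over its no-LED baseline.
increase_pct = {ch: round(100 * (at_1_inch[ch] - baseline[ch]) / baseline[ch], 1)
                for ch in baseline}
print(increase_pct)  # {'R': 25.9, 'G': 22.6, 'B': 77.4}
```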
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily saturates.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º increments from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are placed in the same horizontal plane. The LED faces toward the color sensor (that is, the LED is parallel to the horizontal plane with its head facing the color sensor, which sits in the same plane), and the color sensor faces upward. As the angle increases by 15º each step, an increasing amount of light from the LED shines onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it; the LED and the color sensor are then on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* V<sub>i</sub> is measured rather than V<sub>o</sub> because the amplified output becomes too large.<br />
<br />
With Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the peak under the room light, the outputs increase by 15.6%, 19.1%, and 28.7% of the 0º value of V<sub>i</sub> for the R, G, and B channels, respectively. As the angle approaches 90º, the output becomes very close to the value at 0º. The reason the interference is reduced near 90º is that the ambient light is blocked by the LED board: the plane holding the LEDs casts a shadow on the color sensor, so the amount of ambient light the sensor receives decreases, and the output returns close to its original value as the angle increases.<br />
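The same sanity check applies here, comparing the 45º interference peak against the 0º values in the with-ambient-light angle table (a Python sketch using only the table values):<br />

```python
# 0-degree and 45-degree rows from the with-ambient-light angle table above.
at_0_deg = {"R": 0.437, "G": 0.425, "B": 0.404}
at_45_deg = {"R": 0.505, "G": 0.506, "B": 0.520}

# Percent increase of V_i at the 45-degree interference peak.
increase_pct = {ch: round(100 * (at_45_deg[ch] - at_0_deg[ch]) / at_0_deg[ch], 1)
                for ch in at_0_deg}
print(increase_pct)  # {'R': 15.6, 'G': 19.1, 'B': 28.7}
```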
<br />
====Next Steps====<br />
The LED Pattern Board design above needs the following modifications:<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be fully inserted through the hole.<br />
*Currently, a 10 pos. 2 mm pitch socket is used to connect the color sensor to the circuit with wires. Instead, a proper header for the color sensor should be found to make this connection more convenient.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control for easier management. The version control system used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next you will need to have access to the LIMS server. Go to one of the swarm PCs or any PC which is set up to access the server and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open GIT (Git Bash Shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MatLab files for receiving data from the epucks. These files make use of the SerialCommands DLL to send commands to and receive data from the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by Visual Studio and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. To access these files, simply open the workspace rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded onto the epuck by following the instructions in [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']]<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands DLL for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the XBees. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. This projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor which the epucks roll on must be completely enclosed so that the only light which reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable, light-colored floor may be desirable. The floor is taped down with packing tape. The tape should not be removed, as the floor has markings which show where the camera calibration dots should go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. Its digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb passes through a color wheel which shines red, green, and blue onto the mirror array. The fraction of time the mirrors are on (reflecting light) versus off determines the intensity of light. For example, if a bright red were being projected, the mirrors would be on more than off during each interval. In the case of our projector, that interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (approximately 2877 mm x 1943 mm).<br />
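Assuming the native 1024 x 768 resolution maps one-to-one onto the measured image, the floor resolution works out to just under 3 mm per pixel (a quick back-of-the-envelope calculation, not a measured value):<br />

```python
# Projected image size (inches) and native projector resolution.
width_in, height_in = 113.25, 76.5
res_x, res_y = 1024, 768

# Millimeters of floor covered by one projector pixel in each direction.
mm_per_px_x = round(width_in / res_x * 25.4, 2)
mm_per_px_y = round(height_in / res_y * 25.4, 2)
print(mm_per_px_x, mm_per_px_y)  # 2.81 2.53
```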
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse width modulates the color output. So you need to average the measured intensity over the period of the projector to measure the color. The period of the projector is 8.2ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording over the XBee radios, it was discovered that the recorded RGB values would fluctuate across a period of several minutes, skewing the data. After further investigation of the projector, including with the digital oscilloscopes, the problem was traced to the fact that the projector does not refresh at exactly 120 Hz, so its period differs slightly from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, as each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are largely averaged out, resulting in a clean and stable trace without low-frequency modulations.<br />
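The effect of the sampling window can be sketched numerically. The simulation below is an illustration, not the actual e-puck firmware, and the 30% duty cycle is an arbitrary assumption: an 8.0 ms window gives phase-dependent readings against the 8.2 ms PWM period, while a window matched to the true period recovers the duty cycle at any phase.<br />

```python
PERIOD_MS = 8.2   # measured PWM period of the projector
DUTY = 0.3        # assumed on-fraction for some projected color value

def pwm(t_ms):
    """1.0 while the DLP mirrors reflect light toward the floor, else 0.0."""
    return 1.0 if (t_ms % PERIOD_MS) < DUTY * PERIOD_MS else 0.0

def window_mean(start_ms, window_ms, steps=4000):
    """Average intensity seen over one sampling window."""
    dt = window_ms / steps
    return sum(pwm(start_ms + i * dt) for i in range(steps)) / steps

# A window matched to the PWM period measures the duty cycle at any phase...
matched = [window_mean(p, PERIOD_MS) for p in (0.0, 1.3, 5.9)]
# ...while an 8.0 ms window drifts 0.2 ms in phase per period and wanders.
drifting = [window_mean(p, 8.0) for p in (0.0, 1.2, 3.0)]

print(round(max(matched) - min(matched), 3))   # ~0.0 (phase-independent)
print(round(max(drifting) - min(drifting), 3)) # noticeably larger spread
```

Averaging several consecutive windows, as the firmware does, similarly smooths out the slow beat between the sampling window and the true PWM period.<br />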
<br />
===Projector Mount===<br />
The projector mount was ordered online using 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their fields of view overlap by the size of one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found that it works without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located; the color of the background also does not affect the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. Thus, we now have to consider light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by the light from the LED. However, since the LED used in our interference tests was much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed for control via a firewire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***The amount of magnification (x) = 1/x amount of FOV; thus a 0.5X magnification lens = 2X amount of FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), and then apply the magnification appropriately, so 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º using a focal length of 36mm converted to 21.6mm by 0.6X magnification, then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
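The worked example above can be checked in a few lines (a sketch: the 36 mm sensor width is the 35 mm-equivalent frame width such FOV calculators assume, and the floor and focal-length numbers come from the example):<br />

```python
import math

focal_mm = 36.0 * 0.6       # widest focal length after the 0.6X wide-angle lens
sensor_width_mm = 36.0      # 35 mm-equivalent frame width assumed by the calculator
floor_width_ft = 9.0

# Horizontal field of view, then the mounting height that covers the floor.
h_fov_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))
height_ft = (floor_width_ft / 2) / math.tan(math.radians(h_fov_deg / 2))

print(round(h_fov_deg, 1))  # 79.6
print(round(height_ft, 1))  # 5.4
```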
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate the discrete number of possible angles per wheel (use ticks/revolution), create a look-up table, check the angle from the vision system packets, and use the standardized table to calculate the appropriate vector (reducing the total number of calculations and eliminating use of sin/cos)<br />
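The look-up-table idea in the last bullet can be sketched as follows (a hypothetical Python illustration; the tick count is an assumed value, and a real implementation would be fixed-point C on the e-puck's dsPIC):<br />

```python
import math

TICKS_PER_REV = 1000  # assumed number of discrete angles per wheel revolution

# Precompute sine once at startup; index at runtime instead of calling sin/cos.
SIN_TABLE = [math.sin(2 * math.pi * k / TICKS_PER_REV) for k in range(TICKS_PER_REV)]

def sin_from_ticks(ticks):
    """Table lookup replacing a runtime sin() call."""
    return SIN_TABLE[ticks % TICKS_PER_REV]

def cos_from_ticks(ticks):
    # cos(x) = sin(x + quarter revolution), so one table serves both.
    return SIN_TABLE[(ticks + TICKS_PER_REV // 4) % TICKS_PER_REV]

print(round(sin_from_ticks(250), 6))  # quarter revolution
print(round(cos_from_ticks(500), 6))  # half revolution
```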
<br />
[[Category:SwarmRobotProject]]</div>
<hr />
<div><br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version 1]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Extension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, and an actual image of the board mounted on an e-puck is seen in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the XBee board, the XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the XBee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was actually built and implemented on e-puck #3. To check for any problems with the board design, it was first tested with the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, did not show any problem communicating with the other e-pucks. Consistent with the defined goal, all e-pucks, including e-puck #3, moved themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current starts to flow through the photodiode and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as the input signal, amplifying it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter inputs that had been used on the e-puck for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
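For example, the gain of approximately 20 quoted above implies a potentiometer setting of about 5000 ohms (a quick numerical sketch of the equation, not measured values):<br />

```python
R2 = 100e3        # ohms, fixed resistor from the diagram
gain = 20.0       # approximate gain quoted for the circuit

R_pot = R2 / gain  # potentiometer resistance implied by that gain
print(R_pot)       # 5000.0 ohms

def v_out(v_in, r_pot=R_pot, r2=R2):
    """|Vo| = |Vi * R2 / Rpot| from the equation above."""
    return abs(v_in * r2 / r_pot)

print(round(v_out(0.1), 3))  # a 0.1 V photodiode signal reaches the ADC as 2.0 V
```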
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedance op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns on the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of this LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern on each e-puck was used by the machine vision system to recognize it. However, the black dot pattern requires a white background in order for the machine vision system to recognize e-pucks. The new LED pattern board uses LEDs of suitable brightness instead of the black dot pattern, so the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background, along with the corresponding modification to the machine vision code, is presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker.<br />
Although we replaced the black dots with LEDs, each dot pattern is maintained: the horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. In order to reduce the power drain on the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with the switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. They were chosen because they have suitable brightness and power consumption, and because the diffusion lets the machine vision system capture the LEDs from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the figure, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done in order to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to see how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because it covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased in 0.25 inch increments from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED, and most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under the room light, the R, G, and B outputs increase by 25.9%, 22.6%, and 77.4% over their no-LED values. Once the LED is 2.5 inches away from the color sensor, the output becomes very close to its original value. <br />
This experiment shows that light from the LEDs can affect the color sensor. However, the LED used in this experiment is much brighter than the ones used on the LED pattern board (roughly 4 times the millicandela rating), so further experiments with the LEDs actually used on the LED pattern board are required.<br />
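The percentage figures can be recomputed directly from the with-ambient-light table above (a quick Python check; the increases are taken relative to each channel's no-LED baseline):<br />

```python
# With-ambient-light readings (V) copied from the table above.
no_led = {"R": 1.0, "G": 1.4, "B": 0.469}
led_1in = {"R": 1.259, "G": 1.716, "B": 0.832}

# Percent increase of each channel over its no-LED baseline.
increase = {
    c: round(100.0 * (led_1in[c] - no_led[c]) / no_led[c], 1) for c in "RGB"
}
# Red and green come out to 25.9% and 22.6%; blue is 77.4% relative to its
# no-LED baseline (the 0.363 V rise is 43.6% of the 1-inch reading instead).
```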
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike in the first experiment, the pre-amplification voltage V<sub>i</sub> is measured, since the amplified output V<sub>o</sub> easily saturates.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º increments from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are placed in the same horizontal plane, with the LED facing toward the color sensor (that is, the LED lies parallel to the horizontal plane with its head pointed at the color sensor) and the color sensor facing upward. As the angle is increased by 15º each step, an increasing amount of light from the LED shines onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing straight down at it; the LED and the color sensor are then on the same vertical line.<br />
<br />
5. The pre-amplification voltage of each photodiode, V<sub>i</sub> as shown in the circuit diagram above, is measured.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle approaches 90º. At 45º under the room light, the outputs increase by 15.6%, 19.1%, and 28.7% over their 0º values. As the angle approaches 90º, the output returns very close to its 0º value. The reason the interference drops off near 90º is that the ambient light is blocked: in this experiment the LEDs were mounted on the LED plane, which blocks the room light and casts a shadow on the color sensor, reducing the total amount of light the sensor receives. That is why the output returns close to its original value as the angle increases.<br />
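The 45º figures can likewise be recomputed from the with-ambient-light table above (a quick Python check; the increases are relative to each channel's 0º value):<br />

```python
# With-ambient-light pre-amplification voltages (V) from the table above.
v_0deg = {"R": 0.437, "G": 0.425, "B": 0.404}
v_45deg = {"R": 0.505, "G": 0.506, "B": 0.520}

# Percent increase of each channel at 45 degrees over its 0-degree value.
increase = {
    c: round(100.0 * (v_45deg[c] - v_0deg[c]) / v_0deg[c], 1) for c in "RGB"
}
# Matches the 15.6%, 19.1%, and 28.7% figures quoted in the analysis.
```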
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to be increased so that it can accommodate the standoffs of the chosen LEDs.<br />
*The hole size for the switch has to be increased so that the switch can be fully inserted through the hole.<br />
*Currently, a 10 pos. 2 mm pitch socket is used to connect the color sensor to the circuit using wires. Instead, a proper header for the color sensor should be found to connect the color sensor and the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control for convenience. The version control system used is Git, http://git-scm.com/. <br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open Git (Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MatLab files for receiving data from the epucks. These files make use of the dll to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which run on the e-puck. To access these files, simply open the workspace rgb_swarm_epucks_rwc.mcw in MPLAB. If any of these files are edited, they will need to be reloaded onto the e-puck by following the instructions in the [[RGB Swarm Robot Quickstart Guide|'''RGB Swarm Robot Quickstart Guide''']].<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the e-pucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the e-pucks pick up light from a projector. The projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the e-pucks roll on must be completely enclosed so that the only light which reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable light-colored floor may be desirable. The floor is taped down with packing tape; the tape should not be removed, as the floor has markings which show where the camera calibration dots go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. Its digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array in turn. The fraction of time each mirror spends on (reflecting light) versus off within a fixed interval determines the intensity of light. For example, if a dim red were being projected, the mirrors would be off more than on during each interval. In the case of our projector that interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
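The duty-cycle relation implied above can be sketched as follows (an idealized model, not the projector's actual firmware):<br />

```python
# Perceived brightness under DLP pulse width modulation: the fraction of
# each 8.2 ms interval the mirror spends reflecting, times full intensity.
FRAME_MS = 8.2  # the projector's mirror-modulation interval (ms)

def perceived_intensity(on_time_ms, full_on=1.0):
    """Average intensity over one 8.2 ms interval for a given mirror on-time."""
    return (on_time_ms / FRAME_MS) * full_on

bright = perceived_intensity(8.2)    # mirror always on: full intensity
dim = perceived_intensity(2.05)      # mirror on a quarter of the interval
```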
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse width modulates the color output, so the measured intensity must be averaged over the projector's period to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when data recording over the XBee radios was first set up, the RGB values fluctuated over a period of several minutes, skewing the data. After further investigation with a digital oscilloscope, the problem was traced to the fact that the projector does not refresh at exactly 120 Hz, so its true period differs slightly from the 8 ms window that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the value. There is time to record 4 samples (32 ms of data), since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. With this averaging, the irregularities due to PWM are phased out, resulting in a clean, stable trace without low-frequency modulation.<br />
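The period-mismatch effect can be illustrated with a minimal simulation (an idealized sketch, not the actual data-acquisition code): averaging over a window that assumes an 8 ms period gives a reading that depends on where the window falls relative to the true 8.2 ms period, while a window spanning a whole number of true periods does not.<br />

```python
TRUE_PERIOD = 8.2   # actual projector period (ms)

def light(t, duty=0.5):
    """Idealized PWM trace seen by the color sensor: 1 while the mirror is on."""
    return 1.0 if (t % TRUE_PERIOD) < duty * TRUE_PERIOD else 0.0

def mean_over(t0, length_ms, steps=8200):
    """Numerically average the trace over [t0, t0 + length_ms]."""
    return sum(light(t0 + k * length_ms / steps) for k in range(steps)) / steps

starts = [i * 0.41 for i in range(20)]   # window start times across one period
spread = lambda xs: max(xs) - min(xs)

# A single 8 ms window (the originally assumed period) is phase-dependent:
assumed_8ms = spread([mean_over(t0, 8.0) for t0 in starts])
# A window covering five true periods (41 ms) reads the same everywhere:
five_periods = spread([mean_over(t0, 41.0) for t0 in starts])
```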
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their fields of view overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black dot patterns on the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the color of the background no longer affects the vision system. However, the color sensor had to be moved to the LED pattern board, since the LED pattern board would otherwise block the sensor on the XBee Interface Extension Board. We therefore now have to consider light interference between the LEDs and the color sensor. In the interference tests, we found that the color sensor is affected by light from the LEDs. However, since the LED used in those tests was much brighter than the LEDs on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to be controlled via firewire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***The amount of magnification (x) = 1/x amount of FOV; thus a 0.5X magnification lens = 2X amount of FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), and then apply the magnification appropriately: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles, to calculate the height need for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm converted to 21.6mm by 0.6X magnification), then the math to get the height is: 9'/2 = 4.5'; 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
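The worked example above can be checked numerically (a Python sketch; the 36 mm figure is the standard 35 mm-film frame width that such FOV calculators assume):<br />

```python
import math

def horizontal_fov_deg(focal_mm, frame_width_mm=36.0):
    """Horizontal angle of view for a 35 mm-equivalent focal length."""
    return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_mm)))

def camera_height_ft(floor_width_ft, fov_deg):
    """Height at which a center-mounted camera just spans the floor width."""
    return (floor_width_ft / 2) / math.tan(math.radians(fov_deg / 2))

fov = horizontal_fov_deg(36.0 * 0.6)   # 0.6X adapter: 36 mm -> 21.6 mm
height = camera_height_ft(9.0, fov)    # height needed over a 9 ft floor
```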
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate the change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing a program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
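The lookup-table idea in the last bullet can be sketched as follows (Python pseudocode for the approach; the 1000 ticks-per-revolution figure is an assumption and should be replaced with the e-puck's actual wheel resolution):<br />

```python
import math

# Assumed wheel resolution; replace with the e-puck's actual ticks/revolution.
TICKS_PER_REV = 1000
# Precomputed once, so no sin/cos is evaluated in the control loop.
SIN_TABLE = [math.sin(2 * math.pi * k / TICKS_PER_REV) for k in range(TICKS_PER_REV)]

def sin_from_ticks(ticks):
    """Replace a runtime sin() call with a table lookup on the tick count."""
    return SIN_TABLE[ticks % TICKS_PER_REV]

def cos_from_ticks(ticks):
    """cos(x) is sin(x) shifted by a quarter revolution."""
    return SIN_TABLE[(ticks + TICKS_PER_REV // 4) % TICKS_PER_REV]
```

On the e-puck itself the table would be a fixed-point array in C, but the indexing scheme is the same.<br />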
<br />
[[Category:SwarmRobotProject]]</div>
<hr />
<div>== Overview ==<br />
The swarm robot project has gone through several phases, with each phase focusing on different aspects of swarm robotics and the implementation of the project. This entry focuses on the most recent phase of the project, covering topics such as, but not limited to, '''Xbee Interface Extension Boards''', '''LED light boards''', and '''changes made to the Machine Vision Localization System''', and the overall conversion to LED boards and a controlled light environment. These entries help provide insight into setup and specific details to allow others to replicate or reproduce our results, and to provide additional information for those working on similar projects or this project at a later time. Other articles in the '''Swarm Robot Project''' category focus on topics such as the swarm theory and algorithms implemented, as well as previous phases of the project, such as motion control and consensus estimation. You may reach these articles and others by following the category link at the bottom of every page, or through this link - [[:Category:SwarmRobotProject|'''Swarm Robot Project''']].<br />
<br />
==RGB Swarm Quickstart Guide==<br />
<br />
Refer to [[RGB Swarm Quickstart Guide|'''RGB Swarm Quickstart Guide''']] for information on how start and use the RGB Swarm system and its setup.<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of XBee Interface Extension Board, designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-Puck seen in the figure in the center. This version of the XBee Interface Board does not contain a color sensor in it. Details about this version of XBee Interface Extension Board, such as parts used and Traxmaker files can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was actually built and implemented on the e-puck #3. In order to see if there is any working problem in this board design, it is first tested with the other e-puck which uses the previous XBee Boards. <br />
<br />
The e-puck #3 upgraded with the new XBee board did not show any problem in communicating with other e-pucks. According to the goal defined, all e-pucks, including e-puck #3, locate themselves to the desired location.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths, and the current through each photodiode is determined by how much of the corresponding light it receives. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that on the e-puck had been used for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
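As a quick numeric check of the gain relation above, the sketch below computes the amplified output. (The 5K potentiometer setting is an assumed value chosen to reproduce the stated gain of 20; the helper names are ours, not part of any project code.)

```c
#include <math.h>

/* Gain of the amplifier stage from the equation above: |Vo/Vi| = R2 / Rpot. */
double sensor_gain(double r2_ohm, double rpot_ohm)
{
    return r2_ohm / rpot_ohm;
}

/* Amplified output Vo for a photodiode voltage Vi, with R2 fixed at 100K. */
double amplified_output(double vi_volts, double rpot_ohm)
{
    return vi_volts * sensor_gain(100e3, rpot_ohm);
}
```

With the potentiometer at 5K, the gain is 100K/5K = 20, so a 0.1 V photodiode signal amplifies to 2.0 V; lowering R<sub>pot</sub> raises the gain.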
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5 and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedance op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the machine vision system recognized each e-puck by its unique black dot pattern. However, the black dot pattern requires a white background. The new LED pattern board uses LEDs of suitable brightness instead of the black dots, so the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below, along with the code modification made to apply the LED pattern to the machine vision system. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files can be downloaded above.<br />
Although the black dots are replaced with LEDs, each dot pattern is maintained. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce power consumption from the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs. The LED board can be turned on and off with the switch.<br />
The millicandela rating of the LEDs used is 4850 mcd, and they have a diffused lens style. These LEDs were chosen because they offer suitable brightness and power consumption, and their diffusion lets the machine vision system capture them from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved from the XBee Interface Extension Board to this LED pattern board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests were needed to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to measure how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because it covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with the ambient light and without the ambient light. <br />
<br />
3. The LED and the color sensor were placed at the same plane, and both are facing upward. <br />
<br />
4. Distance between the color sensor and the LED is increased by 0.25 inch each time from 1 inch to 2.5 inch.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED, and most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the output increases by up to 25.9%, 22.6%, and 77.4% over the no-LED baseline for the R, G, and B channels respectively. When the LED is 2.5 inches away, the output becomes very close to the original value. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used here is about four times brighter (by millicandela rating) than the LEDs used on the LED pattern board, so further experiments with the actual board LEDs are required.<br />
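The relative increases discussed here can be checked directly against the ambient-light table. A small sketch (the helper name is ours), computing each channel's increase over its no-LED baseline:

```c
#include <math.h>

/* Percent increase of a reading over its no-LED baseline value. */
double percent_increase(double baseline, double reading)
{
    return (reading - baseline) / baseline * 100.0;
}
```

From the ambient-light table at 1 inch: percent_increase(1.0, 1.259) is about 25.9 for red, percent_increase(1.4, 1.716) about 22.6 for green, and percent_increase(0.469, 0.832) about 77.4 for blue.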
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to measure how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, the pre-amplification voltage V<sub>i</sub> is measured, since the amplified output V<sub>o</sub> easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given in the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with the ambient light and without the ambient light. <br />
<br />
3. In this experiment, the distance between LED and color sensor is kept constant, 1 inch. <br />
<br />
4. Angle between LED and color sensor is increased by 15º each time from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor lie in the same horizontal plane: the LED is parallel to the plane with its head facing the color sensor, and the color sensor faces upward. The angle is increased by 15º each time, so that increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor and faces it straight on; that is, the LED and the color sensor are on the same vertical line, with the LED facing downward.<br />
<br />
5. The pre-amplification voltage, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The interference increases with angle, peaking at 45º, then decreases as the angle approaches 90º. At the peak under room light, the output increases by up to 15.6%, 19.1%, and 28.7% over the 0º value for the R, G, and B channels respectively. As the angle reaches 90º, the output returns close to its 0º value. The interference drops near 90º because the board holding the LED blocks the ambient light: in this experiment the LEDs were mounted on an LED plane, and that plane casts a shadow on the color sensor, reducing the amount of ambient light it receives. That is why the output approaches its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following ways:<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the LED chosen.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10 pos. 2 mm pitch socket is used to connect the color sensor to the circuit using wires. Instead, a proper header for the color sensor should be found to connect the color sensor and the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control. The version control system used is Git, http://git-scm.com/. <br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open Git (Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MATLAB files for receiving data from the epucks. These files use the SerialCommands DLL to send and receive commands with the epucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Quickstart_Guide#Analysis_Tools|'''RGB Swarm Quickstart Guide: Analysis Tools''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB.<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e., the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the e-pucks pick up light from a projector. The projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor on which the e-pucks roll must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable light-colored floor may be preferable. The floor is taped down with packing tape; the tape should not be removed, as the floor has markings showing where the camera calibration dots go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors that reflect the light from the bulb. The light from the bulb is shone through a color wheel, which projects red, green, and blue onto the mirror array. The frequency with which the mirrors turn on and off (reflect light or not) determines the intensity of the light. For example, if a dark red were being projected, the mirrors would be off more than on over a certain interval. For our projector that interval is 8.2 milliseconds. See the pulse width modulation section below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone-shaped projection on the floor could not be avoided. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (approximately 2876 mm x 1943 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so the measured intensity must be averaged over the projector's period, 8.2 ms, to measure the color.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording over the XBee radios, it was discovered that the RGB values fluctuated over a period of several minutes, skewing the data. Further investigation of the projector with digital oscilloscopes traced the problem to the fact that the projector does not refresh at exactly 120 Hz, giving a period slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or about 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. This averaging phases out the irregularities due to PWM, resulting in a clean and stable trace without low-frequency modulation.<br />
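The averaging fix can be sketched as follows. (A minimal sketch: on the e-puck the samples come from the color-sensor ADC channels, and the names here are illustrative, not the firmware's actual routines.)

```c
/* Number of color-sensor samples averaged per reading; each sample
   spans roughly one 8.2 ms projector period. */
#define NUM_SAMPLES 4

/* Average n samples so that PWM flicker and the slight mismatch between
   the sampling interval and the projector's true period cancel out. */
double average_samples(const double *samples, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    return sum / n;
}
```

Four red-channel samples such as {1.30, 1.18, 1.26, 1.22}, caught at different phases of the PWM cycle, average to a stable 1.24 V.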
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, with their fields of view overlapping by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the color of the background no longer affects the vision system. However, the color sensor had to be moved to the LED pattern board, since the board would otherwise block a sensor located on the XBee Interface Extension Board. We therefore now have to consider light interference between the LEDs and the color sensor. The interference tests showed that the color sensor is affected by light from the LEDs; however, since the test LED was much brighter than the LEDs used on the LED pattern board, further experiments are needed for more accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed for control via firewire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***The amount of magnification (x) = 1/x amount of FOV; thus a 0.5X magnification lens = 2X amount of FOV<br />
***Find the focal length the camera has at its widest view (the smallest number, in mm), then apply the magnification: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles, to calculate the height need for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º (using a focal length of 36mm converted to 21.6mm by 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing program to 'lawnmower' the e-puck, allowing it to collect data from all positions and orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>
<hr />
<div>
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, and an actual image of the board mounted on an e-Puck appears in the center figure. This version of the XBee Interface Board does not include a color sensor. Details about this version, such as the parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the XBee board, the XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project; for this reason, it has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and installed on e-puck #3. To check the design for problems, it was first tested alongside the other e-pucks, which use the previous XBee boards. <br />
<br />
The e-puck #3 upgraded with the new XBee board showed no problems communicating with the other e-pucks: given a goal, all e-pucks, including e-puck #3, drove themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when a photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths, and the current through each photodiode is determined by the amount of the corresponding light it receives. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used on the e-Puck for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus the input voltage V<sub>i</sub> is amplified to 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted by changing the resistance of the potentiometer.<br />
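As a numeric sanity check of the equation above, here is a short sketch. The 5 kΩ potentiometer setting is inferred from the stated gain of ~20 with R<sub>2</sub> = 100K; it is not a measured value.<br />

```c
#include <math.h>

/* Output of the color-sensor gain stage: Vo = Vi * (R2 / Rpot).
 * R2 = 100 kOhm; a pot setting of about 5 kOhm gives the gain of
 * roughly 20 quoted in the text. */
double sensor_output(double vi, double r2_ohm, double rpot_ohm)
{
    return vi * (r2_ohm / rpot_ohm);
}
```

For example, V<sub>i</sub> = 0.25 V yields V<sub>o</sub> = 0.25 × (100K / 5K) = 5 V at a gain of 20.<br />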
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1" header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting the RTS and CTS pins if you used header pins (Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedance op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns on the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. For the color sensor to collect data properly, it must therefore be moved from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper connects the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of this LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern on each e-puck was used by the machine vision system to recognize that e-puck. However, the black dot pattern requires a white background. The new LED pattern board replaces the black dots with LEDs of the proper brightness, so the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below, as is the code modification needed to use the LED pattern with the machine vision system. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board as well as a 2x2 array, along with the Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker.<br />
Although the black dots were replaced with LEDs, each pattern of dots is preserved. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce the load on the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the board can be turned on and off with a switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. This LED was chosen because it has appropriate brightness and power consumption, and its diffusion lets the machine vision system capture it anywhere on the floor. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as shown in the figure, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the remainder of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done to determine the effect of the LED light on the color sensor, due to potential interference. The first experiment is designed to show how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED was increased in 0.25-inch steps from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output of each photodiode, V<sub>o</sub> as shown in the circuit diagram above, was measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As the two graphs above show, the color sensor is affected by the light from the LED, and most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the outputs increase by up to 25.9%, 22.6%, and 43.6% relative to the original outputs. Once the LED is 2.5 inches from the color sensor, the outputs return very close to their original values. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used here is about 4 times brighter than the ones on the LED pattern board, so further experiments with the LEDs actually used on the board are required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to show how much interference is caused as the angle between the LED and the color sensor changes. Unlike in the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º steps from 0º to 90º.<br />
<br />
At 0º, the LED and the color sensor lie in the same horizontal plane: the LED is parallel to that plane with its head pointing at the color sensor, which faces upward. As the angle increases in 15º steps, more of the LED's light shines onto the color sensor. At 90º, the LED is directly above the color sensor, facing it directly: the two are on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, is measured for each photodiode.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the peak under room light, the outputs increase by up to 15.6%, 19.1%, and 28.7% over V<sub>i</sub>. As the angle reaches 90º, the output returns close to its value at 0º. The interference drops near 90º because the LED board blocks the ambient light: in this experiment the LEDs were mounted on the LED plane, which casts a shadow on the color sensor, so the amount of light the sensor receives decreases and the output approaches its original value as the angle increases.<br />
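The percentage figures above can be reproduced with a small helper (a sketch; the function name is ours, and the values come from the "with ambient light" table):<br />

```c
#include <math.h>

/* Relative increase of a sensor reading over its baseline, in percent.
 * E.g. the red channel at 45 degrees vs 0 degrees:
 * 100 * (0.505 - 0.437) / 0.437 = 15.6 %. */
double percent_increase(double baseline, double reading)
{
    return 100.0 * (reading - baseline) / baseline;
}
```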
<br />
====Next Steps====<br />
The LED Pattern Board design above needs the following modifications:<br />
*The hole size for the LEDs must increase to accommodate the standoff of the chosen LED.<br />
*The hole size for the switch must increase so the switch can be fully inserted through the hole.<br />
*Currently, a 10 pos. 2 mm pitch socket connects the color sensor to the circuit using wires. A proper header for the color sensor should be found instead, to connect the sensor and the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off the wiki and placed into version control for ease of maintenance. The version control system used is Git, http://git-scm.com/. <br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now open Git (the Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MATLAB files for receiving data from the e-pucks. These files use the SerialCommands DLL to send and receive commands with the e-pucks. A more detailed description of how to use these files can be found in [[RGB_Swarm_Quickstart_Guide#Software|'''RGB Swarm Quickstart Guide: Software''']]<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by Visual Studio and is used for its code-completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB.<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs at a directory containing Vision System configuration information<br />
(i.e. the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the positions of the e-pucks, and sends this information back to the e-pucks over an XBee radio.<br />
It acts as an indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of the packets used for communicating over<br />
the XBee radio. This code is compiled by Visual Studio and used in the SerialCommands DLL for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
the same source code is shared for functions common to the e-pucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
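As an illustration of the shared-source idea, a packet definition plus checksum written in plain C can be compiled both by Visual C++ and by MPLAB. The struct fields and the XOR checksum below are hypothetical, not the project's actual wire format:<br />

```c
#include <stdint.h>

/* Hypothetical position packet -- illustrative only; the real
 * XBeePackets code defines the project's actual layout. */
typedef struct {
    uint8_t id;          /* e-puck number                   */
    int16_t x_mm, y_mm;  /* position from the vision system */
    int16_t theta_mdeg;  /* heading in millidegrees         */
} SwarmPacket;

/* XOR checksum over a byte buffer.  Because this is plain C with no
 * platform dependencies, the same file compiles under Visual C++ for
 * the PC and under MPLAB for the e-puck's dsPIC. */
uint8_t xor_checksum(const uint8_t *buf, int len)
{
    uint8_t c = 0;
    for (int i = 0; i < len; i++)
        c ^= buf[i];
    return c;
}
```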
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the e-pucks pick up light from a projector. The projector must project onto the floor so that the top-mounted light sensors can pick up the light. The floor the e-pucks roll on must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of incoming light and are sewn together so that no light passes between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl is not very durable, but holds up as long as no one steps on it with shoes. In the future, a more durable, light-colored floor may be wanted. The floor is taped down with packing tape; the tape should not be removed, as the floor has markings showing where the camera calibration dots and the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. Its digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors that reflect the light from the bulb. The bulb's light shines through a color wheel, which casts red, green, and blue onto the mirror array. The frequency with which the mirrors switch between reflecting light and not reflecting it determines the intensity of the light. For example, if a dark red is being projected, the mirrors are off more than on within a certain interval; for our projector that interval is 8.2 milliseconds. See the pulse width modulation section below.<br />
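Under a simple linear-PWM model, the mirror on-time for an 8-bit channel level can be sketched as follows. This is a simplification (real DLP bit-plane sequencing is more complicated); the 8.2 ms frame comes from the measurements above:<br />

```c
#include <math.h>

/* Time (ms) a mirror spends reflecting light within one 8.2 ms frame
 * for an 8-bit channel level, assuming simple linear PWM. */
double mirror_on_time_ms(int level)
{
    const double frame_ms = 8.2;  /* measured projector period */
    return frame_ms * (double)level / 255.0;
}
```

A dark red therefore corresponds to a short on-time within each frame, which the color sensor sees as a pulse train rather than a steady level.<br />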
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so to get a 1:1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystoned projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates the color output, so you need to average the measured intensity over the projector's period, which is 8.2 ms, to measure the color.<br />
<br />
Pulse width modulation can cause problems when recording data. For instance, when data recording over the XBee radios was first set up, the RGB values fluctuated over a period of several minutes, skewing the data. Investigating the projector, including with the digital oscilloscopes, traced the problem to the fact that the projector does not run at exactly 120 Hz, so its period is slightly different from the 8 ms that was being used to sample data. The solution is to record several samples (currently 4), average them, and use the average as the correct value. There is time for 4 samples, or 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The averaging phases out the irregularities due to PWM, giving a clean, stable trace without low-frequency modulation.<br />
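The averaging step can be sketched as follows (a minimal sketch; on the robot the samples are actual ADC reads spaced roughly one projector period apart):<br />

```c
#include <math.h>

#define N_SAMPLES 4  /* samples averaged per reported value, as above */

/* Average color-sensor samples taken across several projector PWM
 * periods so the modulation and the slight 120 Hz drift cancel out. */
double average_samples(const double samples[], int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    return sum / n;
}
```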
<br />
===Projector Mount===<br />
The projector mount was ordered online using 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, overlapping by one object as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black dot patterns on the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the color of the background no longer affects the vision system. However, the color sensor had to be moved to the LED pattern board, since the board would otherwise block the sensor on the XBee Interface Extension Board. We therefore had to consider light interference between the LEDs and the color sensor. The interference tests showed that the color sensor is affected by light from an LED; however, since the test LED was much brighter than the LEDs used on the LED pattern board, further experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments as they take place inside the tent. The quality must be high enough to show or broadcast to interested parties (such as online video streaming) and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, fireware card<br />
**Mini DV cameras seem to be the best bet as they are designed to accommodate for control via firewire cable due to necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates 9' x 6' floor, allowing for the camera to be only 5.4' off the ground<br />
***The amount of magnification (x) = 1/x amount of FOV; thus a 0.5X magnification lens = 2X amount of FOV<br />
***Find the amount of focal length the camera has at its widest view (this is the smallest number, and in mm), and then apply the magnification appropriately), so 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º using a focal length of 36mm converted to 21.6mm by 0.6X magnification, then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
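The height calculation in the example above can be scripted. This is a minimal sketch; the 36 mm frame width assumes the FOV calculator uses the standard 35 mm-film horizontal frame size, which is an assumption not stated in the text:<br />

```python
import math

def camera_height(floor_width_ft, focal_mm, magnification, frame_width_mm=36.0):
    """Camera height (ft) needed to capture the full floor width.

    frame_width_mm=36.0 assumes the 35 mm-film frame width used by the
    online FOV calculator referenced above (an assumption).
    """
    effective_focal = focal_mm * magnification               # e.g. 36 mm * 0.6 = 21.6 mm
    half_fov = math.atan((frame_width_mm / 2) / effective_focal)  # half the horizontal FOV, radians
    return (floor_width_ft / 2) / math.tan(half_fov)

# Example from the text: 36 mm lens, 0.6X wide-angle adapter, 9 ft floor width
h = camera_height(9.0, 36.0, 0.6)
print(round(h, 1))  # → 5.4
```

This reproduces the 5.4' mounting height derived above.<br />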
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate the change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate the discrete # of possible angles per wheel (use ticks/revolution), create a lookup table, check the angle from the vision system packets, and use the standardized table to calculate the appropriate vector (reducing the total number of calculations and eliminating use of sin/cos)<br />
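The lookup-table idea above can be sketched as follows. This is a minimal illustration in Python rather than e-puck C code, and the table size of 256 entries is an illustrative stand-in for the actual ticks/revolution value:<br />

```python
import math

N = 256  # illustrative number of discrete angles (would be derived from ticks/revolution)
SIN_TABLE = [math.sin(2 * math.pi * i / N) for i in range(N)]  # precomputed once

def fast_sin(angle_index):
    """Sine of the angle 2*pi*angle_index/N via table lookup (no sin() at run time)."""
    return SIN_TABLE[angle_index % N]

def fast_cos(angle_index):
    # cos(x) = sin(x + pi/2), i.e. a quarter-table offset
    return SIN_TABLE[(angle_index + N // 4) % N]
```

On the robot this would be a fixed-point integer table in C; the principle of trading a small table for repeated trig calls is the same.<br />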
<br />
[[Category:SwarmRobotProject]]</div>
<hr />
<div>
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-puck seen in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. To check for any problems in the board design, it was first tested against the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, showed no problems communicating with the other e-pucks. According to the defined goal, all e-pucks, including e-puck #3, located themselves at the desired positions.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the current through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used on the e-puck for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage V<sub>i</sub> is amplified to 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted by changing the resistance of the potentiometer.<br />
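As a quick sanity check of the gain equation, the stated gain of ~20 with R<sub>2</sub> = 100K implies the potentiometer is set near 5K. That 5 kΩ value is inferred here, not measured; a minimal sketch:<br />

```python
def amplified_output(v_in, r2=100e3, r_pot=5e3):
    """|Vo| = |Vi| * R2 / Rpot for the amplifier stage described above.

    r_pot=5e3 is inferred from the stated gain of ~20 with R2 = 100K;
    the actual potentiometer setting on a given board may differ.
    """
    return abs(v_in) * r2 / r_pot

print(amplified_output(0.5))  # → 10.0 (a gain of 20)
```

Turning the potentiometer toward a larger resistance lowers the gain, and vice versa.<br />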
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High-impedance op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of this LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern on each e-puck was used by the machine vision system to recognize it. However, the black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of suitable brightness instead of the black dot pattern. By doing so, the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below. In addition, in order to apply this LED pattern to the machine vision system, we made a modification to the code, which is also presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED pattern board was created using Traxmaker.<br />
Although we replaced the black dots with LEDs, we maintained each dot pattern. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. In order to reduce power consumption from the e-puck battery, we added a separate pair of AAA batteries to supply power to the LEDs. The LED board can be turned on and off with the switch.<br />
The millicandela rating of the LEDs used is 4850 mcd, and the LEDs have a diffused lens style. These LEDs were chosen because they have appropriate brightness and power consumption, and because they are diffused, the machine vision system can capture them from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor had to be moved from the XBee Interface Extension Board to this LED pattern board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests needed to be done to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to measure how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because it covers the entire wavelength range of visible light. An experiment with a white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased by 0.25 inch each time, from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output of each photodiode, V<sub>o</sub> as shown in the circuit diagram above, is measured.<br />
<br />
With Ambient Light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED, and most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. When the color sensor is most affected by the LED in the presence of room light, the outputs increase by 25.9%, 22.6%, and 77.4% over the no-LED values for R, G, and B respectively. Once the LED is 2.5 inches away from the color sensor, the outputs become very close to their original values. <br />
In this experiment, we see that light from LEDs can affect the color sensor. However, the LED used in this experiment is about 4 times brighter than the ones used on the LED pattern board. Thus, further experiments with the LEDs actually used on the LED pattern board are required.<br />
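The interference percentages can be reproduced directly from the with-ambient-light table, as the increase at 1 inch relative to the "No LED" baseline. A minimal sketch:<br />

```python
# Values copied from the "Distance vs Amplified Output" table (with ambient light)
baseline = {"R": 1.0,   "G": 1.4,   "B": 0.469}  # "No LED" row
at_1in   = {"R": 1.259, "G": 1.716, "B": 0.832}  # "1 inch" row (closest, worst case)

for ch in ("R", "G", "B"):
    pct = 100 * (at_1in[ch] - baseline[ch]) / baseline[ch]
    print(f"{ch}: +{pct:.1f}%")  # → R: +25.9%  G: +22.6%  B: +77.4%
```

The same calculation applied to the 2.5 inch row shows the increases shrinking to a few percent, matching the observation that the outputs return toward baseline with distance.<br />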
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to measure how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased by 15º each time, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are in the same horizontal plane: the LED lies parallel to that plane with its head facing the color sensor, and the color sensor faces upward. We increased the angle by 15º each time, so that increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it: the LED and the color sensor are then on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification of each photodiode, V<sub>i</sub> as shown in the circuit diagram above, is measured.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. When the color sensor is most affected by the LED in the presence of room light, the outputs increase by up to 15.6%, 19.1%, and 28.7% of V<sub>i</sub> for R, G, and B respectively. As the angle approaches 90º, the output becomes very close to its value at 0º. The reason the interference falls off near 90º is that the ambient light is blocked by the LED board: in this experiment the LEDs were mounted on a plane, and that plane casts a shadow on the color sensor, reducing the amount of ambient light it receives. That is why the output returns toward its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified as follows.<br />
*The hole size for the LEDs must be increased to accommodate the standoff of the chosen LED<br />
*The hole size for the switch must be increased so that the switch can be fully inserted through the hole<br />
*Currently, a 10 pos. 2 mm pitch socket is used to connect the color sensor to the circuit with wires. Instead, a proper header for the color sensor should be found to connect the sensor and the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control for convenience. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next you will need to have access to the LIMS server. Go to one of the swarm PCs, or any PC which is set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open GIT (Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find MatLab files for receiving data from the epucks. These files make use of the dll to send and receive commands with the epucks. A more detailed description of how to use these files can be found in<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Link Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB.<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands DLL for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the XBees. In this way,<br />
we share the same source code for functions that are common to the epucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. The projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable, light-colored floor may be desirable. Also, the floor is taped down with packing tape. The tape should not be removed, as the floor has markings which show where the camera calibration dots go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb is shone through a color wheel, which shines red, green, and blue onto the mirror array. The frequency with which the mirrors turn on and off (reflect light or not) determines the intensity of light. For example, if a dim red were being projected, the mirrors would be off more than on within a certain interval. In the case of our projector, that interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (approximately 2877 mm x 1943 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates the color output, so you need to average the measured intensity over the projector's period to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording over the XBee radios, it was discovered that the RGB values would fluctuate over a period of several minutes, skewing the data. After further investigation of the projector, using the digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, so its period is slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are smoothed out, yielding a clean and stable trace without low-frequency modulations.<br />
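The sampling problem described above can be illustrated with a short simulation. The 40% duty cycle and ideal square wave are illustrative assumptions, not the projector's actual waveform; the point is that a window matched to the true 8.2 ms period gives a phase-independent reading, while a slightly short 8 ms window gives a phase-dependent one, which is the slow beat that the averaging suppresses:<br />

```python
PERIOD = 0.0082   # projector color-wheel period: 8.2 ms
DUTY = 0.4        # illustrative duty cycle (fraction of the period the mirror is "on")

def pwm(t):
    """1.0 while the mirror reflects light, 0.0 otherwise (idealized square wave)."""
    return 1.0 if (t % PERIOD) < DUTY * PERIOD else 0.0

def window_average(t0, window, steps=10000):
    """Mean sensor reading over one sampling window starting at time t0."""
    return sum(pwm(t0 + window * (i + 0.5) / steps) for i in range(steps)) / steps

# Mismatched 8 ms window: the reading depends on when sampling starts.
mismatched = [window_average(phase * PERIOD / 8, 0.008) for phase in range(8)]
# Matched 8.2 ms window: the reading is the duty cycle, independent of phase.
matched = [window_average(phase * PERIOD / 8, PERIOD) for phase in range(8)]

print(max(mismatched) - min(mismatched) > 0.01)  # → True (phase-dependent spread)
print(max(matched) - min(matched) < 0.001)       # → True (essentially constant)
```

Averaging 4 consecutive 8 ms samples, as the firmware does, lengthens the effective window toward a whole number of projector periods and smooths the residual phase dependence.<br />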
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The completed order form with the parts for the mount is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their fields of view overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black dot patterns on the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located, and the color of the background no longer affects the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. We therefore must now consider light interference between the LEDs and the color sensor. In the light interference test, we found that the color sensor is affected by the light from the LEDs. However, since the LEDs used in that test were much brighter than those used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as via online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to support control via firewire cable (a necessity for capturing data from the tape)<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x gives 1/x times the focal length, and hence a wider field of view; thus a 0.5X magnification lens gives 2X the FOV<br />
***Find the camera's focal length at its widest view (this is the smallest number, in mm), then apply the magnification, so 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm reduced to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>
[https://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18871 RGB Swarm Robot Quickstart Guide], 2010-07-13T15:27:43Z<p>Ryan Cook: /* Matlab Tools */</p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project|'''RGB Swarm Robot Project''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project#Physical_Setup|'''RGB Swarm Robot Project: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as to a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio; more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple setup may require only two (2) radios: one for the Vision System program and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions at [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] to download the e-puck code, then build the code using MPLAB and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck, skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM (serial) ports for that specific e-puck, one '''Incoming''' and the other '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged it into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_E-puck_Quickstart_Guide#Programming_the_e-puck|''Swarm E-puck Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', in order to (re)program the microcontroller. The bootloader makes use of the bluetooth/serial port connection, allowing for wireless programming instead of connecting a serial port connector directly to the microcontroller; this expedites the reprogramming process. The bootloader consists of two parts: one runs on the e-puck and writes code to the e-puck microcontroller, while the other runs on the computer as a user interface. On the e-puck side, the bootloader quickly checks whether something is trying to re-program the PIC. If so, the bootloader writes the new software over the old and then runs the new software. If not, the bootloader simply lets the PIC run the program currently loaded. The computer side allows a user to select which COM port to program over (allowing one to select which e-puck to program) and which HEX file to program to the e-puck.<br />
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code from the Microchip C compiler, as the .hex file is needed to install onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, an orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on, and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], goto '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''', look for <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> hex file.<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time than programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming the XBee radio, whether for the e-puck, the data logging computer, or the vision computer. Certain steps may be skipped depending on the kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, Select the COM Port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every baud rate listed. X-CTU does not automatically detect the radio's baud rate; while the radios currently in use are all set to 115200, they may have been reset, in which case each baud rate must be tested manually via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and 'Modem firmware version = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''', and press <enter>. this signals to the XBee radio that you wish to program it. <br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''' followed by the ID number; e.g., for e-puck 3 the command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note that e-puck IDs run from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to use the API language with interrupts (to ensure packet quality)<br />
##Type ''''atd6 1''''. This is related to the RTS pin, and allows the radio to hold packets in buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the Baud to 115200, the frequency used for radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
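A configuration session for, e.g., e-puck radio #3 would look roughly like the following (this transcript is illustrative, not captured from an actual session; each '''OK''' is the radio's reply):<br />
<pre>
+++
OK
atre
OK
atmy 3
OK
atap 2
OK
atd6 1
OK
atbd 7
OK
atwr
OK
</pre>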
<br />
===Troubleshooting the XBee Radios===<br />
*XBee radios can only be used by one program at a time, and if an XBee radio is being called by two different programs, this may lead to problems. This is due to the fact that each XBee radio is connected to one 'virtual' COM port, and said COM port can be used by only one program ''at a time''.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios must be ID'd as '''0''' even though they are used by different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board, near the plug, there are two columns of three (3) LED lights. The leftmost column of lights should be blinking. The yellow light signifies that a packet has been received, and the green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (try one radio first to see if that fixes the problem, rather than reprogramming them all only to find the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU to a connected serial adapter board and XBee radio, and see if you are getting any sort of signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), one should be able to see some signal (it will look like gibberish because it is displayed in hex)<br />
**#Use a Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']] to check if your XBee radios are working<br />
<br />
==Machine Vision Localization System==<br />
*[[media:RGB_Machine_Vision_Localization_System.zip|'''RGB_Machine_Vision_Localization_System.zip''']]<br />
FILL IN HOW TO QUICK START VISION SYSTEM<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab files by following these instructions [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] and opening the DataAquasition folder. Inside this folder you will find the following files:<br />
<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file used to look at the data the epucks are sending back. The color sensor data from the selected epuck is shown on the left, and the positions of all epucks are shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each epuck.<br />
**'''Note:''' In order to access the log of data which has been received by the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received by each epuck. To change this, simply change the RECDUR variable in the puckUI_OpeningFcn. Also, currently the timer periods of .4 seconds for plot updates and .05 seconds to receive data work efficiently and do not take up too much CPU, but these values can be changed in puckUI_OpeningFcn as well.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll in SerialCommands.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>
[https://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18870 RGB Swarm Robot Project Documentation], 2010-07-13T15:27:12Z<p>Ryan Cook: /* DataAquisition */</p>
<hr />
<div><br />
==RGB Swarm Quickstart Guide==<br />
<br />
Refer to [[RGB Swarm Quickstart Guide|'''RGB Swarm Quickstart Guide''']] for information on how to start and use the RGB Swarm system and its setup.<br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-Puck seen in the figure in the center. This version of the XBee Interface Board does not contain a color sensor in it. Details about this version of XBee Interface Extension Board, such as parts used and Traxmaker files can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and installed on e-puck #3. To check the board design for problems, it was first tested alongside the other e-pucks, which use the previous XBee boards. <br />
<br />
The e-puck #3 upgraded with the new XBee board showed no problems communicating with the other e-pucks: all e-pucks, including e-puck #3, located themselves at the desired goal positions.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths, and the current through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used on the e-puck for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
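As a worked example of the gain relation above (the specific values are illustrative: a gain of 20 with R<sub>2</sub> = 100K implies R<sub>pot</sub> = 5K):<br />

```c
/* |Vo| = Vi * (R2 / Rpot): output of the op-amp stage for a given input.
 * Example values (assumed): Vi = 0.1 V, R2 = 100k, Rpot = 5k
 * gives a gain of 20 and Vo = 2.0 V. */
double amplified_output(double vi_volts, double r2_ohms, double rpot_ohms)
{
    return vi_volts * (r2_ohms / rpot_ohms);
}
```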
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the sensor itself is placed on the LED pattern board. A jumper connects the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, the black dot patterns require a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board replaces the black dot pattern with LEDs of suitable brightness, so the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below, along with the code modification made to apply the LED pattern to the machine vision system. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED pattern board was created using Traxmaker (the design files are linked above).<br />
Although we replaced the black dots with LEDs, each e-puck's dot pattern is maintained. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce power consumption of the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with its switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. This LED was chosen because it offers suitable brightness and power consumption, and the diffusion allows the machine vision system to capture the LED from any position. The resistors used are 68.7 ohm. <br />
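As a rough check on the series resistor value, the implied LED current can be computed from Ohm's law. Note that the supply voltage (two AAA cells, ~3.0 V) and the LED forward voltage used here are assumptions for illustration, not datasheet values:<br />

```c
/* I = (Vsupply - Vforward) / R, returned in mA.
 * The 3.0 V supply and 2.0 V forward-voltage figures below are assumed. */
double led_current_ma(double v_supply, double v_forward, double r_ohms)
{
    return (v_supply - v_forward) / r_ohms * 1000.0;
}
```

For example, with an assumed 3.0 V supply and 2.0 V forward drop, the 68.7 ohm resistor from the text would imply roughly 15 mA through the LED.<br />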
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as shown in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors: Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests needed to be done to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to measure how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because it covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased in 0.25-inch increments from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output of each photodiode, V<sub>o</sub> as shown in the circuit diagram above, is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As the two graphs above show, the color sensor is affected by the light from the LED, most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the point of strongest interference under room light, the R, G, and B outputs increase by 25.9%, 22.6%, and 43.6% relative to the original output. By the time the LED is 2.5 inches from the color sensor, the output is very close to the original value. <br />
This experiment shows that light from the LEDs can affect the color sensor. However, the LED used in this experiment is much brighter than the ones on the LED pattern board: about 4 times the millicandela rating. Thus, further experiments with the actual LED pattern board LEDs are required.<br />
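As a quick sanity check, the quoted percentages can be recomputed from the 1-inch row of the "with ambient light" table above (a standalone C sketch, not project code). Note that, relative to the no-LED baseline, the blue channel works out to about 77%; the 43.6% figure quoted above corresponds to expressing the same 0.363 V change as a fraction of the 0.832 V LED-on reading instead.<br />

```c
/* Percent increase of the amplified color-sensor output over the no-LED
 * baseline, using the 1-inch "with ambient light" readings from the table. */
double pct_increase(double baseline, double with_led)
{
    return 100.0 * (with_led - baseline) / baseline;
}

/* Recomputing the quoted figures:
 *   R: pct_increase(1.000, 1.259) -> 25.9 %
 *   G: pct_increase(1.400, 1.716) -> 22.6 %
 *   B: pct_increase(0.469, 0.832) -> 77.4 % relative to the baseline
 *      (0.363 / 0.832 gives the 43.6 % quoted in the text instead). */
```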
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to measure how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, the voltage before amplification, V<sub>i</sub>, is measured, since the amplified output, V<sub>o</sub>, easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason as in the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º increments from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are in the same horizontal plane: the LED lies parallel to the plane with its head facing the color sensor, which faces upward. As the angle increases by 15º each step, more of the LED's light shines onto the color sensor. When the angle is 90º, the LED is directly above the color sensor on the same vertical line, facing straight down at it.<br />
<br />
5. The voltage before amplification of each photodiode, V<sub>i</sub> as shown in the circuit diagram above, is measured.<br />
* The voltage is measured before amplification because the output becomes too large after amplification.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The interference increases with angle, peaking at 45º, and then decreases as the angle approaches 90º. At the 45º peak under room light, the R, G, and B outputs increase by 15.6%, 19.1%, and 28.7% relative to V<sub>i</sub> at 0º. As the angle approaches 90º, the output returns close to its 0º value. The reason the interference drops off near 90º is that ambient light is blocked by the LED board: the plane holding the LEDs casts a shadow on the color sensor, reducing the amount of ambient light it receives.<br />
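The 45º peak can be confirmed mechanically from the table data. The sketch below (standalone C, with the "with ambient light" V<sub>i</sub> readings hard-coded from the table above) sums the three channels as a rough interference measure and finds the angle at which the sum is largest.<br />

```c
/* V_i readings (volts) from the "with ambient light" table above. */
typedef struct { int angle_deg; double r, g, b; } Reading;

static const Reading table[] = {
    {  0, 0.437, 0.425, 0.404 },
    { 15, 0.475, 0.470, 0.451 },
    { 30, 0.490, 0.491, 0.501 },
    { 45, 0.505, 0.506, 0.520 },
    { 60, 0.484, 0.468, 0.484 },
    { 75, 0.457, 0.453, 0.440 },
    { 90, 0.439, 0.430, 0.408 },
};

/* Angle at which the summed R+G+B output (a rough interference measure)
 * is largest.  For the data above this is 45 degrees. */
int peak_angle(void)
{
    int best = 0;
    for (int i = 1; i < (int)(sizeof table / sizeof table[0]); i++)
        if (table[i].r + table[i].g + table[i].b >
            table[best].r + table[best].g + table[best].b)
            best = i;
    return table[best].angle_deg;
}
```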
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to be increased so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10-position 2 mm pitch socket is used to connect the color sensor to the circuit with wires. Instead, the proper header for the color sensor should be found so the sensor can be connected to the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed in to version control for ease. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next you will need access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the Lims server. Now you can open GIT (Git Bash Shell) and type the following in order to get a copy of the current files on to your Desktop:<br />
<code><pre><br />
cd Desktop<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
</pre></code><br />
Press Enter after each command.<br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find the following files:<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic-Link Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program, letting these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB.<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e. the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the e-pucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the e-pucks pick up light from a projector. This projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the e-pucks roll on must be completely enclosed so that the only light reaching it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Above the projector mount, 7 more curtains sewn together cover the whole enclosure and block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable, light-colored floor may be desirable. The floor is taped down with packing tape. The tape should not be removed, as the floor has markings showing where the camera calibration dots should go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb is shone through a color wheel, which shines red, green, and blue onto the mirror array. The fraction of time the mirrors spend on (reflecting light) versus off within a fixed interval determines the intensity of the light. For example, if a dim red were being projected, the mirrors would be off more than on during that interval. In the case of our projector, that interval is 8.2 milliseconds. See the pulse width modulation below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates the color output, so you need to average the measured intensity over the projector's period to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording for the XBee radios, it was discovered that the RGB values would fluctuate across a period of several minutes, skewing the data. After more investigation of the projector, including with digital oscilloscopes, the problem was traced to the fact that the projector does not refresh at exactly 120 Hz, resulting in a period slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, roughly 33 ms of data, as the e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are phased out, resulting in a clean and stable trace without low-frequency modulations.<br />
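The averaging scheme can be sketched as follows (illustrative C, not the actual e-puck code; <tt>sim_read</tt> is a made-up 25%-duty PWM source standing in for the projector and ADC, and the sample counts are assumptions). Averaging over whole PWM periods recovers the duty-cycle-weighted intensity no matter where in the cycle sampling starts.<br />

```c
#define SAMPLES_PER_PERIOD 32   /* assumption: evenly spaced reads per 8.2 ms */
#define PERIODS            4    /* 4 periods ~ 33 ms, within the 400 ms budget */

/* Simulated PWM light source: "on" (1000) for 1 read out of every 4,
 * i.e. a 25% duty cycle, standing in for the projector plus ADC. */
unsigned sim_count = 0;
unsigned sim_read(void) { return (sim_count++ % 4) == 0 ? 1000u : 0u; }

/* Average a reading over whole PWM periods.  On the e-puck the reads
 * would be spaced in time across each 8.2 ms period; here they are
 * back-to-back since the source is simulated. */
unsigned average_reading(unsigned (*read)(void))
{
    unsigned long sum = 0;
    for (int i = 0; i < PERIODS * SAMPLES_PER_PERIOD; i++)
        sum += read();
    return (unsigned)(sum / (PERIODS * SAMPLES_PER_PERIOD));
}

/* average_reading(sim_read) -> 250, i.e. duty cycle times amplitude. */
```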
<br />
===Projector Mount===<br />
The projector mount was ordered online using 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area; their fields of view overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located; the color of the background also does not affect the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. Thus, we must now consider light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by light from the LED. However, since the LED used in those tests was much brighter than the LEDs on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possible for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet as they are designed to accommodate for control via firewire cable due to necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x multiplies the focal length by x and therefore widens the field of view; e.g., a 0.5X magnification lens gives roughly 2X the FOV<br />
***Find the camera's focal length at its widest view (the smallest number, in mm), then apply the magnification, so 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles, to calculate the height need for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º using a focal length of 36mm converted to 21.6mm by 0.6X magnification, then the math to get the height is: 9'/2 = 4.5'; 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate the change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate the discrete number of possible angles per wheel (using ticks/revolution), create a lookup table, check the angle from the vision system packets, and use the standardized table to compute the appropriate vector (reducing the total number of calculations and eliminating use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18869RGB Swarm Robot Quickstart Guide2010-07-13T15:25:17Z<p>Ryan Cook: /* Analysis Tools */</p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project|'''RGB Swarm Robot Project''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project#Physical_Setup|'''RGB Swarm Robot Project: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as to a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio, more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple setup may require only two (2) radios: one for the Vision System program and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions here [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] to download the epucks code. Then build this code using MPLab and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck, skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM (serial) ports for that specific e-puck, one as '''Incoming''' and the other as '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged it into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_E-puck_Quickstart_Guide#Programming_the_e-puck|''Swarm E-puck Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', in order to (re)program the microcontroller. The bootloader makes use of the bluetooth/serial port connection, allowing for wireless programming, as opposed to connecting a serial port connector directly to the microcontroller. This expedites the reprogramming process. The bootloader consists of two parts: one operates on the e-puck, writing code to the e-puck microcontroller, and the other operates on the computer as the user interface. On the e-puck side, the bootloader quickly checks to see if something is trying to re-program the PIC. If so, the bootloader writes the new software over the old and then runs the new software. If not, the bootloader simply allows the PIC to run the program currently loaded. The computer side allows a user to select which COM port to program over (allowing one to select which e-puck to program) and which HEX file to program to the e-puck.<br />
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code from the Microchip C compiler, as the .hex file is needed to install onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, a orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on, and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], goto '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''', look for <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> hex file.<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time than programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming an XBee radio, whether for an e-puck or for the data logging or vision computers. However, certain steps may be skipped depending on the kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, select the COM port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every baud rate listed. X-CTU does not automatically detect the radio's baud rate, and while the radios currently used are all set to 115200, they may have been reset, in which case it will be necessary to manually test each baud rate via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and 'Modem firmware version = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''', and press <enter>. This signals to the XBee radio that you wish to program it.<br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''' followed by the ID number; e.g., for ID 3 the command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note that e-puck IDs run from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to API mode 2 (API operation with escaped characters), which helps ensure packet integrity<br />
##Type ''''atd6 1''''. This configures the DIO6/RTS pin for flow control, allowing the radio to hold packets in its buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the baud rate to 115200, the rate used for all radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
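The ''atap 2'' step above switches the radio into API mode 2, in which every packet travels as a delimited frame with a length field, a checksum, and escaping of reserved bytes. As a rough sketch of what that framing looks like on the wire (the layout is the standard XBee Series 1 16-bit-address TX request; the function names and the sample destination/payload are illustrative, not taken from the project code):<br />

```python
# Hypothetical sketch of API mode 2 framing ('atap 2'); names and the
# sample payload are illustrative, not taken from the project code.

START, ESCAPE = 0x7E, 0x7D
NEEDS_ESCAPE = {0x7E, 0x7D, 0x11, 0x13}  # framing and XON/XOFF bytes

def escape(data: bytes) -> bytes:
    """Escape reserved bytes as 0x7D followed by (byte XOR 0x20)."""
    out = bytearray()
    for b in data:
        if b in NEEDS_ESCAPE:
            out += bytes([ESCAPE, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def tx_frame(dest_id: int, payload: bytes, frame_id: int = 1) -> bytes:
    """Build a 16-bit-address TX request (API frame type 0x01)."""
    frame_data = bytes([0x01, frame_id,
                        (dest_id >> 8) & 0xFF, dest_id & 0xFF,
                        0x00]) + payload          # 0x00 = default options
    checksum = 0xFF - (sum(frame_data) & 0xFF)
    body = len(frame_data).to_bytes(2, "big") + frame_data + bytes([checksum])
    return bytes([START]) + escape(body)          # delimiter is never escaped

frame = tx_frame(3, b"hi")  # send b"hi" to the radio configured with 'atmy 3'
print(frame.hex())          # prints 7e00070101000300686929
```

The escaping step is why mode 2 traffic looks slightly different from mode 1 traffic in a raw terminal capture.<br />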
<br />
===Troubleshooting the XBee Radios===<br />
*An XBee radio can be used by only one program at a time: each radio appears as a single 'virtual' COM port, and a COM port can be opened by only one program ''at a time''. Two programs trying to use the same radio will cause problems.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios must be ID'd as '''0''' even though they are assigned to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board near where the plug is, there are two columns of three (3) LED lights. The leftmost column of lights should be blinking. The yellow light signifies that a packet has been received, and the green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (try one radio first to see if that fixes the problem, rather than reprogramming them all only to find the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU to a connected serial adapter board and XBee radio, and see if you are getting any sort of signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), one should be able to see some signal (it will look like gibberish because it is raw hex)<br />
**#Use the Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']], to check whether your XBee radios are working<br />
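When staring at the hex 'gibberish' mentioned above, it can be easier to machine-check it than to read it by eye. The sketch below splits a raw capture into 0x7E-delimited API frames and verifies each frame's checksum; it assumes an unescaped (API mode 1) capture, and the function name and sample bytes are illustrative rather than part of the project's packet tools:<br />

```python
# Hypothetical helper for checking captured XBee API traffic; assumes an
# unescaped (API mode 1) capture. Not part of the project's packet tools.

def split_frames(raw: bytes):
    """Yield (frame_data, checksum_ok) for each 0x7E-delimited API frame."""
    i = 0
    while i < len(raw):
        if raw[i] != 0x7E:           # skip noise until a start delimiter
            i += 1
            continue
        if i + 3 > len(raw):
            break                    # not enough bytes for a length field
        length = int.from_bytes(raw[i + 1:i + 3], "big")
        end = i + 3 + length + 1     # frame data plus one checksum byte
        if end > len(raw):
            break                    # truncated frame at end of capture
        frame_data = raw[i + 3:end - 1]
        ok = (sum(frame_data) + raw[end - 1]) & 0xFF == 0xFF
        yield frame_data, ok
        i = end

capture = bytes.fromhex("7e00070101000300686929")
for data, ok in split_frames(capture):
    print(data.hex(), "checksum ok" if ok else "checksum BAD")
```

A frame reported with a bad checksum often indicates a capture that started mid-frame or a baud-rate mismatch.<br />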
<br />
==Machine Vision Localization System==<br />
*[[media:RGB_Machine_Vision_Localization_System.zip|'''RGB_Machine_Vision_Localization_System.zip''']]<br />
FILL IN HOW TO QUICK START VISION SYSTEM<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab files by following the instructions at [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] and opening the DataAquasition folder.<br />
<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18868RGB Swarm Robot Quickstart Guide2010-07-13T15:18:35Z<p>Ryan Cook: </p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project|'''RGB Swarm Robot Project''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project#Physical_Setup|'''RGB Swarm Robot Project: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio; more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple setup may require only two (2) radios, one for the Vision System program and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions at [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] to download the e-puck code. Then build this code using MPLAB and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck; skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM (serial) ports for that specific e-puck, one as '''Incoming''' and the other as '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_E-puck_Quickstart_Guide#Programming_the_e-puck|''Swarm E-puck Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', in order to (re)program the microcontroller. The bootloader makes use of the bluetooth/serial port connection, allowing for wireless programming, as opposed to connecting a serial port connector directly to the microcontroller. This expedites the reprogramming process. The bootloader is comprised of two parts: one operates on the e-puck, writing code to the e-puck microcontroller, and the other operates on the computer as the user interface. On the e-puck side, the bootloader quickly checks to see if something is trying to re-program the PIC. If so, the bootloader writes the new software over the old and then runs the new software. If not, the bootloader simply allows the PIC to run the program currently loaded. The computer side allows a user to select which COM port to program over (and thereby which e-puck to program) and which HEX file to program onto the e-puck.<br />
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code from the Microchip C compiler, as the .hex file is needed to install onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, an orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on, and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], go to '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''' and look for the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> hex file<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time than programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming an XBee radio, whether for an e-puck or for the data logging or vision computers. However, certain steps may be skipped depending on the kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, select the COM port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every baud rate listed. X-CTU does not automatically detect the radio's baud rate, and while the radios currently used are all set to 115200, they may have been reset, in which case it will be necessary to manually test each baud rate via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and 'Modem firmware version = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''', and press <enter>. This signals to the XBee radio that you wish to program it.<br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''' followed by the ID number; e.g., for ID 3 the command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note that e-puck IDs run from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to API mode 2 (API operation with escaped characters), which helps ensure packet integrity<br />
##Type ''''atd6 1''''. This configures the DIO6/RTS pin for flow control, allowing the radio to hold packets in its buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the baud rate to 115200, the rate used for all radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
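The ''atap 2'' step above switches the radio into API mode 2, in which every packet travels as a delimited frame with a length field, a checksum, and escaping of reserved bytes. As a rough sketch of what that framing looks like on the wire (the layout is the standard XBee Series 1 16-bit-address TX request; the function names and the sample destination/payload are illustrative, not taken from the project code):<br />

```python
# Hypothetical sketch of API mode 2 framing ('atap 2'); names and the
# sample payload are illustrative, not taken from the project code.

START, ESCAPE = 0x7E, 0x7D
NEEDS_ESCAPE = {0x7E, 0x7D, 0x11, 0x13}  # framing and XON/XOFF bytes

def escape(data: bytes) -> bytes:
    """Escape reserved bytes as 0x7D followed by (byte XOR 0x20)."""
    out = bytearray()
    for b in data:
        if b in NEEDS_ESCAPE:
            out += bytes([ESCAPE, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def tx_frame(dest_id: int, payload: bytes, frame_id: int = 1) -> bytes:
    """Build a 16-bit-address TX request (API frame type 0x01)."""
    frame_data = bytes([0x01, frame_id,
                        (dest_id >> 8) & 0xFF, dest_id & 0xFF,
                        0x00]) + payload          # 0x00 = default options
    checksum = 0xFF - (sum(frame_data) & 0xFF)
    body = len(frame_data).to_bytes(2, "big") + frame_data + bytes([checksum])
    return bytes([START]) + escape(body)          # delimiter is never escaped

frame = tx_frame(3, b"hi")  # send b"hi" to the radio configured with 'atmy 3'
print(frame.hex())          # prints 7e00070101000300686929
```

The escaping step is why mode 2 traffic looks slightly different from mode 1 traffic in a raw terminal capture.<br />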
<br />
===Troubleshooting the XBee Radios===<br />
*An XBee radio can be used by only one program at a time: each radio appears as a single 'virtual' COM port, and a COM port can be opened by only one program ''at a time''. Two programs trying to use the same radio will cause problems.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios must be ID'd as '''0''' even though they are assigned to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board near where the plug is, there are two columns of three (3) LED lights. The leftmost column of lights should be blinking. The yellow light signifies that a packet has been received, and the green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (try one radio first to see if that fixes the problem, rather than reprogramming them all only to find the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU to a connected serial adapter board and XBee radio, and see if you are getting any sort of signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), one should be able to see some signal (it will look like gibberish because it is raw hex)<br />
**#Use the Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']], to check whether your XBee radios are working<br />
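When staring at the hex 'gibberish' mentioned above, it can be easier to machine-check it than to read it by eye. The sketch below splits a raw capture into 0x7E-delimited API frames and verifies each frame's checksum; it assumes an unescaped (API mode 1) capture, and the function name and sample bytes are illustrative rather than part of the project's packet tools:<br />

```python
# Hypothetical helper for checking captured XBee API traffic; assumes an
# unescaped (API mode 1) capture. Not part of the project's packet tools.

def split_frames(raw: bytes):
    """Yield (frame_data, checksum_ok) for each 0x7E-delimited API frame."""
    i = 0
    while i < len(raw):
        if raw[i] != 0x7E:           # skip noise until a start delimiter
            i += 1
            continue
        if i + 3 > len(raw):
            break                    # not enough bytes for a length field
        length = int.from_bytes(raw[i + 1:i + 3], "big")
        end = i + 3 + length + 1     # frame data plus one checksum byte
        if end > len(raw):
            break                    # truncated frame at end of capture
        frame_data = raw[i + 3:end - 1]
        ok = (sum(frame_data) + raw[end - 1]) & 0xFF == 0xFF
        yield frame_data, ok
        i = end

capture = bytes.fromhex("7e00070101000300686929")
for data, ok in split_frames(capture):
    print(data.hex(), "checksum ok" if ok else "checksum BAD")
```

A frame reported with a bad checksum often indicates a capture that started mid-frame or a baud-rate mismatch.<br />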
<br />
==Machine Vision Localization System==<br />
*[[media:RGB_Machine_Vision_Localization_System.zip|'''RGB_Machine_Vision_Localization_System.zip''']]<br />
FILL IN HOW TO QUICK START VISION SYSTEM<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab Tools and instructions from [[RGB_Swarm_Robot_Project#MATLAB_Code_for_RGB_Swarm_Robot_Project|'''RGB_Swarm_Robot_Project: MATLAB Code for RGB Swarm Robot Project''']]<br />
<br />
===Visual Studio Tools===<br />
*'''Note''': remember to change the ''additional nums'' variable in both these programs to be equal to eight (8) if you are using them with the RGB program; if it is left at five (5), they will not work with the e-pucks or the logger radios<br />
*[[media:Swarm packet data viewer.zip|'''Swarm Packet Data Viewer.zip''']]<br />
*[[media:Swarm XBee packet sender.zip|'''Swarm XBee Packet Sender.zip''']]<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18867RGB Swarm Robot Quickstart Guide2010-07-13T15:17:50Z<p>Ryan Cook: /* Computer Setup */</p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project|'''RGB Swarm Robot Project''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project#Physical_Setup|'''RGB Swarm Robot Project: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio; more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple setup may require only two (2) radios, one for the Vision System program and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==Computer Setup==<br />
*If using two computers, one for the Vision System and one for Data Acquisition, then make sure you have two radios, one connected to each computer and working<br />
*Connect the 4 web cams to the Vision System PC<br />
*Set up that<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions at [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] to download the e-puck code. Then build this code using MPLAB and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck; skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM (serial) ports for that specific e-puck, one as '''Incoming''' and the other as '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_E-puck_Quickstart_Guide#Programming_the_e-puck|''Swarm E-puck Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', in order to (re)program the microcontroller. The bootloader makes use of the bluetooth/serial port connection, allowing for wireless programming, as opposed to connecting a serial port connector directly to the microcontroller. This expedites the reprogramming process. The bootloader is comprised of two parts: one operates on the e-puck, writing code to the e-puck microcontroller, and the other operates on the computer as the user interface. On the e-puck side, the bootloader quickly checks to see if something is trying to re-program the PIC. If so, the bootloader writes the new software over the old and then runs the new software. If not, the bootloader simply allows the PIC to run the program currently loaded. The computer side allows a user to select which COM port to program over (and thereby which e-puck to program) and which HEX file to program onto the e-puck.<br />
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code from the Microchip C compiler, as the .hex file is needed to install onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, an orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on, and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], go to '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''' and look for the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> hex file<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time than programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming an XBee radio, whether for an e-puck or for the data logging or vision computers. However, certain steps may be skipped depending on the kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, select the COM port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every baud rate listed. X-CTU does not automatically detect the radio's baud rate, and while the radios currently used are all set to 115200, they may have been reset, in which case it will be necessary to manually test each baud rate via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and 'Modem firmware version = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''', and press <enter>. This signals to the XBee radio that you wish to program it.<br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''' followed by the ID number; e.g., for ID 3 the command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note that e-puck IDs run from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to API mode 2 (API operation with escaped characters), which helps ensure packet integrity<br />
##Type ''''atd6 1''''. This configures the DIO6/RTS pin for flow control, allowing the radio to hold packets in its buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the baud rate to 115200, the rate used for all radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
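The ''atap 2'' step above switches the radio into API mode 2, in which every packet travels as a delimited frame with a length field, a checksum, and escaping of reserved bytes. As a rough sketch of what that framing looks like on the wire (the layout is the standard XBee Series 1 16-bit-address TX request; the function names and the sample destination/payload are illustrative, not taken from the project code):<br />

```python
# Hypothetical sketch of API mode 2 framing ('atap 2'); names and the
# sample payload are illustrative, not taken from the project code.

START, ESCAPE = 0x7E, 0x7D
NEEDS_ESCAPE = {0x7E, 0x7D, 0x11, 0x13}  # framing and XON/XOFF bytes

def escape(data: bytes) -> bytes:
    """Escape reserved bytes as 0x7D followed by (byte XOR 0x20)."""
    out = bytearray()
    for b in data:
        if b in NEEDS_ESCAPE:
            out += bytes([ESCAPE, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def tx_frame(dest_id: int, payload: bytes, frame_id: int = 1) -> bytes:
    """Build a 16-bit-address TX request (API frame type 0x01)."""
    frame_data = bytes([0x01, frame_id,
                        (dest_id >> 8) & 0xFF, dest_id & 0xFF,
                        0x00]) + payload          # 0x00 = default options
    checksum = 0xFF - (sum(frame_data) & 0xFF)
    body = len(frame_data).to_bytes(2, "big") + frame_data + bytes([checksum])
    return bytes([START]) + escape(body)          # delimiter is never escaped

frame = tx_frame(3, b"hi")  # send b"hi" to the radio configured with 'atmy 3'
print(frame.hex())          # prints 7e00070101000300686929
```

The escaping step is why mode 2 traffic looks slightly different from mode 1 traffic in a raw terminal capture.<br />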
<br />
===Troubleshooting the XBee Radios===<br />
*An XBee radio can be used by only one program at a time: each radio appears as a single 'virtual' COM port, and a COM port can be opened by only one program ''at a time''. Two programs trying to use the same radio will cause problems.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios must be ID'd as '''0''' even though they are connected to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board, near the plug, there are two columns of three (3) LED lights. The leftmost column of lights should be blinking. The yellow light signifies that a packet has been received, and the green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (try one radio first to see if that fixes the problem, rather than reprogramming them all and finding the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU with a connected serial adapter board and XBee radio, and see if you are getting any sort of signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), one should be able to see some traffic (it will look like gibberish because it is raw hex)<br />
**#Use the Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']], to check if your XBee radios are working<br />
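Once a radio is in API mode (''atap 2''), the "gibberish" seen in the terminal is a stream of API frames: a 0x7E start byte, a two-byte length, the frame data, and a checksum. The sketch below shows this standard XBee framing (in the unescaped ''atap 1'' form; ''atap 2'' additionally escapes certain bytes), which can help when eyeballing raw traffic. It is illustrative, not the project's own packet code.<br />

```python
# Minimal sketch of the XBee API frame layout (unescaped form).

def build_api_frame(frame_data):
    """Wrap raw frame data in an XBee API frame: 0x7E, length, data, checksum."""
    length = len(frame_data)
    checksum = 0xFF - (sum(frame_data) & 0xFF)
    return (bytes([0x7E, (length >> 8) & 0xFF, length & 0xFF])
            + bytes(frame_data) + bytes([checksum]))

def frame_checksum_ok(frame):
    """A frame is valid when its data bytes plus checksum sum to 0xFF (mod 256)."""
    length = (frame[1] << 8) | frame[2]
    body = frame[3:3 + length + 1]   # frame data plus the checksum byte
    return (sum(body) & 0xFF) == 0xFF

# Example: an AT command frame querying 'MY' (frame type 0x08, frame id 0x01).
frame = build_api_frame([0x08, 0x01, 0x4D, 0x59])
```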
<br />
==Machine Vision Localization System==<br />
*[[media:RGB_Machine_Vision_Localization_System.zip|'''RGB_Machine_Vision_Localization_System.zip''']]<br />
FILL IN HOW TO QUICK START VISION SYSTEM<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab Tools and instructions from [[RGB_Swarm_Robot_Project#MATLAB_Code_for_RGB_Swarm_Robot_Project|'''RGB_Swarm_Robot_Project: MATLAB Code for RGB Swarm Robot Project''']]<br />
<br />
===Visual Studio Tools===<br />
*'''Note''': remember to change the ''additional nums'' variable in both of these programs to eight (8) when using them with the RGB program; if it is left at five (5) they will not work with the e-pucks or the logger radios<br />
*[[media:Swarm packet data viewer.zip|'''Swarm Packet Data Viewer.zip''']]<br />
*[[media:Swarm XBee packet sender.zip|'''Swarm XBee Packet Sender.zip''']]<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18866RGB Swarm Robot Project Documentation2010-07-13T15:15:54Z<p>Ryan Cook: /* Software */</p>
<hr />
<div><br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version 1]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Extension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-puck in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the XBee board, the XBee Interface Extension Board Version 2, designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, Version 2 has a color sensor circuit built in; the details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the board can be found here:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and installed on e-puck #3. To check for any problems with the board design, it was first tested alongside the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, showed no problems communicating with the other e-pucks: given a goal, all e-pucks, including e-puck #3, moved themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths, and the current through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that, on the e-puck, had been used for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus the input voltage V<sub>i</sub> is amplified to 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted by changing the resistance of the potentiometer.<br />
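As a worked example of the gain equation above (the component values come from the circuit description; the helper function is ours): with R<sub>2</sub> = 100K, a gain of about 20 corresponds to a potentiometer setting near 5K.<br />

```python
# Worked example of |Vo| = |Vi| * R2 / Rpot for the op-amp stage above.
# R2 = 100 kOhm is from the circuit description; Rpot = 5 kOhm is the
# setting implied by the stated gain of ~20.

def color_sensor_output(v_in, r2=100e3, r_pot=5e3):
    """Amplified output voltage for a given input voltage across R1."""
    return v_in * r2 / r_pot

v_out = color_sensor_output(0.1)  # 0.1 V across R1 -> 2.0 V at the ADC
```

In practice V<sub>o</sub> cannot exceed the op-amp's supply rails, so large inputs saturate; this is why the angle experiment below measures V<sub>i</sub> instead.<br />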
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns on the e-pucks are being replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. For the color sensor to collect data properly, it is therefore necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the rest of the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern on each e-puck was used by the machine vision system to recognize the e-pucks. However, the black dot patterns require a white background. The new LED pattern board replaces the black dots with LEDs of suitable brightness, so the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background, along with the code modification made to the machine vision system to use it, is presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
The LED pattern board was created using Traxmaker.<br />
Although the black dots are replaced with LEDs, each dot pattern is maintained: the horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce the load on the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with a switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. This LED was chosen because it has appropriate brightness and power consumption, and the diffused lens lets the machine vision system see the LED from any position. The resistors used are 68.7 ohm. <br />
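As a rough sanity check of the 68.7 ohm resistor choice: the two-AAA supply and resistor value come from the text, but the LED forward voltage below is an '''assumption''' (a typical value, not taken from the datasheet).<br />

```python
# Rough series-resistor check for the LED board.
# v_supply: two AAA cells in series (~3.0 V, from the parts list).
# v_forward: ASSUMED typical LED forward voltage, not a datasheet value.

def led_current(v_supply=3.0, v_forward=2.0, r_series=68.7):
    """Current through one LED: I = (Vs - Vf) / R."""
    return (v_supply - v_forward) / r_series

i_ma = led_current() * 1000  # roughly 15 mA per LED under these assumptions
```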
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to this LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as shown in the figure, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests needed to be done to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to measure how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased in 0.25 inch steps, from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output of each photodiode, V<sub>o</sub> as shown in the circuit diagram above, is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As the two graphs above show, the color sensor is affected by the light from the LED, most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the R, G, and B outputs increase by 25.9%, 22.6%, and 77.4% over their no-LED values (for the blue channel, the increase is 43.6% when expressed as a fraction of the lit output). With the LED 2.5 inches away from the color sensor, the outputs return very close to their original values. <br />
This experiment shows that light from the LEDs can affect the color sensor. However, the LED used here is much brighter than the ones on the LED pattern board (about four times brighter), so further experiments with the LEDs actually used on the LED pattern board are required.<br />
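The percentage changes quoted above can be recomputed from the with-ambient-light table. Note that the figure for the blue channel depends on the base: relative to the no-LED baseline the increase is 77.4%, whereas relative to the lit output it is 43.6%.<br />

```python
# Recompute the interference percentages from the with-ambient-light table
# (values in volts, closest LED distance of 1 inch vs no LED).

no_led = {"R": 1.0,   "G": 1.4,   "B": 0.469}
at_1in = {"R": 1.259, "G": 1.716, "B": 0.832}

# Increase relative to the no-LED baseline for each channel:
increase = {c: 100.0 * (at_1in[c] - no_led[c]) / no_led[c] for c in no_led}
```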
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to measure how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased by 15º each time, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are in the same horizontal plane, with the LED pointing toward the color sensor (i.e. the LED is parallel to the horizontal plane with its head facing the color sensor) and the color sensor facing upward. The angle is increased by 15º each time, so increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing straight down at it; the LED and the color sensor are then on the same vertical line.<br />
<br />
5. The voltage before amplification of each photodiode, V<sub>i</sub> as shown in the circuit diagram above, is measured.<br />
* The pre-amplification voltage is measured because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the peak, under room light, the outputs increase by up to 15.6%, 19.1%, and 28.7% over V<sub>i</sub> at 0º. As the angle reaches 90º, the output returns very close to its value at 0º. The interference drops near 90º because the ambient light is blocked by the LED board: when this experiment was performed, the LEDs were mounted on the LED plane, which casts a shadow on the color sensor and reduces the amount of ambient light it receives. That is why the output returns close to its original value as the angle increases.<br />
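As with the distance experiment, the quoted peak percentages can be recomputed directly from the with-ambient-light table, comparing the 45º row against 0º:<br />

```python
# Recompute the peak interference percentages from the with-ambient-light
# angle table (values in volts).

angle_0  = {"R": 0.437, "G": 0.425, "B": 0.404}
angle_45 = {"R": 0.505, "G": 0.506, "B": 0.520}

# Increase at the 45-degree peak relative to the 0-degree reading:
peak_increase = {c: 100.0 * (angle_45[c] - angle_0[c]) / angle_0[c]
                 for c in angle_0}
```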
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10 pos. 2 mm pitch socket is used to connect the color sensor to the circuit with wires. Instead, a proper header for the color sensor should be found, to connect the color sensor to the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed in to version control for ease. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next you will need access to the LIMS server. Go to one of the swarm PCs, or any PC which is set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open GIT (Git Bash Shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find the following files:<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file used to look at the data the epucks are sending back. The color sensor data from the selected epuck is shown on the left and the position of each epuck is shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each epuck.<br />
**'''Note:''' To access the log of data received from the epucks, simply declare ''global data_log''. Currently the data_log variable holds the last 100 packets of data received from each epuck. To change this, change the RECDUR variable in puckUI_OpeningFcn. The current timer periods of 0.4 seconds for plot updates and 0.05 seconds for receiving data work efficiently without taking up too much CPU, but these values can also be changed in puckUI_OpeningFcn.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll in SerialCommands.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
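The ''data_log'' behavior described above (keep only the most recent RECDUR packets per e-puck) amounts to a fixed-length ring buffer. A Python sketch of the same idea follows; the names and structure here are illustrative, not the MATLAB variable layout.<br />

```python
# Sketch of a per-epuck fixed-length packet log, like data_log in puckUI.m.
from collections import deque

RECDUR = 100  # packets retained per e-puck (matches the value in puckUI)

logs = {}     # epuck id -> deque of recent packets

def record_packet(epuck_id, packet):
    """Append a packet, silently discarding the oldest once RECDUR is reached."""
    logs.setdefault(epuck_id, deque(maxlen=RECDUR)).append(packet)

# 250 packets arrive from e-puck 3; only the newest 100 are kept.
for i in range(250):
    record_packet(3, {"seq": i})
```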
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB.<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e. the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. The projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light which reaches it is the light from the projector. The floor must also be smooth, flat and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable light-colored floor should be considered. The floor is taped down with packing tape. The tape should not be removed, as the floor has markings which show where the camera calibration dots go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. Its digital user manual is on a CD in the projection computer. <br />
<br />
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array. The fraction of time each mirror is on (reflecting light) versus off within a refresh interval determines the intensity of the light. For example, if a dim red were being projected, the mirrors would be off more than on during each interval. In the case of our projector that interval is 8.2 milliseconds. See the pulse width modulation section below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystoned projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (approximately 2877 mm x 1943 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so you need to average the measured intensity over the period of the projector to measure a color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording for the XBee radios, it was discovered that the reported RGB values would fluctuate over a period of several minutes, skewing the data. After investigating the projector further with the digital oscilloscopes, the problem was traced to the fact that the projector does not refresh at exactly 120 Hz, so its true period differs slightly from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, as each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are phased out, yielding a clean and stable trace without low-frequency modulations.<br />
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their fields of view overlap by one object's width, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found that it works without problems. In addition, the black dot patterns on the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located, and the color of the background no longer affects the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if the sensor remained on the XBee Interface Extension Board. Thus, we must now consider the light interference between the LEDs and the color sensor. In the light interference test, we found that the color sensor is affected by light from the LEDs. However, since the LEDs used in our light interference test were much brighter than the LEDs on the LED pattern board, we have to run more experiments in order to obtain more accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as via online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to support control via firewire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x multiplies the field of view by roughly 1/x; thus a 0.5X magnification lens gives 2X the FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), and then apply the magnification: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site] and use the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º (using a focal length of 36mm converted to 21.6mm by 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update the vision system to accommodate the change between black/white pattern recognition and LED/light intensity recognition (eliminating the need to go through the code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing program to 'lawnmower' e-puck, allowing to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>
Ryan Cook
https://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18865
RGB Swarm Robot Quickstart Guide (2010-07-13T15:15:14Z)<p>Ryan Cook: /* Computer Setup */</p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project|'''RGB Swarm Robot Project''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project#Physical_Setup|'''RGB Swarm Robot Project: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio, more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple set up may require only two (2) radios, one for the Vision System program, and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==Computer Setup==<br />
*Make sure you have all the programs listed under '''Computer Programs Needed''' above installed<br />
*Set up that<br />
<br />
==e-puck and e-puck Code==<br />
Follow the instructions at [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] to download the e-puck code. Then build this code using MPLAB and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck, skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM (serial) ports for that specific e-puck, one as '''Incoming''' and the other as '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged it into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_E-puck_Quickstart_Guide#Programming_the_e-puck|''Swarm E-puck Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', in order to (re)program the microcontroller. The bootloader makes use of the bluetooth/serial port connection, allowing for wireless programming, as opposed to connecting a serial port connector directly to the microcontroller; this expedites the reprogramming process. The bootloader consists of two parts: one operates on the e-puck, writing code to the e-puck microcontroller, and the other operates on the computer as the user interface. On the e-puck side, the bootloader quickly checks to see if something is trying to re-program the PIC. If so, the bootloader simply writes the new software over the old, and then runs the new software. If not, the bootloader allows the PIC to run the program currently loaded. The computer side allows a user to select which COM port to program over (and thus which e-puck to program) and which HEX file to program to the e-puck.<br />
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code from the Microchip C compiler, as the .hex file is needed to install onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, an orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on, and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB).<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], goto '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''', look for <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> hex file.<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time than programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming the XBee radio, whether for the e-puck or for the data logging or vision computers. Certain steps may be skipped, however, depending on the kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, Select the COM Port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every baud rate listed. X-CTU does not automatically detect the baud rate set on the radio, and while the radios currently used are all set to a baud of 115200, they may have been reset, so it may be necessary to manually test each baud rate via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and 'Modem firmware version = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''', and press <enter>. this signals to the XBee radio that you wish to program it. <br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''', followed with the ID number, e.g. your command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note, that e-puck ID's are from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to use the API language with interrupts (to ensure packet quality)<br />
##Type ''''atd6 1''''. This is related to the RTS pin, and allows the radio to hold packets in buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the Baud to 115200, the frequency used for radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
<br />
===Troubleshooting the XBee Radios===<br />
*XBee radios can only be used by one program at a time, and if an XBee radio is being called by two different programs, this may lead to problems. This is due to the fact that each XBee radio is connected to one 'virtual' COM port, and said COM port can be used by only one program ''at a time''.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios will and must be ID'd as '''0''' even though they are assigned to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board, near the plug, there are two columns of three (3) LED lights. The left-most column of lights should be blinking. The yellow light signifies that a packet has been received, and a green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (just try one radio first to see if that fixes the problem; rather than doing them all and realizing the problem still exists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open up X-CTU on a connected serial adapter board and XBee radio, and see if you are getting any sort of signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), one should be able to see some signal (it will look like gibberish because it is in hex)<br />
**#Use the Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']], to check if your XBee radios are working<br />
<br />
==Machine Vision Localization System==<br />
*[[media:RGB_Machine_Vision_Localization_System.zip|'''RGB_Machine_Vision_Localization_System.zip''']]<br />
FILL IN HOW TO QUICK START VISION SYSTEM<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab Tools and instructions from [[RGB_Swarm_Robot_Project#MATLAB_Code_for_RGB_Swarm_Robot_Project|'''RGB_Swarm_Robot_Project: MATLAB Code for RGB Swarm Robot Project''']]<br />
<br />
===Visual Studio Tools===<br />
*'''Note''': remember to change the ''additional nums'' variable in both of these programs to eight (8) if you are using them with the RGB program; if it is left at five (5) they will not work with the e-pucks or the logger radios<br />
*[[media:Swarm packet data viewer.zip|'''Swarm Packet Data Viewer.zip''']]<br />
*[[media:Swarm XBee packet sender.zip|'''Swarm XBee Packet Sender.zip''']]<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Quickstart_Guide&diff=18864RGB Swarm Robot Quickstart Guide2010-07-13T15:14:51Z<p>Ryan Cook: /* Checklist */</p>
<hr />
<div>__TOC__<br />
This guide was written as a quickstart guide for the [[RGB_Swarm_Robot_Project|'''RGB Swarm Robot Project''']], but contains general information about programming e-pucks and using the vision system.<br />
==Checklist==<br />
*Physical Set Up<br />
**Follow the setup as described in [[RGB_Swarm_Robot_Project#Physical_Setup|'''RGB Swarm Robot Project: Physical Setup''']]<br />
**In addition, there needs to be a computer connected to the vision system webcams, as well as to a computer to act as a data logger (this can be the same computer)<br />
***Each analytical program run will need its own XBee radio, more information on this below<br />
*Equipment Needed<br />
**e-pucks with XBee radios, XBee Interface Extension Boards Version 2, and LED Pattern Boards<br />
**XBee radio serial adapter boards with radios<br />
***Each analytical program run on the computer requires its own XBee radio, as a radio cannot be shared from program to program<br />
***A simple set up may requires only two (2) radios, one for the Vision System program, and another for a data logging program<br />
**A bluetooth adapter, internal or external; ours looks like [http://content.etilize.com/Large/11811577.jpg this]<br />
*Computer Programs Needed<br />
**X-CTU, download at the [http://www.digi.com/support/productdetl.jsp?pid=3352&osvid=57&tp=5&s=316 '''Digi X-CTU Download''']<br />
**Tiny Bootloader, download at [http://www.etc.ugal.ro/cchiculita/software/tinyblddownload.htm '''Tiny Bootloader 1.9.8''']<br />
**Visual Studio (2010)<br />
**These are needed to edit and build the code for the e-puck<br />
***Microchip MPLAB<br />
***Microchip MPLAB dsPIC C compiler<br />
<br />
==Computer Setup==<br />
*Download this<br />
*Set up that<br />
==e-puck and e-puck Code==<br />
Follow the instructions here [[RGB_Swarm_Robot_Project#Software|'''RGB Swarm Robot Project: Software''']] to download the epucks code. Then build this code using MPLab and the instructions below.<br />
===Preparing the e-puck code===<br />
*Once the code above is built, it generates a .hex file, which is ready to be uploaded onto the e-puck, skip to [[RGB_Swarm_Quickstart_Guide#Connecting_the_e-puck|'''Connecting the e-puck''']]<br />
**'''Note:''' if you cannot find this .hex file, or you have changed the code, follow the steps below<br />
*To compile the code follow these steps<br />
#Open the workspace file (extension .mcw) of your project/code<br />
#Click the '''Build All''' button (looks like a stack of papers getting pushed down by two arrows)<br />
#Check for the '''BUILD SUCCEEDED''' line at the end of the dialog box, save and close the workspace, and then look for the new .hex file in the folder that the .mcw file is in<br />
#*This will be the .hex file you use for programming and uploading onto the e-puck<br />
<br />
===Connecting the e-puck===<br />
#Go to start menu >> control panel >> Bluetooth Devices<br />
#Click the ''''Add...'''' button in the lower left of the dialog window<br />
#*Turn on the e-puck that you wish to configure/connect to the computer<br />
#*Check the box next to ''''My device is set up and ready to be found.''''<br />
#*Continue with the '''Next''' button, and the computer will search for your e-puck<br />
#Once the computer finds your e-puck, select the appropriate e-puck (should be ID'd by the 4 digit ID number), and assign the 4 digit ID number as the 'passkey' for that e-puck<br />
#*This process reserves a pair of COM(serial) port for that specific e-puck, one as '''Incoming''' and the other as '''Outgoing'''; each e-puck will have a different pair of ports<br />
#Return to the Bluetooth Devices Dialog screen, and click on the '''COM Ports''' tab<br />
#Check which COM ports are assigned to your e-puck, specifically the '''Outgoing''' COM port; this COM port will be used when programming this e-puck with the TinyBootloader program<br />
<br />
*'''Note:''' if using a USB Bluetooth dongle, when the dongle is unplugged and replugged it into a different USB port, or a different dongle altogether is used, it may be necessary to repeat these steps for each e-puck that you wish to use<br />
*'''Note:''' these instructions are for using Windows XP Professional Edition. The exact command paths and dialog boxes may differ from OS to OS<br />
<br />
===Programming the e-puck===<br />
:''Refer to:'' [[Swarm_E-puck_Quickstart_Guide#Programming_the_e-puck|''Swarm E-puck Quickstart Guide: Programming the e-puck'']]<br />
Programming the e-puck requires a specific program, called a ''bootloader'', in order to (re)program the microcontroller. The bootloader makes use of the bluetooth/serial port connection, allowing for wireless programming, as opposed to connecting a specific serial port connector directly to the microcontroller. This expedites the reprogramming process. The bootloader is comprised of two parts, one which operates on the e-puck, writing code to the e-puck microcontroller, and the other, which operates on the computer as user interface. On the e-puck side, the bootloader quickly checks to see if something is trying to re-program the PIC. If so, the bootloader will simply write the new software over the old, and then run the new software. If not, the bootloader simply allows the PIC to run the program currently loaded. The computer side allows a user to select which COM port to program over (allowing one to select which e-puck to program), and selecting what HEX file to program to the e-puck.<br />
<br />
To program the e-puck with a bootloader, follow these instructions:<br />
*'''Note''': you will have to compile and build the C code from the Microchip C compiler, as the .hex file is needed to install onto the e-puck<br />
#Start the Tiny Bootloader program on your computer<br />
#Click the '''Browse''' button and select the .hex file that corresponds to the e-puck code that you want to program<br />
#Set '''115200''' under the '''Comm''' dropdown menu, and select the COM port that corresponds with the OUTGOING COM port of the e-puck<br />
#Click the '''Write Flash''' button<br />
#Immediately click the blue reset button on the e-puck<br />
#*'''Note''': The reset button must be clicked before the blue status bar on the Tiny Bootloader dialog screen reaches zero, representing the program timing out.<br />
#Once the bootloader starts writing software, the blue status bar will grow, representing code being uploaded and programmed<br />
#*'''Note''': In addition to the blue status bar, a orange LED on the e-puck will light up, signaling that a connection has been made<br />
<br />
====e-puck Troubleshooting====<br />
* If Tiny Bootloader cannot connect to the COM port, make sure your e-puck is on, and that you've selected the correct COM port assigned to the e-puck (the ID of the e-puck is on a sticker on top of the bluetooth chip on the e-puck's PCB.<br />
* If Tiny Bootloader can connect to the e-puck but cannot find the PIC, it may be that someone has overwritten the bootloader with another program. If this is the case, you will need to reload the bootloader.<br />
#MPLAB and the ICD2 programmer are needed to reload the Tiny Bootloader software via the flash port of the e-puck<br />
#Go to [http://www.e-puck.org http://www.e-puck.org], goto '''Download >> Software >> Library''', download the zip file, and extract the archive<br />
#Navigate to '''e-puck-lib\tool\bootloader\epuck_side''', look for <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> hex file.<br />
#Open MPLAB<br />
#Go to '''Programmer >> Select Programmer >> MPLAB ICD 2'''<br />
#Go to '''File >> Import'''<br />
#Select the <tt>tinybld_ds6014A_7.37Mhz_115200uart1_8xPLL_with_LEDs.hex</tt> file<br />
#Go to '''Programmer >> Connect''', and when connected, go to '''Programmer >> Program'''<br />
#*'''Note''': flash programming the e-puck takes significantly more time that programming via the bluetooth port, so please be patient<br />
<br />
==XBee Radio==<br />
===Programming the XBee Radio===<br />
:''Refer to: [[Swarm_Robot_Project_Documentation#Configuration_for_e-puck_XBee_radios|Configuration for e-puck XBee radios]]''<br />
These steps are for programming the Xbee Radio, whether for the e-puck or the data logging or vision computers. Certain steps may be skipped however for each kind of radio.<br />
*'''Note''': In order to program the e-puck radio, it will be necessary to remove the XBee radio from the e-puck and place it in one of the Serial Port Adapters connected to the computer<br />
#Start the X-CTU program<br />
#Under the '''PC Settings''' tab, Select the COM Port of the Serial Port Adapter you are using from the list on the left<br />
#Set the '''Baud''' to '''115200''', and leave the other settings alone/standard<br />
#Hit the '''Test/Query''' button to ensure that X-CTU can see/hear your radio<br />
#*'''Note''': If there is difficulty connecting, or the XBee radio cannot be found, it may be necessary to try every frequency listed. X-CTU does not automatically detect what the Baud on the radio is, and while the radios currently used are all set to have a Baud of 115200, they may be reset, and it will be necessary to manually test each Baud freqency via X-CTU.<br />
#Upon success, a dialog box should display 'Communication with modem.. OK', 'Modem Type = <something>', and Modem firmware vision = <other thing>'; click '''OK'''<br />
#If there are no problems connecting, switch the X-CTU program to the '''Terminal''' tab<br />
#In the blank terminal screen, type the following commands below<br />
##Type ''''+++'''' and press <enter>. This signals to the XBee radio that you wish to program it. <br />
##*Wait until the radio responds with '''OK'''<br />
##Type ''''atre''''. This command resets the settings of the XBee radio.<br />
##Type ''''atmy'''' followed by the ID number; e.g. for radio ID 3 the command would be ''''atmy 3''''. This command sets the XBee radio ID.<br />
##*Note that e-puck IDs range from '''1-31''', whereas all the computer radios have an ID of '''0'''.<br />
##Type ''''atap 2''''. This sets the XBee radio to use API mode 2 (escaped characters, which helps ensure packet integrity)<br />
##Type ''''atd6 1''''. This is related to the RTS pin, and allows the radio to hold packets in buffer<br />
##*'''Note: SKIP THIS STEP''' if you are programming a ''computer Xbee radio'', such as for the ''vision system'' or ''data logger.''<br />
##Type ''''atbd 7''''. This sets the Baud to 115200, the frequency used for radios in this project<br />
##Type ''''atwr''''. This writes the settings to non-volatile memory, saving it even when the radio is powered on/off.<br />
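The command sequence above can also be summarized in code. Below is a minimal Python sketch; the helper function is hypothetical (not part of the project's tooling), and actually transmitting the commands over the serial port, including the '+++' guard-time handling, is left to a tool such as X-CTU:<br />

```python
def xbee_setup_commands(radio_id, is_epuck_radio):
    """Build the AT command sequence described in the steps above.

    radio_id: 1-31 for e-puck radios, 0 for computer radios.
    is_epuck_radio: only e-puck radios get 'atd6 1' (the RTS-related
    step that is skipped for computer radios).
    """
    cmds = ["atre",                 # reset the radio's settings
            "atmy %d" % radio_id,   # set the radio ID
            "atap 2"]               # API mode 2 (for packet integrity)
    if is_epuck_radio:
        cmds.append("atd6 1")       # RTS pin setup; skipped for computer radios
    cmds += ["atbd 7",              # baud code 7 = 115200
             "atwr"]                # write settings to non-volatile memory
    return cmds

print(xbee_setup_commands(3, True))
```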
<br />
===Troubleshooting the XBee Radios===<br />
*XBee radios can only be used by one program at a time; if an XBee radio is called by two different programs, this may lead to problems. This is because each XBee radio is connected to one 'virtual' COM port, and that COM port can be used by only one program ''at a time''.<br />
**For instance, after configuring the XBee radios with the X-CTU terminal program, remember to close that program to prevent further problems<br />
**If running multiple RGB Swarm programs on one computer, such as the Vision System and the Matlab RGB Swarm Data Grabber program, two XBee radios with two serial port adapters will be needed<br />
***'''Note''': both radios will and must have ID '''0''' even though they are connected to different programs<br />
*Try some of these steps if your XBee radios do not seem to be working<br />
**''''Not getting any signal from the e-puck XBee radios'''' or ''''E-pucks are not getting any signal from the computer radios''''<br />
***'''Check the serial port adapter boards''': towards the back of the board near the plug, there are two columns of three (3) LED lights. The leftmost column of lights should be blinking. The yellow light signifies that a packet has been received, and a green light signifies that a packet is being sent.<br />
***'''Check the e-pucks''': the e-pucks should be blinking a red light at the back of the e-puck (where the 4 digit ID is) every time a packet is sent from the e-puck<br />
***'''Check the XBee radio settings''': occasionally, the Vision System or Matlab software can corrupt and reset the settings on their respective XBee radios. Make sure that these settings are correct. Do this also for the XBee radios on the e-pucks (try one radio first to see if that fixes the problem, rather than reprogramming them all only to find the problem persists)<br />
***'''Use packet analysis tools''': these analysis tools allow for direct visualization of packets. Below is a list of tools you can use<br />
**#Open X-CTU on a connected serial adapter board and XBee radio, and see if you are getting any signal in the terminal box. If the radios are configured to work with one another (e.g. an e-puck XBee radio and a data logger XBee radio), one should be able to see some signal (it will look like gibberish because it is displayed in hex)<br />
**#Use the Visual Studio packet tools, such as the [[Swarm_Robot_Project_Documentation#Packet_Sender|'''Packet Data Sender''']] or [[Swarm_Robot_Project_Documentation#Packet_Viewer|'''Packet Data Viewer''']], to check whether your XBee radios are working<br />
<br />
==Machine Vision Localization System==<br />
*[[media:RGB_Machine_Vision_Localization_System.zip|'''RGB_Machine_Vision_Localization_System.zip''']]<br />
FILL IN HOW TO QUICK START VISION SYSTEM<br />
<br />
==Analysis Tools==<br />
===Matlab Tools===<br />
Obtain the Matlab Tools and instructions from [[RGB_Swarm_Robot_Project#MATLAB_Code_for_RGB_Swarm_Robot_Project|'''RGB_Swarm_Robot_Project: MATLAB Code for RGB Swarm Robot Project''']]<br />
<br />
===Visual Studio Tools===<br />
*'''Note''': remember to change the ''additional nums'' variable in both of these programs to equal eight (8) if you are using them with the RGB program; if it is set to five (5), they will not work with the e-pucks or the logger radios<br />
*[[media:Swarm packet data viewer.zip|'''Swarm Packet Data Viewer.zip''']]<br />
*[[media:Swarm XBee packet sender.zip|'''Swarm XBee Packet Sender.zip''']]<br />
<br />
[[Category:e-puck]]<br />
[[Category:SwarmRobotProject]]</div>
<hr />
<div><br />
<br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an image of the board mounted on an e-puck in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version, such as the parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and installed on e-puck #3. To check the board design for problems, it was first tested alongside the other e-pucks, which use the previous XBee boards. <br />
<br />
The e-puck #3 upgraded with the new XBee board showed no problems communicating with the other e-pucks: all e-pucks, including e-puck #3, drove themselves to the desired locations defined by the goal.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you may gather from the circuit diagrams above, as each photodiode receives light, a certain amount of current starts to flow through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as the input signal, amplifying it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that the e-puck originally used for its X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
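To make the gain relationship concrete, here is a short Python sketch of the equation above; the 5 kΩ potentiometer setting is an assumed value chosen to reproduce the stated gain of about 20, not a measured one:<br />

```python
R2 = 100e3   # ohms, fixed resistor from the diagram
R_POT = 5e3  # ohms, assumed potentiometer setting (100K / 5K gives a gain of 20)

def amplified_output(v_in, r2=R2, r_pot=R_POT):
    """|Vo| = |Vi * R2 / Rpot|, the amplifier equation above."""
    return abs(v_in * r2 / r_pot)

print(amplified_output(1.0))  # gain: a 1 V input becomes 20 V (ideally)
```

Lowering R<sub>pot</sub> raises the gain, which is how the sensitivity of each channel is tuned.<br />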
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing will block it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the color sensor circuit on the XBee Interface Extension Board. The details of this LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern on each e-puck was used by the machine vision system to recognize that e-puck; however, the black dot pattern requires a white background. The new LED pattern board uses LEDs of the proper brightness instead of the black dots, so the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background, along with the code modification needed to apply it to the machine vision system, is presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files can be downloaded from the link above.<br />
Although we replaced the black dots with LEDs, each e-puck's dot pattern is maintained. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce the power consumption of the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs. The LED board can be turned on and off with the switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. This LED was chosen because it has suitable brightness and power consumption, and its diffused lens lets the machine vision system capture the LED from any position. The resistors used are 68.7 ohm. <br />
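With the board powered by its two AAA cells, the LED current set by the 68.7 ohm resistors can be estimated with Ohm's law. The sketch below assumes a nominal 3 V battery voltage and a 2 V LED forward voltage; the forward voltage is an assumption, not a datasheet value, so treat the result as an order-of-magnitude check:<br />

```python
V_BATT = 3.0  # volts, two AAA cells in series (nominal) -- an assumption
V_F = 2.0     # volts, ASSUMED LED forward voltage; use the datasheet value instead
R = 68.7      # ohms, the resistor value used on the board

# Ohm's law across the current-limiting resistor
I = (V_BATT - V_F) / R
print("Estimated LED current: %.1f mA" % (I * 1e3))
```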
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved from the XBee Interface Extension Board to this LED pattern board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests were done in order to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to see how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a certain photodiode and a certain color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED was increased by 0.25 inch each time, from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED, most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under the room light, the outputs increase by up to 25.9% (R), 22.6% (G), and 77.4% (B) over the no-LED baseline. With the LED 2.5 inches away from the color sensor, the outputs return very close to their original values. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used in this experiment is much brighter than the ones used on the LED pattern board (about 4 times brighter), so further experiments with the LEDs actually used on the LED pattern board are required.<br />
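The percentage increases can be recomputed directly from the with-ambient-light table, taking the reading at 1 inch relative to the no-LED baseline for each channel:<br />

```python
# (no-LED baseline, reading at 1 inch) per channel, from the table above
readings = {"R": (1.000, 1.259), "G": (1.400, 1.716), "B": (0.469, 0.832)}

# percent increase over the no-LED baseline, rounded to one decimal place
increase = {ch: round(100.0 * (lit - base) / base, 1)
            for ch, (base, lit) in readings.items()}
print(increase)
```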
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Different from the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily reaches its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased by 15º each time, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are placed in the same horizontal plane. The LED faces toward the color sensor (that is, the LED is parallel to the horizontal plane with its head facing the color sensor, which lies in the same plane), and the color sensor faces upward. The angle is increased by 15º each time, so that increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it: the LED and the color sensor are on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the peak, under the room light, the outputs increase by up to 15.6%, 19.1%, and 28.7% over their values at 0º. As the angle reaches 90º, the outputs return close to their 0º values. The interference is reduced near 90º because the ambient light is blocked by the LED board: during this experiment the LEDs were mounted on the LED plane, and this plane casts a shadow on the color sensor, reducing the amount of ambient light it receives. That is why the output returns close to its original value as the angle increases.<br />
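As with the distance experiment, the quoted percentages can be recomputed from the with-ambient-light table, comparing the 45º peak to the 0º value for each channel:<br />

```python
# (value at 0 degrees, value at the 45-degree peak) per channel, from the table above
readings = {"R": (0.437, 0.505), "G": (0.425, 0.506), "B": (0.404, 0.520)}

# percent increase of the 45-degree peak over the 0-degree value
increase = {ch: round(100.0 * (peak - base) / base, 1)
            for ch, (base, peak) in readings.items()}
print(increase)
```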
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10 pos. 2 mm pitch socket is used to connect the color sensor to the circuit using wires. Instead, a proper header for the color sensor should be found to connect the color sensor and the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed in to version control for ease. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs, or any PC which is set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the Lims server. Now you can open GIT (Git Bash Shell) and type the following in order to get a copy of the current files on to your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find the following files:<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file used to look at the data the epucks are sending back. The color sensor data from the selected epuck is shown on the left, and the positions of the epucks are shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each epuck.<br />
**'''Note:''' In order to access the log of data which has been received by the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received by each epuck. To change this, simply change the RECDUR variable in the puckUI_OpeningFcn. Also, currently the timer periods of .4 seconds for plot updates and .05 seconds to receive data work efficiently and do not take up too much CPU, but these values can be changed in puckUI_OpeningFcn as well.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll in SerialCommands.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
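As a rough illustration of the kind of computation ProjectorLight.m performs, the sketch below samples a 9 x 9 grid (81 points, matching the matrix size mentioned above) and computes the light incidence angle from vertical at each point. The lamp height and centered-lamp geometry here are assumptions for illustration, not the script's actual parameters:<br />

```python
import math

# Hypothetical geometry: the real ProjectorLight.m parameters may differ.
H = 2.0              # meters, assumed lamp height above the floor
W, D = 2.877, 1.943  # meters, projected image size (113.25" x 76.5")

angles = []
for i in range(9):
    for j in range(9):
        x = (i / 8.0 - 0.5) * W            # horizontal offset from the image center
        y = (j / 8.0 - 0.5) * D
        r = math.hypot(x, y)               # distance from the point below the lamp
        angles.append(math.degrees(math.atan2(r, H)))  # incidence angle from vertical

print(len(angles), "points; max angle %.1f deg" % max(angles))
```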
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB.<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also combiled in MPLAB and run on the XBees. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
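For reference, the generic framing used by XBee API-mode packets is standard across XBee modules: a 0x7E start delimiter, a two-byte big-endian length, the frame data, and a checksum byte equal to 0xFF minus the low byte of the sum of the frame data. The Python sketch below illustrates that standard format; it is not the exact code in this directory, and it omits the byte escaping that API mode 2 adds on the wire:<br />

```python
def build_api_frame(frame_data):
    """Wrap raw frame data in the generic XBee API framing (no escaping)."""
    length = len(frame_data)
    checksum = (0xFF - (sum(frame_data) & 0xFF)) & 0xFF
    return (bytes([0x7E, (length >> 8) & 0xFF, length & 0xFF])
            + bytes(frame_data)
            + bytes([checksum]))

# Example: an AT command frame (type 0x08), frame ID 1, command "MY"
frame = build_api_frame([0x08, 0x01, ord('M'), ord('Y')])
print(frame.hex())
```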
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. The projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light reaching it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no one steps on it wearing shoes. In the future, a more durable light-colored floor may be worth considering. The floor is taped down with packing tape. The tape should not be removed, as the floor has markings which show where the camera calibration dots should go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. Its digital user manual is on a CD in the projection computer. <br />
<br />
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb passes through a color wheel, which casts red, green, and blue onto the mirror array. The fraction of time the mirrors spend on (reflecting light) versus off within each interval determines the intensity of light. For example, to project a bright red, the mirrors would be on more than off during the interval. In the case of our projector, that interval is 8.2 milliseconds. See the pulse-width modulation waveforms below.<br />
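The duty-cycle relationship just described can be sketched as follows. This is a simplified model, not the projector's actual control logic; only the 8.2 ms interval comes from our measurements, and the example on-times are illustrative.<br />

```python
# Simplified model of DLP intensity control: perceived brightness is the
# fraction of the 8.2 ms interval for which a mirror reflects light.
# The 8.2 ms period is from the measurements above; everything else here
# is an illustrative sketch, not the projector's actual firmware.

PERIOD_MS = 8.2  # measured PWM interval of the projector

def perceived_intensity(on_time_ms):
    """Fraction of full brightness for a mirror 'on' for on_time_ms per interval."""
    if not 0.0 <= on_time_ms <= PERIOD_MS:
        raise ValueError("on-time must fit within one PWM interval")
    return on_time_ms / PERIOD_MS

print(perceived_intensity(8.2))   # mirror always on -> 1.0
print(perceived_intensity(2.05))  # on a quarter of the interval
```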
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystoned projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (2876.55 mm x 1943.1 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so you must average the measured intensity over the projector's period to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse-width modulation can lead to problems when recording data. For instance, when first setting up data recording over the XBee radios, it was discovered that the RGB values would fluctuate across a period of several minutes, skewing the data. After further investigation of the projector using the digital oscilloscopes, the problem was traced to the fact that the projector does not refresh at exactly 120 Hz, so its period is slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, as each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are averaged out, giving a clean and stable trace without low-frequency modulations.<br />
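The sampling fix described above can be simulated with a toy model. This is not the e-puck's actual ADC code: `read_sensor` is a hypothetical stand-in producing a 50% duty-cycle PWM light; only the 8.2 ms true period and the 8 ms sampling window come from the text.<br />

```python
# Toy simulation of the averaging fix described above. read_sensor() is a
# hypothetical stand-in for the e-puck's ADC (a 50% duty-cycle PWM light);
# the true 8.2 ms period and the 8 ms sampling window are from the text.

TRUE_PERIOD_MS = 8.2   # actual projector PWM period
WINDOW_MS = 8.0        # sampling window originally assumed

def read_sensor(t_ms):
    """Instantaneous light level: on for the first half of each PWM period."""
    phase = (t_ms % TRUE_PERIOD_MS) / TRUE_PERIOD_MS
    return 1.0 if phase < 0.5 else 0.0

def averaged_reading(start_ms, n_samples=4, steps=80):
    """Average n_samples consecutive windows, approximating each window's
    integral with fine sub-sampling."""
    total = 0.0
    for i in range(n_samples):
        t0 = start_ms + i * WINDOW_MS
        total += sum(read_sensor(t0 + WINDOW_MS * k / steps)
                     for k in range(steps)) / steps
    return total / n_samples
```

A single 8 ms window drifts in and out of phase with the 8.2 ms period, so its reading oscillates slowly; averaging four windows (32 ms, well inside the 400 ms per-packet budget) keeps the result near the true 50% duty cycle at any starting phase.<br />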
<br />
===Projector Mount===<br />
The projector mount was ordered online using 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their fields of view overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested and found to work without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located, and the color of the background no longer affects the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. Thus, we must now consider the light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by the light from an LED. However, since the LED used in the interference tests was much brighter than the LEDs on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show or broadcast to interested parties (such as online video streaming), and for possible use in presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to accommodate control via firewire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x scales the field of view by 1/x; thus a 0.5X magnification lens gives 2X the FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), then apply the magnification: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm converted to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
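The steps above can be checked with a short calculation. This sketch uses the 35 mm-film convention (36 mm frame width) for converting focal length to angular FOV; the frame-width constant is an assumption of that convention, while the 0.6X adapter and 9' floor width come from the text.<br />

```python
import math

# Sketch of the camera-height calculation from the steps above, using the
# 35 mm-film convention (36 mm frame width). The 0.6X adapter and 9 ft
# floor width are from the text; the frame-width constant is an
# assumption of that convention.

def horizontal_fov_deg(focal_mm, frame_width_mm=36.0):
    """Horizontal angular field of view for a given focal length."""
    return 2.0 * math.degrees(math.atan(frame_width_mm / (2.0 * focal_mm)))

def camera_height_ft(floor_width_ft, fov_deg):
    """Height at which the horizontal FOV just spans the floor width."""
    return (floor_width_ft / 2.0) / math.tan(math.radians(fov_deg / 2.0))

fov = horizontal_fov_deg(36.0 * 0.6)          # 0.6X adapter -> 21.6 mm
print(round(fov, 1))                           # 79.6 (degrees)
print(round(camera_height_ft(9.0, fov), 1))    # 5.4 (feet)
```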
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate the change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
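The lookup-table idea in the last bullet might be sketched as follows. Python is used here for clarity (the real table would live on the e-puck in C), and the 360-entry size is an illustrative assumption, not the e-puck's actual number of discrete wheel angles.<br />

```python
import math

# Sketch of the proposed lookup-table replacement for runtime sin/cos.
# The table would live on the e-puck in C; Python is used here for
# clarity, and the 360-entry size is an illustrative assumption, not the
# e-puck's actual ticks-per-revolution.

TICKS_PER_REV = 360  # assumed number of distinct headings

SIN_TABLE = [math.sin(2.0 * math.pi * i / TICKS_PER_REV)
             for i in range(TICKS_PER_REV)]

def sin_from_ticks(ticks):
    """Table lookup replacing a runtime sin() call."""
    return SIN_TABLE[ticks % TICKS_PER_REV]

def cos_from_ticks(ticks):
    """cos(x) = sin(x + quarter turn), reusing the same table."""
    return SIN_TABLE[(ticks + TICKS_PER_REV // 4) % TICKS_PER_REV]
```

With the heading quantized by encoder ticks, the vector components for motion control become two array lookups instead of two trigonometric evaluations.<br />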
<br />
[[Category:SwarmRobotProject]]</div>
<div>
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-puck in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the XBee board, the XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the XBee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. To check the board design for problems, it was first tested with the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, showed no problems communicating with the other e-pucks. All e-pucks, including e-puck #3, moved to their desired locations as the defined goal specified.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you may gather from the circuit diagrams above, as each photodiode receives light, a certain amount of current flows through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal, amplifying it by a ratio particular to the circuit. This ratio, also known as gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels which, on the e-puck, had been used for the X, Y, and Z axes of the accelerometer. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
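The whole sensing chain can be sketched numerically. R<sub>1</sub> and R<sub>2</sub> come from the schematic; the potentiometer setting and the photocurrent below are illustrative assumptions chosen to give the ~20x gain quoted above, not measured values.<br />

```python
# Numeric sketch of the sensing chain described above. R1 and R2 are from
# the schematic; the potentiometer setting and the photocurrent are
# illustrative assumptions chosen to give the ~20x gain quoted in the text.

R1 = 680e3     # ohms: photodiode load, develops V_i
R2 = 100e3     # ohms: feedback resistor
R_POT = 5e3    # ohms: assumed pot setting, giving gain R2/R_POT = 20

def amplified_output(photocurrent_a):
    """V_o for a given photodiode current: V_i = I*R1, then V_o = V_i*R2/R_pot."""
    v_i = photocurrent_a * R1
    return v_i * (R2 / R_POT)

print(amplified_output(1e-7))   # 0.1 uA -> V_i = 0.068 V -> V_o = 1.36 V
```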
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1" header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins (Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedance op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by implementing an LED pattern board at the top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing will block it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. We can use a jumper to connect the color sensor on the LED pattern board to the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, the black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of the proper brightness instead of the black dot pattern. By doing so, the machine vision system can now recognize e-pucks on any background. The reason why the LED pattern is recognized on any background is presented briefly in the Code section below. In addition, in order to apply the LED pattern to the machine vision system, we made a modification to the code. This modification is also presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker; the design files are linked above.<br />
Although we replaced the black dots with LEDs, we maintained each pattern of dots. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. In order to reduce power consumption from the e-puck battery, we use a separate pair of AAA batteries to supply power to the LEDs. The LED board can be turned on and off with the switch.<br />
The millicandela rating of the LEDs used is 4850 mcd, and the LEDs have a diffused lens style. This LED was chosen because it has the proper brightness and power consumption, and it is diffused so that the machine vision system can capture it from any position. The resistors used are 68.7 ohm. <br />
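For reference, the resistor value is consistent with a standard series current-limiting calculation. The 3.0 V supply follows from the two AAA cells; the forward voltage and target current below are illustrative assumptions, not datasheet values for this LED.<br />

```python
# Standard series current-limiting resistor calculation. The 3.0 V supply
# follows from the two AAA cells; the forward voltage (2.1 V) and target
# current (13 mA) are illustrative assumptions, not datasheet values for
# the Digikey 516-1697-ND LED.

def led_resistor_ohms(v_supply, v_forward, i_led_a):
    """R = (V_supply - V_f) / I for one LED in series with its resistor."""
    return (v_supply - v_forward) / i_led_a

print(round(led_resistor_ohms(3.0, 2.1, 0.013), 1))  # ~69.2 ohms
```

This lands close to the 68.7 Ω actually used, suggesting the board drives each LED at roughly 13 mA under these assumptions.<br />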
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved from the XBee Interface Extension Board to the LED pattern board so that nothing blocks the sensor. Thus, as you can see in the figure, the color sensor is placed at the front, and each photodiode is connected to the 10-pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests needed to be done to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to see how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a certain photodiode and a certain color.<br />
*LED: 18950 mcd (millicandela), Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED was increased by 0.25 inch each time, from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output of each photodiode, V<sub>o</sub> in the circuit diagram above, was measured.<br />
<br />
With Ambient Light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED. The color sensor is most affected when the LED is closest to it; as the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under the room light, the R, G, and B outputs increase by 25.9%, 22.6%, and 77.4% over their no-LED values. Once the LED is 2.5 inches away from the color sensor, the output becomes very close to the original value. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used here is much brighter than the ones used on the LED pattern board; its brightness is about 4 times that of the pattern-board LEDs. Thus, more experiments with the LEDs used on the LED pattern board are required.<br />
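The percentage increases can be recomputed directly from the with-ambient-light table, taking the "No LED" row as the baseline and the 1-inch row as the peak:<br />

```python
# Recomputes the interference percentages from the with-ambient-light
# table above: baseline = "No LED" row, peak = 1 inch row.

baseline = {"R": 1.0, "G": 1.4, "B": 0.469}
at_1_inch = {"R": 1.259, "G": 1.716, "B": 0.832}

def percent_increase(channel):
    """Increase at 1 inch as a percentage of the no-LED reading."""
    return 100.0 * (at_1_inch[channel] - baseline[channel]) / baseline[channel]

for ch in "RGB":
    print(ch, round(percent_increase(ch), 1))   # R 25.9, G 22.6, B 77.4
```

Relative to the no-LED baseline, the blue channel rises by roughly 77%; dividing the same 0.363 V increase by the 1-inch reading instead gives a figure of 43.6%.<br />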
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output, V<sub>o</sub>, easily saturates.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased by 15º each time, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are in the same horizontal plane: the LED lies parallel to the plane with its head facing the color sensor, and the color sensor faces upward. As the angle is increased by 15º each time, increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it: the LED and the color sensor are on the same vertical line, with the LED facing downward.<br />
<br />
5. The voltage before amplification of each photodiode, V<sub>i</sub> in the circuit diagram above, is measured.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the peak under the room light, the R, G, and B outputs increase by up to 15.6%, 19.1%, and 28.7% over V<sub>i</sub> at 0º. As the angle reaches 90º, the output becomes very close to its value at 0º. The interference is reduced near 90º because the ambient light is blocked by the LED board: during this experiment the LEDs were mounted on the LED plane, which casts a shadow on the color sensor and decreases the amount of ambient light it receives. That is why the output returns toward its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following ways.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10 pos. 2 mm pitch socket is used to connect the color sensor to the circuit with wires. Instead, a proper header for the color sensor should be found to connect the color sensor and the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed in to version control for ease. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs, or any PC which is set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open GIT (Git Bash shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find the following files:<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file which is used to look at the data the epucks are sending back. The color sensor data from the selected epuck is shown on the left, and the positions of each epuck are shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each epuck.<br />
**'''Note:''' In order to access the log of data which has been received from the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received from each epuck. To change this, simply change the RECDUR variable in puckUI_OpeningFcn. Also, the current timer periods of .4 seconds for plot updates and .05 seconds to receive data work efficiently and do not take up too much CPU, but these values can be changed in puckUI_OpeningFcn as well.<br />
*'''serialdllsample.m''' - This file contains all of the MATLAB functions needed to open the port, send commands, etc. from MATLAB using the DLL.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.1.0. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. To access these files, simply open the workspace rgb_swarm_epucks_rwc.mcw in MPLAB.<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the e-pucks and the vision/data PC (currently<br />
just code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. This projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable, light-colored floor may be desirable. The floor is taped down with packing tape. The tape should not be removed, as the floor has markings which show where the camera calibration dots should go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array. The fraction of time the mirrors are on (reflecting light) versus off within a fixed interval determines the intensity of light. For example, if a dark red were being projected, the mirrors would be on more than off during that interval. In the case of our projector, that interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (approximately 2876.6 mm x 1943.1 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse width modulates the color output. So you need to average the measured intensity over the period of the projector to measure the color. The period of the projector is 8.2ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording for the XBee radios, it was discovered that the RGB values would fluctuate over a period of several minutes, skewing the data. After more research into the projector, using the digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, resulting in a period slightly off from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, as each puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are phased out, resulting in a clean and stable trace without low-frequency modulations.<br />
<br />
===Projector Mount===<br />
The projector mount was ordered online using 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their views overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found that it does not have any problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located; the color of the background also does not affect the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if the sensor remained on the XBee Interface Extension Board. Thus, we now have to consider the light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by the light from the LEDs. However, since the LEDs used in our light interference tests were much brighter than the LEDs used on the LED pattern board, more experiments are needed in order to obtain more accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to accommodate control via firewire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***The amount of magnification (x) gives 1/x the FOV; thus a 0.5X magnification lens gives 2X the FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), and then apply the magnification appropriately: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles, to calculate the height need for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º using a focal length of 36mm converted to 21.6mm by 0.6X magnification, then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-puck seen in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the XBee board, the XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the XBee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. To check for any problems with this board design, it was first tested with the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, did not show any problems communicating with the other e-pucks. As intended, all e-pucks, including e-puck #3, moved themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you can see from the circuit diagrams above, as each photodiode receives light, a certain amount of current starts to flow through the photodiode and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as the input signal, amplifying it by a ratio particular to the circuit. This ratio, also known as gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used on the e-puck for the X, Y, and Z axes of the accelerometer. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted by changing the resistance of the potentiometer.<br />
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by implementing an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing will block it. All other components of the color sensor circuit remain on the XBee Interface Extension Board, and only the color sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the color sensor circuit on the XBee Interface Extension Board. The details of this LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, the black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of the proper brightness instead of the black dot pattern. By doing so, the machine vision system can now recognize e-pucks on any background. The reason why this LED pattern is recognized on any background is presented briefly in the Code section below. In addition, in order to apply this LED pattern to the machine vision system, we made a modification to the code. This modification is also presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED pattern board was created using Traxmaker.<br />
Although we replaced the black dots with LEDs, we maintained each e-puck's pattern of dots. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. In order to reduce power consumption from the e-puck battery, we added a separate pair of AAA batteries to supply power to the LEDs. The LED board can be turned on and off with the switch.<br />
The millicandela rating of the LEDs used is 4850 mcd, and the LEDs have a diffused lens style. This LED was chosen because it has the proper brightness and power consumption, and it is diffused so that the machine vision system can capture it from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved to the LED pattern board from the XBee Interface Extension Board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests needed to be done in order to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to see how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because the white LED covers the entire wavelength range of visible light. An experiment with a white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a certain photodiode and a certain color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED is increased by 0.25 inch each time, from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED. The color sensor is most affected when the LED is closest to it; as the distance between the LED and the color sensor increases, the interference decreases. When the color sensor is most affected by the LED in the presence of room light, the outputs increase by up to 25.9%, 22.6%, and 43.6% of the original outputs. When the LED is 2.5 inches away from the color sensor, the output becomes very close to its original value. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used in this experiment is much brighter than the ones used on the LED pattern board (about 4 times brighter). Thus, more experiments with the LEDs used on the LED pattern board are required.<br />
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to see how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output, V<sub>o</sub>, easily saturates.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given in the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased by 15º each time, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are placed in the same horizontal plane. The LED faces toward the color sensor (that is, the LED is parallel to the horizontal plane with its head facing the color sensor, which lies in the same plane), and the color sensor faces upward. We increased the angle by 15º each time, so that increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing it; the LED and the color sensor are then on the same vertical line, and the LED faces downward.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
* The reason to measure the voltage before amplification is that the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by light from the LED. The color sensor is most affected by the LED when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the point of greatest interference under room light, the output increases by up to 15.6%, 19.1%, and 28.7% of V<sub>i</sub> for the three channels. As the angle approaches 90º, the output returns very close to its value at 0º. The interference is reduced near 90º because the ambient light is blocked by the LED board: in this experiment the LEDs are mounted on the LED plane, which casts a shadow on the color sensor, so the amount of ambient light the sensor receives decreases and the output returns toward its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be inserted completely through the hole.<br />
*Currently, a 10-position 2 mm pitch socket is used to connect the color sensor to the circuit with wires. Instead, a proper header for the color sensor should be found so the sensor can be connected to the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB IDE v8.33<br />
<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed into version control for convenience. The version control system used is Git, http://git-scm.com/. <br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next, you will need access to the LIMS server. Go to one of the swarm PCs (or any PC set up to access the server) and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now open Git (the Git Bash shell) and type the following, pressing Enter after each command, to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find the following files:<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file used to look at the data the epucks are sending back. The color sensor data from the selected epuck is shown on the left, and the positions of the epucks are shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each epuck.<br />
**'''Note:''' In order to access the log of data which has been received from the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received from each epuck. To change this, simply change the RECDUR variable in puckUI_OpeningFcn. Also, the current timer periods of 0.4 seconds for plot updates and 0.05 seconds for receiving data work efficiently and do not take up too much CPU, but these values can be changed in puckUI_OpeningFcn as well.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
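The "last 100 packets" behavior of data_log described above is a circular (ring) buffer. The actual implementation lives in the MATLAB code; the sketch below is a hypothetical C version of the same idea, with LOG_CAPACITY standing in for RECDUR.<br />

```c
#define LOG_CAPACITY 100   /* stands in for the RECDUR setting described above */

typedef struct {
    double values[LOG_CAPACITY];
    int head;   /* index of the next slot to overwrite */
    int count;  /* number of valid entries; saturates at LOG_CAPACITY */
} RingLog;

void ring_log_init(RingLog *log) {
    log->head = 0;
    log->count = 0;
}

/* Record one value, overwriting the oldest once the log is full. */
void ring_log_push(RingLog *log, double v) {
    log->values[log->head] = v;
    log->head = (log->head + 1) % LOG_CAPACITY;
    if (log->count < LOG_CAPACITY)
        log->count++;
}
```

After any number of pushes, the buffer holds at most the 100 most recent values, which is why older packets disappear from data_log.<br />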
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB.<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e., the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of the packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands DLL for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data PC (currently<br />
just the code dealing with our communication protocol).<br />
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. This projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor which the epucks roll on must be completely enclosed so that the only light which reaches it is the light from the projector. Also, this floor must be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of incoming light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but it holds up as long as no one steps on it wearing shoes. In the future, a more durable light-colored floor may be preferable. Also, the floor is taped down with packing tape. The tape should not be removed, as the floor has markings which show where the camera calibration dots should go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is a DLP projector, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array. The frequency with which the mirrors turn on and off (reflect light or not) determines the intensity of the light. For example, the darker the projected red, the more time the mirrors spend off within each interval. In the case of our projector that interval is 8.2 milliseconds. See the pulse width modulation waveforms below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (2876.6 mm x 1943.1 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates the color output, so you need to average the measured intensity over the projector's period to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when data recording over the XBee radios was first set up, it was discovered that the RGB values would fluctuate over a period of several minutes, skewing the data. After more investigation of the projector, including measurements with the digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, so its true period is slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or 32 ms of data, since each e-puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are averaged out, yielding a clean and stable trace without low-frequency modulations.<br />
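The averaging fix can be sketched as below. This is illustrative, not the actual e-puck firmware: sample_source is a stub standing in for one ADC read of a color channel per ~8.2 ms projector period, with simulated values that drift in phase the way the real sensor output does.<br />

```c
#define NUM_SAMPLES 4   /* samples averaged per reading, as described above */

/* Stub standing in for one ADC read per projector PWM period. */
static int sample_source(int n) {
    static const int simulated[8] = {510, 498, 505, 495, 512, 496, 503, 497};
    return simulated[n % 8];
}

/* Average NUM_SAMPLES consecutive per-period samples; the slow beat between
 * the 8 ms sampling window and the projector's true period cancels out. */
int averaged_reading(int first_sample) {
    long sum = 0;
    for (int i = 0; i < NUM_SAMPLES; i++)
        sum += sample_source(first_sample + i);
    return (int)(sum / NUM_SAMPLES);
}
```

No matter where in the drift the sampling starts, the 4-sample average stays stable, which is exactly the behavior described above.<br />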
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable, so the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area; their views overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found that it works without problems. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located; the color of the background also does not affect the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if the sensor remained on the XBee Interface Extension Board. Thus, we now have to consider the light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by the light from the LEDs. However, since the LEDs used in our interference tests were much brighter than the LEDs used on the LED pattern board, we have to do more experiments in order to obtain more accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and FireWire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to be controlled via a FireWire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***A magnification of x multiplies the focal length by x and therefore widens the field of view by roughly 1/x; thus a 0.5X magnification lens gives about 2X the FOV<br />
***Find the focal length the camera has at its widest view (this is the smallest number, in mm), and then apply the magnification: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º (using a focal length of 36mm reduced to 21.6mm by the 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update the vision system to accommodate the change between black/white pattern recognition and LED/light intensity recognition (eliminating the need to go through the code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing program to 'lawnmower' e-puck, allowing to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations, which take more cycles than desired<br />
**Calculate the discrete number of possible angles per wheel (using ticks/revolution), create a lookup table, check the angle from the vision system packets, and use the standardized table to calculate the appropriate vector (reducing the total number of calculations and eliminating the use of sin/cos)<br />
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18860RGB Swarm Robot Project Documentation2010-07-13T15:12:23Z<p>Ryan Cook: /* Software */</p>
<hr />
<div><br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-puck in the center figure. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as the parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the XBee board, the XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project; for this reason, the XBee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. In order to check for any problems in this board design, it was first tested against the other e-pucks, which use the previous XBee boards. <br />
<br />
E-puck #3, upgraded with the new XBee board, did not show any problems communicating with the other e-pucks. In accordance with the defined goal, all e-pucks, including e-puck #3, located themselves at the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As the circuit diagrams above show, when each photodiode receives light, a current flows through the photodiode and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light reaching it. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as its input signal and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used on the e-puck for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted by changing the resistance of the potentiometer.<br />
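The relation above is easy to sanity-check numerically. The sketch below uses the stated R<sub>2</sub> = 100K; the 5K potentiometer setting in the usage example is an inference from the stated gain of about 20, not a measured value, since the pot is adjustable.<br />

```c
/* |Vo| = |Vi| * R2 / Rpot -- the amplifier relation given above. */
double amplified_output(double v_in, double r2_ohm, double rpot_ohm) {
    return v_in * (r2_ohm / rpot_ohm);
}
```

With R<sub>2</sub> = 100K and the pot set near 5K, amplified_output(1.0, 100e3, 5e3) gives 20.0, i.e., a gain of 20, so a 0.1 V photodiode signal comes out as roughly 2 V at the ADC.<br />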
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedence op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can be used to connect the color sensor on the LED pattern board to the color sensor circuit on the XBee Interface Extension Board. The details of this LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, the black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of appropriate brightness instead of the black dot pattern. By doing so, the machine vision system can now recognize e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below. In addition, the code modification made to apply this LED pattern to the machine vision system is also presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker.<br />
Although we replaced the black dots with LEDs, we maintained each dot pattern. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. In order to reduce the power consumption of the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs. The LED board can be turned on and off by the switch.<br />
The millicandela rating of the LEDs used is 4850 mcd, and the LEDs have a diffused lens style. These LEDs were chosen because they have appropriate brightness and power consumption, and the diffused lens allows the machine vision system to capture the LED from any position. The resistors used are 68.7 ohm. <br />
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor had to be moved from the XBee Interface Extension Board to this LED pattern board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots<br />
*3x 68.7 ohm resistors : Some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests needed to be done in order to note the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to show how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because the white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed in the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED was increased by 0.25 inch each time, from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient Light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As the two graphs above show, the color sensor is affected by the light from the LED. The color sensor is most affected when the LED is closest to it; as the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the R, G, and B outputs increase by up to 25.9%, 22.6%, and 43.6%. Once the LED is 2.5 inches away from the color sensor, the output becomes very close to the original value. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used here is much brighter than the ones used on the LED pattern board: its rated brightness is about 4 times that of the pattern-board LEDs. Thus, further experiments with the LEDs actually used on the LED pattern board are required.<br />
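As a quick check of the percentages quoted above, the rise of each channel at the closest distance can be recomputed from the ambient-light table. Note that, computed against the no-LED baseline, the blue channel rises by roughly 77%; the quoted 43.6% matches dividing by the 1-inch reading instead.

```python
# Readings (volts) from the "with ambient light" table above.
baseline = {"R": 1.0, "G": 1.4, "B": 0.469}    # no LED
at_1_inch = {"R": 1.259, "G": 1.716, "B": 0.832}

# Rise of each channel relative to its no-LED baseline.
rise = {ch: (at_1_inch[ch] - baseline[ch]) / baseline[ch] for ch in "RGB"}
print({ch: f"+{r:.1%}" for ch, r in rise.items()})
```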
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to measure how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily saturates at its maximum.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased by 15º each time, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor are in the same horizontal plane: the LED lies parallel to that plane with its head pointed at the color sensor, which faces upward. As the angle is increased in 15º steps, increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing straight down at it, so the two lie on the same vertical line.<br />
<br />
5. The voltage before amplification, V<sub>i</sub> as shown in the circuit diagram above, is measured for each photodiode.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the peak under room light, V<sub>i</sub> increases by up to 15.6%, 19.1%, and 28.7% for R, G, and B respectively. As the angle approaches 90º, the output returns close to its 0º value. The reason the interference drops near 90º is that the ambient light is blocked by the LED board: in this experiment the LEDs are mounted on the LED plane, which casts a shadow on the color sensor as it moves overhead. The amount of ambient light reaching the color sensor therefore decreases, which is why the output approaches its original value as the angle increases.<br />
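The peak-interference percentages quoted above can be re-derived from the ambient-light table: the rise of V<sub>i</sub> at 45º relative to the 0º baseline.

```python
# V_i readings (volts) from the "with ambient light" angle table above.
v0 = {"R": 0.437, "G": 0.425, "B": 0.404}    # 0 degrees (baseline)
v45 = {"R": 0.505, "G": 0.506, "B": 0.520}   # 45 degrees (peak interference)

rise = {ch: (v45[ch] - v0[ch]) / v0[ch] for ch in "RGB"}
print({ch: f"+{r:.1%}" for ch, r in rise.items()})  # ~15.6%, 19.1%, 28.7%
```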
<br />
====Next Steps====<br />
The LED Pattern Board design above needs to be modified in the following parts.<br />
*The hole size for the LEDs has to increase so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to increase so that the switch can be completely inserted through the hole.<br />
*Currently, a 10-position 2 mm pitch socket is used to connect the color sensor to the circuit with wires. Instead, a proper header for the color sensor should be found to connect the color sensor to the circuit more conveniently.<br />
<br />
==Software==<br />
The following compilers were used to generate all the code for the RGB Swarm epuck project:<br />
*Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
*MatLab 7.4.0<br />
*MPLAB v8.33<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed in to version control for ease. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download GIT for Windows at http://code.google.com/p/msysgit/. Next you will need access to the LIMS server. Go to one of the swarm PCs, or any PC which is set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open GIT (Git Bash Shell) and type the following to get a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find the following files:<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file used to look at the data the epucks send back. The color sensor data from the selected epuck is shown on the left, and the positions of the epucks are shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each epuck.<br />
**'''Note:''' To access the log of data received from the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received from each epuck. To change this, simply change the RECDUR variable in puckUI_OpeningFcn. Also, the current timer periods of 0.4 seconds for plot updates and 0.05 seconds for receiving data work efficiently and do not take up too much CPU, but these values can also be changed in puckUI_OpeningFcn.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
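The data_log behavior described in the note above (keep only the most recent packets per epuck) amounts to a fixed-length ring buffer. A hypothetical Python sketch of the idea follows; the names and structure are illustrative, not the actual MATLAB code.

```python
from collections import deque

RECDUR = 100  # packets retained per epuck, mirroring the RECDUR variable above

# Hypothetical sketch: each epuck ID maps to a fixed-length buffer that
# silently discards the oldest packet once RECDUR packets are stored.
data_log = {}

def log_packet(epuck_id, packet):
    data_log.setdefault(epuck_id, deque(maxlen=RECDUR)).append(packet)

# Simulate 150 received packets from epuck #3: only the last 100 survive.
for i in range(150):
    log_packet(3, {"seq": i})
print(len(data_log[3]), data_log[3][0]["seq"])  # 100 packets, oldest is seq 50
```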
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by Visual Studio and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (dynamic-link library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLAB.<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is the indoor "gps" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low-level XBee packets. It is also compiled in MPLAB and run on the e-pucks. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
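For reference, the standard low-level XBee API frame that code like this deals with has a fixed layout: a 0x7E start delimiter, a 16-bit big-endian length, the frame data, and a checksum byte equal to 0xFF minus the low byte of the frame-data sum. A minimal sketch follows; the example payload bytes are arbitrary, not taken from the project's protocol.

```python
def xbee_api_frame(frame_data: bytes) -> bytes:
    """Wrap payload bytes in a standard XBee API frame:
    0x7E start delimiter, 16-bit big-endian length, payload, checksum."""
    length = len(frame_data)
    checksum = 0xFF - (sum(frame_data) & 0xFF)
    return (bytes([0x7E, (length >> 8) & 0xFF, length & 0xFF])
            + frame_data + bytes([checksum]))

# Example: a 16-bit-address TX request (API ID 0x01, frame ID 0x01,
# destination 0x0003, no options) carrying the payload "HI".
frame = xbee_api_frame(bytes([0x01, 0x01, 0x00, 0x03, 0x00]) + b"HI")
print(frame.hex())  # 7e00070101000300484969
```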
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. This projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light which reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse Absolute Zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable light-colored floor should be considered. Also, the floor is taped down with packing tape. The tape should not be removed, as the floor has markings which show where the camera calibration dots should go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. Its digital user manual is on a CD in the projection computer. <br />
<br />
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb shines through a color wheel, which shines red, green, and blue onto the mirror array. The frequency with which the mirrors turn on and off (reflect light or don't) determines the intensity of light. For example, if a dark red were being projected, the mirrors would be on more than off in a certain interval. In the case of our projector that interval is 8.2 milliseconds. See the pulse width modulation section below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" or (in mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates the color output, so you need to average the measured intensity over the projector's period to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording over the XBee radios, it was discovered that the RGB values would fluctuate across a period of several minutes, skewing the data. After further investigation of the projector, including with digital oscilloscopes, the problem was traced to the fact that the projector does not run at exactly 120 Hz, resulting in a period slightly different from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the value. There is time to record 4 samples, or 32 ms of data, as each puck has 400 ms (0.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are phased out, yielding a clean and stable trace without low-frequency modulations.<br />
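The sampling problem described above can be illustrated with a toy PWM model; the duty cycle and sample counts here are invented for illustration, not measured from the projector. A window matched to the true 8.2 ms period recovers the average exactly, while an assumed 8 ms window leaves a phase-dependent error that shows up as slow drift.

```python
# Toy square-wave PWM with the projector's true 8.2 ms period.
PERIOD = 8.2e-3   # true projector period (s)
DUTY = 0.3        # assumed duty cycle for illustration

def pwm(t):
    # 1.0 while the mirror is "on", 0.0 while "off"
    return 1.0 if (t % PERIOD) < DUTY * PERIOD else 0.0

def window_mean(t0, width, n=10000):
    # numeric average of the waveform over [t0, t0 + width]
    dt = width / n
    return sum(pwm(t0 + i * dt) for i in range(n)) / n

# Averaging over the true period recovers the duty cycle...
exact = window_mean(0.0, PERIOD)           # ~0.3
# ...while a mismatched 8 ms window has an error that depends on phase,
# which is what appears as slow drift when the phase slips over time.
errors = [abs(window_mean(p * 1e-4, 8.0e-3) - DUTY) for p in range(20)]
print(exact, max(errors))
```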
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their views overlap by one calibration object, as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and we found no problems with it. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located, and the color of the background no longer affects the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. Thus, we now have to consider the light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by the light from an LED. However, since the LED used in our interference tests was much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possibly for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet, as they are designed to accommodate control via a firewire cable due to the necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates 9' x 6' floor, allowing for the camera to be only 5.4' off the ground<br />
***A magnification of x gives 1/x times the FOV; thus a 0.5X magnification lens gives 2X the field of view<br />
***Find the focal length the camera has at its widest view (the smallest number, in mm), then apply the magnification to it, so 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles to calculate the height needed for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV calculated is 79.6º (using a focal length of 36mm converted to 21.6mm by 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
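The height calculation in the example above can be reproduced directly, assuming the FOV calculator uses a 36 mm-wide 35 mm-equivalent frame (which is what makes a 21.6 mm focal length give about 79.6°):

```python
import math

# Worked height calculation: 9 ft floor width, 36 mm widest focal length
# reduced by a 0.6X wide-angle adapter to an effective 21.6 mm.
floor_width_ft = 9.0
focal_mm = 36.0 * 0.6             # effective focal length with the adapter
sensor_width_mm = 36.0            # 35 mm-equivalent frame width (assumed)

hfov = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))
height_ft = (floor_width_ft / 2) / math.tan(math.radians(hfov / 2))
print(round(hfov, 1), round(height_ft, 1))  # ~79.6 deg, ~5.4 ft
```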
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing program to 'lawnmower' e-puck, allowing to collect data from all positions and all orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations which take more cycles than desired<br />
**Calculate discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, use standardized table to calculate appropriate vector (reducing total number of calculations and eliminating use of sin/cos)<br />
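The lookup-table idea in the last bullet can be sketched as follows. The tick count here is an assumption for illustration; the actual discrete set of headings depends on the e-puck's encoder resolution and wheelbase.

```python
import math

# Sketch of the proposed lookup table: if the heading can only take a
# discrete set of values (one per encoder tick), sin/cos can be
# precomputed once instead of being evaluated on the robot each cycle.
TICKS_PER_REV = 1000   # assumed number of discrete angle steps

SIN_TABLE = [math.sin(2 * math.pi * k / TICKS_PER_REV)
             for k in range(TICKS_PER_REV)]

def sin_from_ticks(ticks):
    return SIN_TABLE[ticks % TICKS_PER_REV]

def cos_from_ticks(ticks):
    # cos(x) = sin(x + 90 deg): reuse the same table with a quarter-turn offset
    return SIN_TABLE[(ticks + TICKS_PER_REV // 4) % TICKS_PER_REV]

print(sin_from_ticks(250), cos_from_ticks(0))
```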
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cookhttps://hades.mech.northwestern.edu//index.php?title=RGB_Swarm_Robot_Project_Documentation&diff=18859RGB Swarm Robot Project Documentation2010-07-13T15:08:03Z<p>Ryan Cook: /* SerialCommands */</p>
<hr />
<div><br />
== Hardware ==<br />
<br />
<br />
<br />
===XBee Interface Extension Board Version 2===<br />
<br />
{|<br />
| [[Image:XBee_interface_extenstion_board_v1.gif|250px|thumb|alt=Traxmaker Image of the Previous Xbee Extension Board|Xbee Interface Extension Board Version]]<br />
| [[Image:IMG 1390-1-.jpg|300px|thumb|alt=Image of an e-Puck with the RGB Xbee Extension Board|e-Puck with previous board ]]<br />
| [[Image:XBee_interface_extenstion_board_v2.gif|vertical|250px|thumb|alt=Traxmaker Image of the Xbee Interface Exension Board Version 2|Xbee Interface Extension Board Version 2]]<br />
| [[Image:E puck XBee board2.JPG|vertical|169px|thumb|e-puck with Xbee Board 2]]<br />
| <br />
|}<br />
<br />
====Previous Version====<br />
<br />
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.<br />
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-puck seen in the figure in the center. This version of the XBee Interface Board does not contain a color sensor. Details about this version of the XBee Interface Extension Board, such as the parts used and Traxmaker files, can be found on the [[Swarm_Robot_Project_Documentation#Current_Version|Swarm Robot Project Documentation page]].<br />
<br clear="all"><br />
<br />
====Version 2====<br />
This is the updated version of the Xbee board, or XBee Interface Extension Board Version 2. It is designed by Michael Hwang to accommodate further projects in the Swarm Robot Project. For this reason, the Xbee Interface Extension Board Version 2 has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the Xbee Board Version 2 can be found below:<br />
*[[Media:epuck_xbee_board_v2.zip|'''Xbee Interface Extension Board Version 2.zip''']].<br />
<br />
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper. <br />
<br />
The XBee Interface Extension Board Version 2 design was built and implemented on e-puck #3. To check for any problems in the board design, it was first tested with the other e-pucks, which use the previous XBee boards. <br />
<br />
The e-puck #3 upgraded with the new XBee board showed no problems communicating with the other e-pucks. Following the defined goal, all e-pucks, including e-puck #3, moved themselves to the desired locations.<br />
<br clear="all"><br />
=====Color Sensor Circuit=====<br />
{|<br />
| [[Image:color_sensor_circuit_diagram_v1_R.gif|300px|thumb|Red Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_G.gif|315px|thumb|Green Color Sensor Circuit]]<br />
| [[Image:color_sensor_circuit_diagram_v1_B.gif|300px|thumb|Blue Color Sensor Circuit]]<br />
|}<br />
<br />
As you may gather from the circuit diagrams above, as each photodiode receives light, a current starts to flow through it and generates a voltage across R<sub>1</sub> = 680K. Each photodiode is designed to detect a certain range of wavelengths of light, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light it receives. The op-amp (LMC6484) takes the voltage generated across R<sub>1</sub> as the input signal, amplifying it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels which on the e-puck had been used for the X, Y, and Z axis accelerometers. This is convenient, as each accelerometer axis can be used as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding equation for the circuits illustrated above is as follows:<br />
<br />
<math>|V_o| = |V_i * \frac{R_2}{R_{pot}}|</math><br />
<br />
*R<sub>pot</sub> = resistance of the potentiometer (shown in the diagram)<br />
*R<sub>2</sub> = 100K (shown in the diagram)<br />
*V<sub>i</sub> = voltage across R<sub>1</sub> = 680K, which the op-amp takes as an input<br />
*V<sub>o</sub> = output signal amplified from the op-amp<br />
<br />
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, V<sub>i</sub>, is amplified to be 20V<sub>i</sub>, which is V<sub>o</sub>. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.<br />
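A quick numeric check of the gain relation above; the 5 kΩ potentiometer setting is inferred from the stated gain of about 20 with R<sub>2</sub> = 100K, not a measured value.

```python
# Amplifier relation from the text: |V_o| = |V_i * R2 / R_pot|.
R2 = 100e3  # ohms, fixed resistor R2 from the circuit diagram

def v_out(v_in, r_pot):
    """Amplified output for input v_in with the pot set to r_pot ohms."""
    return v_in * R2 / r_pot

# With the pot near 5 kOhm the gain is 20, matching the stated gain:
print(v_out(0.1, 5e3))  # 0.1 V in -> 2.0 V out
```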
<br />
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14. <br />
<br />
Output Pins<br />
*Pin #7 - Amplified Red photodiode signal<br />
*Pin #1 - Amplified Green photodiode signal<br />
*Pin #14 - Amplified Blue photodiode signal<br />
<br />
=====Parts used=====<br />
Parts used in both the previous version and the new version of XBee Interface Extension Board<br />
*2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
*LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND) <br />
*2.2uF tantalum capacitor (Digikey 399-3536-ND) <br />
*2x Samtec BTE-020-02-L-D-A (Order directly from Samtec) <br />
*0.1"header pins for RTS and CTS pins (you can also use wire for a permanent connection) <br />
*2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins(Digikey S9000-ND) <br />
<br />
Additional parts for new version of XBee Interface Extension Board<br />
*3x 100K resistors<br />
*3x 680K resistors<br />
*3x 10K potentiometer<br />
*3x 5pF capacitor<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, [http://jp.hamamatsu.com/resources/products/ssd/pdf/s9032-02_kspd1067e03.pdf Datasheet])<br />
*1x High impedance op-amp LMC6484<br />
<br />
=====Future modifications=====<br />
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by implementing an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing will block it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper can connect the color sensor on the LED pattern board to the color sensor circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.<br />
----<br />
<br />
===LED Pattern Board===<br />
[[Image:LED_pattern_board.gif|280px|right|thumb]]<br />
[[Image:E puck LED board.jpg|280px|right|thumb|e-puck with LED pattern board]]<br />
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, a unique black dot pattern on each e-puck allowed the machine vision system to identify it. However, the black dot pattern requires a white background in order for the machine vision system to recognize the e-pucks. The new LED pattern board replaces the black dots with LEDs of appropriate brightness, so the machine vision system can recognize e-pucks on any background. The reason this LED pattern works on any background, along with the code modifications made to support it, is presented in the Code section below. The PCB file can be downloaded here:<br />
*[[Media:LED_Pattern_Board.zip|'''LED Pattern Board.zip''']]<br />
**This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the necessary Gerber and drill files necessary for ordering PCBs.<br />
<br />
====LED Pattern Board Design====<br />
This LED Pattern Board was created using Traxmaker.<br />
Although we replaced the black dots with LEDs, each dot pattern is preserved: the horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. In order to reduce power consumption from the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the LED board can be turned on and off with a switch.<br />
The LEDs used are rated at 4850 mcd and have a diffused lens style. These LEDs were chosen because they have appropriate brightness and power consumption, and the diffusion lets the machine vision system capture them from any position. The resistors used are 68.7 ohm. <br />
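The series resistor value can be sanity-checked with Ohm's law. The sketch below is illustrative only: the 3.0 V supply (two AAA cells) matches the board, but the LED forward voltage is an assumed typical value, not a measured figure for this part.

```python
# Rough LED current-limiting check. The forward voltage (2.0 V) is an
# ASSUMED typical value, not taken from the actual LED's datasheet.
def led_current_mA(v_supply, v_forward, r_ohms):
    """Current through a series current-limiting resistor, in milliamps."""
    return (v_supply - v_forward) / r_ohms * 1000.0

# Two AAA cells ~= 3.0 V; 68.7 ohm resistor as on the board.
current = led_current_mA(3.0, 2.0, 68.7)  # roughly 15 mA
```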
<br />
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved from the XBee Interface Extension Board to this LED pattern board so that nothing blocks the sensor. Thus, as shown in the figure, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2. <br />
<br />
====Parts used====<br />
*3x LED (Digikey 516-1697-ND): some e-pucks require 4 LEDs since their pattern is composed of 4 dots<br />
*3x 68.7 ohm resistors: some e-pucks require 4 resistors since they have 4 LEDs<br />
*2x AAA Battery Holder (Digikey 2466K-ND)<br />
*1x Switch (Digikey CKN1068-ND)<br />
*1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)<br />
*1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND) <br />
<br />
====Tests====<br />
<br />
=====LED Distance vs Color Sensor Signal=====<br />
Tests need to be done in order to characterize the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to measure how much interference is caused as the distance between the LED and the color sensor changes.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used in this experiment because a white LED covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.<br />
*LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. The LED and the color sensor were placed on the same plane, both facing upward. <br />
<br />
4. The distance between the color sensor and the LED was increased in 0.25 inch increments, from 1 inch to 2.5 inches.<br />
<br />
5. The amplified output, V<sub>o</sub> as shown in the circuit diagram above, of each photodiode is measured.<br />
<br />
With Ambient light<br />
[[Image:Distance_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 1 || 1.4 || 0.469<br />
|-<br />
| 1 inch || 1.259 || 1.716 || 0.832<br />
|-<br />
| 1.25 inch || 1.185 || 1.619 || 0.757<br />
|-<br />
| 1.5 inch || 1.135 || 1.529 || 0.669<br />
|-<br />
| 1.75 inch || 1.097 || 1.503 || 0.613<br />
|-<br />
| 2 inch || 1.086 || 1.481 || 0.589<br />
|-<br />
| 2.25 inch || 1.071 || 1.47 || 0.563<br />
|-<br />
| 2.5 inch || 1.06 || 1.453 || 0.546<br />
|}<br />
<br clear="all"><br />
Without the Ambient Light<br />
[[Image:Distance_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Distance vs Amplified Output'''<br />
|-<br />
! Distance !! R !! G !! B <br />
|-<br />
| No LED|| 0.028 || 0.025 || 0.019<br />
|-<br />
| 1 inch || 0.244 || 0.221 || 0.223<br />
|-<br />
| 1.25 inch || 0.195 || 0.166 || 0.143<br />
|-<br />
| 1.5 inch || 0.162 || 0.123 || 0.097<br />
|-<br />
| 1.75 inch || 0.130 || 0.097 || 0.069<br />
|-<br />
| 2 inch || 0.102 || 0.077 || 0.054<br />
|-<br />
| 2.25 inch || 0.087 || 0.064 || 0.045<br />
|-<br />
| 2.5 inch || 0.073 || 0.056 || 0.039<br />
|}<br />
<br clear="all"><br />
As you can see in the two graphs above, the color sensor is affected by the light from the LED. The color sensor is most affected when the LED is closest to it; as the distance between the LED and the color sensor increases, the interference decreases. At the closest distance under room light, the outputs increase by 25.9%, 22.6%, and 77.4% over the no-LED baseline for the red, green, and blue channels respectively. Once the LED is 2.5 inches away from the color sensor, the output becomes very close to its original value. <br />
This experiment shows that light from LEDs can affect the color sensor. However, the LED used in this experiment is much brighter than the ones used on the LED pattern board: its millicandela rating is about 4 times larger. Thus, further experiments with the LEDs actually used on the LED pattern board are required.<br />
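The relative increase on each channel can be computed directly from the "No LED" baseline row and the 1 inch row of the first table; a minimal check:

```python
def pct_increase(baseline, with_led):
    """Percent increase of the amplified output over the no-LED baseline."""
    return (with_led - baseline) / baseline * 100.0

# "No LED" row vs. the 1 inch row, with ambient light:
r = pct_increase(1.0, 1.259)    # red channel  -> ~25.9%
g = pct_increase(1.4, 1.716)    # green channel -> ~22.6%
b = pct_increase(0.469, 0.832)  # blue channel  -> ~77.4%
```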
<br />
=====LED Angle vs Color Sensor Signal=====<br />
<br />
The second experiment is designed to measure how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, V<sub>i</sub>, the voltage before amplification, is measured, since the amplified output V<sub>o</sub> easily saturates.<br />
<br />
'''Setup and Results'''<br />
<br />
1. A white LED is used again in this experiment, for the same reason given for the first experiment.<br />
*LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND<br />
<br />
2. The experiment was performed under two conditions: with ambient light and without ambient light. <br />
<br />
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch. <br />
<br />
4. The angle between the LED and the color sensor is increased in 15º increments, from 0º to 90º.<br />
<br />
When the angle is 0º, the LED and the color sensor lie on the same horizontal plane: the LED is parallel to the plane with its head facing the color sensor, and the color sensor faces upward. As the angle is increased in 15º steps, increasing amounts of light from the LED shine onto the color sensor. When the angle is 90º, the LED is directly above the color sensor, facing straight down, so the LED and the color sensor lie on the same vertical line.<br />
<br />
5. The voltage before amplification of each photodiode, V<sub>i</sub> as shown in the circuit diagram above, is measured.<br />
* The voltage is measured before amplification because the amplified output becomes too large.<br />
<br />
With the Ambient Light<br />
[[Image:Angle_vs_output_with_room_light.gif|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.437 || 0.425 || 0.404<br />
|-<br />
| 15º || 0.475 || 0.470 || 0.451<br />
|-<br />
| 30º || 0.490 || 0.491 || 0.501<br />
|-<br />
| 45º || 0.505 || 0.506 || 0.520<br />
|-<br />
| 60º || 0.484 || 0.468 || 0.484<br />
|-<br />
| 75º || 0.457 || 0.453 || 0.440<br />
|-<br />
| 90º || 0.439 || 0.430 || 0.408<br />
|}<br />
<br clear="all"><br />
<br />
Without the Ambient Light<br />
[[Image:Angle_vs_output_without_room_light.jpg|450px|left|thumb]]<br />
*Unit: Volt, V<br />
<br />
{| class="wikitable" border="3"<br />
|+'''Angle vs Voltage Before Amplified'''<br />
|-<br />
! Angle !! R !! G !! B <br />
|-<br />
| 0º || 0.446 || 0.436 || 0.416<br />
|-<br />
| 15º || 0.454 || 0.491 || 0.461<br />
|-<br />
| 30º || 0.493 || 0.505 || 0.480<br />
|-<br />
| 45º || 0.512 || 0.521 || 0.520<br />
|-<br />
| 60º || 0.498 || 0.486 || 0.491<br />
|-<br />
| 75º || 0.498 || 0.492 || 0.487<br />
|-<br />
| 90º || 0.485 || 0.479 || 0.515<br />
|}<br />
<br clear="all"><br />
<br />
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The color sensor is most affected when the angle between the two is 45º: the interference increases as the angle approaches 45º, peaks there, and then decreases as the angle goes to 90º. At the peak, under room light, the output increases by up to 15.6%, 19.1%, and 28.7% of V<sub>i</sub> for the red, green, and blue channels respectively. As the angle approaches 90º, the output returns to a value close to that at 0º. The reason the interference drops near 90º is that the ambient light is blocked by the LED board: in this experiment the LEDs were mounted on a plane, and that plane casts a shadow on the color sensor, decreasing the amount of ambient light the sensor receives. That is why the output approaches its original value as the angle increases.<br />
<br />
====Next Steps====<br />
The LED Pattern Board design above needs the following modifications:<br />
*The hole size for the LEDs has to be increased so that it can accommodate the standoff of the chosen LED.<br />
*The hole size for the switch has to be increased so that the switch can be fully inserted through the hole.<br />
*Currently, a 10 pos 2 mm pitch socket is used to connect the color sensor to the circuit using wires. Instead, a proper header for the color sensor should be found to connect the color sensor and the circuit more conveniently.<br />
<br />
==Software==<br />
Visual C++ 2010 Express - http://www.microsoft.com/express/Downloads/<br />
<br />
All the code for the RGB swarm robot project has been moved off of the wiki and placed in to version control for ease. The version control used is GIT, http://git-scm.com/. <br />
<br />
To access the current files, first download Git for Windows at http://code.google.com/p/msysgit/. Next you will need access to the LIMS server. Go to one of the swarm PCs, or any PC set up to access the server, and paste the following into Windows Explorer: <br />
<code><pre><br />
\\mcc.northwestern.edu\dfs\me-labs\lims<br />
</pre></code><br />
Once you have entered your user name and password, you will be connected to the LIMS server. Now you can open GIT (Git Bash Shell) and type the following to clone a copy of the current files onto your Desktop:<br />
<code><pre><br />
cd Desktop<br />
<PRESS ENTER><br />
<br />
git clone //mcc.northwestern.edu/dfs/me-labs/lims/Swarms/SwarmSystem.git<br />
<PRESS ENTER><br />
</pre></code><br />
You will now have the folder SwarmSystem on your Desktop. Inside, you will find the following folders:<br />
*.git<br />
*configuration<br />
*DataAquisition<br />
*debug<br />
*ipch (this will be generated when you open a project in visual studio for the first time)<br />
*OpenCV<br />
*SerialCommands<br />
*SwarmRobot<br />
*VideoInput<br />
*VisionCalibrationAnalysis<br />
*VisionTrackingSystem<br />
*XBeePackets<br />
<br />
<br />
===.git===<br />
This directory contains the inner workings of the version control system, and you should not modify it. See git documentation for details.<br />
<br />
===configuration===<br />
This directory contains the configuration files (calibration data and data associating LED patterns with epucks) generated<br />
and used by the Vision Tracking System<br />
<br />
===DataAquisition===<br />
Inside the DataAquisition folder you will find the following files:<br />
*'''puckUI.fig''' - This is the figure for the data acquisition GUI<br />
*'''puckUI.m''' - This is the m-file used to look at the data the epucks are sending back. The color sensor data from the selected epuck is shown on the left, and the positions of the epucks are shown on the right. On the bottom is a table of all the data sent from the epucks. There is also a toggle to display the coordinates with respect to the center of the epuck or with respect to the color sensor on each epuck.<br />
**'''Note:''' In order to access the log of data received from the epucks, simply call global data_log. Currently the data_log variable holds the last 100 packets of data received from each epuck. To change this, change the RECDUR variable in puckUI_OpeningFcn. Also, the current timer periods of .4 seconds for plot updates and .05 seconds for receiving data work efficiently and do not take up too much CPU, but these values can also be changed in puckUI_OpeningFcn.<br />
*'''serialdllsample.m''' - In this file are all of the matlab functions needed in order to open the port, send commands, etc. from matlab using the dll.<br />
*'''ProjectorLight.m''' - When this file is run, it will generate a matrix of 81 incident angles of light from the projector across the surface of the floor. This file is useful for determining the calibration of location and intensity of light picked up by the color sensor.<br />
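The data_log behavior described in the note above, keeping only the last RECDUR packets per epuck, amounts to a fixed-length circular buffer. The sketch below illustrates the idea in Python; the actual implementation is MATLAB, and the names here only mirror it:

```python
from collections import deque

RECDUR = 100  # keep the last 100 packets per epuck, as in puckUI_OpeningFcn

# One fixed-length buffer per epuck id (illustrative stand-in for data_log).
data_log = {}

def log_packet(epuck_id, packet):
    # A deque with maxlen silently discards the oldest entry when full.
    data_log.setdefault(epuck_id, deque(maxlen=RECDUR)).append(packet)
```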
<br />
===debug===<br />
This directory contains the files output by the Visual C++ compiler.<br />
It also contains DLL files from the OpenCV library which are necessary to run the Vision Tracking System.<br />
<br />
===ipch===<br />
This is generated by visual studio, and is used for its code completion features. It is not in version control and should be ignored.<br />
<br />
===OpenCV===<br />
This directory contains header files and libraries for the OpenCV project.<br />
Currently we are using OpenCV version 2.10. Leaving these files in version control<br />
lets users compile the project without needing to compile / set up OpenCV on the machine.<br />
<br />
===SerialCommands===<br />
This folder contains the files for the SerialCommands DLL (Dynamic Linked Library). This DLL allows multiple programs (including those made in MATLAB and in Visual Studio) to use the same code to access an XBee radio over the serial port. The DLL exports functions that can be called from MATLAB or<br />
a Visual Studio program and lets these programs send and receive XBee packets. <br />
<br />
If you write another program that needs to use the XBee radio, use the functions provided in the SerialCommands DLL to do the work.<br />
<br />
Currently, this code is compiled using Visual C++ Express 2010, which is freely available from Microsoft.<br />
<br />
===SwarmRobot===<br />
In this folder you will find all of the files which are run on the epuck. In order to access these files simply open the workspace, rgb_swarm_epucks_rwc.mcw in MPLab.<br />
<br />
===VideoInput===<br />
This contains the header and static library needed to use the VideoInput library. Currently,<br />
this library is used to capture video frames from the webcams.<br />
<br />
===VisionCalibrationAnalysis===<br />
Contains MATLAB programs used for analyzing the accuracy of the calibration.<br />
By pointing these programs to a directory containing Vision System configuration information<br />
(i.e. the configuration directory), you can get a rough measure of the accuracy of the current camera calibration.<br />
<br />
===VisionTrackingSystem===<br />
This is the main Vision Tracking System project. This program processes images from the webcams to<br />
find the position of the epucks, and sends this information back to the epucks over an XBee radio.<br />
It is, in effect, an indoor "GPS" system.<br />
<br />
Currently, this code is compiled with Visual Studio 2010 Express, which is freely available from Microsoft.<br />
<br />
===XBeePackets===<br />
This directory contains code for handling the structure of packets used for communicating over<br />
the XBee radio. This code can be compiled by Visual Studio and is used in the SerialCommands dll for<br />
forming low level XBee packets. It is also compiled in MPLab and run on the XBees. In this way,<br />
we have the same source code for functions that are common to the epucks and the vision/data pc (currently<br />
just code dealing with our communication protocol).<br />
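The low-level framing this code deals with follows the standard XBee API frame format, in which each frame ends with a checksum byte equal to 0xFF minus the low byte of the sum of the frame-data bytes. A minimal sketch of that rule (illustrative Python, not the project's C code):

```python
def xbee_checksum(frame_data):
    """Checksum for an XBee API frame: 0xFF minus the low byte of the sum
    of all frame-data bytes (everything between the length field and the
    checksum itself)."""
    return 0xFF - (sum(frame_data) & 0xFF)

def frame_is_valid(frame_data, checksum):
    """A received frame is valid when frame data plus checksum sums to 0xFF."""
    return (sum(frame_data) + checksum) & 0xFF == 0xFF
```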
<br />
==Physical Setup==<br />
In the RGB swarm robot project, the epucks pick up light from a projector. The projector has to project onto the floor so that the top mounted light sensors can pick up the light. The floor the epucks roll on must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:RGBswarmsetup.jpg|600px|center]]<br />
|}<br />
<br />
<br clear=all><br />
<br />
===Curtains===<br />
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light. <br />
<br />
<br />
<br />
===Floor===<br />
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable light colored floor may be preferable. The floor is taped down with packing tape. The tape should not be removed, as the floor has markings that show where the camera calibration dots should go and where the projected image should fall.<br />
<br />
<br />
<br />
===Projector===<br />
The projector is a BenQ MP771 DLP projector. A digital user manual is on a CD in the projection computer. <br />
<br />
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb shines through a color wheel, which shines red, green, and blue onto the mirror array. The fraction of a fixed interval that the mirrors spend on (reflecting light) versus off determines the intensity of the light: the brighter the projected color, the more of the interval the mirrors spend on. In the case of our projector, that interval is 8.2 milliseconds. See the pulse width modulation section below.<br />
<br />
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768. <br />
<br />
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the wide throw of the projector, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone. <br />
<br />
The size of the projected image is currently 113.25" x 76.5" (2876.6 mm x 1943.1 mm).<br />
<br />
<br />
====Projector PWM Waveform====<br />
{|<br />
| [[Image:Projector-waveform-longtime.jpg|200px|thumb|alt=Waveform from the color sensor under projector light (long timescale)|Waveform from the color sensor under projector light (long timescale)]]<br />
| [[Image:Red-high-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected high value red|Waveform from the color sensor under projected high value red]]<br />
| [[Image:Red-med-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected medium value red|Waveform from the color sensor under projected medium value red]]<br />
| [[Image:Red-low-value.jpg|200px|thumb|alt=Waveform from the color sensor under projected low value red|Waveform from the color sensor under projected low value red]]<br />
| <br />
|}<br />
The projector pulse-width modulates its color output, so the measured intensity must be averaged over the period of the projector in order to measure the color. The period of the projector is 8.2 ms.<br />
<br />
Pulse width modulation can lead to problems when recording data. For instance, when first setting up data recording for the XBee radios, it was discovered that the RGB values would fluctuate over a period of several minutes, skewing the data. After investigating the projector with a digital oscilloscope, the problem was traced to the fact that the projector does not refresh at exactly 120 Hz, so its true period differs slightly from the 8 ms that was being used to sample data. The solution was to record several samples (currently 4), average them, and use the average as the correct value. There is time to record 4 samples, or about 32 ms of data, since each puck has 400 ms (.4 s) to record data, construct a packet, and send the packet out. The result of this averaging is that the irregularities due to PWM are phased out, resulting in a clean and stable trace without low-frequency modulations.<br />
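The sample-and-average fix described above can be sketched as follows. This is an illustration of the idea in Python, not the actual e-puck C code; read_adc and its behavior are assumptions:

```python
def average_color(read_adc, samples=4, period_s=0.0082):
    """Average several full-period readings so that the projector's PWM, and
    the slight mismatch between its true period and the sampling period,
    cancel out. read_adc(duration_s) is an assumed callable returning the
    mean sensor value over one projector period."""
    total = 0.0
    for _ in range(samples):
        total += read_adc(period_s)
    return total / samples
```

With 4 samples of an 8.2 ms period, the whole measurement fits in about 33 ms, well inside each puck's 400 ms reporting window.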
<br />
===Projector Mount===<br />
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here [https://docs.google.com/a/u.northwestern.edu/gview?a=v&pid=gmail&attid=0.1&thid=1227a8bb603d85e5&mt=application%2Fpdf&url=https%3A%2F%2Fmail.google.com%2Fa%2Fu.northwestern.edu%2F%3Fui%3D2%26ik%3D81c0708ccd%26view%3Datt%26th%3D1227a8bb603d85e5%26attid%3D0.1%26disp%3Dattd%26zw&sig=AHBy-hZJxFDToenWNtF3J9ym_QrcbepVbQ&AuthEventSource=SSO]. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their views overlap by one object as described in the camera calibration routine.<br />
<br />
{| align="left" cellpadding = "25" <br />
! [[Image:Projector_Camera Mount.png.jpg|600px|center]]<br />
|}<br />
<br />
<br />
<br clear=all><br />
<br />
==Conclusion==<br />
<br />
The new XBee Interface Extension Board design was tested, and no problems were found. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where the e-pucks are located; the color of the background also does not affect the vision system. However, we had to move the color sensor to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. Thus, we must now consider light interference between the LEDs and the color sensor. In the interference tests, we found that the color sensor is affected by the light from the LEDs. However, since the LED used in the tests was much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.<br />
<br />
==Future Work and To Do==<br />
===Hardware===<br />
====e-puck Modifications====<br />
*Measure resistance of potentiometers on e-pucks, then remove them and replace with fixed resistors to further reduce variability<br />
====DV Camera====<br />
A camera will be used to record and document the experiments while they take place inside the tent. The quality must be high enough to show/broadcast to interested parties (such as online video streaming), and possible for presentations, etc.<br />
*Get a DV camera, check for fit with the existing physical set up (see projector/webcam framework)<br />
*Check DV camera control functionality when plugged into computer (firewire control), such as play/pause/record controls from the computer to the camera<br />
*Select a camera, wide angle lens, and firewire card<br />
**Mini DV cameras seem to be the best bet as they are designed to accommodate for control via firewire cable due to necessity of capturing data from the tape<br />
**A 0.6X magnification lens accommodates the 9' x 6' floor, allowing the camera to be only 5.4' off the ground<br />
***The magnification factor scales the field of view inversely: a 0.5X magnification lens gives 2X the FOV<br />
***Find the focal length of the camera at its widest view (this is the smallest number, in mm), then apply the magnification: 0.5X magnification = 0.5X focal length<br />
**#Go online to [http://www.tawbaware.com/maxlyons/calc.htm this site], look for the '''Angular Field of View Calculator''' to determine the horizontal and vertical FOV angles<br />
**#Use these angles, to calculate the height need for the camera to capture the entire image<br />
**#For example, if the camera is mounted above the center of the floor, the width of the floor is 9', and the horizontal FOV is 79.6º (using a focal length of 36mm converted to 21.6mm by 0.6X magnification), then the math to get the height is: 9'/2 = 4.5', 79.6º/2 = 39.8º; 4.5'/tan(39.8º) = '''5.4''''<br />
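The height calculation in the example above can be checked numerically. This sketch assumes a 36 mm-wide 35 mm-equivalent frame, as the linked FOV calculator does:

```python
import math

def camera_height_ft(floor_width_ft, focal_mm, magnification):
    """Height needed to see the full floor width, given the lens's
    35 mm-equivalent focal length and a wide-angle adapter's magnification.
    Assumes a 36 mm frame width for the FOV calculation."""
    effective_focal = focal_mm * magnification           # e.g. 36 mm * 0.6 = 21.6 mm
    half_fov = math.atan((36.0 / 2) / effective_focal)   # half the horizontal FOV
    return (floor_width_ft / 2) / math.tan(half_fov)

height = camera_height_ft(9.0, 36.0, 0.6)  # about 5.4 feet
```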
<br />
===Vision System===<br />
*Complete vision system calibration by being able to move from floor coordinates to pixel row/column coordinates and then back<br />
*Update vision system to accommodate the change between black/white pattern recognition and LED/light intensity recognition (eliminate going through code)<br />
<br />
===e-puck Code===<br />
*Make a flow chart demonstrating the decision/command scheme of the e-puck as it is working<br />
*Complete/finish calibration scheme (color_cal) to calibrate e-pucks for the projector set up<br />
**Finish writing the program to 'lawnmower' the e-puck, allowing it to collect data from all positions and orientations across the floor<br />
***Implement 3 part turn from NUtest.c code for e-puck to accommodate control of position and orientation<br />
**Need to analyze the data collected in different ways<br />
***Generate table/equation for different floor positions and e-puck orientations, only one color/shade<br />
***Generate table of scalars (for different positions on the floor) for an equation which can be used to equate e-puck ADC outputs with RGB input, full spectrum sweep<br />
*Get rid of sin/cos calculations, which take more cycles than desired<br />
**Calculate the discrete # of possible angles per wheel (use ticks/revolution), create a look up table, check the angle from the vision system packets, and use the standardized table to calculate the appropriate vector (reducing the total number of calculations and eliminating use of sin/cos)<br />
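The lookup-table idea in the last bullet can be sketched as follows (illustrative Python; the table size and names are hypothetical, and a real implementation would live in the e-puck's C code):

```python
import math

N = 256  # hypothetical number of discrete wheel angles, one table entry each

# Precompute once; at run time a table index replaces every sin() call.
SIN_TABLE = [math.sin(2 * math.pi * i / N) for i in range(N)]

def fast_sin(angle_rad):
    """Nearest-entry table lookup instead of computing sin directly."""
    i = int(round(angle_rad / (2 * math.pi) * N)) % N
    return SIN_TABLE[i]
```

The table trades a small, bounded quantization error (at most half a table step) for the elimination of the expensive trig calls on the microcontroller.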
<br />
[[Category:SwarmRobotProject]]</div>Ryan Cook