RGB Swarm Robot Project Documentation
The swarm robot project has gone through several phases, each focusing on different aspects of swarm robotics and the implementation of the project. This entry focuses on the most recent phase, covering topics including XBee Interface Extension Boards, LED light boards, changes made to the Machine Vision Localization System, and the overall conversion to LED boards and a controlled light environment. These entries provide insight into the setup and specific details that allow others to replicate or reproduce our results, and provide additional information for those working on similar projects, or on this project at a later time. Other articles in the Swarm Robot Project category cover topics such as the swarm theory and algorithms implemented, as well as previous phases of the project, such as motion control and consensus estimation. You may reach these articles and others by following the category link at the bottom of every page, or through this link - Swarm Robot Project.
RGB Swarm Quickstart Guide
Refer to RGB Swarm Quickstart Guide for information.
XBee Interface Extension Board Version 2
The previous version of the XBee Interface Extension Board was designed by Michael Hwang.
Its configuration is shown in the figure on the left, with an actual image of the board mounted on an e-puck seen in the figure in the center. This version of the XBee Interface Board does not include a color sensor. Details about this version of the XBee Interface Extension Board, such as the parts used and Traxmaker files, can be found on the Swarm Robot Project Documentation page.
This is the updated version of the XBee board, the XBee Interface Extension Board Version 2. It was designed by Michael Hwang to accommodate further projects in the Swarm Robot Project; for this reason, it has a color sensor circuit built in. The details of the color sensor circuit can be found in the color sensor section below. A copy of the Traxmaker PCB file for the XBee Board Version 2 can be found below:
The RTS flow control line on the XBee is connected to the sel3 line of the e-puck. Although the CTS line is not connected to the sel2 pin in this board design, it can be easily connected with a jumper.
The XBee Interface Extension Board Version 2 design was built and installed on e-puck #3. To check for problems in the board design, it was first tested together with the other e-pucks, which use the previous XBee boards.
E-puck #3, upgraded with the new XBee board, showed no problems communicating with the other e-pucks. As defined by the goal, all e-pucks, including e-puck #3, moved to their desired locations.
Color Sensor Circuit
As you may gather from the circuit diagrams above, as each photodiode receives light, a certain amount of current starts to flow through it, generating a voltage across R1 = 680K. Each photodiode is designed to detect a certain range of wavelengths, and the amount of current flowing through each photodiode is determined by the amount of the corresponding light it receives. The op-amp (LMC6484) takes the voltage generated across R1 as its input signal and amplifies it by a ratio particular to the circuit. This ratio, also known as the gain, is set by the resistance of the potentiometer. The amplified output is then sent to the analog-to-digital converter channels that had been used on the e-puck for the X, Y, and Z accelerometer axes. This is convenient, as each accelerometer axis can serve as a channel for one of the color sensor's three colors. The converted signal can then be used to measure the response of the color sensor to light. The corresponding quantities for the circuits illustrated above are as follows:
- Rpot = resistance of the potentiometer (shown in the diagram)
- R2 = 100K (shown in the diagram)
- Vi = voltage across R1 = 680K, which the op-amp takes as an input
- Vo = output signal amplified from the op-amp
The gain of the color sensor circuits is approximately 20. Thus, the input voltage, Vi, is amplified to be 20Vi, which is Vo. As mentioned above, the gain can be adjusted properly by controlling the resistance of the potentiometer.
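The exact amplifier topology is not reproduced here, but assuming a standard non-inverting op-amp stage (an assumption; only R2, Rpot, and the stated gain of about 20 are given above), the gain would be 1 + R2/Rpot. A minimal sketch of that relationship:

```cpp
#include <cmath>

// Sketch of the amplifier gain, ASSUMING a standard non-inverting
// op-amp stage: gain = 1 + R2 / Rpot.  This topology is an assumption;
// only R2 = 100 kOhm and the overall gain of ~20 are given in the text.
double gain(double r2Ohms, double rpotOhms) {
    return 1.0 + r2Ohms / rpotOhms;
}

// Amplified output Vo for a given photodiode input voltage Vi.
double amplifiedOutput(double vi, double r2Ohms, double rpotOhms) {
    return vi * gain(r2Ohms, rpotOhms);
}
```

Under this assumption, with R2 = 100K a gain of 20 corresponds to Rpot of roughly 5.3K, which is within the range of the 10K potentiometer in the parts list below.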
As shown in the circuit diagram on the left, the signal from the red photodiode goes into pin #5, and the amplified signal is sent out through pin #7. Similarly, the signal from the green photodiode goes into pin #3 and is sent out from pin #1, while the signal from the blue photodiode goes into pin #12 and is sent out from pin #14.
- Pin #7 - Amplified Red photodiode signal
- Pin #1 - Amplified Green photodiode signal
- Pin #14 - Amplified Blue photodiode signal
Parts used in both the previous version and the new version of XBee Interface Extension Board
- 2x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND)
- LE-33 low dropout voltage regulator (Digikey 497-4258-1-ND)
- 2.2uF tantalum capacitor (Digikey 399-3536-ND)
- 2x Samtec BTE-020-02-L-D-A (Order directly from Samtec)
- 0.1" header pins for RTS and CTS pins (you can also use wire for a permanent connection)
- 2x 0.1" jumpers for connecting RTS and CTS pins if you used header pins (Digikey S9000-ND)
Additional parts for new version of XBee Interface Extension Board
- 3x 100K resistors
- 3x 680K resistors
- 3x 10K potentiometers
- 3x 5pF capacitors
- 1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02, Datasheet)
- 1x High impedance op-amp LMC6484
As mentioned in the overview, the black dot patterns of the e-pucks are replaced with new LED patterns by mounting an LED pattern board on top of each e-puck. Thus, in order for the color sensor to collect data properly, it is necessary to move the color sensor from the XBee Interface Extension Board to the LED pattern board so that nothing blocks it. All other components of the color sensor circuit remain on the XBee Interface Extension Board; only the color sensor itself is placed on the LED pattern board. A jumper connects the color sensor on the LED pattern board to the rest of the circuit on the XBee Interface Extension Board. The details of the LED pattern board are presented in the section below.
LED Pattern Board
This is the LED pattern board, which was introduced for the RGB Swarm Robot Project. Previously, the unique black dot pattern of each e-puck was used by the machine vision system to recognize each e-puck. However, the black dot pattern requires a white background for the machine vision system to recognize the e-pucks. The new LED pattern board uses LEDs of the proper brightness instead of the black dot pattern, allowing the machine vision system to recognize e-pucks on any background. The reason this LED pattern is recognized on any background is presented briefly in the Code section below. In addition, applying this LED pattern to the machine vision system required a modification of the code, which is also presented in the Code section below. The PCB file can be downloaded here:
- LED Pattern Board.zip
- This file contains the Traxmaker PCB files for an individual LED Pattern Board, as well as a 2x2 array, along with the Gerber and drill files necessary for ordering PCBs.
LED Pattern Board Design
This LED Pattern Board was created using Traxmaker. Although we replaced the black dots with LEDs, each dot pattern is maintained. The horizontal and vertical distances between two adjacent LEDs are both 0.8 inch. To reduce the power drain on the e-puck battery, a separate pair of AAA batteries supplies power to the LEDs, and the board can be turned on and off with a switch. The LEDs used are rated at 4850 mcd and have a diffused lens style. These LEDs were chosen because they provide appropriate brightness and power consumption, and the diffusion allows the machine vision system to capture them from any position. The resistors used are 68.7 ohm.
As mentioned in the XBee Interface Extension Board section, the color sensor has to be moved from the XBee Interface Extension Board to this LED pattern board so that nothing blocks the sensor. Thus, as you can see in the figure on the left, the color sensor is placed at the front, and each photodiode is connected to the 10 pin header. This header connects the color sensor on the LED pattern board to the remaining part of the color sensor circuit on the XBee Interface Extension Board v2.
- 3x LED (Digikey 516-1697-ND): Some e-pucks require 4 LEDs since they have a pattern composed of 4 dots
- 3x 68.7 ohm resistors: Some e-pucks require 4 resistors since they have 4 LEDs
- 2x AAA Battery Holder (Digikey 2466K-ND)
- 1x Switch (Digikey CKN1068-ND)
- 1x RGB color sensor (Order directly from HAMAMATSU, part#:s9032-02)
- 1x 10 pos. 2 mm pitch socket (Digikey S5751-10-ND)
LED Distance vs Color Sensor Signal
Tests needed to be done to determine the effect of the LED light on the color sensor due to potential interference. The first experiment is designed to show how much interference is caused as the distance between the LED and the color sensor changes.
Setup and Results
1. A white LED is used in this experiment because it covers the entire wavelength range of visible light. The experiment with the white LED yields a general result, while experiments with colored LEDs would yield more specific results focused on the interference between a particular photodiode and a particular color.
- LED: 18950 mcd (millicandela), digikey part number: C503B-WAN-CABBB151-ND
2. The experiment was performed under two conditions: with and without ambient light.
3. The LED and the color sensor were placed on the same plane, both facing upward.
4. The distance between the color sensor and the LED was increased in 0.25-inch increments, from 1 inch to 2.5 inches.
5. The amplified output of each photodiode (Vo, as shown in the circuit diagram above) was measured.
With the Ambient Light
- Unit: Volt, V
Without the Ambient Light
- Unit: Volt, V
As you can see in the two graphs above, the color sensor is affected by the light from the LED, most strongly when the LED is closest to it. As the distance between the LED and the color sensor increases, the interference decreases. At the closest distance, under room light, the outputs increase by up to 25.9%, 22.6%, and 43.6% of their original values. Once the LED is 2.5 inches away from the color sensor, the output returns very close to its original value. This experiment shows that light from LEDs can affect the color sensor. However, the LED used in this experiment is much brighter (roughly four times) than the ones used on the LED pattern board, so further experiments with the LED pattern board's own LEDs are required.
LED Angle vs Color Sensor Signal
The second experiment is designed to show how much interference is caused as the angle between the LED and the color sensor changes. Unlike the first experiment, Vi, the voltage before amplification, is measured, since the amplified output, Vo, easily saturates.
Setup and Results
1. A white LED is used again in this experiment, for the same reason given for the first experiment.
- LED: 18950 mcd, Digikey part number: C503B-WAN-CABBB151-ND
2. The experiment was performed under two conditions: with and without ambient light.
3. In this experiment, the distance between the LED and the color sensor is kept constant at 1 inch.
4. The angle between the LED and the color sensor is increased in 15º increments, from 0º to 90º.
At 0º, the LED and the color sensor lie on the same horizontal plane: the LED is parallel to that plane with its head facing the color sensor, and the color sensor faces upward. As the angle increases in 15º steps, an increasing amount of light from the LED shines onto the color sensor. At 90º, the LED is directly above the color sensor, facing straight down at it, so the LED and the color sensor are on the same vertical line.
5. The voltage before amplification of each photodiode (Vi, as shown in the circuit diagram above) is measured.
- The voltage is measured before amplification because the amplified output becomes too large.
With the Ambient Light
- Unit: Volt, V
Without the Ambient Light
- Unit: Volt, V
As in the first experiment, the two graphs above show that the color sensor is affected by the light from the LED. The interference increases with angle, peaking at 45º, and then decreases as the angle approaches 90º. At the 45º peak, under room light, the outputs increase by up to 15.6%, 19.1%, and 28.7% of Vi. As the angle reaches 90º, the output returns very close to its value at 0º. The interference is reduced near 90º because the LED board blocks the ambient light: in this experiment the LEDs were mounted on the LED plane, which casts a shadow on the color sensor and decreases the amount of ambient light it receives. This is why the output approaches its original value as the angle increases.
The LED Pattern Board design above needs to be modified in the following ways.
- The hole size for the LEDs has to increase so that it can accommodate the standoff of the LED chosen.
- The hole size for the switch has to increase so that the switch can be completely inserted through the hole.
- Currently, a 10 pos. 2 mm pitch socket connects the color sensor to the circuit using wires. Instead, a proper header for the color sensor should be found to connect the color sensor to the circuit more conveniently.
Machine Vision Localization System Modification
Below is the documentation of changes made to the original machine vision localization system code to accommodate changed setup of the RGB Swarm Robot Project. This version of the code can be downloaded in the link below. For more information about the operation of the system, look through the Machine Vision Localization System article.
- RGB Machine Vision Localization System (Project Files with Source Code)
Compatibility Problem of Original Code with LEDs
The Machine Vision Localization System takes the real (color) image from the four cameras and converts it into a grey-scale image. Then, using a threshold set in the machine vision code, the grey-scale image is divided into black and white, and this black-and-white image is presented on the machine vision system computer screen. With this setup, the white background on the floor is presented as black, and the black dot patterns on the e-pucks are presented as white patterns. The system recognizes these white dot patterns to identify the e-pucks, and broadcasts the position coordinates to each e-puck via the XBee radio. For more information about the theory, refer to the Machine Vision Localization System article.
However, there is a problem with using black dot patterns to identify e-pucks. Since the machine vision system uses a preset threshold to divide the grey image into black and white, the black dot patterns are affected by the background color due to lack of contrast. For instance, if the background is black, or any color besides white, the system has a difficult time distinguishing the pattern from the background, and may not capture it at all. Additional problems arise from dirt and debris tracked onto the white floor surface, which create false patterns and further throw off the system.
A solution is to substitute the black dots with LEDs placed atop the e-pucks, allowing the machine vision system to capture the identification pattern clearly regardless of background color and condition. By adjusting the threshold set in the machine vision code, the system relies on the contrast of light intensity, minimizing interference from the operating environment, whose light intensity is naturally weaker than the LEDs'.
Change from Original Code
In main.cpp of RGBVisionTracking.vcproj (the RGB Vision project), the code has been changed as follows:
Line 56: cvThreshold(greyImage[camerai], thresholdedImage[camerai], threshold, 255, CV_THRESH_BINARY_INV); becomes cvThreshold(greyImage[camerai], thresholdedImage[camerai], threshold, 255, CV_THRESH_BINARY);
Line 731: cvThreshold(grey, thresholded_image, threshold, 255, CV_THRESH_BINARY_INV); becomes cvThreshold(grey, thresholded_image, threshold, 255, CV_THRESH_BINARY);
Also, in global_vars.h,
Line 65: double threshold = 75; //black/white threshold to double threshold = 200; //black/white threshold
After changing CV_THRESH_BINARY_INV to CV_THRESH_BINARY in both lines and raising the threshold from 75 to 200, the system clearly presents the LED patterns as white dot patterns on the screen, so it can identify e-pucks by their LED patterns.
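The effect of this change can be illustrated with a minimal stand-alone sketch of the two threshold modes (the function below is an illustrative reimplementation, not the OpenCV call itself). With CV_THRESH_BINARY and a threshold of 200, only very bright pixels, i.e. the LEDs, map to white, while everything else, background included, maps to black:

```cpp
#include <cstdint>
#include <vector>

// Illustrative reimplementation of cvThreshold's CV_THRESH_BINARY and
// CV_THRESH_BINARY_INV modes on a flat 8-bit grey-scale image.
//   binary     (inverted=false): pixel > threshold -> 255, else 0
//                                (bright LEDs become white)
//   binary_inv (inverted=true):  pixel > threshold -> 0, else 255
//                                (dark dots on white become white)
std::vector<uint8_t> applyThreshold(const std::vector<uint8_t>& grey,
                                    uint8_t threshold, bool inverted) {
    std::vector<uint8_t> out(grey.size());
    for (std::size_t i = 0; i < grey.size(); ++i) {
        bool above = grey[i] > threshold;
        out[i] = (above != inverted) ? 255 : 0;
    }
    return out;
}
```

For example, with threshold = 200 an LED pixel at 250 becomes white while a grey floor pixel at 120 becomes black; under the old inverted mode the floor would have become white instead, which is why that mode suited black dots on a white background.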
The threshold value of 200 was determined to be adequate for indoor testing. Under different conditions, however, the threshold value may need to be adjusted. The results for different threshold ranges under the same test conditions are presented below:
|Threshold|Result|
|0 - 94|System cannot capture LED patterns at all; the whole screen is white.|
|95 - 170|System can recognize the pattern, but it is unstable, since most of the background becomes white.|
|171 - 252|System clearly captures and recognizes LED patterns.|
|253|System can recognize the pattern, but it is unstable since the pattern is too small; stronger intensity is required.|
|254 - 255|System cannot capture LED patterns at all; the whole screen is black.|
An e-puck was fitted with an LED pattern board and then tested with the machine vision localization system. With the changes implemented, the machine vision localization system showed no problems, capturing and locating the e-puck anywhere in the cameras' field of vision. The vision system was also able to capture and locate the e-puck as it moved, with no loss of positional accuracy compared to previous identification systems. This demonstrated the stability of the LED boards with the vision system, further supporting their use in future experiments.
Center of Mass Problem with LEDs
Another problem with the implementation of the LED pattern boards relates to the method by which the machine vision localization system generates the position of an e-puck. Originally, the vision system determined the center of mass of the paper dice dot patterns (Rdots) and calculated the world position coordinate using the calibration information. The vision system then assigned Rdots as the center of mass of the e-puck (Re-puck). Since Rdots is typically not located over Re-puck, the paper dots' location was shifted in order to match Rdots with Re-puck.
However, this is not achievable with the LED pattern boards. As LED pattern board's location on the e-puck cannot be shifted, the vision system needs to be able to shift RLEDs to accommodate for this position error. By augmenting the target_classifiers.txt file with additional data, the modified vision system refers to a look-up table in order to determine the amount and direction that the coordinates of center of mass of the LED dots (RLEDs) needs to be shifted to align with Re-puck. The look-up table contains values that are unique and constant to each rotationally invariant pattern, and as a result, only a few simple calculations are needed to generate correct coordinates for the e-puck.
The result is that the vision system yields much more accurate position data for the e-puck. A simple test, rotating the e-puck around Re-puck, shows that the coordinate data for Re-puck varies by roughly 4-5 mm with the RGB-modified code. With the original Machine Vision Localization System code, the coordinate data for Re-puck varies by much more, between 40-50 mm. Note that the only difference between these two tests is the code that shifts Re-puck back into place; otherwise, both sets of code were changed to accommodate the LED pattern boards used in this test.
Addition to the Original Code
There are only a few minor changes in the code from the original machine vision code to the RGB machine vision code. The change centers around the introduction of two new variables, centermag and centeroffset. Centermag is the distance from the vision-system-calculated Rdots to the desired Re-puck. This value differs from e-puck to e-puck, but remains constant for each e-puck regardless of orientation. Centeroffset is the angular offset between the vision-system-calculated angle and the line from Rdots to the desired Re-puck. Again, this value differs from e-puck to e-puck, and also remains constant for each e-puck regardless of orientation.
The actual code which shifts the coordinate data from Rdots to the desired Re-puck is simple.
In main.cpp of RGBVisionTracking.vcproj, the code (lines 1082 through 1109) is as follows:
targets_temp[camerai]->group.wx = targets_temp[camerai]->group.wx + -1*targets_temp[camerai]->centermag*cos(Adjusted);
targets_temp[camerai]->group.wy = targets_temp[camerai]->group.wy + -1*targets_temp[camerai]->centermag*sin(Adjusted);
Adjusted is the center offset angle plus the orientation angle (centeroffset + angle), which is the angle of the line from Rdots to Re-puck, measured from zero degrees. With this angle known, a simple trigonometry calculation determines the x and y values to add or subtract from Rdots to get the coordinates of Re-puck.
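The shift can be sketched as a small stand-alone function (the struct and function names here are illustrative; the actual code operates in place on the targets_temp structures quoted above):

```cpp
#include <cmath>

struct Position { double wx, wy; };  // world coordinates

// Shift the LED-pattern centroid (Rdots) to the e-puck center (Re-puck),
// mirroring the two lines of main.cpp quoted above.
//   centermag    - distance from Rdots to Re-puck (constant per e-puck)
//   centeroffset - fixed angular offset for this pattern (radians)
//   angle        - measured orientation of the e-puck (radians)
Position shiftToEpuckCenter(Position rDots, double centermag,
                            double centeroffset, double angle) {
    double adjusted = centeroffset + angle;  // angle of line Rdots -> Re-puck
    Position rEpuck;
    rEpuck.wx = rDots.wx + -1.0 * centermag * std::cos(adjusted);
    rEpuck.wy = rDots.wy + -1.0 * centermag * std::sin(adjusted);
    return rEpuck;
}
```

For instance, with centermag = 10 mm and adjusted = 0, the correction moves the reported position 10 mm in the negative x direction, matching the sign convention of the original code.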
The values for centermag and centeroffset are found in the text file target_classifiers_augmented as the fourth and fifth values, respectively. This file can be found in the modified code zip file above.
While no additional changes to the setup of the machine vision localization system are needed, there may be additional considerations for further development. One such consideration is the 'background', or floor material, of the setup. With the modified machine vision code, light intensity is what the system picks up and filters, leaving the LEDs on the e-pucks as the only tracked objects. However, in more advanced setups, such as one where light is projected onto the background, the machine vision system may pick up reflected light. More testing has to be done with the machine vision system threshold to see whether an ideal threshold can accommodate this setup. Another option may be to use a non-reflective or matte surface for the background.
Another consideration involves the hardware of the setup, namely the cameras themselves. The cameras are equipped with Logitech software that automatically adjusts the exposure and light contrast settings to correct for poor lighting conditions. However, the combination of increased exposure (due to the light-blocking setup) and bright LEDs results in blurry or blobby images. The machine vision localization system cannot read these images, and as a result cannot track the e-pucks. One potential solution is to adjust the threshold of the vision system. Others are to use LEDs with a lower millicandela rating, or to increase the background lighting of the setup, for instance with upward-casting lights.
RGB Swarm E-puck Code Overview
Detailed code outline: Swarm RGB E-puck Code
MATLAB Code for RGB Swarm Robot Project
What to do with the files:
- open_serial.m: Change the COM call to whatever COM port your XBee radio is plugged into (COM1, COM2, ...). Run this to open the COM port before you run anything else. It will create a COM object in your workspace. It takes a long time, be patient.
- close_serial.m: Run after you're done to close the COM port. Keeps MATLAB and your computer happy.
- RGB_Swarm_Data_Grabber.m: Use this to plot the values from the RGB sensor of one puck. It shows a moving plot. Handy for testing.
- RGB_avg_std_logger.m: Interactive point logger. Follow the prompts to grab some nice data. Useful for testing.
- color_hist.m: Plot a histogram from RGB_Swarm_Data_Grabber.
In the RGB swarm robot project, the e-pucks pick up light from a projector. This projector has to project onto the floor so that the top-mounted light sensors can pick up the light. The floor the e-pucks roll on must be completely enclosed so that the only light that reaches it is the light from the projector. The floor must also be smooth, flat, and durable. See the overhead view below.
The floor is enclosed by two walls and 6 curtains. Two bars protrude from the walls and are connected by an L-joint. There are 3 Eclipse absolute zero curtains on each bar (see diagram). These curtains block 100% of light and are sewn together so that no light comes through between them. Covering the whole enclosure, above the projector mount, are 7 more curtains sewn together to block all light.
The floor is currently a vinyl sheet from Home Depot. The sheet had a raised pattern on the front, so it was flipped over. The underside of the vinyl floor is not very durable, but holds up as long as no shoes are worn when stepping on it. In the future, a more durable light-colored floor may be preferable. The floor is taped down with packing tape. The tape should not be removed, as the floor has markings that show where the camera calibration dots go and where the projected image should fall.
The projector is a BenQ MP771 DLP projector. Its digital user manual is on a CD in the projection computer.
Since it is DLP, it has an array of tiny mirrors which reflect the light from the bulb. The light from the bulb shines through a color wheel, which casts red, green, and blue onto the mirror array. The frequency with which the mirrors turn on and off (reflect light or don't) determines the intensity of the light. For example, if a dark red were being projected, the mirrors would be on more than off during a certain interval. For our projector that interval is 8.2 milliseconds. See the pulse width modulation section below.
Each mirror represents a different pixel projected from the projector. This projector has a resolution of 1024 x 768, so in order to get a 1 to 1 pixel ratio, the projection computer should be set to display at 1024 x 768.
As detailed in the user manual, the projector should not be tilted forward or backward more than 15°. Because of this and the projector's wide throw, a keystone projection shape could not be avoided on the floor. The projector is currently set to compensate for the maximum amount of keystone.
The size of the projected image is currently 113.25" x 76.5" or (in mm).
Projector PWM Waveform
The projector pulse width modulates the color output. So you need to average the measured intensity over the period of the projector to measure the color. The period of the projector is 8.2ms.
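Concretely, if the sensor is sampled every 0.2 ms (an illustrative rate, not one specified by the hardware), one projector period of 8.2 ms spans 41 samples, and averaging over a whole number of periods recovers a steady intensity from the duty-cycled output. A minimal sketch:

```cpp
#include <cstddef>
#include <vector>

// Average color-sensor samples over one or more full projector PWM
// periods (8.2 ms) so the duty-cycled output reads as a steady value.
// The sample interval used in the example is an assumption.
double averageIntensity(const std::vector<double>& samples) {
    if (samples.empty()) return 0.0;
    double sum = 0.0;
    for (double s : samples) sum += s;
    return sum / static_cast<double>(samples.size());
}

// Number of samples spanning one projector period at a given
// sample interval (both in milliseconds), rounded to the nearest count.
std::size_t samplesPerPeriod(double periodMs, double sampleIntervalMs) {
    return static_cast<std::size_t>(periodMs / sampleIntervalMs + 0.5);
}
```

Averaging over a window that is not a whole number of periods would bias the reading toward whichever part of the duty cycle the window happened to cover.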
The projector mount was ordered online from 80/20®. The order form, complete with the parts for the mount, is here. The mount is highly adjustable so that the projector can be mounted at any angle and height. The cameras are mounted so that they cover the entire projected area, and their fields of view overlap by one object, as described in the camera calibration routine.
The new XBee Interface Extension Board design was tested and no problems were found. In addition, the black dot patterns of the e-pucks were upgraded to LED patterns. The advantage of this improvement is that the machine vision system can recognize each e-puck no matter where it is located, and the background color no longer affects the vision system. However, the color sensor had to be moved to the LED pattern board, since the LED pattern board would block the sensor if it remained on the XBee Interface Extension Board. Thus, we now consider the light interference between the LEDs and the color sensor. In the light interference tests, we found that the color sensor is affected by light from the LED. However, since the LED used in those tests was much brighter than the LEDs used on the LED pattern board, more experiments are needed to obtain accurate interference data.