<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://hades.mech.northwestern.edu//api.php?action=feedcontributions&amp;feedformat=atom&amp;user=ScottMcLeod</id>
	<title>Mech - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://hades.mech.northwestern.edu//api.php?action=feedcontributions&amp;feedformat=atom&amp;user=ScottMcLeod"/>
	<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php/Special:Contributions/ScottMcLeod"/>
	<updated>2026-05-16T14:02:28Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.8</generator>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Serial_communication_with_Matlab&amp;diff=13153</id>
		<title>Serial communication with Matlab</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Serial_communication_with_Matlab&amp;diff=13153"/>
		<updated>2009-04-25T22:53:05Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Matlab Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
Matlab has a &amp;quot;serial&amp;quot; function that allows it to communicate through a serial port. This project establishes a serial port connection with the PIC microcontroller and demonstrates bidirectional communication between the PIC and a Matlab program. For demonstration purposes, the PIC will send digitized potentiometer readings to Matlab as well as receive keystrokes from the Matlab user to light up LEDs on its circuit board.&lt;br /&gt;
&lt;br /&gt;
A USB to RS232 adapter and level shifter chip were used to connect the computer to the PIC. In this lab, we used a cheap cable found at http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&amp;amp;item=220199148938&amp;amp;ih=012&amp;amp;category=41995&amp;amp;ssPageName=WDVW&amp;amp;rd=1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;**Important! DO NOT connect the serial Rx/Tx lines DIRECTLY to the PIC!!!**&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A level shifter chip is necessary to convert the RS232 logic voltages used by the desktop computer (approximately +12V/-12V) to the (+5V, 0V) levels expected by the PIC. A standard RS232 connection uses a DB9 connector and follows the pin diagram shown here: http://www.aggsoft.com/rs232-pinout-cable/serial-cable-connections.htm&lt;br /&gt;
This cable requires a single driver installation, included on the mini-CD.  To install this driver, first plug in the USB cable, then run the installation program on the CD corresponding to the model of the USB cable (&amp;lt;CDROM&amp;gt;:\HL-232-340\HL-340.exe).  This driver is also available online at this link:  &lt;br /&gt;
http://129.105.69.13/pic/usb_drivers/HL-340_USB_serial_drivers_WinXP/ .  To configure the Matlab script to connect to the proper serial port, use the Device Manager (right click My Computer-&amp;gt;Manage) and expand the section &amp;quot;Ports (COM &amp;amp; LPT)&amp;quot;.  Make a note of the COM port number corresponding to &amp;quot;USB-SERIAL CH340&amp;quot; as listed in this section.  In our setup, the serial port was COM4.  A picture is shown below of how to find this information in the Device Manager.&lt;br /&gt;
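&lt;br /&gt;
Once the COM port number is known, it can be sanity-checked from the Matlab command window before running the full script further down this page.  The snippet below is only a quick sketch using the same functions as that script; replace COM4 with the port you found in the Device Manager.&lt;br /&gt;
&lt;br /&gt;
 s = serial(&#039;COM4&#039;,&#039;BAUD&#039;,19200);   % Create the serial object on the port found above&lt;br /&gt;
 fopen(s)                           % Errors here usually mean a wrong port number or another program holding the port&lt;br /&gt;
 get(s,&#039;Status&#039;)                    % Should report &#039;open&#039;&lt;br /&gt;
 fclose(s);                         % Release the port before running the full program&lt;br /&gt;
 delete(s);&lt;br /&gt;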
&lt;br /&gt;
[[Image:ComPortLookup.JPG |thumb|300px|right| COM Port Lookup - Device Manager]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A female DB9 connector was wired to our level shifter to convert the voltages, with the level shifter connected to our PIC.  The female DB9 connector is used so no wires need to be directly soldered to the serial cable.  Refer to the Circuit section for details on this connection.&lt;br /&gt;
&lt;br /&gt;
The PIC was programmed with our C code as shown below. Our program was designed to read a potentiometer through the PIC&#039;s ADC (Analog to Digital Converter) port and transmit the digitized readings over the serial cable to the PC (upon request).  In Matlab, if a user sends data to the PIC by entering a character, the PIC responds with the current potentiometer reading and the last received byte from the PC.  The PIC is also programmed to display the character received from the PC on its LED array (D register) as an 8-bit ASCII value.  The programs can easily be modified to create any custom protocol, but are designed to show simple 2-way communication between Matlab and the PIC.&lt;br /&gt;
&lt;br /&gt;
== Circuit ==&lt;br /&gt;
The wiring diagram for serial communication is shown below. There are three basic components in this setup. The potentiometer serves as an analog input to the PIC, which is converted to a digital signal through the PIC&#039;s analog to digital converter pin. The MAX232N level converter provides bidirectional voltage shifting for digital communication between the PIC and PC (read more about this chip and level conversion on the RS232 wiki [http://hades.mech.northwestern.edu/wiki/index.php/PIC_RS232 here]). Finally, the female DB-9 connector allows the circuit to connect to the PC&#039;s serial port.&lt;br /&gt;
 &lt;br /&gt;
[[Image:Team26-SerialComCircuit.jpg |thumb|640x470 px|center| Circuit Diagram for Serial Communication between PIC and PC]]&lt;br /&gt;
&lt;br /&gt;
The connections to the female DB-9 adapter are shown below.  &#039;&#039;&#039;These wires are soldered to the cup-side of the adapter, not directly to the serial cable.&#039;&#039;&#039;  Our DB-9 adapter is pictured below and follows the given connections:&lt;br /&gt;
  &lt;br /&gt;
 PIN5:DB-9 (green wire)  -&amp;gt;  Common Ground&lt;br /&gt;
 PIN3:DB-9 (yellow wire) -&amp;gt;  Receive (RX)   -&amp;gt;  PIN14:MAX232&lt;br /&gt;
 PIN2:DB-9 (white wire)  -&amp;gt;  Transmit (TX)  -&amp;gt;  PIN13:MAX232&lt;br /&gt;
&lt;br /&gt;
  [[Image:DB9Connector.jpg |thumb|300px|center| Closeup of DB-9 Connector]]&lt;br /&gt;
&lt;br /&gt;
Our final circuit is pictured below.&lt;br /&gt;
&lt;br /&gt;
[[Image:P1120664.JPG |thumb|640x470 px|center| Image of wiring for serial communication between PIC 18F4520 and PC]]&lt;br /&gt;
&lt;br /&gt;
== PIC Code ==&lt;br /&gt;
&lt;br /&gt;
 /*&lt;br /&gt;
    SerialComm.c Scott McLeod, Sandeep Prabhu, Brett Pihl 2/4/2008&lt;br /&gt;
    This program is designed to communicate to a computer using RS232 (Serial) Communication.&lt;br /&gt;
    &lt;br /&gt;
    The main loop of this program waits for a data transmission over the Serial port, and&lt;br /&gt;
    responds with a current reading of an analog input (potentiometer) and the last received data.&lt;br /&gt;
   &lt;br /&gt;
    Note the analog input is only for testing purposes, and is not necessary for serial communication.&lt;br /&gt;
    Lines unnecessary for RS232 communication are commented with enclosing asterisks (&#039;*..*&#039;).&lt;br /&gt;
  */&lt;br /&gt;
  &lt;br /&gt;
 #include &amp;lt;18f4520.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 #fuses HS,NOLVP,NOWDT,NOPROTECT&lt;br /&gt;
 #DEVICE ADC=8                          // *set ADC to 8 bit accuracy*&lt;br /&gt;
 #use delay(clock=20000000)             // 20 MHz clock&lt;br /&gt;
 #use rs232(baud=19200, UART1)          // Set up PIC UART on RC6 (tx) and RC7 (rx)  &lt;br /&gt;
  &lt;br /&gt;
 int8 data_tx, data_rx = 0;             // Set up data_tx (transmit value), data_rx (receive value)&lt;br /&gt;
  &lt;br /&gt;
 void main()&lt;br /&gt;
 {&lt;br /&gt;
    setup_adc_ports(AN0);               // *Enable AN0 as analog potentiometer input*&lt;br /&gt;
    setup_adc(ADC_CLOCK_INTERNAL);      // *the range selected has to start with AN0*&lt;br /&gt;
    set_adc_channel(0);                 // *Enable AN0 as analog input*&lt;br /&gt;
    delay_us(10);                       // *Pause 10us to set up ADC*&lt;br /&gt;
    &lt;br /&gt;
    while (TRUE)&lt;br /&gt;
    {&lt;br /&gt;
       data_tx = read_adc();            // *Read POT on analog port (0-255)*&lt;br /&gt;
       output_d(data_rx);               // Output last received value from computer&lt;br /&gt;
       delay_ms(10);&lt;br /&gt;
       &lt;br /&gt;
       if (kbhit())                     // If PIC senses data pushed to serial buffer&lt;br /&gt;
       {&lt;br /&gt;
          data_rx = fgetc();            // Read in received value from buffer&lt;br /&gt;
          printf(&amp;quot;Pot: %u Char: %u\n&amp;quot;, data_tx, data_rx);  // Once data sent and read, PIC sends data back&lt;br /&gt;
  &lt;br /&gt;
          //delay_ms(10);               // As tested briefly, this delay is unnecessary for our (relatively) slow data rate&lt;br /&gt;
                                        // If you are receiving data errors, you may want to introduce a slight delay&lt;br /&gt;
  &lt;br /&gt;
  &lt;br /&gt;
       }&lt;br /&gt;
    }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
== Tips on Designing a Protocol ==&lt;br /&gt;
A good way to start or debug your program is to use the PIC-C Serial Port Monitor (Tools Tab-&amp;gt;Serial Port Monitor).  This will allow you to send and receive raw data over the serial port, which makes it much easier to understand why a protocol isn&#039;t behaving correctly.  You can also use HyperTerminal on Windows XP (Start-&amp;gt;Programs-&amp;gt;Accessories-&amp;gt;Communications-&amp;gt;HyperTerminal), although in our testing this appeared to be less stable than the PIC-C Compiler&#039;s monitor.  Note that for both programs, you will have to configure which serial port to monitor using the same method described in the Overview.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Be sure to close all other programs accessing the serial ports (PIC-C, HyperTerminal, etc.) if you are having difficulty opening the port in MATLAB.&lt;br /&gt;
&lt;br /&gt;
== Matlab Code ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;**** THIS MATLAB CODE SHOULD BE UPDATED TO USE &amp;quot;fread(s,1)&amp;quot; instead of &amp;quot;fscanf(s)&amp;quot;.  fread(s,1) reads one raw byte at a time (as opposed to an ASCII value).  Without this change, Matlab can only read 8-bit ASCII characters and will reject a subset of the values between 0 and 255. *****&#039;&#039;&#039;&lt;br /&gt;
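&lt;br /&gt;
As a rough sketch of that suggested change (this is not part of the original program, and the variable names are placeholders), the fscanf(s) call inside the loop of the script below could be replaced by a raw read such as the following:&lt;br /&gt;
&lt;br /&gt;
 pause(0.05);                                % Give the PIC a moment to respond&lt;br /&gt;
 n = get(s, &#039;BytesAvailable&#039;);               % Number of bytes waiting in the input buffer&lt;br /&gt;
 if (n &amp;gt; 0)&lt;br /&gt;
     reply = fread(s, n);                    % Raw byte values 0-255, one element per byte&lt;br /&gt;
     disp(char(reply&#039;));                     % The demo protocol is ASCII text, so show it as characters&lt;br /&gt;
 end&lt;br /&gt;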
&lt;br /&gt;
&#039;&#039;&#039;If your program doesn&#039;t close and delete the serial port object correctly, you can use the command shown below to delete all of the serial port objects.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 delete(instrfind)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 %  SerialComm.m  Scott McLeod, Sandeep Prabhu, Brett Pihl 2/4/2008&lt;br /&gt;
 %  This program is designed to communicate to a PIC 18F4520 via RS232 (Serial) Communication.&lt;br /&gt;
 %  &lt;br /&gt;
 %  The main loop of this program waits for a character input from the user,&lt;br /&gt;
 %  upon which it transmits the ASCII value and waits for the PIC&#039;s response.&lt;br /&gt;
 &lt;br /&gt;
 s = serial(&#039;COM4&#039;,&#039;BAUD&#039;,19200);            % Create serial object (PORT Dependent)&lt;br /&gt;
 fopen(s)                                    % Open the serial port for r/w&lt;br /&gt;
 &lt;br /&gt;
 myChar = &#039;a&#039;;                               &lt;br /&gt;
 prompt = &#039;Enter a character (q to exit): &#039;; &lt;br /&gt;
 &lt;br /&gt;
 while (myChar ~= &#039;q&#039;)                       % While user hasn&#039;t typed &#039;q&#039;&lt;br /&gt;
     fprintf(s, &#039;%s&#039;, myChar(1))             % Write first char of user input to serial port&lt;br /&gt;
     fprintf(fscanf(s))                      % Read Data back from PIC&lt;br /&gt;
     myChar = input(prompt, &#039;s&#039;);            % Get user input&lt;br /&gt;
 end&lt;br /&gt;
 &lt;br /&gt;
 fclose(s);                                  % Close the serial port&lt;br /&gt;
 delete(s);                                  % Delete the serial object&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
&lt;br /&gt;
More on Serial and the PIC: http://hades.mech.northwestern.edu/wiki/index.php/PIC_RS232&lt;br /&gt;
&lt;br /&gt;
MAX232 Data Sheet: http://rocky.digikey.com/WebLib/Texas%20Instruments/Web%20data/MAX232,232I.pdf&lt;br /&gt;
&lt;br /&gt;
Overview of RS232 Protocol: http://en.wikipedia.org/wiki/RS-232&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8949</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8949"/>
		<updated>2008-06-21T04:15:26Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Final Project Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras&#039; fields of view must overlap along the inside edges by at least the size of one target.  This is necessary to ensure any given target is always fully inside at least one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library is broadly compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover a continuous region, the images as seen by the cameras must overlap to ensure a target is always fully visible in at least one camera&#039;s frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software (though this has not been implemented).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows based computer (as restricted by the videoInput library).  The system should run on Windows XP or Vista.  To set up the computer to develop and run the software, the required drivers, SDKs, and libraries listed below must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install the Logitech QuickCam Deluxe Webcam Drivers - http://www.logitech.com/index.cfm/435/3057&amp;amp;cl=us,en&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
5.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
6.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
7.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
&lt;br /&gt;
Open the project file as listed at the bottom of this page.  Compile and run the program from Visual Studio or directly from the build folder.&lt;br /&gt;
&lt;br /&gt;
Once running, the program should outline what is necessary to set it up and operate it.  The steps are also outlined below.&lt;br /&gt;
&lt;br /&gt;
1.  Connect to cameras.  You will have to type the 4 numbers (in the command prompt) associated with the correct capture devices.  Most likely, these will be 0-3.&lt;br /&gt;
&lt;br /&gt;
2.  Orient cameras.  To correlate an individual camera with its position overhead, you must click once on the quadrant corresponding to the live camera image shown.  You will click 4 times, once for each camera.&lt;br /&gt;
&lt;br /&gt;
3.  Check.  Once you have selected the quadrants for each camera, you can look at the combined image in the main window to make sure the cameras are positioned correctly and that the background is all white.  Press the Enter button in the main window to continue.&lt;br /&gt;
&lt;br /&gt;
4.  Enter Calibration Parameters.  To calibrate the cameras to world coordinates, you must type in numbers relating to the patterns&#039; positions.  Ensure that you place the patterns down in a rectangular fashion (in the world frame, not the images).  The numbers are typed into the command prompt.  You will enter three numbers as measured (they may be in inches, cm, or whatever unit is desired).&lt;br /&gt;
&lt;br /&gt;
5.  Capture Calibration Pattern.  To calibrate the cameras, you must now take a picture of the patterns as viewed by ALL FOUR Cameras.  By now, you should have the calibration pattern arranged on the surface.  ****ENSURE THAT THE ONLY VISIBLE OBJECTS IN THE IMAGE ARE THE DOTS FROM THE CALIBRATION PATTERN****  The program is designed to calibrate to the first 9 dots in each image that it sees.  If there are more than 9 dots in ANY camera image, the program will correlate the FIRST 9 dots to the measured positions.  If one camera sees more than one calibration pattern (fully or partially) it will NOT calibrate properly.  You may adjust the threshold with the + and - keys dynamically to remove any specular noise.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_calibration_alignment.jpg|center|thumb|300px|Calibration Pattern Alignment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6.  Remove the Calibration Pattern.  Now remove or cover up the calibration dots and press &#039;Enter&#039; to proceed to the real time operation of the program.  To quit the program, Press &#039;Esc&#039;.&lt;br /&gt;
[[Image:visual_localization_real_time.jpg|center|thumb|300px|Real-Time Processing]]&lt;br /&gt;
[[Image:visual_localization_data.jpg|center|thumb|300px|Real-Time Data]]&lt;br /&gt;
&lt;br /&gt;
=== Real time Adjustable Parameters ===&lt;br /&gt;
During the real time operation of the program, many of the algorithm parameters will need to be adjusted for the current setup.  These parameters and keys are listed below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;+&#039;/&#039;-&#039;: Binary Thresholding:  To adjust the black and white levels, use the + and - keys to increase or decrease the threshold.  Ideally, this should be increased as high as possible without producing random noise.&lt;br /&gt;
&lt;br /&gt;
&#039;[&#039;/&#039;]&#039;: Target Size:  This measurement corresponds to the maximum distance between dots in a target.  If your 3x3 grid is spaced at 1 inch intervals, this value is ideally 1 inch.  This parameter should be decreased as much as possible before targets are lost.  If this parameter is too large, the algorithm will blend the patterns together.&lt;br /&gt;
&lt;br /&gt;
&#039;z&#039;/&#039;x&#039;: Area Thresholding:  This parameter controls the desired size of &#039;dots&#039;.  This is used to remove relatively large or small noise artifacts.  This parameter should be as large as possible to remove specular noise.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To encode both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each pair of dots.  In this sense, the pattern is identified by a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each pair of dots.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the maximum number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
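&lt;br /&gt;
A small sketch of this signature computation is shown below in Matlab-style pseudocode (the actual project is C++ with OpenCV, and the variable names here are placeholders).  Normalizing by the smallest pairwise distance is just one possible choice; sorting the list makes the signature invariant to rotation, reflection, and the order in which the dots were found.&lt;br /&gt;
&lt;br /&gt;
 % Sketch only: compute a pattern signature from dot centers.&lt;br /&gt;
 % dots is an N-by-2 matrix of (x, y) dot centers for one target, with N between 3 and 9.&lt;br /&gt;
 N = size(dots, 1);&lt;br /&gt;
 d = [];&lt;br /&gt;
 for i = 1:N-1&lt;br /&gt;
     for j = i+1:N&lt;br /&gt;
         d(end+1) = norm(dots(i,:) - dots(j,:));   % One entry per pair, at most 9*8/2 = 36&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
 signature = sort(d / min(d));                     % Normalize by the grid spacing so scale drops out&lt;br /&gt;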
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  *Note: to create a different target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras: determining both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed by using a simple linear least squares best fit model.  The calibration process needs at least 6 points as measured in the world and image frames to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and help compute an accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
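&lt;br /&gt;
The least-squares step can be sketched as a textbook direct linear transform (DLT).  The snippet below is Matlab-style pseudocode under the assumption of well-spread, non-degenerate calibration points; it is not the project&#039;s C++ code, and the variable names are placeholders.&lt;br /&gt;
&lt;br /&gt;
 % Sketch only: estimate the 3x4 projection matrix P mapping world points to pixels.&lt;br /&gt;
 % world is N-by-3 (measured world coordinates), pix is N-by-2 (pixel coordinates), N at least 6.&lt;br /&gt;
 N = size(world, 1);&lt;br /&gt;
 A = zeros(2*N, 12);&lt;br /&gt;
 for i = 1:N&lt;br /&gt;
     X = [world(i,:) 1];                      % Homogeneous world point, 1-by-4&lt;br /&gt;
     u = pix(i,1);  v = pix(i,2);&lt;br /&gt;
     A(2*i-1, :) = [ X, zeros(1,4), -u*X ];   % u*(row 3 of P)*X = (row 1 of P)*X&lt;br /&gt;
     A(2*i,   :) = [ zeros(1,4), X, -v*X ];   % v*(row 3 of P)*X = (row 2 of P)*X&lt;br /&gt;
 end&lt;br /&gt;
 [U, S, V] = svd(A);&lt;br /&gt;
 P = reshape(V(:, end), 4, 3)&#039;;               % Least-squares solution, defined only up to scale&lt;br /&gt;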
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program performs the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV, so we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored in a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the algorithm outlined below (a short sketch of this grouping step follows the list).&lt;br /&gt;
&lt;br /&gt;
1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
&lt;br /&gt;
2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
&lt;br /&gt;
3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
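&lt;br /&gt;
The sketch below restates this grouping rule in Matlab-style pseudocode (the real implementation is C++ with linked lists; dots, maxSpacing, and targets are placeholder names).&lt;br /&gt;
&lt;br /&gt;
 % Sketch only: group dot centers into targets by proximity.&lt;br /&gt;
 % dots is N-by-2 pixel centers; maxSpacing is the adjustable target size parameter.&lt;br /&gt;
 targets = {};                                   % Each cell holds the dots of one target&lt;br /&gt;
 for i = 1:size(dots, 1)&lt;br /&gt;
     placed = false;&lt;br /&gt;
     for t = 1:length(targets)&lt;br /&gt;
         diffs = targets{t} - repmat(dots(i,:), size(targets{t}, 1), 1);&lt;br /&gt;
         if any(sqrt(sum(diffs.^2, 2)) &amp;lt;= maxSpacing)   % Close enough to a dot already in this target&lt;br /&gt;
             targets{t} = [targets{t}; dots(i,:)];&lt;br /&gt;
             placed = true;&lt;br /&gt;
             break;&lt;br /&gt;
         end&lt;br /&gt;
     end&lt;br /&gt;
     if ~placed&lt;br /&gt;
         targets{end+1} = dots(i,:);             % Start a new target with this dot&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;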
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and speed of computation.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching appears extremely robust since the spacing between dots falls at such clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared differences error calculation.  After this process, each region of dots is classified by the global number assigned by the pre-processing algorithm.&lt;br /&gt;
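&lt;br /&gt;
Continuing the Matlab-style sketch from the Target Classification section (placeholder names, not the project&#039;s C++), the squared-differences comparison between two signatures with the same dot count is simply:&lt;br /&gt;
&lt;br /&gt;
 % Sketch only: smaller err means a better match; the trained pattern with the minimum err wins.&lt;br /&gt;
 err = sum((signatureSeen - signatureTrained).^2);&lt;br /&gt;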
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
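&lt;br /&gt;
One way the pixel-to-world transform can be realized is sketched below in Matlab-style pseudocode, under the assumption that all targets lie on the ground plane (Z = 0), so that three columns of the 3x4 projection matrix form an invertible plane-to-image homography.  This is only an illustration of the math; it is not the project&#039;s C++ code.&lt;br /&gt;
&lt;br /&gt;
 % Sketch only: map a pixel (u, v) back to world coordinates on the Z = 0 plane.&lt;br /&gt;
 H = P(:, [1 2 4]);                % Columns 1, 2, and 4 of P act on (X, Y, 1) when Z = 0&lt;br /&gt;
 w = H \ [u; v; 1];                % Invert the plane-to-image homography&lt;br /&gt;
 worldXY = w(1:2) / w(3);          % Inhomogeneous world (X, Y) of the dot&lt;br /&gt;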
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern in each overlapping image.  Ideally, the images of the cameras would be perfectly matched such that this information was redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other scenario for errors is if a pattern is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;br /&gt;
&lt;br /&gt;
Real-time algorithm source:&lt;br /&gt;
&lt;br /&gt;
http://hades.mech.northwestern.edu/wiki/images/8/85/TrackSysV1_6_20_08.zip&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pre-processing program source with final patterns:&lt;br /&gt;
&lt;br /&gt;
http://hades.mech.northwestern.edu/wiki/images/f/fe/TrackSysTraining_6_20_08.zip&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8948</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8948"/>
		<updated>2008-06-21T04:14:50Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Final Project Code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders, or local landmarks.  Opposed to an absolute system, these relativistic designs are subject to cumulating errors.  In this design, the positioning information is calculated by an external computer which then transmits data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras actual placement must have an overlap along inside edges at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover an continuous region, the images as seen by the cameras must overlap to ensure a target is at least fully visible in one frame of a camera***&#039;&#039;&#039; Keep in mind that there is a trade-off between area, and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software (though this has not been implemented).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows based computer (as restricted by the videoInput library).  The system should run in Windows XP or Vista.  To setup the computer to develop and run the software, the three required libraries must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install the Logitech QuickCam Deluxe Webcam Drivers - http://www.logitech.com/index.cfm/435/3057&amp;amp;cl=us,en&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
5.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
6.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
7.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
&lt;br /&gt;
Open the project file as listed at the bottom of this page.  Compile and run the program from Visual Studio or as in the build folder.&lt;br /&gt;
&lt;br /&gt;
Once running, the program should outline what is necessary to setup and operate.  The steps are also outlined below.&lt;br /&gt;
&lt;br /&gt;
1.  Connect to cameras.  You will have to type the 4 numbers (in the command prompt) associated with the correct capture devices.  Most likely, these will be 0-3.&lt;br /&gt;
&lt;br /&gt;
2.  Orient cameras.  To correlate an individual camera with its position overhead, you must click once on the quadrant corresponding to the live camera image shown.  You will click 4 times, once for each camera.&lt;br /&gt;
&lt;br /&gt;
3.  Check.  Once you have selected the quadrants for each camera, you can look at the combined image in the main window to make sure the cameras are positioned correctly and that the background is all white.  Press the Enter button in the main window to continue.&lt;br /&gt;
&lt;br /&gt;
4.  Enter Calibration Parameters.  To calibrate the cameras to world coordinates, you must type in numbers relating to the patterns positions.  Ensure that you place the patterns down in a rectangular fashion (in the world frame not the images).  The numbers are typed into the command prompt.  You will enter three numbers as measured (they may be in inches or cm or whatever unit desired).&lt;br /&gt;
&lt;br /&gt;
5.  Capture Calibration Pattern.  To calibrate the cameras, you must now take a picture of the patterns as viewed by ALL FOUR Cameras.  By now, you should have the calibration pattern arranged on the surface.  ****ENSURE THAT THE ONLY VISIBLE OBJECTS IN THE IMAGE ARE THE DOTS FROM THE CALIBRATION PATTERN****  The program is designed to calibrate to the first 9 dots in each image that it sees.  If there are more than 9 dots in ANY camera image, the program will correlate the FIRST 9 dots to the measured positions.  If one camera sees more than one calibration pattern (fully or partially) it will NOT calibrate properly.  You may adjust the threshold with the + and - keys dynamically to remove any specular noise.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_calibration_alignment.jpg|center|thumb|300px|Calibration Pattern Alignment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6.  Remove the Calibration Pattern.  Now remove or cover up the calibration dots and press &#039;Enter&#039; to proceed to the real time operation of the program.  To quit the program, Press &#039;Esc&#039;.&lt;br /&gt;
[[Image:visual_localization_real_time.jpg|center|thumb|300px|Real-Time Processing]]&lt;br /&gt;
[[Image:visual_localization_data.jpg|center|thumb|300px|Real-Time Data]]&lt;br /&gt;
&lt;br /&gt;
=== Real time Adjustable Parameters ===&lt;br /&gt;
During the real time operation of the program, many of the algorithm parameters will need to be adjusted for the current setup.  These parameters and keys are listed below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;+&#039;/&#039;-&#039;: Binary Thresholding:  To adjust the black and white levels, use the + and - keys to increase or decrease the threshold.  Ideally, this should be increased as high as possible without producing random noise.&lt;br /&gt;
&lt;br /&gt;
&#039;[&#039;/&#039;]&#039;: Target Size:  This measurement corresponds to the maximum distance between dots in a target.  If your 3x3 grid is spaced at 1 inch intervals, this value is ideally 1 inch.  This parameter should be decreased as much as possible before targets are lost.  If this parameter is too large, the algorithm will blend the patterns together.&lt;br /&gt;
&lt;br /&gt;
&#039;z&#039;/&#039;x&#039;: Area Thresholding:  This parameter controls the desired size of &#039;dots&#039;.  This is used to remove relatively large or small noise artifacts.  This parameter should be as large as possible to remove specular noise.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacing between dots is 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the most number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  *Note to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is training the cameras both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed by using a simple linear least squares best fit model.  The calibration process needs at least 6 points as measured in the world and image frames to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and help best compute an accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Harley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
During actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, it performs the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  The algorithm then thresholds the image at a set level, converting the grayscale image to a binary image: pixel values above the threshold become white (255) and all other values become black (0).  After these two operations, the result is a black-and-white binary image.  The default threshold level is set to 80.&lt;br /&gt;
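&lt;br /&gt;
With OpenCV this step reduces to two library calls.  A minimal sketch is shown below; it uses the modern OpenCV C++ interface rather than the C interface the original 2008 program would have used, and the names are illustrative.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Sketch: convert a captured frame to a binary image using a fixed threshold.&lt;br /&gt;
#include &amp;lt;opencv2/imgproc.hpp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cv::Mat binarize(const cv::Mat&amp;amp; frame, double level = 80.0)     // default level from the text&lt;br /&gt;
{&lt;br /&gt;
    cv::Mat gray, binary;&lt;br /&gt;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);               // color image to 0-255 grayscale&lt;br /&gt;
    cv::threshold(gray, binary, level, 255, cv::THRESH_BINARY);  // above level becomes 255, else 0&lt;br /&gt;
    return binary;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;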
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV, and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored in a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the algorithm outlined below.&lt;br /&gt;
&lt;br /&gt;
1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
&lt;br /&gt;
2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
&lt;br /&gt;
3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
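&lt;br /&gt;
A condensed sketch of this grouping step is shown below.  It uses the modern OpenCV C++ interface and simplified stand-ins for the dotData and target structures described above; maxSpacing corresponds to the Target Size parameter and the area limits to the Area Thresholding parameter.  It illustrates the scheme and is not the project source.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Sketch: find dot contours in the binary image and group them into targets&lt;br /&gt;
// using the maximum-spacing rule above.  Structures are simplified stand-ins.&lt;br /&gt;
#include &amp;lt;opencv2/imgproc.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
#include &amp;lt;cmath&amp;gt;&lt;br /&gt;
&lt;br /&gt;
struct Dot    { cv::Point2f center; double area; };&lt;br /&gt;
struct Target { std::vector&amp;lt;Dot&amp;gt; dots; };&lt;br /&gt;
&lt;br /&gt;
std::vector&amp;lt;Target&amp;gt; groupDots(const cv::Mat&amp;amp; binary, float maxSpacing,&lt;br /&gt;
                              double minArea, double maxArea)&lt;br /&gt;
{&lt;br /&gt;
    std::vector&amp;lt;std::vector&amp;lt;cv::Point&amp;gt;&amp;gt; contours;&lt;br /&gt;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);&lt;br /&gt;
&lt;br /&gt;
    std::vector&amp;lt;Target&amp;gt; targets;&lt;br /&gt;
    for (const auto&amp;amp; c : contours) {&lt;br /&gt;
        cv::Moments m = cv::moments(c);&lt;br /&gt;
        if (m.m00 &amp;lt; minArea || m.m00 &amp;gt; maxArea) continue;   // area thresholding&lt;br /&gt;
        Dot d{ cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00)), m.m00 };&lt;br /&gt;
&lt;br /&gt;
        Target* home = nullptr;                  // 1. check dots in every existing target&lt;br /&gt;
        for (auto&amp;amp; t : targets) {&lt;br /&gt;
            for (const auto&amp;amp; other : t.dots)&lt;br /&gt;
                if (std::hypot(other.center.x - d.center.x,&lt;br /&gt;
                               other.center.y - d.center.y) &amp;lt;= maxSpacing) { home = &amp;amp;t; break; }&lt;br /&gt;
            if (home) break;&lt;br /&gt;
        }&lt;br /&gt;
        if (home) home-&amp;gt;dots.push_back(d);             // 2. join that target&lt;br /&gt;
        else      targets.push_back(Target{ { d } });   // 3. otherwise start a new target&lt;br /&gt;
    }&lt;br /&gt;
    return targets;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;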
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This data is extremely robust since the spacing between dots falls at clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared-differences error calculation.  After this process, each region of dots is labeled with the global pattern number assigned by the pre-processing algorithm.&lt;br /&gt;
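&lt;br /&gt;
A sketch of that comparison is given below, assuming each trained pattern and each detected target has already been reduced to a sorted, normalized distance signature as in the earlier pre-processing sketch.  The names are illustrative and this is not the project source.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Sketch: classify a detected target by comparing its signature against each&lt;br /&gt;
// trained pattern with the same number of dots (squared-differences error).&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
#include &amp;lt;limits&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int bestMatch(const std::vector&amp;lt;float&amp;gt;&amp;amp; sig, size_t nDots,&lt;br /&gt;
              const std::vector&amp;lt;std::vector&amp;lt;float&amp;gt;&amp;gt;&amp;amp; trainedSigs,&lt;br /&gt;
              const std::vector&amp;lt;size_t&amp;gt;&amp;amp; trainedDotCounts)&lt;br /&gt;
{&lt;br /&gt;
    int best = -1;&lt;br /&gt;
    double bestErr = std::numeric_limits&amp;lt;double&amp;gt;::max();&lt;br /&gt;
    for (size_t k = 0; k &amp;lt; trainedSigs.size(); ++k) {&lt;br /&gt;
        if (trainedDotCounts[k] != nDots) continue;        // only compare equal dot counts&lt;br /&gt;
        if (trainedSigs[k].size() != sig.size()) continue;&lt;br /&gt;
        double err = 0.0;&lt;br /&gt;
        for (size_t i = 0; i &amp;lt; sig.size(); ++i) {&lt;br /&gt;
            double diff = sig[i] - trainedSigs[k][i];&lt;br /&gt;
            err += diff * diff;                            // squared-differences error&lt;br /&gt;
        }&lt;br /&gt;
        if (err &amp;lt; bestErr) { bestErr = err; best = (int)k; }&lt;br /&gt;
    }&lt;br /&gt;
    return best;                                           // trained pattern index, or -1&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;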
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are averaged to find the group center of mass, which becomes the world-coordinate position of the target.  To calculate the angle, specific angle information is extracted from the pattern and combined with the pre-processed offset angle to generate a group orientation.  The resulting position and orientation are then sent out over the user-specified serial port.&lt;br /&gt;
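&lt;br /&gt;
The sketch below illustrates the pixel-to-world step and the group center of mass.  It assumes the dots lie in the Z = 0 world plane, so that three columns of the 3x4 calibration matrix form an invertible 3x3 homography whose precomputed inverse Hinv maps image points back onto that plane; the function names are illustrative.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Sketch: map dot centroids from pixels to world coordinates and average them&lt;br /&gt;
// to find the group center of mass.  Hinv is the precomputed inverse of the&lt;br /&gt;
// plane-to-image homography taken from the calibration matrix (Z = 0 plane).&lt;br /&gt;
#include &amp;lt;opencv2/core.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
&lt;br /&gt;
cv::Point2f pixelToWorld(const cv::Mat&amp;amp; Hinv, const cv::Point2f&amp;amp; px)&lt;br /&gt;
{&lt;br /&gt;
    cv::Mat p = (cv::Mat_&amp;lt;double&amp;gt;(3, 1) &amp;lt;&amp;lt; px.x, px.y, 1.0);   // homogeneous pixel&lt;br /&gt;
    cv::Mat w = Hinv * p;&lt;br /&gt;
    return cv::Point2f(float(w.at&amp;lt;double&amp;gt;(0) / w.at&amp;lt;double&amp;gt;(2)),&lt;br /&gt;
                       float(w.at&amp;lt;double&amp;gt;(1) / w.at&amp;lt;double&amp;gt;(2)));&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
cv::Point2f groupCenter(const cv::Mat&amp;amp; Hinv, const std::vector&amp;lt;cv::Point2f&amp;gt;&amp;amp; dotPixels)&lt;br /&gt;
{&lt;br /&gt;
    cv::Point2f sum(0.f, 0.f);&lt;br /&gt;
    for (const cv::Point2f&amp;amp; px : dotPixels) sum += pixelToWorld(Hinv, px);&lt;br /&gt;
    return sum * (1.0f / (float)dotPixels.size());          // group center of mass&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;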
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern in each overlapping section.  Ideally, the images of the cameras would be perfectly matched so that this information was purely redundant; in practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other source of error is a pattern that is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
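&lt;br /&gt;
A simplified sketch of this rejection logic is shown below.  The Detection structure and the function name are assumptions made for the example: detections from all of the images are merged, duplicates of the same pattern are averaged, and of two nearby conflicting detections the one built from fewer dots is dropped.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Sketch: merge per-camera detections in world coordinates.  Detections of the&lt;br /&gt;
// same pattern within the distance threshold are averaged; of two nearby&lt;br /&gt;
// conflicting detections, the one with fewer dots is treated as a partial&lt;br /&gt;
// view and rejected.  The structure and names are illustrative.&lt;br /&gt;
#include &amp;lt;opencv2/core.hpp&amp;gt;&lt;br /&gt;
#include &amp;lt;vector&amp;gt;&lt;br /&gt;
#include &amp;lt;cmath&amp;gt;&lt;br /&gt;
&lt;br /&gt;
struct Detection { int id; cv::Point2f pos; int nDots; };&lt;br /&gt;
&lt;br /&gt;
std::vector&amp;lt;Detection&amp;gt; mergeDetections(const std::vector&amp;lt;Detection&amp;gt;&amp;amp; all, float threshold)&lt;br /&gt;
{&lt;br /&gt;
    std::vector&amp;lt;Detection&amp;gt; out;&lt;br /&gt;
    for (const Detection&amp;amp; d : all) {&lt;br /&gt;
        bool merged = false;&lt;br /&gt;
        for (Detection&amp;amp; o : out) {&lt;br /&gt;
            if (std::hypot(o.pos.x - d.pos.x, o.pos.y - d.pos.y) &amp;gt; threshold) continue;&lt;br /&gt;
            if (o.id == d.id)           o.pos = (o.pos + d.pos) * 0.5f;  // average duplicate&lt;br /&gt;
            else if (d.nDots &amp;gt; o.nDots) o = d;                           // keep the fuller pattern&lt;br /&gt;
            merged = true;&lt;br /&gt;
            break;&lt;br /&gt;
        }&lt;br /&gt;
        if (!merged) out.push_back(d);&lt;br /&gt;
    }&lt;br /&gt;
    return out;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;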
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;br /&gt;
&lt;br /&gt;
Real-time algorithm source:&lt;br /&gt;
&lt;br /&gt;
http://hades.mech.northwestern.edu/wiki/images/f/fe/TrackSysTraining_6_20_08.zip&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Pre-processing program source with final patterns:&lt;br /&gt;
&lt;br /&gt;
http://hades.mech.northwestern.edu/wiki/images/8/85/TrackSysV1_6_20_08.zip&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=File:TrackSysV1_6_20_08.zip&amp;diff=8947</id>
		<title>File:TrackSysV1 6 20 08.zip</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=File:TrackSysV1_6_20_08.zip&amp;diff=8947"/>
		<updated>2008-06-21T04:12:53Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: Real-Time Pattern Tracking program
Scott McLeod 6-20-2008&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Real-Time Pattern Tracking program&lt;br /&gt;
Scott McLeod 6-20-2008&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=File:TrackSysTraining_6_20_08.zip&amp;diff=8946</id>
		<title>File:TrackSysTraining 6 20 08.zip</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=File:TrackSysTraining_6_20_08.zip&amp;diff=8946"/>
		<updated>2008-06-21T04:10:03Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: Real-Time pattern tracking algorithm c++
Scott McLeod 6-20-2008&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Real-Time pattern tracking algorithm c++&lt;br /&gt;
Scott McLeod 6-20-2008&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Granular_Flow_Rotating_Sphere&amp;diff=8894</id>
		<title>Granular Flow Rotating Sphere</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Granular_Flow_Rotating_Sphere&amp;diff=8894"/>
		<updated>2008-06-13T02:37:03Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Spring Quarter Update */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ME 333 final projects]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Team-21-main-picture.JPG|right|Our Final Design|thumb|500px]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Spring Quarter Update==&lt;br /&gt;
&lt;br /&gt;
This zip file contains the documentation, MATLAB code/examples, PIC code and circuit diagram.&lt;br /&gt;
&lt;br /&gt;
http://hades.mech.northwestern.edu/wiki/images/3/3d/Tumbler.zip&lt;br /&gt;
&lt;br /&gt;
Contact Scott McLeod for further questions.&lt;br /&gt;
&lt;br /&gt;
==Team Members==&lt;br /&gt;
*Brian Kephart - Electrical Engineering Class of 2009&lt;br /&gt;
*Jonathan Shih - Mechanical Engineering Class of 2009&lt;br /&gt;
*Kristi Bond - Mechanical Engineering Class of 2008&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
&lt;br /&gt;
A clear sphere is filled with grains of different sizes.  Our apparatus rotates this ball about two different axes based on a series of user inputs.  The user enters specific values such as angle and rotational speed into Matlab.  Our device takes these inputs and processes them using a series of master and slave PICs to appropriately control the motors.  The motors then turn for the requested duration at the desired speed, spinning the ball through the frictional connection between each motor and the sphere or lazy susan, respectively.  &lt;br /&gt;
&lt;br /&gt;
This apparatus will be used for the study of granular flow and the mixing of particles within the sphere.  It was important to leave the ball as visible as possible so that pictures of the grains inside can be taken from many angles.  With this apparatus we hope to aid the study of granular flow theory and allow the researchers to use the device for many different applications.&lt;br /&gt;
&lt;br /&gt;
==Mechanical Set-up==&lt;br /&gt;
&lt;br /&gt;
====Main Housing====&lt;br /&gt;
&lt;br /&gt;
The main housing, or case, for our design is composed of the following pieces.&lt;br /&gt;
&lt;br /&gt;
*One 13.5” x 12” x ¾” plywood rectangle&lt;br /&gt;
*One 13.5” x 12”  x ¾” plywood rectangle with a 3.5” diameter circle removed from the center&lt;br /&gt;
*Two 12” x 2.5”x ¾” plywood rectangles&lt;br /&gt;
&lt;br /&gt;
The two larger rectangles form the top and bottom of the set-up with the two smaller rectangles placed vertically between to form a box with two open ends on the front and back face.  &lt;br /&gt;
&lt;br /&gt;
====Ball Support====&lt;br /&gt;
&lt;br /&gt;
Three ball casters are placed on vertical mounts around the center circle of the top piece of the housing at equal angles.  These casters prevent the ball from moving in any horizontal direction so it is only free to rotate.  One of these casters is adjustable to allow the user to make sure the ball is correctly supported above the drive wheel.  The force of gravity is strong enough to prevent the ball from moving up and out of the housing and also ensures a good connection with the drive wheel that is placed directly under the center of the sphere.&lt;br /&gt;
&lt;br /&gt;
====Main Drive Wheel====&lt;br /&gt;
&lt;br /&gt;
The main drive wheel is centered under the rotating sphere.  The wheel is mounted onto a ¼” aluminum shaft which is connected to the Pittman motor with a flexible coupling and is also supported by a sleeve bearing on the other side of the wheel.  &lt;br /&gt;
&lt;br /&gt;
====Lazy Susan====&lt;br /&gt;
&lt;br /&gt;
The main drive wheel, with its corresponding motor and other components, is mounted on top of a lazy susan that is centered on the bottom piece of the housing and secured with screws.  The lazy susan allows rotational motion but prevents movement in any other direction, letting the wheel turn while always keeping the same center of contact with the sphere above.  It is important to ensure that the drive wheel has a good connection with the sphere, because the frictional force between the wheel and the sphere must be as large as possible so that as the wheel spins the ball spins at the same rate.  &lt;br /&gt;
&lt;br /&gt;
====Position Control Motor====&lt;br /&gt;
&lt;br /&gt;
A second motor is used in our design to turn the lazy susan.  The motor is mounted vertically through the top plate.  Another, smaller drive wheel is mounted directly to the motor shaft and aligned with only the top, free half of the lazy susan.  A second, idler wheel is mounted on the bottom plate so that the drive wheel is sandwiched between this wheel and the lazy susan.  This keeps the drive wheel in constant contact with the lazy susan, because the idler wheel exerts only a normal force.  Again, this ensures there is no slip between the lazy susan and the drive wheel, so that it can be controlled more easily.&lt;br /&gt;
&lt;br /&gt;
====Complete Parts List====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;Part&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Part No.&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Qty&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Pittman GM8224 motor &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Helical Beam Set-Screw Shaft Coupling&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt; 9861T508 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Mounted Sleeve Bearing&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;5912K21 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Flange Mount Ball Caster&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;5674K77  &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Lazy Susan&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1443T2&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Drive Wheel&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;60885K23 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Small Drive Wheel&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2471K12 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Idler Wheel&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;60885K79 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;1/4” Aluminum Rod&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Aluminum Sheet Metal&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Plywood&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Circuitry==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
Our project was unique in that we relied on three different PICs to precisely coordinate the motion of our ball. The main reason was that the 18F4520 chip only has enough encoder inputs for one Pittman motor. &lt;br /&gt;
&lt;br /&gt;
===Component List===&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;Part&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Part No.&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Qty&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;PIC18F4520 Prototyping Board&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Microchip 8-bit PIC Microcontroller&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;PIC18F4520&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Pittman Motor with Encoder&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;GM8224&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Hex Inverter Chip&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;SN74HC04&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Counter Chip&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;LS7083&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;H-Bridge Chip&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;L293&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Diodes&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1N4001&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;8&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;10K Resistor&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Hall Effect Sensor&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;A3240LUA-T&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Big Cat Super Strong Magnet&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;PM20134&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Set Up===&lt;br /&gt;
The electrical design for our project was pretty basic. All of our components (including the Pittman motors) were powered with 5V DC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PICs&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The three PICs communicated via I2C, which enabled us to control the two motors by telling the master PIC what to do (more information can be found [[I2C communication between PICs|here]]). We designated the PIC on the 18F4520 Prototyping Board as the &amp;quot;Master&amp;quot; and the other two PICs as the &amp;quot;Slaves.&amp;quot; It is important to connect the clock from the prototyping board to the two slave PICs; the two main communication lines are shared across the chips on pin 18 (RC3) and pin 23 (RC4) of each chip.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;H-Bridge&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each slave PIC sends an individual pulse to one of the two H-bridges (the L298 has two). The pulse width determines the direction and speed of each motor. At 50% duty cycle, the motor is at rest, while at 0 and 100% duty cycles the motor runs at maximum speed but in opposite directions. Pin 16 from the first slave PIC needs to connect to pin 10 on the L298, while the other should connect to pin 5.&lt;br /&gt;
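&lt;br /&gt;
As a small illustration of that mapping (this is only the arithmetic, not the project PIC code), a signed speed command could be converted to a duty-cycle percentage like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// Sketch: map a signed speed command in [-1, 1] to the duty-cycle scheme above&lt;br /&gt;
// (50% = stopped, 0% and 100% = full speed in opposite directions).&lt;br /&gt;
int speedToDuty(double speed)&lt;br /&gt;
{&lt;br /&gt;
    if (speed &amp;gt; 1.0)  speed = 1.0;      // clamp the command&lt;br /&gt;
    if (speed &amp;lt; -1.0) speed = -1.0;&lt;br /&gt;
    return (int)(50.0 + 50.0 * speed);  // duty cycle in percent, 0 to 100&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;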
&lt;br /&gt;
&amp;lt;b&amp;gt;Hex Inverter&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Pins 5 and 10 on the H-bridge also need to be connected to pins 1 and 13 on the hex inverter chip. The outputs of these two gates need to go back to the H-bridge as inverted signals for pulse width modulation (pin 1 to the L298&#039;s pin 12, and pin 14 to the L298&#039;s pin 7).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Counter&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The last main component that needs to be implemented is the counter chip. This lets us take fewer counts from the high-resolution encoder on the Pittman motors to control the movement of the motor. Pins 4 and 5 connect directly to the blue and yellow lines on each Pittman encoder. Pins 1, 3, and 6 should all be tied to ground, while pin 2 should be +5V. Pins 7 and 8 should connect to pins 15 and 6, respectively, on the slave PIC corresponding to this counter chip.&lt;br /&gt;
&lt;br /&gt;
===Schematic===&lt;br /&gt;
Here is a visual representation of how our circuit components fit together:&amp;lt;br&amp;gt;&lt;br /&gt;
[[Image:Team-21-circuit.JPG|left|Team Granular Flow Schematic|thumb|400px]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Team-21-circuit-image.JPG|left|Team Granular Flow Circuit|thumb|300px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Programming==&lt;br /&gt;
&lt;br /&gt;
===Code Overview===&lt;br /&gt;
&lt;br /&gt;
Three sets of code were required for our project: the MATLAB code for the user interface, C code for the master PIC, and C code for the slave PICs. The MATLAB code set up a GUI for intuitive control of the ball. The master PIC code read all the serial communication from MATLAB and converted it into appropriate I2C commands for the slave PICs, which were completely dedicated to motor control and encoding. All required code is provided below (note an additional file BKSMotorControllerFunctions.c which is used by the master PIC as well).&lt;br /&gt;
&lt;br /&gt;
Note: Thanks to Matt Turpin (of the [[IR Tracker]] project) whose code from his 399 independent study proved incredibly useful for our project.&lt;br /&gt;
&lt;br /&gt;
===PIC Code===&lt;br /&gt;
&lt;br /&gt;
[[Media:BKSBallMasterv1.c|BKSBallMasterv1.c]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[Media:BKSMotorControllerFunctions.c|BKSMotorControllerFunctions.c]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[Media:BKSBallSlavev1.c|BKSBallSlavev1.c]]&lt;br /&gt;
&lt;br /&gt;
===MATLAB Code===&lt;br /&gt;
&lt;br /&gt;
The GUIDE toolset in MATLAB was used to create the GUI. Once all the code is loaded onto the correct PICs, everything can be run through BKSBallControl.m.&lt;br /&gt;
&lt;br /&gt;
[[Media:BKSBallControl.m|BKSBallControl.m]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[Media:BKSBallControl.fig|BKSBallControl.fig]]&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
At the time of our presentation, we were able to demonstrate rotation about two axes by using the drive motor and lazy susan. Additionally, we were able to show effective and consistent communication of multiple bytes of data from MATLAB to the master PIC via RS-232. Unfortunately, we were unable to get the hardware for I2C working, despite the code working on Matt&#039;s setup. Overall, we were pleased with our progress in a relatively short amount of time.&lt;br /&gt;
&lt;br /&gt;
We hope to fix the hardware issue in the near future and possibly consolidate all the circuitry onto a PCB for a more robust device. We would also like to add a Hall effect switch or limit switch to indicate a set &amp;quot;Home&amp;quot; position. Additional work can be done on the motor control functions to implement feedback control as necessary. As there are clients who would like to see this project come to fruition, we want to make sure they are given a robust and flexible system for their use.&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=File:Tumbler.zip&amp;diff=8892</id>
		<title>File:Tumbler.zip</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=File:Tumbler.zip&amp;diff=8892"/>
		<updated>2008-06-13T02:36:27Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: Granular Flow Spring quarter update
Produced by Scott McLeod&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Granular Flow Spring quarter update&lt;br /&gt;
Produced by Scott McLeod&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Granular_Flow_Rotating_Sphere&amp;diff=8890</id>
		<title>Granular Flow Rotating Sphere</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Granular_Flow_Rotating_Sphere&amp;diff=8890"/>
		<updated>2008-06-13T02:34:02Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Spring Quarter Update */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ME 333 final projects]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Team-21-main-picture.JPG|right|Our Final Design|thumb|500px]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Spring Quarter Update==&lt;br /&gt;
&lt;br /&gt;
This zip file contains the documentation, MATLAB code/examples, PIC code and circuit diagram.&lt;br /&gt;
&lt;br /&gt;
==Team Members==&lt;br /&gt;
*Brian Kephart - Electrical Engineering Class of 2009&lt;br /&gt;
*Jonathan Shih - Mechanical Engineering Class of 2009&lt;br /&gt;
*Kristi Bond - Mechanical Engineering Class of 2008&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
&lt;br /&gt;
A clear sphere is filled with grains of different sizes.  Our apparatus rotates this ball about two different axes based on a series of user inputs.  The user enters specific values such as angle and rotational speed into Matlab.  Our device takes these inputs and processes them using a series of master and slave PICs to appropriately control the motors.  The motors then turn for the requested duration at the desired speed, spinning the ball through the frictional connection between each motor and the sphere or lazy susan, respectively.  &lt;br /&gt;
&lt;br /&gt;
This apparatus will be used for the study of granular flow and the mixing of particles within the sphere.  It was important to leave the ball as visible as possible so that pictures of the grains inside can be taken from many angles.  With this apparatus we hope to aid the study of granular flow theory and allow the researchers to use the device for many different applications.&lt;br /&gt;
&lt;br /&gt;
==Mechanical Set-up==&lt;br /&gt;
&lt;br /&gt;
====Main Housing====&lt;br /&gt;
&lt;br /&gt;
The main housing, or case, for our design is composed of the following pieces.&lt;br /&gt;
&lt;br /&gt;
*One 13.5” x 12” x ¾” plywood rectangle&lt;br /&gt;
*One 13.5” x 12”  x ¾” plywood rectangle with a 3.5” diameter circle removed from the center&lt;br /&gt;
*Two 12” x 2.5”x ¾” plywood rectangles&lt;br /&gt;
&lt;br /&gt;
The two larger rectangles form the top and bottom of the set-up with the two smaller rectangles placed vertically between to form a box with two open ends on the front and back face.  &lt;br /&gt;
&lt;br /&gt;
====Ball Support====&lt;br /&gt;
&lt;br /&gt;
Three ball casters are placed on vertical mounts around the center circle of the top piece of the housing at equal angles.  These casters prevent the ball from moving in any horizontal direction so it is only free to rotate.  One of these casters is adjustable to allow the user to make sure the ball is correctly supported above the drive wheel.  The force of gravity is strong enough to prevent the ball from moving up and out of the housing and also ensures a good connection with the drive wheel that is placed directly under the center of the sphere.&lt;br /&gt;
&lt;br /&gt;
====Main Drive Wheel====&lt;br /&gt;
&lt;br /&gt;
The main drive wheel is centered under the rotating sphere.  The wheel is mounted onto a ¼” aluminum shaft which is connected to the Pittman motor with a flexible coupling and is also supported by a sleeve bearing on the other side of the wheel.  &lt;br /&gt;
&lt;br /&gt;
====Lazy Susan====&lt;br /&gt;
&lt;br /&gt;
The main drive wheel, its corresponding motor, and other components are all mounted on top of a lazy susan that is centered on the bottom piece of the housing and secured with screws.  This lazy susan allows rotational motion but prevents movement in any other direction, so the wheel can turn while always keeping the same center of contact with the sphere above.  It is important to ensure that the drive wheel has a good connection with the sphere above: the frictional force between the wheel and the sphere must be as large as possible so that as the wheel spins the ball spins at the same rate.  &lt;br /&gt;
&lt;br /&gt;
====Position Control Motor====&lt;br /&gt;
&lt;br /&gt;
A second motor is used in our design to turn the lazy susan.  The motor is mounted vertically through the top plate.  Another, smaller, drive wheel is mounted directly to the motor shaft and then aligned with only the top, free half of the lazy susan.  A second, idler wheel is mounted on the bottom plate, so the drive wheel is sandwiched between this wheel and the lazy susan. This ensures that the drive wheel is always in contact with the lazy susan because the idler wheel exerts only a normal force.  Again, this ensures there is no slip between the lazy susan and the drive wheel, which makes automatic control easier.&lt;br /&gt;
&lt;br /&gt;
====Complete Parts List====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;Part&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Part No.&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Qty&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Pittman GM8224 motor &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Helical Beam Set-Screw Shaft Coupling&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt; 9861T508 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Mounted Sleeve Bearing&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;5912K21 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Flange Mount Ball Caster&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;5674K77  &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Lazy Susan&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1443T2&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Drive Wheel&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;60885K23 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Small Drive Wheel&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2471K12 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Idler Wheel&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;60885K79 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;1/4” Aluminum Rod&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Aluminum Sheet Metal&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Plywood&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Circuitry==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
Our project was unique in that we relied on three different PICs to precisely coordinate the motion of our ball. The main reason was that the 18F4520 chip has only enough encoder inputs for one Pittman motor. &lt;br /&gt;
&lt;br /&gt;
===Component List===&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;Part&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Part No.&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Qty&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;PIC18F4520 Prototyping Board&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Microchip 8-bit PIC Microcontroller&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;PIC18F4520&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Pittman Motor with Encoder&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;GM8224&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Hex Inverter Chip&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;SN74HC04&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Counter Chip&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;LS7083&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;H-Bridge Chip&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;L293&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Diodes&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1N4001&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;8&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;10K Resistor&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Hall Effect Sensor&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;A3240LUA-T&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Big Cat Super Strong Magnet&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;PM20134&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Set Up===&lt;br /&gt;
The electrical design for our project was pretty basic. All of our components (including the Pittman motors) were powered with 5V DC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PICs&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The three PICs communicated via I2C, which enabled us to control the two motors by telling the master PIC what to do (more information can be found [[I2C communication between PICs|here]]). We designated the PIC on the 18F4520 Prototyping Board as the &amp;quot;Master&amp;quot; and the other two PICs as the &amp;quot;Slaves.&amp;quot; It is important to connect the clock from the prototyping board to the two Slave PICs; the two I2C lines are shared across all three chips, with the clock on pin 18 (RC3/SCL) and the data line on pin 23 (RC4/SDA) of each chip.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;H-Bridge&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each slave PIC sends an individual pulse to one of the two H-bridges (the L298 has two). The pulse width determines the direction and speed of each motor. At 50% duty cycle, the motor is at rest, while at 0 and 100% duty cycles the motor runs at maximum speed but in opposite directions. Pin 16 from the first slave PIC needs to connect to pin 10 on the L298, while the other should connect to pin 5.&lt;br /&gt;
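&lt;br /&gt;
To make the duty-cycle relationship concrete, here is a minimal sketch in C (not taken from the actual slave PIC code; the signed speed range of -100 to 100 is an assumption chosen for the example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* Map a signed speed command (assumed range -100..100) to a PWM duty&lt;br /&gt;
   cycle in percent, where 50% holds the motor at rest and 0% or 100%&lt;br /&gt;
   gives full speed in opposite directions, as described above. */&lt;br /&gt;
static int speed_to_duty(int speed)&lt;br /&gt;
{&lt;br /&gt;
    if (speed &amp;gt; 100)  speed = 100;&lt;br /&gt;
    if (speed &amp;lt; -100) speed = -100;&lt;br /&gt;
    return 50 + speed / 2;   /* -100 gives 0%, 0 gives 50%, 100 gives 100% */&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    int s;&lt;br /&gt;
    for (s = -100; s &amp;lt;= 100; s += 50)&lt;br /&gt;
        printf(&amp;quot;speed %4d : duty %3d%%\n&amp;quot;, s, speed_to_duty(s));&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;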
&lt;br /&gt;
&amp;lt;b&amp;gt;Hex Inverter&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now pins 5 and 10 from the H-bridge need to go into pins 1 and 13 on the hex inverter chip. The outputs of these two need to go back to the H-bridge as an inverted signal for pulse width modulation (pin 1 to L298&#039;s pin 12, and pin 14 to L298&#039;s pin 7).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Counter&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The last main component that needs to be implemented is the counter chip. This enables us to take fewer counts from the high-resolution encoder on the Pittman motors to control the movement of the motor. Pins 4 and 5 are used to connect directly to the blue and yellow lines on each Pittman encoder. Pins 1, 3, and 6 should all be hooked to ground, while pin 2 should be +5V. Pins 7 and 8 should connect to pins 15 and 6, respectively, on the slave PIC corresponding to this counter chip.&lt;br /&gt;
&lt;br /&gt;
===Schematic===&lt;br /&gt;
Here is a visual representation of how our circuit components fit together:&amp;lt;br&amp;gt;&lt;br /&gt;
[[Image:Team-21-circuit.JPG|left|Team Granular Flow Schematic|thumb|400px]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Team-21-circuit-image.JPG|left|Team Granular Flow Circuit|thumb|300px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Programming==&lt;br /&gt;
&lt;br /&gt;
===Code Overview===&lt;br /&gt;
&lt;br /&gt;
Three sets of code were required for our project: the MATLAB code for the user interface, C code for the master PIC, and C code for the slave PICs. The MATLAB code set up a GUI for intuitive control of the ball. The master PIC code read all the serial communication from MATLAB and converted it into appropriate I2C commands for the slave PICs, which were completely dedicated to motor control and encoding. All required code is provided below (note an additional file BKSMotorControllerFunctions.c which is used by the master PIC as well).&lt;br /&gt;
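&lt;br /&gt;
As a rough sketch of that conversion step (the three-byte packet of motor index, speed, and duration is a hypothetical format, as are the slave addresses; none of this is taken from BKSBallMasterv1.c, and the I2C write is replaced by a print so the sketch compiles and runs on a desktop):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define SLAVE_ADDR_DRIVE 0x10   /* assumed address of the drive-motor slave */&lt;br /&gt;
#define SLAVE_ADDR_TURN  0x12   /* assumed address of the lazy-susan slave  */&lt;br /&gt;
&lt;br /&gt;
/* Stand-in for the real I2C write routine on the master PIC; it only&lt;br /&gt;
   prints what would be sent. */&lt;br /&gt;
static void i2c_send(unsigned char addr, const unsigned char *data, int len)&lt;br /&gt;
{&lt;br /&gt;
    int i;&lt;br /&gt;
    printf(&amp;quot;I2C to 0x%02X:&amp;quot;, addr);&lt;br /&gt;
    for (i = 0; i &amp;lt; len; i++)&lt;br /&gt;
        printf(&amp;quot; 0x%02X&amp;quot;, data[i]);&lt;br /&gt;
    printf(&amp;quot;\n&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/* Hypothetical 3-byte packet from MATLAB: [motor, speed, duration]. */&lt;br /&gt;
static void dispatch_command(const unsigned char packet[3])&lt;br /&gt;
{&lt;br /&gt;
    unsigned char addr = (packet[0] == 0) ? SLAVE_ADDR_DRIVE : SLAVE_ADDR_TURN;&lt;br /&gt;
    i2c_send(addr, &amp;amp;packet[1], 2);   /* forward speed and duration bytes */&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    unsigned char cmd[3] = { 0, 75, 10 };   /* motor 0, speed 75, duration 10 */&lt;br /&gt;
    dispatch_command(cmd);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;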
&lt;br /&gt;
Note: Thanks to Matt Turpin (of the [[IR Tracker]] project) whose code from his 399 independent study proved incredibly useful for our project.&lt;br /&gt;
&lt;br /&gt;
===PIC Code===&lt;br /&gt;
&lt;br /&gt;
[[Media:BKSBallMasterv1.c|BKSBallMasterv1.c]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[Media:BKSMotorControllerFunctions.c|BKSMotorControllerFunctions.c]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[Media:BKSBallSlavev1.c|BKSBallSlavev1.c]]&lt;br /&gt;
&lt;br /&gt;
===MATLAB Code===&lt;br /&gt;
&lt;br /&gt;
The GUIDE toolset in MATLAB was used to create the GUI. Once all the code is loaded onto the correct PICs, everything can be run through BKSBallControl.m.&lt;br /&gt;
&lt;br /&gt;
[[Media:BKSBallControl.m|BKSBallControl.m]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[Media:BKSBallControl.fig|BKSBallControl.fig]]&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
At the time of our presentation, we were able to demonstrate rotation along two axes by using the drive motor and Lazy Susan. Additionally, we were able to show effective and consistent communication of multiple bytes of data from MATLAB to the master PIC via RS-232. Unfortunately, we were unable to get the hardware for I2C working, despite the code working on Matt&#039;s setup. Overall, we were pleased with our progress in a relatively short amount of time.&lt;br /&gt;
&lt;br /&gt;
We hope to fix the hardware issue in the near future and possibly consolidate all the circuitry onto a PCB for a more robust device. We would also like to add a hall effect switch or limit switch to indicate a set &amp;quot;Home&amp;quot; position. Additional work can be done on the motor control functions to implement feedback control as necessary. As there are clients that would like to see this project come to fruition, we want to make sure they are given a robust and flexible system for their use.&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Granular_Flow_Rotating_Sphere&amp;diff=8889</id>
		<title>Granular Flow Rotating Sphere</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Granular_Flow_Rotating_Sphere&amp;diff=8889"/>
		<updated>2008-06-13T02:32:42Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Team Members */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[ME 333 final projects]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Team-21-main-picture.JPG|right|Our Final Design|thumb|500px]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Spring Quarter Update==&lt;br /&gt;
*Brian Kephart - Electrical Engineering Class of 2009&lt;br /&gt;
*Jonathan Shih - Mechanical Engineering Class of 2009&lt;br /&gt;
*Kristi Bond - Mechanical Engineering Class of 2008&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Team Members==&lt;br /&gt;
*Brian Kephart - Electrical Engineering Class of 2009&lt;br /&gt;
*Jonathan Shih - Mechanical Engineering Class of 2009&lt;br /&gt;
*Kristi Bond - Mechanical Engineering Class of 2008&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
&lt;br /&gt;
A clear sphere is filled with grains of different sizes.  Our apparatus rotates this ball about two different axes based on a series of user inputs.  The user enters the specific values, such as angle and rotational speed, into Matlab. Our device takes these inputs and processes them using a series of master and slave PICs to appropriately control the motors.  The motors then turn for the input duration at the desired speed, causing the ball to spin as commanded through the frictional connection between each motor and the sphere or lazy susan, respectively.  &lt;br /&gt;
&lt;br /&gt;
This apparatus will be used for the study of granular flow and the mixing of particles within the sphere.  It was important to leave the ball as visible as possible so that pictures of the grains inside can be taken from many angles.  With this apparatus we hope to aid the study of granular flow theory and allow the researchers to use the device for many different applications.&lt;br /&gt;
&lt;br /&gt;
==Mechanical Set-up==&lt;br /&gt;
&lt;br /&gt;
====Main Housing====&lt;br /&gt;
&lt;br /&gt;
The main housing, or case, for our design is composed of the following pieces.&lt;br /&gt;
&lt;br /&gt;
*One 13.5” x 12” x ¾” plywood rectangle&lt;br /&gt;
*One 13.5” x 12”  x ¾” plywood rectangle with a 3.5” diameter circle removed from the center&lt;br /&gt;
*Two 12” x 2.5”x ¾” plywood rectangles&lt;br /&gt;
&lt;br /&gt;
The two larger rectangles form the top and bottom of the set-up with the two smaller rectangles placed vertically between to form a box with two open ends on the front and back face.  &lt;br /&gt;
&lt;br /&gt;
====Ball Support====&lt;br /&gt;
&lt;br /&gt;
Three ball casters are placed on vertical mounts around the center circle of the top piece of the housing at equal angles.  These casters prevent the ball from moving in any horizontal direction so it is only free to rotate.  One of these casters is adjustable to allow the user to make sure the ball is correctly supported above the drive wheel.  The force of gravity is strong enough to prevent the ball from moving up and out of the housing and also ensures a good connection with the drive wheel that is placed directly under the center of the sphere.&lt;br /&gt;
&lt;br /&gt;
====Main Drive Wheel====&lt;br /&gt;
&lt;br /&gt;
The main drive wheel is centered under the rotating sphere.  The wheel is mounted onto a ¼” aluminum shaft which is connected to the Pittman motor with a flexible coupling and is also supported by a sleeve bearing on the other side of the wheel.  &lt;br /&gt;
&lt;br /&gt;
====Lazy Susan====&lt;br /&gt;
&lt;br /&gt;
The main drive wheel, its corresponding motor, and other components are all mounted on top of a lazy susan that is centered on the bottom piece of the housing and secured with screws.  This lazy susan allows rotational motion but prevents movement in any other direction, so the wheel can turn while always keeping the same center of contact with the sphere above.  It is important to ensure that the drive wheel has a good connection with the sphere above: the frictional force between the wheel and the sphere must be as large as possible so that as the wheel spins the ball spins at the same rate.  &lt;br /&gt;
&lt;br /&gt;
====Position Control Motor====&lt;br /&gt;
&lt;br /&gt;
A second motor is used in our design to turn the lazy susan.  The motor is mounted vertically through the top plate.  Another, smaller, drive wheel is mounted directly to the motor shaft and then aligned with only the top, free half of the lazy susan.  A second, idler wheel is mounted on the bottom plate, so the drive wheel is sandwiched between this wheel and the lazy susan. This ensures that the drive wheel is always in contact with the lazy susan because the idler wheel exerts only a normal force.  Again, this ensures there is no slip between the lazy susan and the drive wheel, which makes automatic control easier.&lt;br /&gt;
&lt;br /&gt;
====Complete Parts List====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;Part&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Part No.&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Qty&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Pittman GM8224 motor &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Helical Beam Set-Screw Shaft Coupling&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt; 9861T508 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Mounted Sleeve Bearing&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;5912K21 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Flange Mount Ball Caster&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;5674K77  &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Lazy Susan&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1443T2&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Drive Wheel&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;60885K23 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Small Drive Wheel&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2471K12 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Idler Wheel&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;60885K79 &amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;1/4” Aluminum Rod&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Aluminum Sheet Metal&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Plywood&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Circuitry==&lt;br /&gt;
&lt;br /&gt;
===Summary===&lt;br /&gt;
Our project was unique in that we relied on three different PICs to precisely coordinate the motion of our ball. The main reason was that the 18F4520 chip has only enough encoder inputs for one Pittman motor. &lt;br /&gt;
&lt;br /&gt;
===Component List===&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;Part&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Part No.&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;Qty&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;PIC18F4520 Prototyping Board&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Microchip 8-bit PIC Microcontroller&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;PIC18F4520&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Pittman Motor with Encoder&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;GM8224&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Hex Inverter Chip&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;SN74HC04&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Counter Chip&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;LS7083&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;H-Bridge Chip&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;L293&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Diodes&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1N4001&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;8&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;10K Resistor&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;---&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Hall Effect Sensor&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;A3240LUA-T&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;Big Cat Super Strong Magnet&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;PM20134&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Set Up===&lt;br /&gt;
The electrical design for our project was pretty basic. All of our components (including the Pittman motors) were powered with 5V DC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;PICs&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The three PICs communicated via I2C, which enabled us to control the two motors by telling the master PIC what to do (more information can be found [[I2C communication between PICs|here]]). We designated the PIC on the 18F4520 Prototyping Board as the &amp;quot;Master&amp;quot; and the other two PICs as the &amp;quot;Slaves.&amp;quot; It is important to connect the clock from the prototyping board to the two Slave PICs; the two I2C lines are shared across all three chips, with the clock on pin 18 (RC3/SCL) and the data line on pin 23 (RC4/SDA) of each chip.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;H-Bridge&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each slave PIC sends an individual pulse to one of the two H-bridges (the L298 has two). The pulse width determines the direction and speed of each motor. At 50% duty cycle, the motor is at rest, while at 0 and 100% duty cycles the motor runs at maximum speed but in opposite directions. Pin 16 from the first slave PIC needs to connect to pin 10 on the L298, while the other should connect to pin 5.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Hex Inverter&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now pins 5 and 10 from the H-bridge need to go into pins 1 and 13 on the hex inverter chip. The outputs of these two need to go back to the H-bridge as an inverted signal for pulse width modulation (pin 1 to L298&#039;s pin 12, and pin 14 to L298&#039;s pin 7).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Counter&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The last main component that needs to be implemented is the counter chip. This enables us to take fewer counts from the high-resolution encoder on the Pittman motors to control the movement of the motor. Pins 4 and 5 are used to connect directly to the blue and yellow lines on each Pittman encoder. Pins 1, 3, and 6 should all be hooked to ground, while pin 2 should be +5V. Pins 7 and 8 should connect to pins 15 and 6, respectively, on the slave PIC corresponding to this counter chip.&lt;br /&gt;
&lt;br /&gt;
===Schematic===&lt;br /&gt;
Here is a visual representation of how our circuit components fit together:&amp;lt;br&amp;gt;&lt;br /&gt;
[[Image:Team-21-circuit.JPG|left|Team Granular Flow Schematic|thumb|400px]]&lt;br /&gt;
&lt;br /&gt;
[[Image:Team-21-circuit-image.JPG|left|Team Granular Flow Circuit|thumb|300px]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Programming==&lt;br /&gt;
&lt;br /&gt;
===Code Overview===&lt;br /&gt;
&lt;br /&gt;
Three sets of code were required for our project: the MATLAB code for the user interface, C code for the master PIC, and C code for the slave PICs. The MATLAB code set up a GUI for intuitive control of the ball. The master PIC code read all the serial communication from MATLAB and converted it into appropriate I2C commands for the slave PICs, which were completely dedicated to motor control and encoding. All required code is provided below (note an additional file BKSMotorControllerFunctions.c which is used by the master PIC as well).&lt;br /&gt;
&lt;br /&gt;
Note: Thanks to Matt Turpin (of the [[IR Tracker]] project) whose code from his 399 independent study proved incredibly useful for our project.&lt;br /&gt;
&lt;br /&gt;
===PIC Code===&lt;br /&gt;
&lt;br /&gt;
[[Media:BKSBallMasterv1.c|BKSBallMasterv1.c]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[Media:BKSMotorControllerFunctions.c|BKSMotorControllerFunctions.c]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[Media:BKSBallSlavev1.c|BKSBallSlavev1.c]]&lt;br /&gt;
&lt;br /&gt;
===MATLAB Code===&lt;br /&gt;
&lt;br /&gt;
The GUIDE toolset in MATLAB was used to create the GUI. Once all the code is loaded onto the correct PICs, everything can be run through BKSBallControl.m.&lt;br /&gt;
&lt;br /&gt;
[[Media:BKSBallControl.m|BKSBallControl.m]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[Media:BKSBallControl.fig|BKSBallControl.fig]]&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
At the time of our presentation, we were able to demonstrate rotation along two axes by using the drive motor and Lazy Susan. Additionally, we were able to show effective and consistent communication of multiple bytes of data from MATLAB to the master PIC via RS-232. Unfortunately, we were unable to get the hardware for I2C working, despite the code working on Matt&#039;s setup. Overall, we were pleased with our progress in a relatively short amount of time.&lt;br /&gt;
&lt;br /&gt;
We hope to fix the hardware issue in the near future and possibly consolidate all the circuitry onto a PCB for a more robust device. We would also like to add a hall effect switch or limit switch to indicate a set &amp;quot;Home&amp;quot; position. Additional work can be done on the motor control functions to implement feedback control as necessary. As there are clients that would like to see this project come to fruition, we want to make sure they are given a robust and flexible system for their use.&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8752</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8752"/>
		<updated>2008-03-29T21:12:38Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* How to Run the Program */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras’ actual placement must overlap along the inside edges by at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxe webcams.  For future use, the videoInput library is broadly compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover a continuous region, the images as seen by the cameras must overlap to ensure a target is at least fully visible in one frame of a camera***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software (though this has not been implemented).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (as restricted by the videoInput library).  The system should run in Windows XP or Vista.  To set up the computer to develop and run the software, the required drivers, SDKs, and libraries must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install the Logitech QuickCam Deluxe Webcam Drivers - http://www.logitech.com/index.cfm/435/3057&amp;amp;cl=us,en&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
5.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
6.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
7.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
&lt;br /&gt;
Open the project file as listed at the bottom of this page.  Compile and run the program from Visual Studio or as in the build folder.&lt;br /&gt;
&lt;br /&gt;
Once running, the program should outline what is necessary to setup and operate.  The steps are also outlined below.&lt;br /&gt;
&lt;br /&gt;
1.  Connect to cameras.  You will have to type the 4 numbers (in the command prompt) associated with the correct capture devices.  Most likely, these will be 0-3.&lt;br /&gt;
&lt;br /&gt;
2.  Orient cameras.  To correlate an individual camera with its position overhead, you must click once on the quadrant corresponding to the live camera image shown.  You will click 4 times, once for each camera.&lt;br /&gt;
&lt;br /&gt;
3.  Check.  Once you have selected the quadrants for each camera, you can look at the combined image in the main window to make sure the cameras are positioned correctly and that the background is all white.  Press the Enter button in the main window to continue.&lt;br /&gt;
&lt;br /&gt;
4.  Enter Calibration Parameters.  To calibrate the cameras to world coordinates, you must type in numbers relating to the patterns&#039; positions.  Ensure that you place the patterns down in a rectangular fashion (in the world frame, not the images).  The numbers are typed into the command prompt.  You will enter three numbers as measured (they may be in inches, cm, or whatever unit is desired).&lt;br /&gt;
&lt;br /&gt;
5.  Capture Calibration Pattern.  To calibrate the cameras, you must now take a picture of the patterns as viewed by ALL FOUR Cameras.  By now, you should have the calibration pattern arranged on the surface.  ****ENSURE THAT THE ONLY VISIBLE OBJECTS IN THE IMAGE ARE THE DOTS FROM THE CALIBRATION PATTERN****  The program is designed to calibrate to the first 9 dots in each image that it sees.  If there are more than 9 dots in ANY camera image, the program will correlate the FIRST 9 dots to the measured positions.  If one camera sees more than one calibration pattern (fully or partially) it will NOT calibrate properly.  You may adjust the threshold with the + and - keys dynamically to remove any specular noise.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_calibration_alignment.jpg|center|thumb|300px|Calibration Pattern Alignment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6.  Remove the Calibration Pattern.  Now remove or cover up the calibration dots and press &#039;Enter&#039; to proceed to the real time operation of the program.  To quit the program, Press &#039;Esc&#039;.&lt;br /&gt;
[[Image:visual_localization_real_time.jpg|center|thumb|300px|Real-Time Processing]]&lt;br /&gt;
[[Image:visual_localization_data.jpg|center|thumb|300px|Real-Time Data]]&lt;br /&gt;
&lt;br /&gt;
=== Real time Adjustable Parameters ===&lt;br /&gt;
During the real time operation of the program, many of the algorithm parameters will need to be adjusted for the current setup.  These parameters and keys are listed below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;+&#039;/&#039;-&#039;: Binary Thresholding:  To adjust the black and white levels, use the + and - keys to increase or decrease the threshold.  Ideally, this should be increased as high as possible without producing random noise.&lt;br /&gt;
&lt;br /&gt;
&#039;[&#039;/&#039;]&#039;: Target Size:  This measurement corresponds to the maximum distance between dots in a target.  If your 3x3 grid is spaced at 1 inch intervals, this value is ideally 1 inch.  This parameter should be decreased as much as possible before targets are lost.  If this parameter is too large, the algorithm will blend the patterns together.&lt;br /&gt;
&lt;br /&gt;
&#039;z&#039;/&#039;x&#039;: Area Thresholding:  This parameter controls the desired size of &#039;dots&#039;.  This is used to remove relatively large or small noise artifacts.  This parameter should be as large as possible to remove specular noise.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacing between dots is 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the maximum number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
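&lt;br /&gt;
A minimal sketch of that signature computation (plain C rather than the actual project code, with the dot coordinates assumed to be already extracted; dividing by the smallest spacing provides the normalization mentioned above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* Compute the pairwise spacings of a dot pattern, normalized by the&lt;br /&gt;
   smallest spacing, giving at most 9*8/2 = 36 distances as noted above. */&lt;br /&gt;
static int pattern_signature(const double x[], const double y[], int n,&lt;br /&gt;
                             double dist[36])&lt;br /&gt;
{&lt;br /&gt;
    int i, j, k = 0;&lt;br /&gt;
    double dmin = 1e9;&lt;br /&gt;
    for (i = 0; i &amp;lt; n; i++)&lt;br /&gt;
        for (j = i + 1; j &amp;lt; n; j++) {&lt;br /&gt;
            double d = hypot(x[i] - x[j], y[i] - y[j]);&lt;br /&gt;
            dist[k++] = d;&lt;br /&gt;
            if (d &amp;lt; dmin) dmin = d;&lt;br /&gt;
        }&lt;br /&gt;
    for (i = 0; i &amp;lt; k; i++)&lt;br /&gt;
        dist[i] /= dmin;   /* normalize so the closest pair has spacing 1 */&lt;br /&gt;
    return k;              /* number of pairwise distances recorded */&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    /* Three dots of an L-shaped pattern on a unit grid. */&lt;br /&gt;
    double x[3] = { 0, 1, 0 }, y[3] = { 0, 0, 2 };&lt;br /&gt;
    double d[36];&lt;br /&gt;
    int i, k = pattern_signature(x, y, 3, d);&lt;br /&gt;
    for (i = 0; i &amp;lt; k; i++)&lt;br /&gt;
        printf(&amp;quot;%.3f &amp;quot;, d[i]);&lt;br /&gt;
    printf(&amp;quot;\n&amp;quot;);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;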
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  *Note to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras: determining both their intrinsic parameters (focal length, geometric distortions, pixel-to-plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least squares best-fit model.  The calibration process needs at least 6 points, as measured in the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and help compute an accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
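&lt;br /&gt;
For illustration only, here is what applying a 3x4 projection matrix looks like in the forward (world point to pixel) direction; the matrix below is a toy example rather than a calibrated one, and the linear least squares solve for the matrix itself is not reproduced here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* Multiply a 3x4 projection matrix by a homogeneous world point and&lt;br /&gt;
   divide by the last component to obtain pixel coordinates. */&lt;br /&gt;
static void project(double P[3][4], double X, double Y, double Z,&lt;br /&gt;
                    double *u, double *v)&lt;br /&gt;
{&lt;br /&gt;
    double w[4] = { X, Y, Z, 1.0 }, p[3];&lt;br /&gt;
    int r, c;&lt;br /&gt;
    for (r = 0; r &amp;lt; 3; r++) {&lt;br /&gt;
        p[r] = 0.0;&lt;br /&gt;
        for (c = 0; c &amp;lt; 4; c++)&lt;br /&gt;
            p[r] += P[r][c] * w[c];&lt;br /&gt;
    }&lt;br /&gt;
    *u = p[0] / p[2];&lt;br /&gt;
    *v = p[1] / p[2];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    double P[3][4] = { { 500, 0, 320, 0 },&lt;br /&gt;
                       { 0, 500, 240, 0 },&lt;br /&gt;
                       { 0,   0,   1, 0 } };   /* toy camera looking down Z */&lt;br /&gt;
    double u, v;&lt;br /&gt;
    project(P, 1.0, 2.0, 10.0, &amp;amp;u, &amp;amp;v);&lt;br /&gt;
    printf(&amp;quot;pixel: (%.1f, %.1f)\n&amp;quot;, u, v);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;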
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In the actual operation of the program, it runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program performs the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
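&lt;br /&gt;
A minimal sketch of that thresholding step on a raw grayscale buffer (the project uses the thresholding built into OpenCV; this only shows the underlying per-pixel operation, using the default level of 80 mentioned above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* Convert a grayscale image (0-255) to binary: values above the&lt;br /&gt;
   threshold become white (255), everything else black (0). */&lt;br /&gt;
static void threshold_image(const unsigned char *gray, unsigned char *bin,&lt;br /&gt;
                            int npixels, unsigned char level)&lt;br /&gt;
{&lt;br /&gt;
    int i;&lt;br /&gt;
    for (i = 0; i &amp;lt; npixels; i++)&lt;br /&gt;
        bin[i] = (gray[i] &amp;gt; level) ? 255 : 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    unsigned char gray[6] = { 10, 79, 80, 81, 200, 255 }, bin[6];&lt;br /&gt;
    int i;&lt;br /&gt;
    threshold_image(gray, bin, 6, 80);   /* default threshold level of 80 */&lt;br /&gt;
    for (i = 0; i &amp;lt; 6; i++)&lt;br /&gt;
        printf(&amp;quot;%d &amp;quot;, bin[i]);&lt;br /&gt;
    printf(&amp;quot;\n&amp;quot;);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;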
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
&lt;br /&gt;
2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
&lt;br /&gt;
3.	Else; create a new target and add the new dot to the new target.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and speed of computation.&lt;br /&gt;
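&lt;br /&gt;
A minimal sketch of that grouping loop (plain C with fixed-size arrays instead of the linked-list target structure used in the project; MAX_SPACING stands in for the adjustable target-size parameter):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#define MAX_DOTS_PER_TARGET 9&lt;br /&gt;
#define MAX_TARGETS 20&lt;br /&gt;
#define MAX_SPACING 1.0   /* stands in for the adjustable target-size value */&lt;br /&gt;
&lt;br /&gt;
typedef struct { double x, y; } Dot;&lt;br /&gt;
typedef struct { int ndots; Dot dots[MAX_DOTS_PER_TARGET]; } Target;&lt;br /&gt;
&lt;br /&gt;
/* For each new dot: if it lies within MAX_SPACING of a dot already in&lt;br /&gt;
   some target, add it to that target; otherwise start a new target. */&lt;br /&gt;
static int group_dots(const Dot *dots, int ndots, Target *targets)&lt;br /&gt;
{&lt;br /&gt;
    int ntargets = 0, i, t, d;&lt;br /&gt;
    for (i = 0; i &amp;lt; ndots; i++) {&lt;br /&gt;
        int placed = 0;&lt;br /&gt;
        for (t = 0; t &amp;lt; ntargets &amp;amp;&amp;amp; !placed; t++)&lt;br /&gt;
            for (d = 0; d &amp;lt; targets[t].ndots &amp;amp;&amp;amp; !placed; d++)&lt;br /&gt;
                if (targets[t].ndots &amp;lt; MAX_DOTS_PER_TARGET &amp;amp;&amp;amp;&lt;br /&gt;
                    hypot(dots[i].x - targets[t].dots[d].x,&lt;br /&gt;
                          dots[i].y - targets[t].dots[d].y) &amp;lt;= MAX_SPACING) {&lt;br /&gt;
                    targets[t].dots[targets[t].ndots++] = dots[i];&lt;br /&gt;
                    placed = 1;&lt;br /&gt;
                }&lt;br /&gt;
        if (!placed &amp;amp;&amp;amp; ntargets &amp;lt; MAX_TARGETS) {&lt;br /&gt;
            targets[ntargets].ndots = 1;&lt;br /&gt;
            targets[ntargets].dots[0] = dots[i];&lt;br /&gt;
            ntargets++;&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    return ntargets;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    Dot dots[5] = { {0,0}, {1,0}, {5,5}, {0,1}, {6,5} };&lt;br /&gt;
    Target targets[MAX_TARGETS];&lt;br /&gt;
    int n = group_dots(dots, 5, targets), t;&lt;br /&gt;
    for (t = 0; t &amp;lt; n; t++)&lt;br /&gt;
        printf(&amp;quot;target %d has %d dots\n&amp;quot;, t, targets[t].ndots);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;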
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching is quite robust since the spacing between dots falls at clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared-differences error calculation.  After this process, each region of dots is classified by the global number assigned during pre-processing.&lt;br /&gt;
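&lt;br /&gt;
A minimal sketch of that comparison (it assumes the distances in each signature have been sorted so that corresponding entries line up; the example values are made up):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* Sum-of-squared-differences error between two distance signatures of&lt;br /&gt;
   equal length; the trained pattern with the smallest error is chosen. */&lt;br /&gt;
static double signature_error(const double a[], const double b[], int n)&lt;br /&gt;
{&lt;br /&gt;
    double err = 0.0;&lt;br /&gt;
    int i;&lt;br /&gt;
    for (i = 0; i &amp;lt; n; i++)&lt;br /&gt;
        err += (a[i] - b[i]) * (a[i] - b[i]);&lt;br /&gt;
    return err;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    double observed[3]  = { 1.00, 1.98, 2.21 };    /* measured, with noise  */&lt;br /&gt;
    double trained_a[3] = { 1.0, 2.0, 2.236 };     /* one trained pattern   */&lt;br /&gt;
    double trained_b[3] = { 1.0, 1.0, 1.414 };     /* another trained one   */&lt;br /&gt;
    printf(&amp;quot;error vs A: %.4f\n&amp;quot;, signature_error(observed, trained_a, 3));&lt;br /&gt;
    printf(&amp;quot;error vs B: %.4f\n&amp;quot;, signature_error(observed, trained_b, 3));&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;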
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern once for each overlapping section.  Ideally, the images of the cameras would be perfectly matched such that this information was redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other scenario for errors is if a pattern is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
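&lt;br /&gt;
A small sketch of that rejection rule (the Detection structure, threshold value, and example numbers are illustrative only; the position-averaging step for true duplicates is left out):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;math.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
typedef struct { int id; int ndots; double x, y; int keep; } Detection;&lt;br /&gt;
&lt;br /&gt;
/* If two detections fall within the distance threshold, keep only the one&lt;br /&gt;
   with more dots; the other is assumed to be a partially visible copy. */&lt;br /&gt;
static void reject_duplicates(Detection *det, int n, double threshold)&lt;br /&gt;
{&lt;br /&gt;
    int i, j;&lt;br /&gt;
    for (i = 0; i &amp;lt; n; i++)&lt;br /&gt;
        det[i].keep = 1;&lt;br /&gt;
    for (i = 0; i &amp;lt; n; i++)&lt;br /&gt;
        for (j = i + 1; j &amp;lt; n; j++)&lt;br /&gt;
            if (hypot(det[i].x - det[j].x, det[i].y - det[j].y) &amp;lt; threshold) {&lt;br /&gt;
                if (det[i].ndots &amp;lt; det[j].ndots)&lt;br /&gt;
                    det[i].keep = 0;&lt;br /&gt;
                else&lt;br /&gt;
                    det[j].keep = 0;&lt;br /&gt;
            }&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    Detection d[3] = { { 4, 6, 10.0, 10.0, 1 },   /* full pattern            */&lt;br /&gt;
                       { 7, 4, 10.3,  9.8, 1 },   /* partial copy, rejected  */&lt;br /&gt;
                       { 2, 5, 40.0,  5.0, 1 } }; /* far away, kept          */&lt;br /&gt;
    int i;&lt;br /&gt;
    reject_duplicates(d, 3, 2.0);&lt;br /&gt;
    for (i = 0; i &amp;lt; 3; i++)&lt;br /&gt;
        printf(&amp;quot;pattern %d keep=%d\n&amp;quot;, d[i].id, d[i].keep);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;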
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=File:Visual_localization_data.jpg&amp;diff=8751</id>
		<title>File:Visual localization data.jpg</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=File:Visual_localization_data.jpg&amp;diff=8751"/>
		<updated>2008-03-29T21:11:42Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: Real-Time Visual Localization Data&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Real-Time Visual Localization Data&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=File:Visual_localization_real_time.jpg&amp;diff=8750</id>
		<title>File:Visual localization real time.jpg</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=File:Visual_localization_real_time.jpg&amp;diff=8750"/>
		<updated>2008-03-29T21:11:15Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: Real-Time Visual Localization Tracking&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Real-Time Visual Localization Tracking&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=File:Visual_localization_calibration_alignment.jpg&amp;diff=8749</id>
		<title>File:Visual localization calibration alignment.jpg</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=File:Visual_localization_calibration_alignment.jpg&amp;diff=8749"/>
		<updated>2008-03-29T21:10:49Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: How to Align Calibration Patterns for Visual Localization System&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;How to Align Calibration Patterns for Visual Localization System&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8748</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8748"/>
		<updated>2008-03-29T21:10:11Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* How to Run the Program */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras’ actual placement must overlap along the inside edges by at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxe webcams.  For future use, the videoInput library is broadly compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover a continuous region, the images as seen by the cameras must overlap to ensure a target is at least fully visible in one frame of a camera***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software (though this has not been implemented).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (as restricted by the videoInput library).  The system should run in Windows XP or Vista.  To set up the computer to develop and run the software, the required drivers, SDKs, and libraries must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install the Logitech QuickCam Deluxe Webcam Drivers - http://www.logitech.com/index.cfm/435/3057&amp;amp;cl=us,en&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
5.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
6.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
7.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
&lt;br /&gt;
Open the project file as listed at the bottom of this page.  Compile and run the program from Visual Studio or as in the build folder.&lt;br /&gt;
&lt;br /&gt;
Once running, the program should outline what is necessary to setup and operate.  The steps are also outlined below.&lt;br /&gt;
&lt;br /&gt;
1.  Connect to cameras.  You will have to type the 4 numbers (in the command prompt) associated with the correct capture devices.  Most likely, these will be 0-3.&lt;br /&gt;
&lt;br /&gt;
2.  Orient cameras.  To correlate an individual camera with its position overhead, you must click once on the quadrant corresponding to the live camera image shown.  You will click 4 times, once for each camera.&lt;br /&gt;
&lt;br /&gt;
3.  Check.  Once you have selected the quadrants for each camera, you can look at the combined image in the main window to make sure the cameras are positioned correctly and that the background is all white.  Press the Enter button in the main window to continue.&lt;br /&gt;
&lt;br /&gt;
4.  Enter Calibration Parameters.  To calibrate the cameras to world coordinates, you must type in numbers relating to the patterns&#039; positions.  Ensure that you place the patterns down in a rectangular fashion (in the world frame, not the images).  The numbers are typed into the command prompt.  You will enter three numbers as measured (they may be in inches, cm, or whatever unit is desired).&lt;br /&gt;
&lt;br /&gt;
5.  Capture Calibration Pattern.  To calibrate the cameras, you must now take a picture of the patterns as viewed by ALL FOUR Cameras.  By now, you should have the calibration pattern arranged on the surface.  ****ENSURE THAT THE ONLY VISIBLE OBJECTS IN THE IMAGE ARE THE DOTS FROM THE CALIBRATION PATTERN****  The program is designed to calibrate to the first 9 dots in each image that it sees.  If there are more than 9 dots in ANY camera image, the program will correlate the FIRST 9 dots to the measured positions.  If one camera sees more than one calibration pattern (fully or partially) it will NOT calibrate properly.  You may adjust the threshold with the + and - keys dynamically to remove any specular noise.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_calibration_alignment.jpg|center|thumb|300px|Calibration Pattern Alignment]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
6.  Remove the Calibration Pattern.  Now remove or cover up the calibration dots and press &#039;Enter&#039; to proceed to the real time operation of the program.&lt;br /&gt;
[[Image:visual_localization_real_time.jpg|center|thumb|300px|Real-Time Processing]]&lt;br /&gt;
[[Image:visual_localization_data.jpg|center|thumb|300px|Real-Time Data]]&lt;br /&gt;
&lt;br /&gt;
=== Real time Adjustable Parameters ===&lt;br /&gt;
During the real time operation of the program, many of the algorithm parameters will need to be adjusted for the current setup.  These parameters and keys are listed below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;+&#039;/&#039;-&#039;: Binary Thresholding:  To adjust the black and white levels, use the + and - keys to increase or decrease the threshold.  Ideally, this should be increased as high as possible without producing random noise.&lt;br /&gt;
&lt;br /&gt;
&#039;[&#039;/&#039;]&#039;: Target Size:  This measurement corresponds to the maximum distance between dots in a target.  If your 3x3 grid is spaced at 1 inch intervals, this value is ideally 1 inch.  This parameter should be decreased as much as possible before targets are lost.  If this parameter is too large, the algorithm will blend the patterns together.&lt;br /&gt;
&lt;br /&gt;
&#039;z&#039;/&#039;x&#039;: Area Thresholding:  This parameter controls the desired size of &#039;dots&#039;.  This is used to remove relatively large or small noise artifacts.  This parameter should be as large as possible to remove specular noise.&lt;br /&gt;
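&lt;br /&gt;
As a rough illustration only (not taken from the project source), these three adjustments could be polled once per pass through the main loop with the OpenCV cvWaitKey call; the variable names and step sizes below are assumptions.&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;highgui.h&amp;gt;

// Poll the keyboard for a few milliseconds each pass through the main loop and
// nudge the three adjustable parameters.  Key codes are plain ASCII values.
int    binThreshold = 80;     // binary threshold level
double targetSize   = 1.0;    // maximum spacing between dots in a target
int    minDotArea   = 5;      // minimum accepted dot area, in pixels

void pollKeys()
{
    int key = cvWaitKey(10);
    if (key == 43)  binThreshold += 1;    // 43 is +
    if (key == 45)  binThreshold -= 1;    // 45 is -
    if (key == 93)  targetSize   += 0.1;  // 93 is ]
    if (key == 91)  targetSize   -= 0.1;  // 91 is [
    if (key == 120) minDotArea   += 1;    // 120 is x
    if (key == 122) minDotArea   -= 1;    // 122 is z
}
&lt;/pre&gt;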
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To determine both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each pair of dots.  In this sense, each pattern is identified by a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each pair of dots.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in network theory, this is a fully connected graph and thus has at most n*(n-1)/2 links.  Since the maximum number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
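&lt;br /&gt;
To make the spacing signature concrete, below is one plausible way to compute it.  This is an illustrative sketch rather than the project source; in particular, normalizing by the smallest pairwise distance is an assumption.&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;algorithm&amp;gt;
#include &amp;lt;cmath&amp;gt;
#include &amp;lt;vector&amp;gt;

// A dot position in grid units (illustrative type, not from the project source).
struct Dot { double x, y; };

// Build a scale-invariant spacing signature: every pairwise distance, divided by
// the smallest one and sorted.  For a full 3x3 pattern this gives 36 values drawn
// from {1, sqrt(2), 2, sqrt(5), sqrt(8)}.  Assumes the pattern has at least 2 dots.
std::vector&amp;lt;double&amp;gt; spacingSignature(std::vector&amp;lt;Dot&amp;gt; dots)
{
    std::vector&amp;lt;double&amp;gt; d;
    for (size_t i = 0; i != dots.size(); ++i)
        for (size_t j = i + 1; j != dots.size(); ++j) {
            double dx = dots[i].x - dots[j].x;
            double dy = dots[i].y - dots[j].y;
            d.push_back(std::sqrt(dx * dx + dy * dy));
        }
    double smallest = *std::min_element(d.begin(), d.end());
    for (size_t k = 0; k != d.size(); ++k)
        d[k] /= smallest;               // normalize so the signature is scale invariant
    std::sort(d.begin(), d.end());      // sort so the comparison is order independent
    return d;
}
&lt;/pre&gt;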
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  *Note to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is training the cameras: both their intrinsic parameters (focal length, geometric distortions, pixel-to-plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least-squares best-fit model.  The calibration process needs at least 6 points as measured in the world and image frames to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and help compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
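&lt;br /&gt;
For reference, below is a sketch of how such a least-squares fit can be set up from the point correspondences, following the standard direct linear transformation (DLT) formulation from Hartley and Zisserman.  It is an illustration of that method under the stated assumptions, not the project source.&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;cv.h&amp;gt;

// Fit a 3x4 projection matrix P from n world/image correspondences (n of at
// least 6) by stacking two rows per point into A and solving A p = 0.  The
// solution p is the right singular vector of A with the smallest singular
// value; reshaped row-wise it gives P.  Array layouts here are illustrative.
void fitProjection(const double Xw[], const double Yw[], const double Zw[],
                   const double u[], const double v[], int n, double P[12])
{
    CvMat* A = cvCreateMat(2 * n, 12, CV_64FC1);
    for (int i = 0; i != n; ++i) {
        double r1[12] = { Xw[i], Yw[i], Zw[i], 1, 0, 0, 0, 0,
                          -u[i] * Xw[i], -u[i] * Yw[i], -u[i] * Zw[i], -u[i] };
        double r2[12] = { 0, 0, 0, 0, Xw[i], Yw[i], Zw[i], 1,
                          -v[i] * Xw[i], -v[i] * Yw[i], -v[i] * Zw[i], -v[i] };
        for (int k = 0; k != 12; ++k) {
            cvmSet(A, 2 * i,     k, r1[k]);
            cvmSet(A, 2 * i + 1, k, r2[k]);
        }
    }
    CvMat* W = cvCreateMat(12, 1, CV_64FC1);     // singular values, descending
    CvMat* V = cvCreateMat(12, 12, CV_64FC1);    // right singular vectors as columns
    cvSVD(A, W, 0, V, 0);
    for (int k = 0; k != 12; ++k)
        P[k] = cvmGet(V, k, 11);                 // column for the smallest singular value
    cvReleaseMat(&amp;amp;A);
    cvReleaseMat(&amp;amp;W);
    cvReleaseMat(&amp;amp;V);
}
&lt;/pre&gt;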
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In the actual operation of the program, it runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program performs the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by setting pixel values above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
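&lt;br /&gt;
As a rough illustration (not the project source), these two operations map onto the OpenCV 1.x C API as shown below; the assumption that each frame arrives as a BGR IplImage is ours.&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;cv.h&amp;gt;

// Minimal sketch of the per-frame preparation described above; the incoming
// frame is assumed to be a BGR IplImage wrapped around the videoInput buffer.
IplImage* makeBinary(IplImage* frame, int level)   // level defaults to 80 in the text
{
    IplImage* gray   = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    IplImage* binary = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    cvCvtColor(frame, gray, CV_BGR2GRAY);                      // color to 0-255 grayscale
    cvThreshold(gray, binary, level, 255, CV_THRESH_BINARY);   // above level becomes 255, else 0
    cvReleaseImage(&amp;amp;gray);
    return binary;                                             // caller releases the binary image
}
&lt;/pre&gt;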
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
&lt;br /&gt;
2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
&lt;br /&gt;
3.	Else, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and speed of computation.  A short sketch of the grouping step is shown below.&lt;br /&gt;
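&lt;br /&gt;
Below is a minimal sketch of this grouping scheme.  The dotData and target structures are simplified stand-ins for the ones described above, and the fixed-size arrays are used only to keep the sketch short.&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;cmath&amp;gt;

// Simplified stand-ins for the dotData and target structures described above;
// the real structures carry more fields (area, group pose, etc.).
struct dotData { double x, y, area; };
struct target  { int numDots; dotData dots[9]; };   // at most 9 dots per 3x3 pattern

// Group one new dot using the three steps above: add it to the first target that
// already holds a dot within maxSpacing, otherwise start a new target.
void groupDot(target targets[], int* numTargets, dotData nd, double maxSpacing)
{
    for (int t = 0; t != *numTargets; ++t)
        for (int i = 0; i != targets[t].numDots; ++i) {
            double dx = nd.x - targets[t].dots[i].x;
            double dy = nd.y - targets[t].dots[i].y;
            if (std::sqrt(dx * dx + dy * dy) &amp;lt;= maxSpacing) {
                targets[t].dots[targets[t].numDots++] = nd;    // step 2: join this target
                return;
            }
        }
    targets[*numTargets].numDots = 1;                           // step 3: start a new target
    targets[*numTargets].dots[0] = nd;
    *numTargets += 1;
}
&lt;/pre&gt;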
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching is extremely robust since the spacing between dots falls at such clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared-differences error calculation.  After this process, each region of dots is classified by the global number assigned during the pre-processing step.&lt;br /&gt;
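&lt;br /&gt;
A minimal sketch of this matching step is shown below.  It assumes each spacing signature is stored as a sorted array of at most 36 normalized distances; the array layout and names are illustrative, not the project source.&lt;br /&gt;
&lt;pre&gt;
// Among trained patterns with the same number of dots, pick the one whose
// spacing signature has the smallest sum of squared differences.  Signatures
// are assumed sorted and of equal length for equal dot counts (n*(n-1)/2 entries).
int bestMatch(const double sig[], int sigLen, int numDots,
              const double trainedSig[][36], const int trainedDots[], int numTrained)
{
    int best = -1;
    double bestErr = 1e30;
    for (int p = 0; p != numTrained; ++p) {
        if (trainedDots[p] != numDots) continue;   // only compare equal dot counts
        double err = 0;
        for (int k = 0; k != sigLen; ++k) {
            double d = sig[k] - trainedSig[p][k];
            err += d * d;                          // squared differences error
        }
        if (err &amp;lt; bestErr) { bestErr = err; best = p; }
    }
    return best;                                   // index of the trained pattern, or -1
}
&lt;/pre&gt;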
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position of each dot in a target is calculated as its center of mass in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are averaged to find the group center of mass.  This group center of mass becomes the world-coordinate position of the target.  To calculate the angle, specific angle information is extracted from the pattern.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These position and orientation values are then sent out to the user-specified serial port.&lt;br /&gt;
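&lt;br /&gt;
Below is a sketch of how the group position and orientation could be formed once the dots are in world coordinates.  Which two dots serve as the angle reference pair is an assumption of this sketch, not something specified by the project.&lt;br /&gt;
&lt;pre&gt;
#include &amp;lt;cmath&amp;gt;

struct WorldDot { double X, Y; };   // dot centers already mapped into world units

// Average the world positions of the dots to get the group center, and form an
// orientation from two designated reference dots plus the trained offset angle.
// Assumes at least two dots; the choice of dots 0 and 1 as the reference pair
// is illustrative only.
void groupPose(const WorldDot w[], int n, double offsetAngle,
               double* cx, double* cy, double* theta)
{
    double sx = 0, sy = 0;
    for (int i = 0; i != n; ++i) { sx += w[i].X; sy += w[i].Y; }
    *cx = sx / n;                                   // group center of mass (world units)
    *cy = sy / n;
    double a = std::atan2(w[1].Y - w[0].Y, w[1].X - w[0].X);
    *theta = a + offsetAngle;                       // pre-processed offset aligns the pattern
}
&lt;/pre&gt;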
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern in each overlapping section.  Ideally, the images of the cameras would be perfectly matched such that this information was simply redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other scenario for errors is if a pattern is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
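&lt;br /&gt;
A minimal sketch of the rejection rule is shown below.  The averaging of two complete views of the same pattern is omitted for brevity, and the Detection structure is illustrative.&lt;br /&gt;
&lt;pre&gt;
// If two detections fall within the distance threshold, keep only the one built
// from more dots: a partial view of a pattern always has fewer dots than the
// full pattern.  The caller is assumed to clear the rejected flags beforehand.
struct Detection { int id; int numDots; double X, Y; bool rejected; };

void rejectPartials(Detection d[], int n, double threshold)
{
    for (int i = 0; i != n; ++i)
        for (int j = i + 1; j != n; ++j) {
            double dx = d[i].X - d[j].X;
            double dy = d[i].Y - d[j].Y;
            if (dx * dx + dy * dy &amp;gt; threshold * threshold) continue;  // far apart: unrelated
            if (d[i].numDots &amp;lt; d[j].numDots) d[i].rejected = true;     // keep the fuller view
            else d[j].rejected = true;
        }
}
&lt;/pre&gt;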
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8747</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8747"/>
		<updated>2008-03-29T20:25:07Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Camera Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras’ actual placement must provide an overlap along the inside edges of at least the size of one target.  This is necessary to ensure any given target is always fully inside at least one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxe webcams.  For future use, note that the videoInput library is broadly compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees in the horizontal plane and 25 degrees in the vertical plane.&lt;br /&gt;
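&lt;br /&gt;
As a rough worked example using the angles above: a camera mounted a height h above the surface sees a footprint of roughly 2·h·tan(30°) ≈ 1.15·h horizontally and 2·h·tan(25°) ≈ 0.93·h vertically, so a camera at 1 m covers approximately a 1.15 m by 0.93 m patch (before accounting for the required overlap between cameras).&lt;br /&gt;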
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover a continuous region, the images as seen by the cameras must overlap to ensure a target is fully visible in at least one camera frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software (though this has not been implemented).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (a restriction imposed by the videoInput library).  The system should run on Windows XP or Vista.  To set up the computer to develop and run the software, the required drivers, SDKs, and libraries listed below must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install the Logitech QuickCam Deluxe Webcam Drivers - http://www.logitech.com/index.cfm/435/3057&amp;amp;cl=us,en&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
5.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
6.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
7.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
&lt;br /&gt;
Open the project file as listed at the bottom of this page.  Compile and run the program from Visual Studio or as in the build folder.&lt;br /&gt;
&lt;br /&gt;
Once running, the program should outline what is necessary to setup and operate.  The steps are also outlined below.&lt;br /&gt;
&lt;br /&gt;
1.  Connect to cameras.  You will have to type the 4 numbers (in the command prompt) associated with the correct capture devices.  Most likely, these will be 0-3.&lt;br /&gt;
&lt;br /&gt;
2.  Orient cameras.  To correlate an individual camera with its position overhead, you must click once on the quadrant corresponding to the live camera image shown.  You will click 4 times, once for each camera.&lt;br /&gt;
&lt;br /&gt;
3.  Check.  Once you have selected the quadrants for each camera, you can look at the combined image in the main window to make sure the cameras are positioned correctly and that the background is all white.  Press the Enter button in the main window to continue.&lt;br /&gt;
&lt;br /&gt;
4.  Enter Calibration Parameters.  To calibrate the cameras to world coordinates, you must type in numbers relating to the patterns’ positions.  Ensure that you place the patterns down in a rectangular fashion (in the world frame, not the images).  The numbers are typed into the command prompt.  You will enter three numbers as measured (they may be in inches, cm, or whatever unit is desired).&lt;br /&gt;
&lt;br /&gt;
5.  Capture Calibration Pattern.  To calibrate the cameras, you must now take a picture of the patterns as viewed by ALL FOUR Cameras.  By now, you should have the calibration pattern arranged on the surface.  ****ENSURE THAT THE ONLY VISIBLE OBJECTS IN THE IMAGE ARE THE DOTS FROM THE CALIBRATION PATTERN****  The program is designed to calibrate to the first 9 dots in each image that it sees.  If there are more than 9 dots in ANY camera image, the program will correlate the FIRST 9 dots to the measured positions.  If one camera sees more than one calibration pattern (fully or partially) it will NOT calibrate properly.  You may adjust the threshold with the + and - keys dynamically to remove any specular noise.&lt;br /&gt;
&lt;br /&gt;
6.  Remove the Calibration Pattern.  Now remove or cover up the calibration dots and press &#039;Enter&#039; to proceed to the real time operation of the program.&lt;br /&gt;
&lt;br /&gt;
=== Real time Adjustable Parameters ===&lt;br /&gt;
During the real time operation of the program, many of the algorithm parameters will need to be adjusted for the current setup.  These parameters and keys are listed below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;+&#039;/&#039;-&#039;: Binary Thresholding:  To adjust the black and white levels, use the + and - keys to increase or decrease the threshold.  Ideally, this should be increased as high as possible without producing random noise.&lt;br /&gt;
&lt;br /&gt;
&#039;[&#039;/&#039;]&#039;: Target Size:  This measurement corresponds to the maximum distance between dots in a target.  If your 3x3 grid is spaced at 1 inch intervals, this value is ideally 1 inch.  This parameter should be decreased as much as possible before targets are lost.  If this parameter is too large, the algorithm will blend the patterns together.&lt;br /&gt;
&lt;br /&gt;
&#039;z&#039;/&#039;x&#039;: Area Thresholding:  This parameter controls the desired size of &#039;dots&#039;.  This is used to remove relatively large or small noise artifacts.  This parameter should be as large as possible to remove specular noise.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacing between dots is 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the most number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  *Note to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is training the cameras: both their intrinsic parameters (focal length, geometric distortions, pixel-to-plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least-squares best-fit model.  The calibration process needs at least 6 points as measured in the world and image frames to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and help compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In the actual operation of the program, it flows in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program follows the following outlined steps.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
&lt;br /&gt;
2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
&lt;br /&gt;
3.	Else, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and speed of computation.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This data appears extremely robust since the spacing between dots is at such clean and quantized intervals.  To match each target to a trained pattern, the algorithm compares all patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared differences error calculation.  After this process, each region of dots is classified by the global number as trained by the pre-processing algorithm.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern in each overlapping section.  Ideally, the images of the cameras would be perfectly matched such that this information was simply redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other scenario for errors is if a pattern is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8746</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8746"/>
		<updated>2008-03-29T01:29:14Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Pattern Isolation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras actual placement must have an overlap along inside edges at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover a continuous region, the images as seen by the cameras must overlap to ensure a target is fully visible in at least one camera frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (a restriction imposed by the videoInput library).  The system should run on Windows XP or Vista.  To set up the computer to develop and run the software, the required drivers, SDKs, and libraries listed below must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install the Logitech QuickCam Deluxe Webcam Drivers - http://www.logitech.com/index.cfm/435/3057&amp;amp;cl=us,en&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
5.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
6.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
7.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
&lt;br /&gt;
Open the project file as listed at the bottom of this page.  Compile and run the program from Visual Studio or as in the build folder.&lt;br /&gt;
&lt;br /&gt;
Once running, the program should outline what is necessary to setup and operate.  The steps are also outlined below.&lt;br /&gt;
&lt;br /&gt;
1.  Connect to cameras.  You will have to type the 4 numbers (in the command prompt) associated with the correct capture devices.  Most likely, these will be 0-3.&lt;br /&gt;
&lt;br /&gt;
2.  Orient cameras.  To correlate an individual camera with its position overhead, you must click once on the quadrant corresponding to the live camera image shown.  You will click 4 times, once for each camera.&lt;br /&gt;
&lt;br /&gt;
3.  Check.  Once you have selected the quadrants for each camera, you can look at the combined image in the main window to make sure the cameras are positioned correctly and that the background is all white.  Press the Enter button in the main window to continue.&lt;br /&gt;
&lt;br /&gt;
4.  Enter Calibration Parameters.  To calibrate the cameras to world coordinates, you must type in numbers relating to the patterns’ positions.  Ensure that you place the patterns down in a rectangular fashion (in the world frame, not the images).  The numbers are typed into the command prompt.  You will enter three numbers as measured (they may be in inches, cm, or whatever unit is desired).&lt;br /&gt;
&lt;br /&gt;
5.  Capture Calibration Pattern.  To calibrate the cameras, you must now take a picture of the patterns as viewed by ALL FOUR Cameras.  By now, you should have the calibration pattern arranged on the surface.  ****ENSURE THAT THE ONLY VISIBLE OBJECTS IN THE IMAGE ARE THE DOTS FROM THE CALIBRATION PATTERN****  The program is designed to calibrate to the first 9 dots in each image that it sees.  If there are more than 9 dots in ANY camera image, the program will correlate the FIRST 9 dots to the measured positions.  If one camera sees more than one calibration pattern (fully or partially) it will NOT calibrate properly.  You may adjust the threshold with the + and - keys dynamically to remove any specular noise.&lt;br /&gt;
&lt;br /&gt;
6.  Remove the Calibration Pattern.  Now remove or cover up the calibration dots and press &#039;Enter&#039; to proceed to the real time operation of the program.&lt;br /&gt;
&lt;br /&gt;
=== Real time Adjustable Parameters ===&lt;br /&gt;
During the real time operation of the program, many of the algorithm parameters will need to be adjusted for the current setup.  These parameters and keys are listed below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;+&#039;/&#039;-&#039;: Binary Thresholding:  To adjust the black and white levels, use the + and - keys to increase or decrease the threshold.  Ideally, this should be increased as high as possible without producing random noise.&lt;br /&gt;
&lt;br /&gt;
&#039;[&#039;/&#039;]&#039;: Target Size:  This measurement corresponds to the maximum distance between dots in a target.  If your 3x3 grid is spaced at 1 inch intervals, this value is ideally 1 inch.  This parameter should be decreased as much as possible before targets are lost.  If this parameter is too large, the algorithm will blend the patterns together.&lt;br /&gt;
&lt;br /&gt;
&#039;z&#039;/&#039;x&#039;: Area Thresholding:  This parameter controls the desired size of &#039;dots&#039;.  This is used to remove relatively large or small noise artifacts.  This parameter should be as large as possible to remove specular noise.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacing between dots is 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the most number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  *Note to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is training the cameras: both their intrinsic parameters (focal length, geometric distortions, pixel-to-plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least-squares best-fit model.  The calibration process needs at least 6 points as measured in the world and image frames to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and help compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In the actual operation of the program, it flows in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program follows the following outlined steps.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
&lt;br /&gt;
2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
&lt;br /&gt;
3.	Else, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and speed of computation.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This data appears extremely robust since the spacing between dots is at such clean and quantized intervals.  To match each target to a trained pattern, the algorithm compares all patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared differences error calculation.  After this process, each region of dots is classified by the global number as trained by the pre-processing algorithm.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern in each overlapping section.  Ideally, the images of the cameras would be perfectly matched such that this information was simply redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other scenario for errors is if a pattern is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8745</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8745"/>
		<updated>2008-03-29T01:28:45Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Pattern Isolation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras actual placement must have an overlap along inside edges at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover a continuous region, the images as seen by the cameras must overlap to ensure a target is fully visible in at least one camera frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (a restriction imposed by the videoInput library).  The system should run on Windows XP or Vista.  To set up the computer to develop and run the software, the required drivers, SDKs, and libraries listed below must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install the Logitech QuickCam Deluxe Webcam Drivers - http://www.logitech.com/index.cfm/435/3057&amp;amp;cl=us,en&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
5.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
6.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
7.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
&lt;br /&gt;
Open the project file as listed at the bottom of this page.  Compile and run the program from Visual Studio or as in the build folder.&lt;br /&gt;
&lt;br /&gt;
Once running, the program should outline what is necessary to setup and operate.  The steps are also outlined below.&lt;br /&gt;
&lt;br /&gt;
1.  Connect to cameras.  You will have to type the 4 numbers (in the command prompt) associated with the correct capture devices.  Most likely, these will be 0-3.&lt;br /&gt;
&lt;br /&gt;
2.  Orient cameras.  To correlate an individual camera with its position overhead, you must click once on the quadrant corresponding to the live camera image shown.  You will click 4 times, once for each camera.&lt;br /&gt;
&lt;br /&gt;
3.  Check.  Once you have selected the quadrants for each camera, you can look at the combined image in the main window to make sure the cameras are positioned correctly and that the background is all white.  Press the Enter button in the main window to continue.&lt;br /&gt;
&lt;br /&gt;
4.  Enter Calibration Parameters.  To calibrate the cameras to world coordinates, you must type in numbers relating to the patterns’ positions.  Ensure that you place the patterns down in a rectangular fashion (in the world frame, not the images).  The numbers are typed into the command prompt.  You will enter three numbers as measured (they may be in inches, cm, or whatever unit is desired).&lt;br /&gt;
&lt;br /&gt;
5.  Capture Calibration Pattern.  To calibrate the cameras, you must now take a picture of the patterns as viewed by ALL FOUR Cameras.  By now, you should have the calibration pattern arranged on the surface.  ****ENSURE THAT THE ONLY VISIBLE OBJECTS IN THE IMAGE ARE THE DOTS FROM THE CALIBRATION PATTERN****  The program is designed to calibrate to the first 9 dots in each image that it sees.  If there are more than 9 dots in ANY camera image, the program will correlate the FIRST 9 dots to the measured positions.  If one camera sees more than one calibration pattern (fully or partially) it will NOT calibrate properly.  You may adjust the threshold with the + and - keys dynamically to remove any specular noise.&lt;br /&gt;
&lt;br /&gt;
6.  Remove the Calibration Pattern.  Now remove or cover up the calibration dots and press &#039;Enter&#039; to proceed to the real time operation of the program.&lt;br /&gt;
&lt;br /&gt;
=== Real time Adjustable Parameters ===&lt;br /&gt;
During the real time operation of the program, many of the algorithm parameters will need to be adjusted for the current setup.  These parameters and keys are listed below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;+&#039;/&#039;-&#039;: Binary Thresholding:  To adjust the black and white levels, use the + and - keys to increase or decrease the threshold.  Ideally, this should be increased as high as possible without producing random noise.&lt;br /&gt;
&lt;br /&gt;
&#039;[&#039;/&#039;]&#039;: Target Size:  This measurement corresponds to the maximum distance between dots in a target.  If your 3x3 grid is spaced at 1 inch intervals, this value is ideally 1 inch.  This parameter should be decreased as much as possible before targets are lost.  If this parameter is too large, the algorithm will blend the patterns together.&lt;br /&gt;
&lt;br /&gt;
&#039;z&#039;/&#039;x&#039;: Area Thresholding:  This parameter controls the desired size of &#039;dots&#039;.  This is used to remove relatively large or small noise artifacts.  This parameter should be as large as possible to remove specular noise.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To provide both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the 3x3 dot patterns).  This is done by first creating a subset of targets from the master template.  Each target must be distinguishable from the others regardless of rotation, reflection, scale, and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each pair of dots.  In this sense, the pattern is identified by a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each pair of dots.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, and √8 units.  As in network theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since a pattern has at most 9 dots, the maximum number of pairwise distances is 9*8/2 = 36.&lt;br /&gt;
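&lt;br /&gt;
As a rough illustration of this signature, the sketch below (hypothetical helper code, not taken from the project source) computes the sorted pairwise distances of one pattern and normalizes them by the smallest spacing, so that a full 3x3 grid produces exactly the values 1, √2, 2, √5, and √8 listed above; the Dot structure is an assumption for illustration.&lt;br /&gt;
&lt;pre&gt;
#include &lt;cmath&gt;
#include &lt;vector&gt;
#include &lt;algorithm&gt;

struct Dot { double x, y; };   // dot center (image or grid coordinates)

// Sorted pairwise distances of a pattern, normalized so the smallest
// spacing becomes 1 (scale invariant); at most 9*8/2 = 36 entries.
std::vector&lt;double&gt; patternSignature(const std::vector&lt;Dot&gt;&amp; dots)
{
    std::vector&lt;double&gt; d;
    for (size_t i = 0; i &lt; dots.size(); ++i)
        for (size_t j = i + 1; j &lt; dots.size(); ++j) {
            double dx = dots[i].x - dots[j].x;
            double dy = dots[i].y - dots[j].y;
            d.push_back(std::sqrt(dx * dx + dy * dy));
        }
    std::sort(d.begin(), d.end());
    if (!d.empty() &amp;&amp; d.front() &gt; 0.0) {
        double dmin = d.front();
        for (size_t k = 0; k &lt; d.size(); ++k)
            d[k] /= dmin;          // adjacent dots map to 1, diagonal neighbours to sqrt(2), etc.
    }
    return d;
}
&lt;/pre&gt;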
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note: to create different targets, various dots are removed from the full 3x3 grid.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras: both their intrinsic parameters (focal length, geometric distortions, pixel-to-plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least squares best-fit model.  The calibration process needs at least 6 points, as measured in the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
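&lt;br /&gt;
Because the calibration dots and the targets lie on the working surface, a planar pixel-to-world homography captures the same mapping; the minimal sketch below uses OpenCV&#039;s cvFindHomography (C API) as a stand-in for the full 3x4 projection-matrix fit described above, and the function name and point layout are illustrative assumptions rather than the project&#039;s actual code.&lt;br /&gt;
&lt;pre&gt;
#include &lt;cv.h&gt;

// Fit a 3x3 pixel-to-world homography H (CV_64FC1) from n &gt;= 6 point pairs
// measured in the image (pixels) and world (inches or cm) frames.
void fitPixelToWorld(const CvPoint2D32f* pixel, const CvPoint2D32f* world,
                     int n, CvMat* H)
{
    CvMat src = cvMat(n, 2, CV_32FC1, (void*)pixel);   // n x 2 image points
    CvMat dst = cvMat(n, 2, CV_32FC1, (void*)world);   // n x 2 world points
    cvFindHomography(&amp;src, &amp;dst, H);                   // least-squares fit, as above
}
&lt;/pre&gt;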
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In actual operation, the program runs in an infinite loop performing a few basic tasks.  As soon as all four cameras have reported a new frame, the program performs the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  The algorithm then thresholds the grayscale image at a set level, converting pixel values above the threshold to white (255) and all other values to black (0).  The result is a black-and-white binary image.  The default threshold level is set to 80.&lt;br /&gt;
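&lt;br /&gt;
A minimal sketch of these two steps with the OpenCV 1.x C API is shown below; the image variables and the preallocated single-channel buffers are assumptions, and 80 is the default threshold noted above.&lt;br /&gt;
&lt;pre&gt;
#include &lt;cv.h&gt;

// frame: BGR capture; gray and binary: preallocated IPL_DEPTH_8U,
// single-channel images of the same size as frame.
void formBinaryImage(const IplImage* frame, IplImage* gray, IplImage* binary,
                     double threshold)                           // default 80
{
    cvCvtColor(frame, gray, CV_BGR2GRAY);                        // color -&gt; grayscale 0-255
    cvThreshold(gray, binary, threshold, 255, CV_THRESH_BINARY); // above threshold -&gt; 255, else 0
}
&lt;/pre&gt;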
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component labeling (CCL) in computer vision, and it identifies continuous “blobs”, or regions of adjacent pixels.  In OpenCV, the connected components are stored in a linked-list data structure called a contour.&lt;br /&gt;
Fortunately, connected-component labeling is a native function of OpenCV, and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored in a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later, such as group position and orientation.  To group the dots into targets, the program follows the three-step algorithm listed after the sketch below.&lt;br /&gt;
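&lt;br /&gt;
The contour pass that fills the dotData structures might look roughly like the following sketch; the dotData fields shown are assumptions based on the description above, std::vector stands in for the project&#039;s linked lists, and since cvFindContours modifies its input, a scratch copy of the binary image is passed in.&lt;br /&gt;
&lt;pre&gt;
#include &lt;cv.h&gt;
#include &lt;cmath&gt;
#include &lt;vector&gt;

struct dotData { double x, y, area; };   // assumed fields: center and area in pixels

std::vector&lt;dotData&gt; extractDots(IplImage* binaryScratch, double minArea, double maxArea)
{
    std::vector&lt;dotData&gt; dots;
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contour = 0;
    cvFindContours(binaryScratch, storage, &amp;contour, sizeof(CvContour),
                   CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    for (; contour != 0; contour = contour-&gt;h_next) {
        double area = std::fabs(cvContourArea(contour, CV_WHOLE_SEQ));
        if (area &lt; minArea || area &gt; maxArea)      // area thresholding (&#039;z&#039;/&#039;x&#039; keys)
            continue;
        CvMoments m;
        cvMoments(contour, &amp;m, 0);
        dotData d;
        d.area = area;
        d.x = cvGetSpatialMoment(&amp;m, 1, 0) / cvGetSpatialMoment(&amp;m, 0, 0);   // centroid x
        d.y = cvGetSpatialMoment(&amp;m, 0, 1) / cvGetSpatialMoment(&amp;m, 0, 0);   // centroid y
        dots.push_back(d);
    }
    cvReleaseMemStorage(&amp;storage);
    return dots;
}
&lt;/pre&gt;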
&lt;br /&gt;
1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based only on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
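&lt;br /&gt;
The three grouping steps above might be written roughly as follows (a sketch only, with std::vector standing in for the project&#039;s linked-list target structure; maxSpacing is the adjustable target-size parameter):&lt;br /&gt;
&lt;pre&gt;
#include &lt;cmath&gt;
#include &lt;vector&gt;

struct Dot { double x, y; };
struct Target { std::vector&lt;Dot&gt; dots; };   // simplified stand-in for the &quot;target&quot; structure

void groupDots(const std::vector&lt;Dot&gt;&amp; dots, std::vector&lt;Target&gt;&amp; targets, double maxSpacing)
{
    for (size_t i = 0; i &lt; dots.size(); ++i) {
        Target* home = 0;
        // steps 1-2: look for an existing target containing a dot within maxSpacing
        for (size_t t = 0; t &lt; targets.size() &amp;&amp; home == 0; ++t)
            for (size_t k = 0; k &lt; targets[t].dots.size(); ++k) {
                double dx = dots[i].x - targets[t].dots[k].x;
                double dy = dots[i].y - targets[t].dots[k].y;
                if (std::sqrt(dx * dx + dy * dy) &lt;= maxSpacing) { home = &amp;targets[t]; break; }
            }
        if (home != 0) {
            home-&gt;dots.push_back(dots[i]);
        } else {                                 // step 3: start a new target
            Target fresh;
            fresh.dots.push_back(dots[i]);
            targets.push_back(fresh);
        }
    }
}
&lt;/pre&gt;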
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching is quite robust, since the spacings between dots fall at such clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared-differences error calculation.  After this process, each region of dots is classified by the global number assigned by the pre-processing algorithm.&lt;br /&gt;
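&lt;br /&gt;
That comparison reduces to a sum of squared differences between two equal-length, sorted distance signatures (as produced by the earlier signature sketch); the snippet below is a hypothetical illustration of this error measure.&lt;br /&gt;
&lt;pre&gt;
#include &lt;vector&gt;

// Squared-difference error between two sorted, equal-length distance signatures.
// The trained pattern giving the smallest error is taken as the match.
double signatureError(const std::vector&lt;double&gt;&amp; a, const std::vector&lt;double&gt;&amp; b)
{
    double err = 0.0;
    for (size_t i = 0; i &lt; a.size(); ++i) {
        double diff = a[i] - b[i];
        err += diff * diff;
    }
    return err;
}
&lt;/pre&gt;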
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the center of each dot in a target is found in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are averaged to find the group center of mass, which becomes the world-coordinate position of the target.  To calculate the angle, specific angle information is extracted from the pattern and combined with the pre-processed offset angle to generate a group orientation.  This position and orientation information is then sent out over the user-specified serial port.&lt;br /&gt;
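&lt;br /&gt;
Continuing the homography-based sketch from the calibration section (an assumption, since the project actually fits a 3x4 projection matrix), mapping one dot center from pixels to world coordinates looks like the snippet below; the target position is then the average of the mapped dot centers.&lt;br /&gt;
&lt;pre&gt;
#include &lt;cv.h&gt;

// Map an image point (u, v) through the 3x3 pixel-to-world matrix H (CV_64FC1):
// [xw, yw, w] = H * [u, v, 1], then divide by w.
CvPoint2D32f pixelToWorld(const CvMat* H, double u, double v)
{
    double xw = cvmGet(H, 0, 0) * u + cvmGet(H, 0, 1) * v + cvmGet(H, 0, 2);
    double yw = cvmGet(H, 1, 0) * u + cvmGet(H, 1, 1) * v + cvmGet(H, 1, 2);
    double w  = cvmGet(H, 2, 0) * u + cvmGet(H, 2, 1) * v + cvmGet(H, 2, 2);
    return cvPoint2D32f(xw / w, yw / w);
}
&lt;/pre&gt;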
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern once for each overlapping section.  Ideally, the images of the cameras would be perfectly matched, so that this information would simply be redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other source of errors is a pattern that is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of all the images at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
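&lt;br /&gt;
One simple way to express that rejection rule is sketched below; the Detection structure and the restart-on-erase loop are illustrative assumptions, and duplicates of the same pattern would be averaged separately as described above.&lt;br /&gt;
&lt;pre&gt;
#include &lt;cmath&gt;
#include &lt;vector&gt;

struct Detection { int patternId, numDots; double x, y; };   // per-camera, world-frame result

// Of any two detections closer than maxSpacing, drop the one with fewer dots
// (most likely a pattern that is only partially visible in that camera&#039;s image).
void rejectPartials(std::vector&lt;Detection&gt;&amp; dets, double maxSpacing)
{
    bool changed = true;
    while (changed) {
        changed = false;
        for (size_t i = 0; i &lt; dets.size() &amp;&amp; !changed; ++i)
            for (size_t j = i + 1; j &lt; dets.size() &amp;&amp; !changed; ++j) {
                double dx = dets[i].x - dets[j].x;
                double dy = dets[i].y - dets[j].y;
                if (std::sqrt(dx * dx + dy * dy) &lt; maxSpacing) {
                    dets.erase(dets.begin() + ((dets[i].numDots &lt; dets[j].numDots) ? i : j));
                    changed = true;      // rescan from the start after removing one detection
                }
            }
    }
}
&lt;/pre&gt;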
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8744</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8744"/>
		<updated>2008-03-29T01:24:03Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Computer Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in real-time images and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to trade off positioning resolution against the covered area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras&#039; actual placement must produce an overlap along the inside edges of at least the size of one target.  This is necessary to ensure any given target is always fully inside at least one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxe webcams.  For future use, the videoInput library is broadly compatible and works with most capture devices.  As measured, the viewing half-angle (from center) of the Logitech cameras was around 30 degrees in the horizontal plane and 25 degrees in the vertical plane.&lt;br /&gt;
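&lt;br /&gt;
For example, assuming the half-angles above, a camera mounted at height h covers roughly 2*h*tan(30°) ≈ 1.15*h across by 2*h*tan(25°) ≈ 0.93*h vertically; at 2 m above the surface this works out to a patch of roughly 2.3 m x 1.9 m per camera.&lt;br /&gt;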
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to cover a continuous region, the images as seen by the cameras must overlap, to ensure a target is always fully visible in at least one camera frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, as the covered area grows, the size of the patterns will have to be increased to stay above the noise threshold.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (as restricted by the videoInput library).  The system should run in Windows XP or Vista.  To set up the computer to develop and run the software, the required drivers, tools, and libraries must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install the Logitech QuickCam Deluxe Webcam Drivers - http://www.logitech.com/index.cfm/435/3057&amp;amp;cl=us,en&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
5.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
6.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
7.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
&lt;br /&gt;
Open the project file as listed at the bottom of this page.  Compile and run the program from Visual Studio or as in the build folder.&lt;br /&gt;
&lt;br /&gt;
Once running, the program should outline what is necessary to setup and operate.  The steps are also outlined below.&lt;br /&gt;
&lt;br /&gt;
1.  Connect to cameras.  You will have to type the 4 numbers (in the command prompt) associated with the correct capture devices.  Most likely, these will be 0-3.&lt;br /&gt;
&lt;br /&gt;
2.  Orient cameras.  To correlate an individual camera with its position overhead, you must click once on the quadrant corresponding to the live camera image shown.  You will click 4 times, once for each camera.&lt;br /&gt;
&lt;br /&gt;
3.  Check.  Once you have selected the quadrants for each camera, you can look at the combined image in the main window to make sure the cameras are positioned correctly and that the background is all white.  Press the Enter button in the main window to continue.&lt;br /&gt;
&lt;br /&gt;
4.  Enter Calibration Parameters.  To calibrate the cameras to world coordinates, you must type in numbers relating to the patterns positions.  Ensure that you place the patterns down in a rectangular fashion (in the world frame not the images).  The numbers are typed into the command prompt.  You will enter three numbers as measured (they may be in inches or cm or whatever unit desired).&lt;br /&gt;
&lt;br /&gt;
5.  Capture Calibration Pattern.  To calibrate the cameras, you must now take a picture of the patterns as viewed by ALL FOUR Cameras.  By now, you should have the calibration pattern arranged on the surface.  ****ENSURE THAT THE ONLY VISIBLE OBJECTS IN THE IMAGE ARE THE DOTS FROM THE CALIBRATION PATTERN****  The program is designed to calibrate to the first 9 dots in each image that it sees.  If there are more than 9 dots in ANY camera image, the program will correlate the FIRST 9 dots to the measured positions.  If one camera sees more than one calibration pattern (fully or partially) it will NOT calibrate properly.  You may adjust the threshold with the + and - keys dynamically to remove any specular noise.&lt;br /&gt;
&lt;br /&gt;
6.  Remove the Calibration Pattern.  Now remove or cover up the calibration dots and press &#039;Enter&#039; to proceed to the real time operation of the program.&lt;br /&gt;
&lt;br /&gt;
=== Real time Adjustable Parameters ===&lt;br /&gt;
During the real time operation of the program, many of the algorithm parameters will need to be adjusted for the current setup.  These parameters and keys are listed below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;+&#039;/&#039;-&#039;: Binary Thresholding:  To adjust the black and white levels, use the + and - keys to increase or decrease the threshold.  Ideally, this should be increased as high as possible without producing random noise.&lt;br /&gt;
&lt;br /&gt;
&#039;[&#039;/&#039;]&#039;: Target Size:  This measurement corresponds to the maximum distance between dots in a target.  If your 3x3 grid is spaced at 1 inch intervals, this value is ideally 1 inch.  This parameter should be decreased as much as possible before targets are lost.  If this parameter is too large, the algorithm will blend the patterns together.&lt;br /&gt;
&lt;br /&gt;
&#039;z&#039;/&#039;x&#039;: Area Thresholding:  This parameter controls the desired size of &#039;dots&#039;.  This is used to remove relatively large or small noise artifacts.  This parameter should be as large as possible to remove specular noise.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacing between dots is 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the most number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  *Note to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras: both their intrinsic parameters (focal length, geometric distortions, pixel-to-plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least squares best-fit model.  The calibration process needs at least 6 points, as measured in the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In the actual operation of the program, it flows in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program follows the following outlined steps.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Else; create a new target and add the new dot to the new target.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based only on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This data appears extremely robust since the spacing between dots is at such clean and quantized intervals.  To match each target to a trained pattern, the algorithm compares all patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared differences error calculation.  After this process, each region of dots is classified by the global number as trained by the pre-processing algorithm.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern once for each overlapping section.  Ideally, the images of the cameras would be perfectly matched, so that this information would simply be redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other source of errors is a pattern that is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of all the images at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8743</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8743"/>
		<updated>2008-03-29T01:21:06Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Real time Adjustable Parameters = */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras actual placement must have an overlap along inside edges at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to cover a continuous region, the images as seen by the cameras must overlap, to ensure a target is always fully visible in at least one camera frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, as the covered area grows, the size of the patterns will have to be increased to stay above the noise threshold.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (as restricted by the videoInput library).  The system should run in Windows XP or Vista.  To set up the computer to develop and run the software, the required tools and libraries must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
5.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
6.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
&lt;br /&gt;
Open the project file as listed at the bottom of this page.  Compile and run the program from Visual Studio or as in the build folder.&lt;br /&gt;
&lt;br /&gt;
Once running, the program should outline what is necessary to setup and operate.  The steps are also outlined below.&lt;br /&gt;
&lt;br /&gt;
1.  Connect to cameras.  You will have to type the 4 numbers (in the command prompt) associated with the correct capture devices.  Most likely, these will be 0-3.&lt;br /&gt;
&lt;br /&gt;
2.  Orient cameras.  To correlate an individual camera with its position overhead, you must click once on the quadrant corresponding to the live camera image shown.  You will click 4 times, once for each camera.&lt;br /&gt;
&lt;br /&gt;
3.  Check.  Once you have selected the quadrants for each camera, you can look at the combined image in the main window to make sure the cameras are positioned correctly and that the background is all white.  Press the Enter button in the main window to continue.&lt;br /&gt;
&lt;br /&gt;
4.  Enter Calibration Parameters.  To calibrate the cameras to world coordinates, you must type in numbers relating to the patterns positions.  Ensure that you place the patterns down in a rectangular fashion (in the world frame not the images).  The numbers are typed into the command prompt.  You will enter three numbers as measured (they may be in inches or cm or whatever unit desired).&lt;br /&gt;
&lt;br /&gt;
5.  Capture Calibration Pattern.  To calibrate the cameras, you must now take a picture of the patterns as viewed by ALL FOUR Cameras.  By now, you should have the calibration pattern arranged on the surface.  ****ENSURE THAT THE ONLY VISIBLE OBJECTS IN THE IMAGE ARE THE DOTS FROM THE CALIBRATION PATTERN****  The program is designed to calibrate to the first 9 dots in each image that it sees.  If there are more than 9 dots in ANY camera image, the program will correlate the FIRST 9 dots to the measured positions.  If one camera sees more than one calibration pattern (fully or partially) it will NOT calibrate properly.  You may adjust the threshold with the + and - keys dynamically to remove any specular noise.&lt;br /&gt;
&lt;br /&gt;
6.  Remove the Calibration Pattern.  Now remove or cover up the calibration dots and press &#039;Enter&#039; to proceed to the real time operation of the program.&lt;br /&gt;
&lt;br /&gt;
=== Real time Adjustable Parameters ===&lt;br /&gt;
During the real time operation of the program, many of the algorithm parameters will need to be adjusted for the current setup.  These parameters and keys are listed below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;+&#039;/&#039;-&#039;: Binary Thresholding:  To adjust the black and white levels, use the + and - keys to increase or decrease the threshold.  Ideally, this should be increased as high as possible without producing random noise.&lt;br /&gt;
&lt;br /&gt;
&#039;[&#039;/&#039;]&#039;: Target Size:  This measurement corresponds to the maximum distance between dots in a target.  If your 3x3 grid is spaced at 1 inch intervals, this value is ideally 1 inch.  This parameter should be decreased as much as possible before targets are lost.  If this parameter is too large, the algorithm will blend the patterns together.&lt;br /&gt;
&lt;br /&gt;
&#039;z&#039;/&#039;x&#039;: Area Thresholding:  This parameter controls the desired size of &#039;dots&#039;.  This is used to remove relatively large or small noise artifacts.  This parameter should be as large as possible to remove specular noise.&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacing between dots is 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the most number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  *Note to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras: both their intrinsic parameters (focal length, geometric distortions, pixel-to-plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least squares best-fit model.  The calibration process needs at least 6 points, as measured in the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In the actual operation of the program, it flows in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program follows the following outlined steps.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Else; create a new target and add the new dot to the new target.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based only on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This data appears extremely robust since the spacing between dots is at such clean and quantized intervals.  To match each target to a trained pattern, the algorithm compares all patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared differences error calculation.  After this process, each region of dots is classified by the global number as trained by the pre-processing algorithm.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern once for each overlapping section.  Ideally, the images of the cameras would be perfectly matched, so that this information would simply be redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other source of errors is a pattern that is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of all the images at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8742</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8742"/>
		<updated>2008-03-29T01:14:57Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* How to Run the Program */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras actual placement must have an overlap along inside edges at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to cover a continuous region, the images as seen by the cameras must overlap, to ensure a target is always fully visible in at least one camera frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, as the covered area grows, the size of the patterns will have to be increased to stay above the noise threshold.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (as restricted by the videoInput library).  The system should run in Windows XP or Vista.  To set up the computer to develop and run the software, the required tools and libraries must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
5.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
6.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
&lt;br /&gt;
Open the project file as listed at the bottom of this page.  Compile and run the program from Visual Studio or as in the build folder.&lt;br /&gt;
&lt;br /&gt;
Once running, the program should outline what is necessary to setup and operate.  The steps are also outlined below.&lt;br /&gt;
&lt;br /&gt;
1.  Connect to cameras.  You will have to type the 4 numbers (in the command prompt) associated with the correct capture devices.  Most likely, these will be 0-3.&lt;br /&gt;
&lt;br /&gt;
2.  Orient cameras.  To correlate an individual camera with its position overhead, you must click once on the quadrant corresponding to the live camera image shown.  You will click 4 times, once for each camera.&lt;br /&gt;
&lt;br /&gt;
3.  Check.  Once you have selected the quadrants for each camera, you can look at the combined image in the main window to make sure the cameras are positioned correctly and that the background is all white.  Press the Enter button in the main window to continue.&lt;br /&gt;
&lt;br /&gt;
4.  Enter Calibration Parameters.  To calibrate the cameras to world coordinates, you must type in numbers relating to the patterns positions.  Ensure that you place the patterns down in a rectangular fashion (in the world frame not the images).  The numbers are typed into the command prompt.  You will enter three numbers as measured (they may be in inches or cm or whatever unit desired).&lt;br /&gt;
&lt;br /&gt;
5.  Capture Calibration Pattern.  To calibrate the cameras, you must now take a picture of the patterns as viewed by ALL FOUR Cameras.  By now, you should have the calibration pattern arranged on the surface.  ****ENSURE THAT THE ONLY VISIBLE OBJECTS IN THE IMAGE ARE THE DOTS FROM THE CALIBRATION PATTERN****  The program is designed to calibrate to the first 9 dots in each image that it sees.  If there are more than 9 dots in ANY camera image, the program will correlate the FIRST 9 dots to the measured positions.  If one camera sees more than one calibration pattern (fully or partially) it will NOT calibrate properly.  You may adjust the threshold with the + and - keys dynamically to remove any specular noise.&lt;br /&gt;
&lt;br /&gt;
6.  Remove the Calibration Pattern.  Now remove or cover up the calibration dots and press &#039;Enter&#039; to proceed to the real time operation of the program.&lt;br /&gt;
&lt;br /&gt;
=== Real time Adjustable Parameters ===&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the maximum number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
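&lt;br /&gt;
As a rough sketch (not the project’s actual code), the spacing signature of one pattern could be computed as below; the Dot structure, the function name, and the normalization by the smallest spacing are illustrative assumptions:&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 #include &amp;lt;cmath&amp;gt;&lt;br /&gt;
 #include &amp;lt;algorithm&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 struct Dot { double x, y; };   // dot center in image pixels&lt;br /&gt;
 &lt;br /&gt;
 // Sorted, normalized pairwise spacings of one pattern.&lt;br /&gt;
 // A full 3x3 grid of 9 dots yields 9*8/2 = 36 spacings.&lt;br /&gt;
 std::vector&amp;lt;double&amp;gt; patternSignature(const std::vector&amp;lt;Dot&amp;gt; &amp;amp;dots)&lt;br /&gt;
 {&lt;br /&gt;
     std::vector&amp;lt;double&amp;gt; d;&lt;br /&gt;
     for (size_t i = 0; i &amp;lt; dots.size(); ++i)&lt;br /&gt;
         for (size_t j = i + 1; j &amp;lt; dots.size(); ++j)&lt;br /&gt;
             d.push_back(std::hypot(dots[i].x - dots[j].x, dots[i].y - dots[j].y));&lt;br /&gt;
     std::sort(d.begin(), d.end());&lt;br /&gt;
     const double unit = d.empty() ? 1.0 : d.front();   // smallest spacing&lt;br /&gt;
     for (size_t k = 0; k &amp;lt; d.size(); ++k)&lt;br /&gt;
         d[k] /= unit;   // normalize so the signature is scale invariant&lt;br /&gt;
     return d;&lt;br /&gt;
 }&lt;br /&gt;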
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note: to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is training the cameras for both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed by using a simple linear least squares best fit model.  The calibration process needs at least 6 points as measured in the world and image frames to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and help best compute an accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
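&lt;br /&gt;
A minimal sketch of such a least-squares fit is shown below, written against the old OpenCV C interface the project already uses.  The Correspondence structure is a hypothetical container for one world/image point pair, and fixing the last entry of P to 1 is a simplification of the full method given by Hartley and Zisserman:&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;cv.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 // One measured calibration point: world position (X, Y, Z) and image pixel (u, v).&lt;br /&gt;
 struct Correspondence { double X, Y, Z, u, v; };&lt;br /&gt;
 &lt;br /&gt;
 // Least-squares fit of a 3x4 projection matrix P, with P[2][3] fixed to 1.&lt;br /&gt;
 // Each point gives two linear equations, so at least 6 points are needed.&lt;br /&gt;
 void fitProjection(const Correspondence *pts, int n, double P[3][4])&lt;br /&gt;
 {&lt;br /&gt;
     CvMat *A = cvCreateMat(2 * n, 11, CV_64FC1);&lt;br /&gt;
     CvMat *b = cvCreateMat(2 * n, 1, CV_64FC1);&lt;br /&gt;
     CvMat *x = cvCreateMat(11, 1, CV_64FC1);&lt;br /&gt;
     for (int i = 0; i &amp;lt; n; ++i) {&lt;br /&gt;
         const Correspondence c = pts[i];&lt;br /&gt;
         double ru[11] = { c.X, c.Y, c.Z, 1, 0, 0, 0, 0, -c.u*c.X, -c.u*c.Y, -c.u*c.Z };&lt;br /&gt;
         double rv[11] = { 0, 0, 0, 0, c.X, c.Y, c.Z, 1, -c.v*c.X, -c.v*c.Y, -c.v*c.Z };&lt;br /&gt;
         for (int k = 0; k &amp;lt; 11; ++k) {&lt;br /&gt;
             cvmSet(A, 2*i,     k, ru[k]);&lt;br /&gt;
             cvmSet(A, 2*i + 1, k, rv[k]);&lt;br /&gt;
         }&lt;br /&gt;
         cvmSet(b, 2*i,     0, c.u);&lt;br /&gt;
         cvmSet(b, 2*i + 1, 0, c.v);&lt;br /&gt;
     }&lt;br /&gt;
     cvSolve(A, b, x, CV_SVD);   // linear least-squares solution&lt;br /&gt;
     for (int k = 0; k &amp;lt; 11; ++k) P[k / 4][k % 4] = cvmGet(x, k, 0);&lt;br /&gt;
     P[2][3] = 1.0;              // fixed overall scale&lt;br /&gt;
     cvReleaseMat(&amp;amp;A); cvReleaseMat(&amp;amp;b); cvReleaseMat(&amp;amp;x);&lt;br /&gt;
 }&lt;br /&gt;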
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program follows the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
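&lt;br /&gt;
In the OpenCV C interface used by the project, these two operations correspond roughly to the sketch below; the function name and the assumption of a 3-channel BGR input frame are illustrative:&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;cv.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 // Convert one captured camera frame into the thresholded binary image.&lt;br /&gt;
 // frame is assumed to be the 3-channel BGR image returned by the capture code;&lt;br /&gt;
 // the project uses a default level of 80.&lt;br /&gt;
 IplImage *makeBinary(const IplImage *frame, int level)&lt;br /&gt;
 {&lt;br /&gt;
     IplImage *gray   = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);&lt;br /&gt;
     IplImage *binary = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);&lt;br /&gt;
     cvCvtColor(frame, gray, CV_BGR2GRAY);                     // color to grayscale (0-255)&lt;br /&gt;
     cvThreshold(gray, binary, level, 255, CV_THRESH_BINARY);  // above level becomes 255, else 0&lt;br /&gt;
     cvReleaseImage(&amp;amp;gray);&lt;br /&gt;
     return binary;&lt;br /&gt;
 }&lt;br /&gt;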
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Else, create a new target and add the new dot to the new target.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
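&lt;br /&gt;
A simplified sketch of this grouping rule is given below.  Here dotData and target are written as plain std::vector based structures rather than the project’s actual linked lists, and maxSpacing stands in for the adjustable distance threshold:&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 #include &amp;lt;cmath&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 struct dotData { double x, y, area; };             // one detected dot&lt;br /&gt;
 struct target  { std::vector&amp;lt;dotData&amp;gt; dots; };   // one candidate pattern&lt;br /&gt;
 &lt;br /&gt;
 // A dot joins the first target that already holds a dot closer than&lt;br /&gt;
 // maxSpacing; otherwise it starts a new target of its own.&lt;br /&gt;
 std::vector&amp;lt;target&amp;gt; groupDots(const std::vector&amp;lt;dotData&amp;gt; &amp;amp;dots, double maxSpacing)&lt;br /&gt;
 {&lt;br /&gt;
     std::vector&amp;lt;target&amp;gt; targets;&lt;br /&gt;
     for (size_t i = 0; i &amp;lt; dots.size(); ++i) {&lt;br /&gt;
         target *home = 0;&lt;br /&gt;
         for (size_t t = 0; t &amp;lt; targets.size() &amp;amp;&amp;amp; !home; ++t)&lt;br /&gt;
             for (size_t k = 0; k &amp;lt; targets[t].dots.size() &amp;amp;&amp;amp; !home; ++k)&lt;br /&gt;
                 if (std::hypot(dots[i].x - targets[t].dots[k].x,&lt;br /&gt;
                                dots[i].y - targets[t].dots[k].y) &amp;lt; maxSpacing)&lt;br /&gt;
                     home = &amp;amp;targets[t];&lt;br /&gt;
         if (!home) { targets.push_back(target()); home = &amp;amp;targets.back(); }&lt;br /&gt;
         home-&amp;gt;dots.push_back(dots[i]);&lt;br /&gt;
     }&lt;br /&gt;
     return targets;&lt;br /&gt;
 }&lt;br /&gt;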
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This data appears extremely robust since the spacing between dots is at such clean and quantized intervals.  To match each target to a trained pattern, the algorithm compares all patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared differences error calculation.  After this process, each region of dots is classified by the global number as trained by the pre-processing algorithm.&lt;br /&gt;
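&lt;br /&gt;
One way this comparison could look, reusing the hypothetical patternSignature() sketch from the pre-processing section and assuming the signatures being compared have the same length (same dot count), is:&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 #include &amp;lt;limits&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 // Sum of squared differences between two equal-length spacing signatures.&lt;br /&gt;
 double signatureError(const std::vector&amp;lt;double&amp;gt; &amp;amp;a, const std::vector&amp;lt;double&amp;gt; &amp;amp;b)&lt;br /&gt;
 {&lt;br /&gt;
     double e = 0;&lt;br /&gt;
     for (size_t i = 0; i &amp;lt; a.size(); ++i)&lt;br /&gt;
         e += (a[i] - b[i]) * (a[i] - b[i]);&lt;br /&gt;
     return e;&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 // Index of the best-matching trained signature (candidates already filtered&lt;br /&gt;
 // to the same dot count), or -1 if there is nothing to compare against.&lt;br /&gt;
 int bestMatch(const std::vector&amp;lt;double&amp;gt; &amp;amp;sig,&lt;br /&gt;
               const std::vector&amp;lt; std::vector&amp;lt;double&amp;gt; &amp;gt; &amp;amp;trained)&lt;br /&gt;
 {&lt;br /&gt;
     int best = -1;&lt;br /&gt;
     double bestErr = std::numeric_limits&amp;lt;double&amp;gt;::max();&lt;br /&gt;
     for (size_t i = 0; i &amp;lt; trained.size(); ++i) {&lt;br /&gt;
         double e = signatureError(sig, trained[i]);&lt;br /&gt;
         if (e &amp;lt; bestErr) { bestErr = e; best = (int)i; }&lt;br /&gt;
     }&lt;br /&gt;
     return best;&lt;br /&gt;
 }&lt;br /&gt;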
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
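&lt;br /&gt;
A sketch of this back-projection is shown below.  It assumes all targets lie in the Z = 0 plane, so the fitted 3x4 projection matrix reduces to a 3x3 homography H built from its 1st, 2nd, and 4th columns; the function and variable names are illustrative:&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;cv.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 // H maps world (X, Y, 1) to image (u, v, 1) up to scale.  Back-projecting an&lt;br /&gt;
 // image point therefore applies the inverse homography and re-normalizes.&lt;br /&gt;
 void imageToWorld(const CvMat *H, double u, double v, double &amp;amp;X, double &amp;amp;Y)&lt;br /&gt;
 {&lt;br /&gt;
     double data[9];&lt;br /&gt;
     CvMat Hinv = cvMat(3, 3, CV_64FC1, data);&lt;br /&gt;
     cvInvert(H, &amp;amp;Hinv, CV_LU);&lt;br /&gt;
 &lt;br /&gt;
     double w = cvmGet(&amp;amp;Hinv, 2, 0)*u + cvmGet(&amp;amp;Hinv, 2, 1)*v + cvmGet(&amp;amp;Hinv, 2, 2);&lt;br /&gt;
     X = (cvmGet(&amp;amp;Hinv, 0, 0)*u + cvmGet(&amp;amp;Hinv, 0, 1)*v + cvmGet(&amp;amp;Hinv, 0, 2)) / w;&lt;br /&gt;
     Y = (cvmGet(&amp;amp;Hinv, 1, 0)*u + cvmGet(&amp;amp;Hinv, 1, 1)*v + cvmGet(&amp;amp;Hinv, 1, 2)) / w;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Averaging the back-projected coordinates of all dots in a target (summing them and dividing by the dot count) then gives the group center of mass used as the target position.&lt;br /&gt;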
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern for each overlapping section.  Ideally, the images of the cameras would be perfectly matched such that this information was redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other scenario for errors is if a pattern is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
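&lt;br /&gt;
A condensed sketch of this rejection logic, assuming a hypothetical Detection structure that holds the pattern number, dot count, and world position of one detection from one camera image:&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 #include &amp;lt;cmath&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 struct Detection { int id, nDots; double X, Y; };   // one pattern seen in one image&lt;br /&gt;
 &lt;br /&gt;
 // Merge per-camera detections: the same pattern seen twice is averaged, and of&lt;br /&gt;
 // two different nearby detections the one with fewer dots is discarded.&lt;br /&gt;
 std::vector&amp;lt;Detection&amp;gt; mergeDetections(std::vector&amp;lt;Detection&amp;gt; d, double minSeparation)&lt;br /&gt;
 {&lt;br /&gt;
     for (size_t i = 0; i &amp;lt; d.size(); ++i)&lt;br /&gt;
         for (size_t j = i + 1; j &amp;lt; d.size(); )&lt;br /&gt;
         {&lt;br /&gt;
             if (std::hypot(d[i].X - d[j].X, d[i].Y - d[j].Y) &amp;gt;= minSeparation) { ++j; continue; }&lt;br /&gt;
             if (d[i].id == d[j].id) {                 // same pattern in two images: average&lt;br /&gt;
                 d[i].X = 0.5 * (d[i].X + d[j].X);&lt;br /&gt;
                 d[i].Y = 0.5 * (d[i].Y + d[j].Y);&lt;br /&gt;
             } else if (d[j].nDots &amp;gt; d[i].nDots) {     // keep the more complete detection&lt;br /&gt;
                 d[i] = d[j];&lt;br /&gt;
             }&lt;br /&gt;
             d.erase(d.begin() + j);                   // drop the duplicate or partial view&lt;br /&gt;
         }&lt;br /&gt;
     return d;&lt;br /&gt;
 }&lt;br /&gt;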
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8741</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8741"/>
		<updated>2008-03-29T00:59:35Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Computer Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer which then transmits data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras’ actual placement must have an overlap along the inside edges of at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
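&lt;br /&gt;
As a rough worked example of the height trade-off (assuming the 30 and 25 degree half-angles measured above; exact numbers will vary between cameras), a camera mounted at height h sees a footprint of about:&lt;br /&gt;
&lt;br /&gt;
 footprint width  = 2 * h * tan(30 deg)     (about 2.3 m at h = 2 m)&lt;br /&gt;
 footprint height = 2 * h * tan(25 deg)     (about 1.9 m at h = 2 m)&lt;br /&gt;
&lt;br /&gt;
Raising the cameras therefore covers more area per camera at the cost of resolution, and the overlap required between adjacent cameras reduces the usable area further.&lt;br /&gt;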
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover a continuous region, the images as seen by the cameras must overlap to ensure a target is always fully visible in at least one camera’s frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (as restricted by the videoInput library).  The system should run on Windows XP or Vista.  To set up the computer to develop and run the software, the required libraries and SDKs listed below must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
5.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
6.  Download the program as listed at the end of this page.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
=== Real Time Adjustable Parameters ===&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To recover both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the maximum number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note: to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is training the cameras for both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed by using a simple linear least squares best fit model.  The calibration process needs at least 6 points as measured in the world and image frames to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and help best compute an accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program follows the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Else, create a new target and add the new dot to the new target.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This data appears extremely robust since the spacing between dots is at such clean and quantized intervals.  To match each target to a trained pattern, the algorithm compares all patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared differences error calculation.  After this process, each region of dots is classified by the global number as trained by the pre-processing algorithm.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern for each overlapping section.  Ideally, the images of the cameras would be perfectly matched such that this information was redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other scenario for errors is if a pattern is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8740</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8740"/>
		<updated>2008-03-29T00:58:36Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Computer Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer which then transmits data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras’ actual placement must have an overlap along the inside edges of at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover a continuous region, the images as seen by the cameras must overlap to ensure a target is always fully visible in at least one camera’s frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (as restricted by the videoInput library).  The system should run on Windows XP or Vista.  To set up the computer to develop and run the software, the required libraries and SDKs listed below must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Microsoft Windows Platform SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=0BAF2B35-C656-4969-ACE8-E4C0C0716ADB&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
3.  Download and install Microsoft DirectX 9+ SDK - http://www.microsoft.com/downloads/details.aspx?FamilyId=572BE8A6-263A-4424-A7FE-69CFF1A5B180&amp;amp;displaylang=en&lt;br /&gt;
&lt;br /&gt;
4.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
5.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
=== Real Time Adjustable Parameters ===&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To recover both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the maximum number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note: to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is training the cameras for both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed by using a simple linear least squares best fit model.  The calibration process needs at least 6 points as measured in the world and image frames to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and help best compute an accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program follows the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Else, create a new target and add the new dot to the new target.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This data appears extremely robust since the spacing between dots is at such clean and quantized intervals.  To match each target to a trained pattern, the algorithm compares all patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared differences error calculation.  After this process, each region of dots is classified by the global number as trained by the pre-processing algorithm.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern for each overlapping section.  Ideally, the images of the cameras would be perfectly matched such that this information was redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other scenario for errors is if a pattern is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8739</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8739"/>
		<updated>2008-03-29T00:55:49Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer which then transmits data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras’ actual placement must have an overlap along the inside edges of at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover a continuous region, the images as seen by the cameras must overlap to ensure a target is always fully visible in at least one camera’s frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows based computer (as restricted by the videoInput library).  The system should run in Windows XP or Vista.  To setup the computer to develop and run the software, the three required libraries must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install Microsoft Visual Studio Express - http://www.microsoft.com/express/default.aspx.&lt;br /&gt;
&lt;br /&gt;
2.  Download and install Intel OpenCV Library - http://sourceforge.net/projects/opencvlibrary/&lt;br /&gt;
&lt;br /&gt;
3.  Download and install the videoInput Library - http://muonics.net/school/spring05/videoInput/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
=== Real Time Adjustable Parameters ===&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To recover both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the maximum number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note: to change the target, various dots are removed.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is training the cameras for both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed by using a simple linear least squares best fit model.  The calibration process needs at least 6 points as measured in the world and image frames to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and help best compute an accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program follows the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Else, create a new target and add the new dot to the new target.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This data appears extremely robust since the spacing between dots is at such clean and quantized intervals.  To match each target to a trained pattern, the algorithm compares all patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared differences error calculation.  After this process, each region of dots is classified by the global number as trained by the pre-processing algorithm.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern for each overlapping section.  Ideally, the images of the cameras would be perfectly matched such that this information was redundant.  In practice, the simple camera calibration scheme results in slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other scenario for errors is if a pattern is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8738</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8738"/>
		<updated>2008-03-29T00:54:16Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Computer Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer which then transmits data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras’ actual placement must have an overlap along the inside edges of at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired ALL WHITE area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to be able to cover a continuous region, the images as seen by the cameras must overlap to ensure a target is always fully visible in at least one camera’s frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should recover the correct pose information, keeping the lenses as close to perpendicular to the target plane as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
In the current implementation, this system has been developed for a Windows-based computer (a restriction imposed by the videoInput library).  The system should run on Windows XP or Vista.  To set up the computer to develop and run the software, the three required tools must be installed.&lt;br /&gt;
&lt;br /&gt;
1.  Download and install Microsoft Visual C++ Express Edition (link in the Tools Used section above).&lt;br /&gt;
2.  Download and install the OpenCV library.&lt;br /&gt;
3.  Download the videoInput library and add it to the Visual C++ project.&lt;br /&gt;
&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
=== Real-time Adjustable Parameters ===&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To recover both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm groups dots that lie within a certain adjustable distance of one another into a single pattern, and then identifies which trained pattern those dots belong to by comparing the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to be matched (the patterns of dots on a 3x3 grid).  This is done by first creating a subset of targets from the master template.  Each target must remain distinguishable under rotation, reflection, scaling and translation.  A set of targets with sample patterns has been included in the final project.  When the program is executed in real time, it will only identify targets from this trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and records the relative spacing between each pair of dots.  In this sense, a pattern is identified by a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacings between the dots.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 grid units.  Treating the dots as a fully connected graph, a pattern of n dots has at most n*(n-1)/2 pairwise distances; since a pattern has at most 9 dots, there are at most 9*8/2 = 36 interspacing distances.&lt;br /&gt;
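&lt;br /&gt;
The sketch below illustrates this pairwise-distance signature.  The Dot structure, the function name and the normalization by the smallest observed spacing are illustrative assumptions consistent with the description above, not the project’s own code.&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 #include &amp;lt;cmath&amp;gt;&lt;br /&gt;
 #include &amp;lt;algorithm&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 struct Dot { double x, y; };   // one detected dot center&lt;br /&gt;
 &lt;br /&gt;
 // Build the signature of a pattern: the sorted list of pairwise distances,&lt;br /&gt;
 // normalized so that the smallest spacing equals one grid unit.&lt;br /&gt;
 std::vector&amp;lt;double&amp;gt; computeSignature(const std::vector&amp;lt;Dot&amp;gt;&amp;amp; dots)&lt;br /&gt;
 {&lt;br /&gt;
     std::vector&amp;lt;double&amp;gt; d;&lt;br /&gt;
     for (size_t i = 0; i &amp;lt; dots.size(); ++i)&lt;br /&gt;
         for (size_t j = i + 1; j &amp;lt; dots.size(); ++j)&lt;br /&gt;
             d.push_back(std::hypot(dots[i].x - dots[j].x, dots[i].y - dots[j].y));&lt;br /&gt;
     std::sort(d.begin(), d.end());                 // at most 9*8/2 = 36 entries&lt;br /&gt;
     if (!d.empty() &amp;amp;&amp;amp; d.front() &amp;gt; 0.0) {&lt;br /&gt;
         double unit = d.front();                   // smallest spacing = one grid unit&lt;br /&gt;
         for (size_t k = 0; k &amp;lt; d.size(); ++k)&lt;br /&gt;
             d[k] /= unit;                          // values near 1, 1.41, 2, 2.24, 2.83&lt;br /&gt;
     }&lt;br /&gt;
     return d;&lt;br /&gt;
 }&lt;br /&gt;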
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note that different targets are created by removing various dots from the full 3x3 grid.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating each camera for both its intrinsic parameters (focal length, geometric distortion, pixel-to-plane transformation) and its extrinsic pose parameters (rotation and translation relative to the world origin).  In other words, the pixels in the image must be related to the world frame in centimeters or inches.  This step is performed using a simple linear least-squares fit.  The calibration process needs at least 6 points, measured in both the world and image frames, to compute a 3x4 projection matrix.  In practice, more than 6 points are used to add redundancy and improve the accuracy of the projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
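&lt;br /&gt;
For reference, the linear least-squares step can be written as a direct linear transform (DLT) over the point correspondences, following the Hartley and Zisserman formulation.  The sketch below assumes OpenCV’s cv::Mat and cv::SVD::solveZ purely for the matrix algebra; it is an illustration of the method, not the project’s own code.&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;opencv2/opencv.hpp&amp;gt;&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 // n &amp;gt;= 6 correspondences between world points (X,Y,Z) and image points (u,v)&lt;br /&gt;
 // give 2n rows of the homogeneous system A*p = 0 for the 12 entries of P.&lt;br /&gt;
 cv::Mat estimateProjection(const std::vector&amp;lt;cv::Point3d&amp;gt;&amp;amp; world,&lt;br /&gt;
                            const std::vector&amp;lt;cv::Point2d&amp;gt;&amp;amp; image)&lt;br /&gt;
 {&lt;br /&gt;
     int n = (int)world.size();&lt;br /&gt;
     cv::Mat A = cv::Mat::zeros(2 * n, 12, CV_64F);&lt;br /&gt;
     for (int i = 0; i &amp;lt; n; ++i) {&lt;br /&gt;
         double X = world[i].x, Y = world[i].y, Z = world[i].z;&lt;br /&gt;
         double u = image[i].x, v = image[i].y;&lt;br /&gt;
         double r1[12] = { X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u };&lt;br /&gt;
         double r2[12] = { 0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v };&lt;br /&gt;
         for (int c = 0; c &amp;lt; 12; ++c) {&lt;br /&gt;
             A.at&amp;lt;double&amp;gt;(2 * i, c)     = r1[c];&lt;br /&gt;
             A.at&amp;lt;double&amp;gt;(2 * i + 1, c) = r2[c];&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
     cv::Mat p;&lt;br /&gt;
     cv::SVD::solveZ(A, p);      // least-squares solution of A*p = 0 with unit norm&lt;br /&gt;
     return p.reshape(1, 3);     // reshape the 12-vector into the 3x4 matrix P&lt;br /&gt;
 }&lt;br /&gt;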
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, it carries out the steps outlined below.&lt;br /&gt;
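&lt;br /&gt;
A minimal sketch of such a loop, combining the videoInput capture library with OpenCV images, might look like the following.  The device numbers, the 640x480 frame size and the processFrame placeholder are illustrative assumptions, not the project’s actual code.&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;opencv2/opencv.hpp&amp;gt;&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 #include &amp;quot;videoInput.h&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 void processFrame(int cam, cv::Mat&amp;amp; img) { /* threshold, label, match, locate */ }&lt;br /&gt;
 &lt;br /&gt;
 int main()&lt;br /&gt;
 {&lt;br /&gt;
     const int NUM_CAMS = 4, W = 640, H = 480;&lt;br /&gt;
     videoInput vi;&lt;br /&gt;
     std::vector&amp;lt;cv::Mat&amp;gt; frames;&lt;br /&gt;
     for (int c = 0; c &amp;lt; NUM_CAMS; ++c) {&lt;br /&gt;
         vi.setupDevice(c, W, H);                   // open each webcam&lt;br /&gt;
         frames.push_back(cv::Mat(H, W, CV_8UC3));&lt;br /&gt;
     }&lt;br /&gt;
     while (true) {&lt;br /&gt;
         bool allNew = true;                        // wait for a frame from every camera&lt;br /&gt;
         for (int c = 0; c &amp;lt; NUM_CAMS; ++c)&lt;br /&gt;
             allNew = allNew &amp;amp;&amp;amp; vi.isFrameNew(c);&lt;br /&gt;
         if (!allNew) continue;&lt;br /&gt;
         for (int c = 0; c &amp;lt; NUM_CAMS; ++c) {&lt;br /&gt;
             vi.getPixels(c, frames[c].data, false, true);   // copy the raw pixels&lt;br /&gt;
             processFrame(c, frames[c]);&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
     return 0;&lt;br /&gt;
 }&lt;br /&gt;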
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts each captured image to a grayscale (0-255) image.  The algorithm then thresholds the image at a set level, producing a binary image: pixel values above the threshold become white (255) and all other values become black (0).  The result of these two operations is a black-and-white binary image.  The default threshold level is 80.&lt;br /&gt;
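&lt;br /&gt;
In OpenCV these two steps are single calls.  A minimal sketch under the default level of 80 is shown below; the function and variable names are illustrative, not the project’s own.&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;opencv2/opencv.hpp&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 // Convert a captured color frame to grayscale, then threshold it:&lt;br /&gt;
 // pixels above the level become white (255), all others become black (0).&lt;br /&gt;
 void binarize(const cv::Mat&amp;amp; frame, cv::Mat&amp;amp; binary, double level = 80.0)&lt;br /&gt;
 {&lt;br /&gt;
     cv::Mat gray;&lt;br /&gt;
     cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);&lt;br /&gt;
     cv::threshold(gray, binary, level, 255, cv::THRESH_BINARY);&lt;br /&gt;
 }&lt;br /&gt;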
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been converted to binary form, all the contours must be extracted.  This process is known as connected-component labeling (CCL) in computer vision: it identifies continuous “blobs”, i.e. regions of adjacent pixels.  In OpenCV the connected components are stored in a linked-list data structure called a contour.&lt;br /&gt;
Fortunately, connected-component labeling is a native function of OpenCV, so only the resultant contour data structure has to be processed.  To do this, the program iterates through each contour, extracting position and area information (in pixels).  This data is stored in a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns using a second custom data structure called a “target”.  The target data structure is a linked list with elements for the number of dots in the pattern, the dotData structure for each dot, and other data generated later, such as group position and orientation.  To group the dots into targets, the program follows the algorithm outlined below.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
This is not necessarily the best method for grouping dots together, as it is based only on a maximum distance between dots.  In fact, it forces the patterns to be kept a certain distance away from each other to avoid confusion; as a rule of thumb, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
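&lt;br /&gt;
A compressed sketch of the labeling-and-grouping step is shown below.  OpenCV’s findContours and moments calls, the simplified DotData and Target structs, and the maxSpacing parameter stand in for the project’s own code and are assumptions made for illustration.&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;opencv2/opencv.hpp&amp;gt;&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 #include &amp;lt;cmath&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 struct DotData { double x, y, area; };             // stand-in for the dotData record&lt;br /&gt;
 struct Target  { std::vector&amp;lt;DotData&amp;gt; dots; };      // stand-in for the target record&lt;br /&gt;
 &lt;br /&gt;
 // Label the connected components of the binary image (assumed to have the dots&lt;br /&gt;
 // as foreground), then group nearby dots into candidate targets.&lt;br /&gt;
 std::vector&amp;lt;Target&amp;gt; groupDots(const cv::Mat&amp;amp; binary, double maxSpacing)&lt;br /&gt;
 {&lt;br /&gt;
     std::vector&amp;lt;std::vector&amp;lt;cv::Point&amp;gt; &amp;gt; contours;&lt;br /&gt;
     cv::findContours(binary.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);&lt;br /&gt;
 &lt;br /&gt;
     std::vector&amp;lt;Target&amp;gt; targets;&lt;br /&gt;
     for (size_t i = 0; i &amp;lt; contours.size(); ++i) {&lt;br /&gt;
         cv::Moments m = cv::moments(contours[i]);&lt;br /&gt;
         if (m.m00 &amp;lt;= 0) continue;                               // skip degenerate blobs&lt;br /&gt;
         DotData dot = { m.m10 / m.m00, m.m01 / m.m00, m.m00 };  // center and area&lt;br /&gt;
 &lt;br /&gt;
         Target* home = 0;                          // steps 1-2: look for a close-enough dot&lt;br /&gt;
         for (size_t t = 0; t &amp;lt; targets.size() &amp;amp;&amp;amp; !home; ++t)&lt;br /&gt;
             for (size_t d = 0; d &amp;lt; targets[t].dots.size() &amp;amp;&amp;amp; !home; ++d)&lt;br /&gt;
                 if (std::hypot(dot.x - targets[t].dots[d].x,&lt;br /&gt;
                                dot.y - targets[t].dots[d].y) &amp;lt;= maxSpacing)&lt;br /&gt;
                     home = &amp;amp;targets[t];&lt;br /&gt;
         if (!home) {                               // step 3: start a new target&lt;br /&gt;
             targets.push_back(Target());&lt;br /&gt;
             home = &amp;amp;targets.back();&lt;br /&gt;
         }&lt;br /&gt;
         home-&amp;gt;dots.push_back(dot);&lt;br /&gt;
     }&lt;br /&gt;
     return targets;&lt;br /&gt;
 }&lt;br /&gt;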
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching is quite robust because the spacings between dots fall at clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all patterns with the same number of dots and searches for the best match, using a simple sum-of-squared-differences error calculation.  After this process, each region of dots is classified by the global pattern number assigned by the pre-processing algorithm.&lt;br /&gt;
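&lt;br /&gt;
Continuing the signature sketch from the pre-processing section, the matching step can be illustrated as follows; the function name and the flat signature representation are assumptions, not the project’s own code.&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 // Match an observed distance signature against the trained signatures using a&lt;br /&gt;
 // sum-of-squared-differences error.  Signatures are sorted and normalized as above.&lt;br /&gt;
 int matchPattern(const std::vector&amp;lt;double&amp;gt;&amp;amp; observed,&lt;br /&gt;
                  const std::vector&amp;lt;std::vector&amp;lt;double&amp;gt; &amp;gt;&amp;amp; trained)&lt;br /&gt;
 {&lt;br /&gt;
     int best = -1;&lt;br /&gt;
     double bestErr = 1e30;&lt;br /&gt;
     for (size_t p = 0; p &amp;lt; trained.size(); ++p) {&lt;br /&gt;
         if (trained[p].size() != observed.size()) continue;   // different dot count&lt;br /&gt;
         double err = 0.0;&lt;br /&gt;
         for (size_t k = 0; k &amp;lt; observed.size(); ++k) {&lt;br /&gt;
             double diff = observed[k] - trained[p][k];&lt;br /&gt;
             err += diff * diff;&lt;br /&gt;
         }&lt;br /&gt;
         if (err &amp;lt; bestErr) { bestErr = err; best = (int)p; }&lt;br /&gt;
     }&lt;br /&gt;
     return best;   // index of the best match, or -1 if no pattern has the same dot count&lt;br /&gt;
 }&lt;br /&gt;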
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the center of mass of each dot in a target is found in the image frame (pixels).  For each dot, the camera calibration matrix is then used to transform this position into world coordinates.  Finally, the world coordinates of the dots are averaged to find the group center of mass, which becomes the world-coordinate position of the target.  To calculate the angle, specific angle information is extracted from the pattern and combined with the pre-processed offset angle to generate a group orientation.  The position and orientation are then sent out to the user-specified serial port.&lt;br /&gt;
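&lt;br /&gt;
One common way to apply the calibration in this direction is to assume the targets lie in the world plane Z = 0, so that columns 1, 2 and 4 of the 3x4 projection matrix form an invertible plane-to-image homography.  The sketch below shows that mapping with OpenCV matrices; the planar assumption and the function name are illustrative, not necessarily how the project implemented it.&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;opencv2/opencv.hpp&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 // Map an image point (u,v) to world coordinates (X,Y), assuming the target&lt;br /&gt;
 // lies in the world plane Z = 0.&lt;br /&gt;
 cv::Point2d imageToWorld(const cv::Mat&amp;amp; P, const cv::Point2d&amp;amp; pix)&lt;br /&gt;
 {&lt;br /&gt;
     cv::Mat H(3, 3, CV_64F);                       // H = [p1 p2 p4]&lt;br /&gt;
     P.col(0).copyTo(H.col(0));&lt;br /&gt;
     P.col(1).copyTo(H.col(1));&lt;br /&gt;
     P.col(3).copyTo(H.col(2));&lt;br /&gt;
     cv::Mat uv = (cv::Mat_&amp;lt;double&amp;gt;(3, 1) &amp;lt;&amp;lt; pix.x, pix.y, 1.0);&lt;br /&gt;
     cv::Mat XY = H.inv() * uv;                     // homogeneous world point&lt;br /&gt;
     double w = XY.at&amp;lt;double&amp;gt;(2);&lt;br /&gt;
     return cv::Point2d(XY.at&amp;lt;double&amp;gt;(0) / w, XY.at&amp;lt;double&amp;gt;(1) / w);&lt;br /&gt;
 }&lt;br /&gt;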
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern in each overlapping section.  Ideally, the images from the cameras would be perfectly matched, making this information exactly redundant.  In practice, the simple camera calibration scheme yields slightly different data from each camera.  When the same pattern is identified in multiple images, its position information is averaged.  The other source of error is a pattern that is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the outputs of all the images together.  If any two patterns are within the threshold distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
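&lt;br /&gt;
The merging rule itself is simple; a sketch follows, in which the Detection struct and the minSeparation threshold are illustrative stand-ins for the data the program actually carries.&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 #include &amp;lt;cmath&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 struct Detection { int id, numDots; double x, y, theta; };&lt;br /&gt;
 &lt;br /&gt;
 // Merge per-camera detections: duplicates of the same pattern are averaged, and&lt;br /&gt;
 // of two nearby detections the one with fewer dots (a partial view) is rejected.&lt;br /&gt;
 std::vector&amp;lt;Detection&amp;gt; mergeDetections(const std::vector&amp;lt;Detection&amp;gt;&amp;amp; all,&lt;br /&gt;
                                        double minSeparation)&lt;br /&gt;
 {&lt;br /&gt;
     std::vector&amp;lt;Detection&amp;gt; out;&lt;br /&gt;
     for (size_t i = 0; i &amp;lt; all.size(); ++i) {&lt;br /&gt;
         bool keep = true;&lt;br /&gt;
         for (size_t j = 0; j &amp;lt; out.size() &amp;amp;&amp;amp; keep; ++j) {&lt;br /&gt;
             if (std::hypot(all[i].x - out[j].x, all[i].y - out[j].y) &amp;gt; minSeparation)&lt;br /&gt;
                 continue;                          // far apart: no conflict&lt;br /&gt;
             if (all[i].id == out[j].id) {          // same pattern seen by two cameras&lt;br /&gt;
                 out[j].x = 0.5 * (out[j].x + all[i].x);&lt;br /&gt;
                 out[j].y = 0.5 * (out[j].y + all[i].y);&lt;br /&gt;
             } else if (all[i].numDots &amp;gt; out[j].numDots) {&lt;br /&gt;
                 out[j] = all[i];                   // keep the more complete detection&lt;br /&gt;
             }&lt;br /&gt;
             keep = false;                          // do not add a second copy&lt;br /&gt;
         }&lt;br /&gt;
         if (keep) out.push_back(all[i]);&lt;br /&gt;
     }&lt;br /&gt;
     return out;&lt;br /&gt;
 }&lt;br /&gt;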
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=File:Visual_localization_viewing_angle.jpg&amp;diff=8734</id>
		<title>File:Visual localization viewing angle.jpg</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=File:Visual_localization_viewing_angle.jpg&amp;diff=8734"/>
		<updated>2008-03-29T00:17:17Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: Viewing angle of logitech cameras&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Viewing angle of logitech cameras&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8733</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8733"/>
		<updated>2008-03-29T00:16:50Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Camera Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras’ fields of view must overlap along their inside edges by at least the size of one target.  This is necessary to ensure any given target is always fully inside at least one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_viewing_angle.jpg|right|thumb|300px|Approximate Viewing Angle of Logitech Cameras]]&lt;br /&gt;
&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.  Choose a desired area to cover.&#039;&#039;&#039;  &#039;&#039;&#039;***IMPORTANT: If you want to cover a continuous region, the images as seen by the cameras must overlap so that a target is always fully visible in at least one camera frame***&#039;&#039;&#039; Keep in mind that there is a trade-off between area and resolution.  In addition, the size of the patterns will have to be increased above the noise threshold.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.  Ensure that the cameras are all facing the same direction. &#039;&#039;&#039; As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.&#039;&#039;&#039;  Although the camera calibration should recover the correct pose in any case, keeping the camera lenses as close to perpendicular to the floor as possible will reduce the amount of distortion and noise at the edges of the images.&lt;br /&gt;
&lt;br /&gt;
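As a quick sanity check on consideration 1, the short C++ sketch below estimates the ground area covered by one camera and the resulting resolution, using the viewing angles measured above.  The mounting height and the 640x480 capture size are assumptions for illustration only.&lt;br /&gt;
&lt;pre&gt;
#include &lt;cstdio&gt;
#include &lt;cmath&gt;

int main() {
    const double PI = 3.14159265358979;
    double h     = 200.0;               // camera height above the floor in cm (assumed)
    double halfH = 30.0 * PI / 180.0;   // horizontal viewing half-angle (measured above)
    double halfV = 25.0 * PI / 180.0;   // vertical viewing half-angle (measured above)

    double width  = 2.0 * h * std::tan(halfH);   // ground coverage of one camera, in cm
    double height = 2.0 * h * std::tan(halfV);

    // Resolution for an assumed 640x480 capture: a larger area means more cm per pixel.
    std::printf("coverage %.1f x %.1f cm, resolution %.3f x %.3f cm/pixel\n",
                width, height, width / 640.0, height / 480.0);
    return 0;
}
&lt;/pre&gt;
&lt;br /&gt;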
=== Computer Setup ===&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
=== Real-time Adjustable Parameters ===&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To recover both position and orientation, each pattern must have at least 3 dots in a rotationally unambiguous configuration.  The algorithm isolates candidate patterns by assuming that dots within a certain adjustable distance of each other belong to the same pattern, and then determines which trained pattern the dots correspond to by comparing the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to be matched (the 3x3 dot patterns).  This is done by first creating a subset of targets from the master template; each target must be identifiable invariant of rotation, reflection, scale, and translation.  A set of sample target patterns has been included in the final project.  When the program is executed in real time, it will only identify targets from this trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and measures the relative spacing between every pair of dots.  Each pattern is thus represented by a unique number (corresponding to the order of target patterns in the image directory), the number of dots it contains, and the normalized spacings between its dots.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As with a fully connected network, n dots have at most n*(n-1)/2 pairwise links.  Since the largest number of dots in a pattern is 9, the maximum number of inter-dot distances is 9*8/2 = 36.&lt;br /&gt;
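&lt;br /&gt;
To illustrate this signature, the sketch below computes the sorted, normalized pairwise distances for a set of dot centers (the Dot type and function name are ours; the project stores this information in its own dotData structures).  For a full 3x3 pattern it produces the 36 distances noted above, drawn only from the values 1, √2, 2, √5 and √8.&lt;br /&gt;
&lt;pre&gt;
#include &lt;vector&gt;
#include &lt;cmath&gt;
#include &lt;algorithm&gt;

struct Dot { double x, y; };   // dot center, in grid units or pixels

// Sorted pairwise distances, normalized by the smallest one, so the signature
// is independent of translation, rotation, reflection and scale.
std::vector&lt;double&gt; distanceSignature(const std::vector&lt;Dot&gt;&amp; dots) {
    std::vector&lt;double&gt; d;
    for (size_t i = 0; i &lt; dots.size(); ++i)
        for (size_t j = i + 1; j &lt; dots.size(); ++j)
            d.push_back(std::hypot(dots[i].x - dots[j].x, dots[i].y - dots[j].y));
    std::sort(d.begin(), d.end());
    if (!d.empty() &amp;&amp; d.front() &gt; 0.0) {
        double base = d.front();                 // the shortest spacing becomes 1
        for (size_t k = 0; k &lt; d.size(); ++k) d[k] /= base;
    }
    return d;
}
&lt;/pre&gt;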
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note that different targets are created by removing various dots from the full 3x3 grid.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras: finding both their intrinsic parameters (focal length, geometric distortions, pixel-to-plane transformation) and their extrinsic pose parameters (rotation and translation of their origins).  In other words, pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least-squares best-fit model.  The calibration process needs at least 6 points, measured in both the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
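&lt;br /&gt;
For reference, the least-squares estimate can be set up as a standard Direct Linear Transform: each world/image correspondence contributes two rows to a homogeneous system, and the 12 entries of the projection matrix are the null-space direction of that system.  The sketch below illustrates this with OpenCV’s newer C++ API (the original project was written against the C interface of the time; the function and variable names are ours).&lt;br /&gt;
&lt;pre&gt;
#include &lt;opencv2/core.hpp&gt;
#include &lt;vector&gt;

// Estimate the 3x4 projection matrix P from at least 6 world/image point pairs (DLT).
cv::Mat estimateProjection(const std::vector&lt;cv::Point3d&gt;&amp; world,
                           const std::vector&lt;cv::Point2d&gt;&amp; image) {
    cv::Mat A(2 * (int)world.size(), 12, CV_64F, cv::Scalar(0));
    for (size_t i = 0; i &lt; world.size(); ++i) {
        double X = world[i].x, Y = world[i].y, Z = world[i].z;
        double u = image[i].x, v = image[i].y;
        double r1[12] = { X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u };
        double r2[12] = { 0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v };
        for (int k = 0; k &lt; 12; ++k) {
            A.at&lt;double&gt;(2*(int)i,     k) = r1[k];
            A.at&lt;double&gt;(2*(int)i + 1, k) = r2[k];
        }
    }
    cv::Mat p;                     // 12x1 unit vector minimizing |A*p|
    cv::SVD::solveZ(A, p);
    return p.reshape(1, 3);        // reshape into the 3x4 projection matrix
}
&lt;/pre&gt;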
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
In actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program executes the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the program converts each captured image to grayscale (0-255).  The algorithm then thresholds the image at a set level, converting values above the threshold to white (255) and all other values to black (0).  After these two operations, the resultant image is a black-and-white binary image.  The default threshold level is 80.&lt;br /&gt;
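&lt;br /&gt;
With OpenCV’s current C++ interface, this step looks roughly like the snippet below (the project itself used the OpenCV C API of the time; the function name is ours).&lt;br /&gt;
&lt;pre&gt;
#include &lt;opencv2/imgproc.hpp&gt;

// Convert a captured BGR frame to a binary image using the default threshold of 80.
cv::Mat binarize(const cv::Mat&amp; frame, double thresholdLevel = 80.0) {
    cv::Mat gray, binary;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);    // grayscale, 0-255
    cv::threshold(gray, binary, thresholdLevel, 255, cv::THRESH_BINARY);
    return binary;                                    // above threshold: 255, otherwise 0
}
&lt;/pre&gt;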
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been converted to binary form, all of its contours must be outlined.  This process is known as connected-component labeling (CCL) in computer vision; it identifies continuous “blobs”, i.e. regions of adjacent pixels.  In OpenCV the connected components are stored in a linked-list data structure called a contour.&lt;br /&gt;
Fortunately, connected-component labeling is a native function of OpenCV, so we only have to process the resultant contour data structure.  To do this, the program iterates through each contour, extracting position and area information (in pixels).  This data is stored in a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns using a second custom data structure called a “target”.  The target data structure is a linked list with elements for the number of dots in the pattern, the dotData structures for each dot, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the algorithm outlined below.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
This is not necessarily the best method for grouping dots together, since it is based only on a maximum distance between dots.  In effect, it forces the patterns to stay a certain distance away from each other to avoid confusion; in concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
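&lt;br /&gt;
A compact sketch of this grouping rule is given below.  The types are illustrative stand-ins (std::vector instead of the project’s linked-list dotData and target structures), and the dot list itself would come from the OpenCV contour processing described above.&lt;br /&gt;
&lt;pre&gt;
#include &lt;vector&gt;
#include &lt;cmath&gt;

struct Dot    { double x, y, area; };      // one contour: center position and area, in pixels
struct Target { std::vector&lt;Dot&gt; dots; };  // one candidate pattern

// Group dots into targets: a dot joins the first target that already contains
// a dot within maxSpacing; otherwise it starts a new target.
std::vector&lt;Target&gt; groupDots(const std::vector&lt;Dot&gt;&amp; dots, double maxSpacing) {
    std::vector&lt;Target&gt; targets;
    for (size_t i = 0; i &lt; dots.size(); ++i) {
        bool placed = false;
        for (size_t t = 0; t &lt; targets.size() &amp;&amp; !placed; ++t)
            for (size_t k = 0; k &lt; targets[t].dots.size() &amp;&amp; !placed; ++k)
                if (std::hypot(dots[i].x - targets[t].dots[k].x,
                               dots[i].y - targets[t].dots[k].y) &lt;= maxSpacing) {
                    targets[t].dots.push_back(dots[i]);
                    placed = true;
                }
        if (!placed) {
            Target fresh;
            fresh.dots.push_back(dots[i]);
            targets.push_back(fresh);
        }
    }
    return targets;
}
&lt;/pre&gt;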
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching is quite robust, since the spacings between dots fall at such clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match, using a simple squared-differences error calculation.  After this process, each region of dots is classified by the global pattern number assigned by the pre-processing algorithm.&lt;br /&gt;
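&lt;br /&gt;
Building on the distance-signature sketch above, the matching step can be written as a simple sum-of-squared-differences search over the trained signatures of the same length (hypothetical function and variable names).&lt;br /&gt;
&lt;pre&gt;
#include &lt;vector&gt;
#include &lt;limits&gt;

// Return the index of the trained signature closest (in squared error) to the observed one.
// Signatures are the sorted, normalized distance lists described above.
int bestMatch(const std::vector&lt;double&gt;&amp; observed,
              const std::vector&lt;std::vector&lt;double&gt; &gt;&amp; trained) {
    int best = -1;
    double bestErr = std::numeric_limits&lt;double&gt;::max();
    for (size_t p = 0; p &lt; trained.size(); ++p) {
        if (trained[p].size() != observed.size()) continue;   // only compare equal dot counts
        double err = 0.0;
        for (size_t k = 0; k &lt; observed.size(); ++k) {
            double diff = observed[k] - trained[p][k];
            err += diff * diff;
        }
        if (err &lt; bestErr) { bestErr = err; best = (int)p; }
    }
    return best;   // -1 if no trained pattern had the same number of distances
}
&lt;/pre&gt;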
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the center of mass of each dot in a target is found in the image frame (pixels).  For each dot, the camera calibration matrix is then used to transform this data into world coordinates.  Finally, the world coordinates of the dots are averaged to find the group center of mass, which becomes the world-coordinate position of the target.  To calculate the angle, specific angle information is extracted from the pattern and combined with the pre-processed offset angle to generate a group orientation.  The position and orientation are then sent out to the user-specified serial port.&lt;br /&gt;
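&lt;br /&gt;
Since the targets all lie on the floor plane, the per-dot pixel-to-world step amounts to a plane-to-plane mapping.  The sketch below illustrates the idea with a 3x3 homography H and then averages the per-dot world positions into the group center of mass; in the project itself this mapping comes from the 3x4 calibration described earlier, so the homography form and the names are assumptions for illustration.&lt;br /&gt;
&lt;pre&gt;
#include &lt;opencv2/core.hpp&gt;
#include &lt;vector&gt;

// Map one pixel to world-plane coordinates with a 3x3 homography H (CV_64F).
cv::Point2d pixelToWorld(const cv::Mat&amp; H, const cv::Point2d&amp; px) {
    cv::Mat p = (cv::Mat_&lt;double&gt;(3, 1) &lt;&lt; px.x, px.y, 1.0);
    cv::Mat w = H * p;
    return cv::Point2d(w.at&lt;double&gt;(0) / w.at&lt;double&gt;(2),
                       w.at&lt;double&gt;(1) / w.at&lt;double&gt;(2));
}

// Average the world positions of a target's dots to get the group center of mass.
// Assumes the dot list is non-empty.
cv::Point2d targetCenter(const cv::Mat&amp; H, const std::vector&lt;cv::Point2d&gt;&amp; dotPixels) {
    cv::Point2d sum(0.0, 0.0);
    for (size_t i = 0; i &lt; dotPixels.size(); ++i) sum += pixelToWorld(H, dotPixels[i]);
    return cv::Point2d(sum.x / dotPixels.size(), sum.y / dotPixels.size());
}
&lt;/pre&gt;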
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern in each overlapping section.  Ideally, the images from the cameras would be perfectly matched so that this information was exactly redundant; in practice, the simple camera calibration scheme produces slightly different data.  When the same pattern is identified in multiple images, its position information is averaged.  The other source of error is a pattern that is only partially visible in one image (which can make it appear as a different pattern).  To correct this, the algorithm analyzes the individual outputs of all images at the same time: if any two detected patterns are within the threshold distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
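&lt;br /&gt;
The cross-camera merge and reject logic can be summarized as in the sketch below (an illustrative Detection type; the real program operates on its own per-camera target lists).&lt;br /&gt;
&lt;pre&gt;
#include &lt;vector&gt;
#include &lt;cmath&gt;

struct Detection { int patternId; int dotCount; double x, y; bool keep; };

// Merge duplicate detections of the same pattern and drop partial (fewer-dot) overlaps.
void resolveOverlaps(std::vector&lt;Detection&gt;&amp; all, double minSeparation) {
    for (size_t i = 0; i &lt; all.size(); ++i) all[i].keep = true;
    for (size_t i = 0; i &lt; all.size(); ++i)
        for (size_t j = i + 1; j &lt; all.size(); ++j) {
            double dist = std::hypot(all[i].x - all[j].x, all[i].y - all[j].y);
            if (dist &gt;= minSeparation) continue;             // far apart: unrelated targets
            if (all[i].patternId == all[j].patternId) {       // same pattern seen twice:
                all[i].x = 0.5 * (all[i].x + all[j].x);       // average the two positions
                all[i].y = 0.5 * (all[i].y + all[j].y);
                all[j].keep = false;
            } else if (all[i].dotCount &lt; all[j].dotCount) {   // partially visible pattern:
                all[i].keep = false;                          // keep the one with more dots
            } else {
                all[j].keep = false;
            }
        }
}
&lt;/pre&gt;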
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8729</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8729"/>
		<updated>2008-03-28T23:47:27Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Camera Setup */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders, or local landmarks.  Opposed to an absolute system, these relativistic designs are subject to cumulating errors.  In this design, the positioning information is calculated by an external computer which then transmits data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras actual placement must have an overlap along inside edges at least the size of one target.  This is necessary to ensure any given target is at least fully inside one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
=== Camera Setup ===&lt;br /&gt;
The four cameras used were standard Logitech QuickCam Communicate Deluxes.  For future use, the videoInput library used is very compatible and works with most capture devices.  As measured, the viewing angle (from center) of the Logitech cameras was around 30 degrees (horizontal plane) and 25 degrees (vertical plane).&lt;br /&gt;
Before attaching the cameras, several considerations must be made.&lt;br /&gt;
1.  Choose a desired area to cover.  ***If you want to be able to cover an continuous region, the images as seen by the cameras must overlap to ensure a target is at least fully visible in one frame of a camera*** Keep in mind that there is a trade-off between area, and resolution.  In addition, the size of the patterns will have to be increased above the threshold of noise.&lt;br /&gt;
2.  Ensure that the cameras are all facing the same direction.  As viewed from above, the &amp;quot;top&amp;quot; of the cameras should all be facing the same direction (N/S/E/W).  For future use, if these directions must be variable, the image reflection and rotation parameters can be adjusted in software.&lt;br /&gt;
3.  Try to mount the cameras as &amp;quot;normal&amp;quot; as possible.  Although the camera calibration should determine the correct pose information, keeping the lenses of the cameras as normal as possible will reduce the amount of noise at the edges of the images.&lt;br /&gt;
&lt;br /&gt;
=== Computer Setup ===&lt;br /&gt;
=== How to Run the Program ===&lt;br /&gt;
=== Real time Adjustable Parameters ====&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which pattern is associated with the dots via a comparison of the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each dot.  In this sense, the pattern is identified as a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each dot.  Since the pattern is a fixed 3x3 grid, the only possible spacing between dots is 1, √2, 2, √5, or √8 units.  As in networking theory, this is a fully connected network and thus has at most n*(n-1)/2 links.  Since the most number of dots in a pattern is 9, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note that different targets are created by removing various dots from the full 3x3 grid.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating each camera for both its intrinsic parameters (focal length, geometric distortion, pixel-to-plane transformation) and its extrinsic pose parameters (rotation and translation of the origins).  In other words, the pixels in the image must be related to the world frame in centimeters or inches.  This step is performed using a simple linear least-squares best fit.  The calibration process needs at least 6 points, measured in both the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
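&lt;br /&gt;
The sketch below shows one way such a direct linear transform (DLT) solve can be written with the era&#039;s OpenCV C API; the function name, array layout, and use of cvSVD are illustrative assumptions rather than the project&#039;s actual code.&lt;br /&gt;
&lt;br /&gt;
 // Illustrative sketch: estimate the 3x4 projection matrix P from n &amp;gt;= 6&lt;br /&gt;
 // world/image correspondences by solving the homogeneous system A*p = 0.&lt;br /&gt;
 #include &amp;lt;cv.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 void computeProjection(const double world[][3], const double image[][2],&lt;br /&gt;
                        int n, double P[3][4])&lt;br /&gt;
 {&lt;br /&gt;
     CvMat *A = cvCreateMat(2 * n, 12, CV_64FC1);&lt;br /&gt;
     for (int i = 0; i &amp;lt; n; ++i) {&lt;br /&gt;
         double X = world[i][0], Y = world[i][1], Z = world[i][2];&lt;br /&gt;
         double u = image[i][0], v = image[i][1];&lt;br /&gt;
         double r1[12] = { X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u };&lt;br /&gt;
         double r2[12] = { 0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v };&lt;br /&gt;
         for (int j = 0; j &amp;lt; 12; ++j) {&lt;br /&gt;
             cvmSet(A, 2 * i,     j, r1[j]);&lt;br /&gt;
             cvmSet(A, 2 * i + 1, j, r2[j]);&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
     CvMat *W = cvCreateMat(12,  1, CV_64FC1);   // singular values (descending)&lt;br /&gt;
     CvMat *V = cvCreateMat(12, 12, CV_64FC1);   // right singular vectors, returned as V^T&lt;br /&gt;
     cvSVD(A, W, NULL, V, CV_SVD_V_T);&lt;br /&gt;
     for (int j = 0; j &amp;lt; 12; ++j)               // row for the smallest singular value&lt;br /&gt;
         P[j / 4][j % 4] = cvmGet(V, 11, j);&lt;br /&gt;
     cvReleaseMat(&amp;amp;A);  cvReleaseMat(&amp;amp;W);  cvReleaseMat(&amp;amp;V);&lt;br /&gt;
 }&lt;br /&gt;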
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
During actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program performs the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts each captured image to a grayscale (0-255) image.  The algorithm then thresholds the grayscale image at a set level, converting values above the threshold to white (255) and all other values to black (0).  The result of these two operations is a black-and-white binary image.  The default threshold level is set to 80.&lt;br /&gt;
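&lt;br /&gt;
In the OpenCV C API of the time, these two steps amount to a color conversion and a fixed threshold; the variable names below are illustrative.&lt;br /&gt;
&lt;br /&gt;
 // Illustrative sketch: captured frame -&amp;gt; grayscale -&amp;gt; binary image.&lt;br /&gt;
 IplImage *gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);&lt;br /&gt;
 IplImage *bin  = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);&lt;br /&gt;
 cvCvtColor(frame, gray, CV_BGR2GRAY);               // color (BGR) to 0-255 grayscale&lt;br /&gt;
 cvThreshold(gray, bin, 80, 255, CV_THRESH_BINARY);  // default threshold level of 80&lt;br /&gt;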
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in binary format, all the contours must be outlined.  This process is known as connected-component labeling (CCL) in computer vision, and it identifies continuous “blobs”, or regions of adjacent pixels.  In OpenCV the connected components are stored in a linked-list data structure called a contour.&lt;br /&gt;
Fortunately, connected-component labeling is a native function of OpenCV, so we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored in a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns using a second custom data structure called a “target”.  The target data structure is a linked list with elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later, such as group position and orientation.  To group the dots into targets, the program follows the algorithm outlined below.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based on a maximum distance between dots.  In effect, it forces the patterns to stay a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
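&lt;br /&gt;
A rough sketch of this grouping pass is shown below; the simplified DotData and Target containers stand in for the project&#039;s actual linked-list structures.&lt;br /&gt;
&lt;br /&gt;
 // Illustrative sketch of the distance-based grouping pass described above.&lt;br /&gt;
 #include &amp;lt;cmath&amp;gt;&lt;br /&gt;
 #include &amp;lt;vector&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 struct DotData { double x, y, area; };&lt;br /&gt;
 struct Target  { std::vector&amp;lt;DotData&amp;gt; dots; };&lt;br /&gt;
 &lt;br /&gt;
 void groupDot(std::vector&amp;lt;Target&amp;gt; &amp;amp;targets, const DotData &amp;amp;d, double maxSpacing)&lt;br /&gt;
 {&lt;br /&gt;
     for (size_t t = 0; t &amp;lt; targets.size(); ++t)&lt;br /&gt;
         for (size_t k = 0; k &amp;lt; targets[t].dots.size(); ++k) {&lt;br /&gt;
             double dx = d.x - targets[t].dots[k].x;&lt;br /&gt;
             double dy = d.y - targets[t].dots[k].y;&lt;br /&gt;
             if (std::sqrt(dx * dx + dy * dy) &amp;lt;= maxSpacing) {&lt;br /&gt;
                 targets[t].dots.push_back(d);    // close enough: same pattern&lt;br /&gt;
                 return;&lt;br /&gt;
             }&lt;br /&gt;
         }&lt;br /&gt;
     Target fresh;                                // otherwise start a new target&lt;br /&gt;
     fresh.dots.push_back(d);&lt;br /&gt;
     targets.push_back(fresh);&lt;br /&gt;
 }&lt;br /&gt;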
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching appears extremely robust because the spacings between dots fall only at clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared-differences error calculation.  After this process, each region of dots is classified by the global number assigned by the pre-processing algorithm.&lt;br /&gt;
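&lt;br /&gt;
For example, the squared-differences comparison over two fingerprints of equal length could be sketched as follows (the function and argument names are assumptions):&lt;br /&gt;
&lt;br /&gt;
 // Illustrative sketch: squared-differences error between two sorted spacing vectors.&lt;br /&gt;
 // Only called for a detected target and a trained pattern with the same number of dots.&lt;br /&gt;
 double matchError(const std::vector&amp;lt;double&amp;gt; &amp;amp;a, const std::vector&amp;lt;double&amp;gt; &amp;amp;b)&lt;br /&gt;
 {&lt;br /&gt;
     double err = 0.0;&lt;br /&gt;
     for (size_t i = 0; i &amp;lt; a.size(); ++i)&lt;br /&gt;
         err += (a[i] - b[i]) * (a[i] - b[i]);&lt;br /&gt;
     return err;    // the trained pattern with the lowest error is taken as the match&lt;br /&gt;
 }&lt;br /&gt;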
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position of each dot in a target is calculated as the center of mass of that dot in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are averaged to find the group center of mass, which becomes the world-coordinate position of the target.  To calculate the angle, specific angle information is extracted from the pattern and combined with the pre-processed offset angle to generate a group orientation.  The resulting position and orientation are then sent out over the user-specified serial port.&lt;br /&gt;
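&lt;br /&gt;
Assuming the dots lie in the z = 0 ground plane, one way to realize that pixel-to-world transform is to invert the 3x3 homography formed by columns 1, 2, and 4 of the projection matrix, as in the illustrative sketch below (the names and the planar assumption are not taken from the project code).&lt;br /&gt;
&lt;br /&gt;
 // Illustrative sketch: map an image point (u, v) to world (X, Y) on the z = 0 plane.&lt;br /&gt;
 #include &amp;lt;cv.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 void imageToWorld(const double P[3][4], double u, double v, double *X, double *Y)&lt;br /&gt;
 {&lt;br /&gt;
     double H[9]    = { P[0][0], P[0][1], P[0][3],    // drop the z column of P&lt;br /&gt;
                        P[1][0], P[1][1], P[1][3],&lt;br /&gt;
                        P[2][0], P[2][1], P[2][3] };&lt;br /&gt;
     double Hinv[9];&lt;br /&gt;
     CvMat Hm    = cvMat(3, 3, CV_64FC1, H);&lt;br /&gt;
     CvMat Hinvm = cvMat(3, 3, CV_64FC1, Hinv);&lt;br /&gt;
     cvInvert(&amp;amp;Hm, &amp;amp;Hinvm, CV_LU);&lt;br /&gt;
     double w = Hinv[6] * u + Hinv[7] * v + Hinv[8];  // homogeneous scale&lt;br /&gt;
     *X = (Hinv[0] * u + Hinv[1] * v + Hinv[2]) / w;&lt;br /&gt;
     *Y = (Hinv[3] * u + Hinv[4] * v + Hinv[5]) / w;&lt;br /&gt;
 }&lt;br /&gt;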
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern in each overlapping section.  Ideally, the images from the cameras would be perfectly matched, making this information simply redundant.  In practice, the simple camera calibration scheme results in slightly different data, so when the same pattern is identified in multiple images, its position information is averaged.  The other source of error is a pattern that is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of all the images at the same time: if any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8724</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8724"/>
		<updated>2008-03-28T23:29:13Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Algorithm Design */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  As opposed to an absolute system, these relative-measurement designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras must be placed so that their views overlap along the inside edges by at least the size of one target.  This is necessary to ensure any given target is always fully inside at least one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which trained pattern is associated with the dots by comparing the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each pair of dots.  In this sense, the pattern is identified by a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each pair of dots.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in network theory, the dots form a fully connected graph and thus have at most n*(n-1)/2 links.  Since a pattern contains at most 9 dots, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
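&lt;br /&gt;
As a rough illustration of this signature (a sketch only, not the project&#039;s code; the Dot structure and distanceSignature name below are ours), the pairwise distances can be collected, sorted, and normalized like this:&lt;br /&gt;
&lt;br /&gt;
 // Hypothetical sketch: build the pairwise-distance signature of one pattern.
 // Distances are normalized by the smallest spacing, so a full 3x3 grid yields
 // 36 values drawn from {1, sqrt(2), 2, sqrt(5), sqrt(8)}.
 #include &lt;vector&gt;
 #include &lt;cmath&gt;
 #include &lt;algorithm&gt;
 
 struct Dot { double x, y; };                        // dot center (pixels or grid units)
 
 std::vector&lt;double&gt; distanceSignature(const std::vector&lt;Dot&gt; &amp;dots)
 {
     std::vector&lt;double&gt; d;
     for (size_t i = 0; i &lt; dots.size(); ++i)
         for (size_t j = i + 1; j &lt; dots.size(); ++j) {
             double dx = dots[i].x - dots[j].x;
             double dy = dots[i].y - dots[j].y;
             d.push_back(std::sqrt(dx * dx + dy * dy));
         }
     std::sort(d.begin(), d.end());
     if (!d.empty() &amp;&amp; d[0] &gt; 0.0) {
         double dmin = d[0];
         for (size_t k = 0; k &lt; d.size(); ++k)
             d[k] /= dmin;                           // normalize by the smallest spacing
     }
     return d;
 }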
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note: to create different targets, various dots are removed from the full 3x3 grid.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras for both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least squares best fit model.  The calibration process needs at least 6 points, measured in both the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
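&lt;br /&gt;
Below is a minimal sketch of this linear estimation step, assuming the standard DLT formulation from Hartley and Zisserman: each world/image correspondence contributes two rows to a 2n x 12 system A*p = 0, and the least-squares solution (the smallest singular vector) is reshaped into the 3x4 projection matrix.  The sketch uses a newer OpenCV C++ interface for brevity, and the function name is ours, not the project&#039;s.&lt;br /&gt;
&lt;br /&gt;
 // Hypothetical DLT sketch: estimate the 3x4 projection matrix P from n &gt;= 6
 // world/image point pairs by solving A*p = 0 in the least-squares sense.
 #include &lt;opencv2/core/core.hpp&gt;
 #include &lt;vector&gt;
 
 cv::Mat estimateProjection(const std::vector&lt;cv::Point3d&gt; &amp;world,
                            const std::vector&lt;cv::Point2d&gt; &amp;image)
 {
     int n = (int)world.size();
     cv::Mat A = cv::Mat::zeros(2 * n, 12, CV_64F);
     for (int i = 0; i &lt; n; ++i) {
         double X = world[i].x, Y = world[i].y, Z = world[i].z;
         double u = image[i].x, v = image[i].y;
         double r1[12] = { X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u };
         double r2[12] = { 0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v };
         cv::Mat(1, 12, CV_64F, r1).copyTo(A.row(2 * i));
         cv::Mat(1, 12, CV_64F, r2).copyTo(A.row(2 * i + 1));
     }
     cv::Mat p;
     cv::SVD::solveZ(A, p);           // unit vector minimizing the norm of A*p
     return p.reshape(1, 3);          // 12x1 vector -&gt; 3x4 projection matrix
 }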
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
During actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program executes the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
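&lt;br /&gt;
For illustration only (a sketch using the OpenCV C interface of that era; the variable and function names are placeholders, not the project&#039;s), these two operations look roughly like:&lt;br /&gt;
&lt;br /&gt;
 #include &lt;cv.h&gt;
 
 // frame: captured BGR image.  Returns a newly allocated binary image in which
 // pixels above the threshold of 80 are 255 and all others 0 (caller releases it).
 IplImage *makeBinaryImage(const IplImage *frame)
 {
     IplImage *gray   = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
     IplImage *binary = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
     cvCvtColor(frame, gray, CV_BGR2GRAY);                  // 0-255 grayscale
     cvThreshold(gray, binary, 80, 255, CV_THRESH_BINARY);  // fixed-level threshold
     cvReleaseImage(&amp;gray);
     return binary;
 }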
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based only on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.  A rough sketch of this grouping step is shown below.&lt;br /&gt;
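&lt;br /&gt;
The sketch below is a simplified reconstruction, not the project&#039;s code: the structures stand in for the dotData and target lists described above, and maxSpacing is the adjustable distance threshold.&lt;br /&gt;
&lt;br /&gt;
 #include &lt;vector&gt;
 
 struct DotData { double x, y, area; };              // one contour center, in pixels
 struct Target  { std::vector&lt;DotData&gt; dots; };      // one candidate 3x3 pattern
 
 // Steps 1-3 above: attach the new dot to the first target containing a dot
 // within maxSpacing, otherwise start a new target.
 void addDotToTargets(const DotData &amp;d, std::vector&lt;Target&gt; &amp;targets, double maxSpacing)
 {
     for (size_t t = 0; t &lt; targets.size(); ++t)
         for (size_t k = 0; k &lt; targets[t].dots.size(); ++k) {
             double dx = d.x - targets[t].dots[k].x;
             double dy = d.y - targets[t].dots[k].y;
             if (dx * dx + dy * dy &lt;= maxSpacing * maxSpacing) {
                 targets[t].dots.push_back(d);        // step 2: join this target
                 return;
             }
         }
     Target fresh;                                    // step 3: start a new target
     fresh.dots.push_back(d);
     targets.push_back(fresh);
 }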
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching appears extremely robust because the spacing between dots falls at clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared-difference error calculation.  After this process, each region of dots is classified by the unique number assigned during pre-processing.&lt;br /&gt;
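&lt;br /&gt;
A rough sketch of this comparison follows; the TrainedPattern structure and function name are ours, and the signature vectors are assumed to be the sorted, normalized inter-dot distances recorded during pre-processing.&lt;br /&gt;
&lt;br /&gt;
 #include &lt;vector&gt;
 
 // Hypothetical sketch of the matching step: among trained patterns with the
 // same dot count, pick the one whose distance signature has the smallest sum
 // of squared differences to the observed target.
 struct TrainedPattern { int id; int numDots; std::vector&lt;double&gt; signature; };
 
 int bestMatch(const std::vector&lt;double&gt; &amp;observed, int numDots,
               const std::vector&lt;TrainedPattern&gt; &amp;trained)
 {
     int bestId = -1;
     double bestErr = 1e30;
     for (size_t i = 0; i &lt; trained.size(); ++i) {
         if (trained[i].numDots != numDots) continue;   // compare like with like
         double err = 0.0;
         for (size_t k = 0; k &lt; observed.size(); ++k) {
             double diff = observed[k] - trained[i].signature[k];
             err += diff * diff;                        // squared-difference error
         }
         if (err &lt; bestErr) { bestErr = err; bestId = trained[i].id; }
     }
     return bestId;                                     // -1 if nothing matched
 }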
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
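&lt;br /&gt;
One plausible way to realize the pixel-to-world step (a reconstruction of the idea, not the project&#039;s code) is to assume all targets lie in the world plane Z = 0, so that columns 1, 2, and 4 of the 3x4 projection matrix form a 3x3 homography whose inverse maps a pixel back onto that plane.  Averaging the resulting per-dot world points then gives the group center of mass described above.&lt;br /&gt;
&lt;br /&gt;
 #include &lt;opencv2/core/core.hpp&gt;
 
 // P is the 3x4 CV_64F projection matrix from the calibration step.  Assuming
 // targets lie in the world plane Z = 0, drop the Z column to get a homography
 // and invert it to map an image pixel back to world (X, Y).
 cv::Point2d pixelToWorld(const cv::Mat &amp;P, const cv::Point2d &amp;pixel)
 {
     cv::Mat H(3, 3, CV_64F);
     P.col(0).copyTo(H.col(0));
     P.col(1).copyTo(H.col(1));
     P.col(3).copyTo(H.col(2));                 // drop the Z column (Z = 0 plane)
 
     cv::Mat uv = (cv::Mat_&lt;double&gt;(3, 1) &lt;&lt; pixel.x, pixel.y, 1.0);
     cv::Mat XY = H.inv() * uv;                 // homogeneous world coordinates
     double w = XY.at&lt;double&gt;(2, 0);
     return cv::Point2d(XY.at&lt;double&gt;(0, 0) / w, XY.at&lt;double&gt;(1, 0) / w);
 }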
&lt;br /&gt;
&lt;br /&gt;
==== Blind Spot Rejection ====&lt;br /&gt;
Since the images recorded by the cameras overlap by at least the size of one pattern, the algorithm will identify the same pattern in each overlapping section.  Ideally, the cameras&#039; images would be perfectly matched so that this information is redundant.  In practice, the simple camera calibration scheme produces slightly different data from each camera.  When the same pattern is identified in multiple images, its position information is averaged.  The other source of error is a pattern that is only partially visible in one image (which can make it appear as a different pattern).  To correct this problem, the algorithm analyzes the individual outputs of each image at the same time.  If any two patterns are within the thresholded distance of each other, the program rejects the pattern with fewer dots.&lt;br /&gt;
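&lt;br /&gt;
A hedged sketch of this merge/reject rule is shown below; the Detection structure, field names, and mergeDist parameter are ours, not the project&#039;s.&lt;br /&gt;
&lt;br /&gt;
 #include &lt;vector&gt;
 
 struct Detection { int id; int numDots; double x, y; bool keep; };
 
 // Cross-camera consistency check: if two detections fall within mergeDist of
 // each other they are treated as the same physical pattern seen by overlapping
 // cameras.  Same ID: average the positions.  Different ID (one camera saw only
 // part of the pattern): keep the detection with more dots.
 void rejectBlindSpotDuplicates(std::vector&lt;Detection&gt; &amp;found, double mergeDist)
 {
     for (size_t a = 0; a &lt; found.size(); ++a)
         for (size_t b = a + 1; b &lt; found.size(); ++b) {
             if (!found[a].keep || !found[b].keep) continue;
             double dx = found[a].x - found[b].x, dy = found[a].y - found[b].y;
             if (dx * dx + dy * dy &gt; mergeDist * mergeDist) continue;
             if (found[a].id == found[b].id) {
                 found[a].x = 0.5 * (found[a].x + found[b].x);  // average duplicates
                 found[a].y = 0.5 * (found[a].y + found[b].y);
             } else if (found[b].numDots &gt; found[a].numDots) {
                 found[a] = found[b];                           // keep the fuller pattern
             }
             found[b].keep = false;                             // drop the other copy
         }
 }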
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8723</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8723"/>
		<updated>2008-03-28T23:24:33Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Real Time Operation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras must be placed so that their views overlap along the inside edges by at least the size of one target.  This is necessary to ensure any given target is always fully inside at least one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which trained pattern is associated with the dots by comparing the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each pair of dots.  In this sense, the pattern is identified by a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each pair of dots.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in network theory, the dots form a fully connected graph and thus have at most n*(n-1)/2 links.  Since a pattern contains at most 9 dots, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note: to create different targets, various dots are removed from the full 3x3 grid.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras for both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least squares best fit model.  The calibration process needs at least 6 points, measured in both the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
During actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as all four cameras have reported a new frame, the program executes the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based only on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching appears extremely robust because the spacing between dots falls at clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared-difference error calculation.  After this process, each region of dots is classified by the unique number assigned during pre-processing.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8722</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8722"/>
		<updated>2008-03-28T23:23:26Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras must be placed so that their views overlap along the inside edges by at least the size of one target.  This is necessary to ensure any given target is always fully inside at least one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used for each object.  To create both position and orientation, each pattern must have at least 3 dots in a rotationally invariant configuration.  The algorithm identifies unique patterns by assuming dots within a certain adjustable distance are part of the same pattern.  It then identifies which trained pattern is associated with the dots by comparing the relative distances between the dots.&lt;br /&gt;
&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each pair of dots.  In this sense, the pattern is identified by a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each pair of dots.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in network theory, the dots form a fully connected graph and thus have at most n*(n-1)/2 links.  Since a pattern contains at most 9 dots, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note: to create different targets, various dots are removed from the full 3x3 grid.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras for both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least squares best fit model.  The calibration process needs at least 6 points, measured in both the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
During actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as a new frame is available, the program executes the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based only on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching appears extremely robust because the spacing between dots falls at clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared-difference error calculation.  After this process, each region of dots is classified by the unique number assigned during pre-processing.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8721</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8721"/>
		<updated>2008-03-28T23:20:01Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras must be placed so that their views overlap along the inside edges by at least the size of one target.  This is necessary to ensure any given target is always fully inside at least one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== How to Use The System ==&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used.  To create both position and orientation, the patterns must have at least 3 dots in a rotationally invariant configuration.&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each pair of dots.  In this sense, the pattern is identified by a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each pair of dots.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in network theory, the dots form a fully connected graph and thus have at most n*(n-1)/2 links.  Since a pattern contains at most 9 dots, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note: to create different targets, various dots are removed from the full 3x3 grid.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras for both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least squares best fit model.  The calibration process needs at least 6 points, measured in both the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
During actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as a new frame is available, the program executes the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based only on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching appears extremely robust because the spacing between dots falls at clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared-difference error calculation.  After this process, each region of dots is classified by the unique number assigned during pre-processing.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8720</id>
		<title>Indoor Localization System</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Indoor_Localization_System&amp;diff=8720"/>
		<updated>2008-03-28T23:07:28Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Motivation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Motivation ==&lt;br /&gt;
&lt;br /&gt;
For relatively simple autonomous robots, knowing an absolute position in the world frame is a very complex challenge.  Many systems attempt to approximate this positioning information using relative measurements from encoders or local landmarks.  Unlike an absolute system, these relative designs are subject to accumulating errors.  In this design, the positioning information is calculated by an external computer, which then transmits the data over a wireless module.&lt;br /&gt;
&lt;br /&gt;
This system can be envisioned as an indoor GPS, where positioning information of known patterns is transmitted over a wireless module available for anyone to read.  Unlike a GPS, this vision system is designed for indoor use on a smaller scale (in/cm).&lt;br /&gt;
&lt;br /&gt;
== Overview of Design ==&lt;br /&gt;
&lt;br /&gt;
This system uses four standard webcams to locate known patterns in a real time image, and transmit positioning information over a serial interface.  This serial interface is most often connected to a wireless Zigbee® module.  The cameras are mounted in fixed positions above the target area.  The height of the cameras can be adjusted to increase either the positioning resolution or the area of the world frame.  These constraints are a function of the field of view of the lenses.  Below is a diagram illustrating this system’s basic setup.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_system.jpg|center|thumb|600px|Overview of System]]&lt;br /&gt;
&lt;br /&gt;
Here, we can see the four cameras are mounted rigidly above the world frame.  Note that the cameras must be placed so that their views overlap along the inside edges by at least the size of one target.  This is necessary to ensure any given target is always fully inside at least one camera’s frame.&lt;br /&gt;
&lt;br /&gt;
== Goals ==&lt;br /&gt;
•	To provide real-time (X, Y, θ) position information to an arbitrary number of targets (&amp;lt;20) in a fixed world frame using a home-made computer vision system.&lt;br /&gt;
&lt;br /&gt;
•	Maximize throughput and accuracy&lt;br /&gt;
&lt;br /&gt;
•	Minimize latency and noise&lt;br /&gt;
&lt;br /&gt;
•	Easy re-calibration of camera poses.&lt;br /&gt;
&lt;br /&gt;
•	Reduced cost (as compared to real-time operating systems and frame grabbing technology)&lt;br /&gt;
&lt;br /&gt;
== Tools Used ==&lt;br /&gt;
=== Software ===&lt;br /&gt;
•	IDE: Microsoft Visual C++ Express Edition – freeware (http://www.microsoft.com/express/default.aspx)&lt;br /&gt;
&lt;br /&gt;
•	Vision Library: Intel OpenCV – open source c++  (http://sourceforge.net/projects/opencvlibrary/) &lt;br /&gt;
&lt;br /&gt;
•	Camera Capture Library: VideoInput – open source c++ (http://muonics.net/school/spring05/videoInput/)&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
&lt;br /&gt;
•	Four Logitech QuickCam Communicate Deluxe USB2.0 webcams&lt;br /&gt;
&lt;br /&gt;
•	One 4-port USB2.0 Hub&lt;br /&gt;
&lt;br /&gt;
•	Computer to run algorithm&lt;br /&gt;
&lt;br /&gt;
== Algorithm Design ==&lt;br /&gt;
=== Overview ===&lt;br /&gt;
To identify different objects and transmit position and rotation information, a 3x3 pattern of black circles is used.  To create both position and orientation, the patterns must have at least 3 dots in a rotationally invariant configuration.&lt;br /&gt;
=== Pre-Processing ===&lt;br /&gt;
==== Target Classification ====&lt;br /&gt;
Before the system is executed in real time, two tasks must be completed.  The first is pre-processing the possible targets to match (the patterns of 3x3 dots).  This is done by first creating a subset of targets from the master template.  Each target must be invariant in rotation, reflection, scale and translation.  A set of targets has been included in the final project, with sample patterns.  When the program is executed in real time, it will only identify targets from the trained subset of patterns.&lt;br /&gt;
&lt;br /&gt;
When the targets are pre-processed, unique information is recorded to identify each pattern.  In particular, the algorithm counts the number of dots and the relative spacing between each pair of dots.  In this sense, the pattern is identified by a unique number (corresponding to the order of target patterns in the image directory), the number of dots in the pattern, and the normalized spacing between each pair of dots.  Since the pattern is a fixed 3x3 grid, the only possible spacings between dots are 1, √2, 2, √5, or √8 units.  As in network theory, the dots form a fully connected graph and thus have at most n*(n-1)/2 links.  Since a pattern contains at most 9 dots, the maximum number of interspacing distances is 9*8/2 = 36.&lt;br /&gt;
&lt;br /&gt;
The last piece of information recorded is the orientation of the target.  Below is a picture of this configuration.  Note: to create different targets, various dots are removed from the full 3x3 grid.&lt;br /&gt;
&lt;br /&gt;
[[Image:visual_localization_patterns.jpg|center|thumb|300px|Pattern Recognition]]&lt;br /&gt;
&lt;br /&gt;
==== Camera Calibration ====&lt;br /&gt;
The second required step before the system can be used is calibrating the cameras for both their intrinsic parameters (focal length, geometric distortions, pixel to plane transformation) and their extrinsic pose parameters (rotation and translation of origins).  In other words, the pixels in the image must be correlated to the world frame in centimeters or inches.  This step is performed using a simple linear least squares best fit model.  The calibration process needs at least 6 points, measured in both the world and image frames, to compute a 3x4 projection matrix.  In practice, we use more than these 6 points to add redundancy and compute a more accurate projection matrix.  The method used is outlined in &#039;&#039;Multiple View Geometry in Computer Vision&#039;&#039; by Richard Hartley and Andrew Zisserman.&lt;br /&gt;
&lt;br /&gt;
=== Real Time Operation ===&lt;br /&gt;
During actual operation, the program runs in an infinite loop performing three basic tasks.  As soon as a new frame is available, the program executes the steps outlined below.&lt;br /&gt;
&lt;br /&gt;
==== Image Formation ====&lt;br /&gt;
First, the computer converts the image to a grayscale (0-255) image.  After this operation is performed, the algorithm thresholds the image based on a set level.  This converts the grayscale image to a binary image by converting colors above the threshold to white (255) and all other values to black (0).  After these two operations are performed, the resultant image is a black and white binary image.  The default threshold level is set to 80.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Isolation ====&lt;br /&gt;
&lt;br /&gt;
After the image has been prepared in the binary format, all the contours must be outlined.  This process is known as connected-component-labeling (CCL) in computer vision.  This process identifies continuous “blobs” or regions of pixels that are adjacent.  In OpenCV the connected components are stored into a linked list data structure called a contour.&lt;br /&gt;
Fortunately, the connected component labeling process is a native function of OpenCV and we only have to process the resultant contour data structure.  To do this, the program iterates through each contour in the data structure, extracting position and area information (in pixels).  This data is stored into a custom data structure called dotData.  While processing each contour, the algorithm groups the individual dots into their 3x3 patterns by using a second custom data structure called a “target”.  The target data structure is a linked list and has elements for the number of dots in the pattern, dotData structures for each dot in the pattern, and other data to be generated later such as group position and orientation.  To group the dots into targets, the program follows the outlined algorithm.&lt;br /&gt;
&lt;br /&gt;
 1.	For each new dot, check each dot inside each existing target.&lt;br /&gt;
 &lt;br /&gt;
 2.	If there exists a dot in any target such that the new dot is within the maximum spacing between dots, add the new dot to this target.&lt;br /&gt;
 &lt;br /&gt;
 3.	Otherwise, create a new target and add the new dot to it.&lt;br /&gt;
&lt;br /&gt;
We can see that this is not necessarily the best method for grouping dots together, as it is based only on a maximum distance between dots.  In fact, this forces the patterns to be a certain distance away from each other to avoid confusion.  In concept, the patterns should be at least half the size of the robot to avoid this conflict.  The benefits of this classification scheme are its simplicity and computational speed.&lt;br /&gt;
&lt;br /&gt;
==== Pattern Identification ====&lt;br /&gt;
Once the contours (dots) have been grouped into the linked list of targets, the targets must be matched to the trained patterns.  This matching appears extremely robust because the spacing between dots falls at clean, quantized intervals.  To match each target to a trained pattern, the algorithm compares it against all trained patterns with the same number of dots and searches for the best match.  The comparison between patterns with the same number of dots is done with a simple squared-difference error calculation.  After this process, each region of dots is classified by the unique number assigned during pre-processing.&lt;br /&gt;
&lt;br /&gt;
==== Position and Angle ====&lt;br /&gt;
To calculate the position and orientation of each target, the camera calibration matrix comes into play.  First, the position is calculated by finding the center of mass of each dot in a target in the image frame (pixels).  For each dot, the camera calibration matrix is used to transform this data into world coordinates.  Finally, the world coordinates of the dots are summed to find the group center of mass.  This group center of mass becomes the world coordinates for the target position.  To calculate the angle, specific angle information is extracted from the patterns.  This angle information is then used in combination with the pre-processed offset angle to generate a group orientation.  These two vectors are then sent out to the user-specified serial port.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Final Project Code ==&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Intelligent_Oscillation_Controller&amp;diff=8148</id>
		<title>Intelligent Oscillation Controller</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Intelligent_Oscillation_Controller&amp;diff=8148"/>
		<updated>2008-03-20T17:13:00Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Circuitry */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Team Members ==&lt;br /&gt;
&#039;&#039;&#039;Scott Mcleod:&#039;&#039;&#039; &#039;&#039;Electrical Engineering Class of 2009&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Brett Pihl:&#039;&#039;&#039; &#039;&#039;Mechanical Engineering Class of 2008&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sandeep Prabhu:&#039;&#039;&#039; &#039;&#039;Mechanical Engineering Class of 2008&lt;br /&gt;
&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
&lt;br /&gt;
The overall goal of this project is to create a system that applies a forcing function to a basic spring-mass-wall system to achieve an arbitrary periodic acceleration profile (a combination of 10 Hz and 20 Hz sine waves in our system) on the mass. An accelerometer is mounted on the mass. A PIC microcontroller records this acceleration data and controls a speaker (with the help of a DAC) that provides the external force to the system. The PIC communicates with MATLAB via serial RS-232 communication. MATLAB processes this data and sends back a control signal for the speaker. After several iterations, the actual acceleration profile of the mass begins to match the chosen profile it is told to &amp;quot;learn.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Mechanics ==&lt;br /&gt;
&lt;br /&gt;
The basic mechanical system for this device is a simple one, but must be assembled with precision. The major components are the speaker, linear ball bearings on a precision rod, and a spring. First, the speaker is mounted perpendicular to the ground. This must then be attached via a rod to the mass. We tried several approaches, but what seemed to be the best solution was to epoxy a roughly one-inch section of PVC pipe to the center of the speaker. The diameter of the pipe we used matched the diameter of the junction in the speaker where the cone turns from concave to convex. This pipe also had two tapped holes running along the length of the section for a plate to attach to. The plate attached to this PVC also had a threaded hole in the center for the rod that attaches to the mass. The forces exerted by the speaker are small enough (our measurements show only about 3 Newtons) that only about an 8-ounce mass is needed. The bearing we used didn&#039;t require any additional mass to satisfy this constraint. The other end of the rod simply attached to the block that we machined and mounted atop the bearing. A piece of sheet metal was screwed into the other side of the block with spacers. This piece of metal is used to anchor the spring to the mass, and allows the spring to be removed easily. A similar piece of sheet metal is attached to the wall on the opposite side of the spring. However, this sheet has a vertical slot about an eighth of an inch wide cut from the bottom. This allows the coil of the spring to slide up further on the plate, thereby creating a more solid connection.&lt;br /&gt;
&lt;br /&gt;
It is important to design each component with all of the other components in mind. In particular, make sure that the linear slide is level and that the rod attached to the speaker is centered, level, and in line with both the mount on the mass and the spring on the opposite side of the mass. This helps ensure that all motion in the system is one dimensional.&lt;br /&gt;
&lt;br /&gt;
The above was not the first iteration of our mechanical design. We originally used a homemade linear slide, but its loose tolerances allowed side-to-side motion and produced unreliable Bode plots of the system. The first iteration also had the rod epoxied directly to the mass and speaker; during initial testing this connection between the speaker and rod actually severed. &lt;br /&gt;
&lt;br /&gt;
The current design is advantageous because it is modular. The threaded rod allows for minor distance changes to ensure the spring attached to the wall is at its natural rest length. To attach it, the plate is detached from the PVC on the speaker and screwed onto the rod. With the spring detached, the other end of the rod is screwed into the threaded hole on the mass. Finally, the plate is screwed back onto the PVC through the tapped holes. The non-permanent spring attachment also allows springs with different k-values to be added to the system. If the spring is longer or shorter than desired, a simple change in rod length is all that is needed to incorporate the new spring into the system.&lt;br /&gt;
&lt;br /&gt;
== Circuitry ==&lt;br /&gt;
&lt;br /&gt;
[[Image:LearningOscMC |thumb|150px|right| Main Circuit]]&lt;br /&gt;
&lt;br /&gt;
[[Image:LearningOscAM |thumb|150px|right| Accelerometer on mass]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The main elements of the circuit were a PIC, a DAC ([http://hades.mech.northwestern.edu/wiki/index.php/PIC18F4520:_Serial_Digital-to-Analog_Conversion Digital-to-Analog converter]) and an accelerometer. The PIC would store a discretized sine waveform with integer values from 0-255. It would then output a function of this sine wave (our control signal) to the DAC. The analog signal output from the DAC would be sent to the amplifier, which would then power the speaker. The accelerometer on the mass would feed the actual acceleration profile of the mass back to Matlab. Matlab would then recompute the next control signal and repeat the cycle until the mass was moving with the desired acceleration profile.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Programming ==&lt;br /&gt;
=== PIC Code ===&lt;br /&gt;
In our project, a PIC was used to drive the speaker via an I2C digital-to-analog converter and to record accelerometer values at fixed intervals.  Since the control algorithm requires too much processing for the PIC, all of the computations are performed in Matlab after the accelerometer data is transferred to a computer over a serial cable.  This setup simplified the program for the PIC.&lt;br /&gt;
&lt;br /&gt;
To generate the waveform of voltage levels sent to the speaker, a single quantized period was used.  Since the waveform is periodic in nature, the wave can be repeated indefinitely in a continuous fashion.  The nature of our processing algorithm constrained the number of samples for the accelerometer data to be equal to the number of samples for the control voltage.&lt;br /&gt;
&lt;br /&gt;
Since our system used a waveform of fixed intervals, we used an interrupt service routine (ISR) to update the wave and record accelerations at precise intervals.  We chose to sample each signal every 1 ms, a rate achievable for both the I2C writes and the ISR.  To oscillate our system at 10 and 20 Hz, we needed at least 100 samples per waveform ((1 / 10 Hz) / 1 ms = 100 samples per waveform).  For this reason, we created two 100-byte vectors: the control voltage ‘u’ and the acceleration data ‘acc’.&lt;br /&gt;
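&lt;br /&gt;
A minimal sketch of this ISR is shown below.  It is not the full project code (linked beneath); dac_write() and read_accel() are stand-ins for the actual I2C DAC write and accelerometer A/D read, and the timer setup is omitted.&lt;br /&gt;
&lt;br /&gt;
 #define NSAMP 100              // 100 samples at 1ms per sample =&amp;gt; 10Hz base period&lt;br /&gt;
 &lt;br /&gt;
 int8 u[NSAMP];                 // control voltage waveform received from Matlab&lt;br /&gt;
 int8 acc[NSAMP];               // measured acceleration, sent back to Matlab&lt;br /&gt;
 int8 idx = 0;&lt;br /&gt;
 &lt;br /&gt;
 #int_timer2                    // ISR fires every 1ms&lt;br /&gt;
 void isr_1ms()&lt;br /&gt;
 {&lt;br /&gt;
    dac_write(u[idx]);          // push the next control sample to the speaker via the DAC&lt;br /&gt;
    acc[idx] = read_accel();    // record the accelerometer at the same instant&lt;br /&gt;
    idx++;&lt;br /&gt;
    if (idx == NSAMP)           // wrap around so the periodic wave repeats seamlessly&lt;br /&gt;
       idx = 0;&lt;br /&gt;
 }&lt;br /&gt;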
&lt;br /&gt;
 [[Media:ME333_Learning_Oscillator_pic_main.c|&#039;&#039;&#039;PIC Learning Control&#039;&#039;&#039;]]&lt;br /&gt;
 [[Media:ME333_Learning_Oscillator_pic_bode.c|&#039;&#039;&#039;PIC Bode Plot Generator&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
=== Matlab Code ===&lt;br /&gt;
The Matlab code simply follows the protocol established above.  The user specifies in Matlab the amplitude and phase parameters of the desired acceleration waveform.  Note that the frequencies of these waves are fixed at 10 and 20 Hz (as constrained by the transfer functions captured from the Bode plot and by the 100 samples/wave period).&lt;br /&gt;
&lt;br /&gt;
Below are the five m-files used in Matlab.  Because they are functions, each file must be renamed to the last part of its filename before use.&lt;br /&gt;
&lt;br /&gt;
 [[Media:ME333_Learning_Oscillator_main.m|&#039;&#039;&#039;Main Matlab Loop&#039;&#039;&#039;]]&lt;br /&gt;
 [[Media:ME333_Learning_Oscillator_learn.m|&#039;&#039;&#039;Learning Control Algorithm&#039;&#039;&#039;]]&lt;br /&gt;
 [[Media:ME333_Learning_Oscillator_smooth.m|&#039;&#039;&#039;Smoothing Algorithm&#039;&#039;&#039;]]&lt;br /&gt;
 [[Media:ME333_Learning_Oscillator_startplot.m|&#039;&#039;&#039;Plot Initialization Code&#039;&#039;&#039;]]&lt;br /&gt;
 [[Media:ME333_Learning_Oscillator_bode.m|&#039;&#039;&#039;Bode Plot Generator&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
=== Program Flow ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Oscillator_Program_Flow.jpg|center|thumb|400px|Program Flow]]&lt;br /&gt;
&lt;br /&gt;
=== Control System ===&lt;br /&gt;
To “learn” the control voltages that create a desired waveform, a simple proportional control system was used.  We should note that much of this code was developed, tested, and debugged by Tom Vose, who was invaluable to our final project.  Below is our understanding of Tom’s control algorithm.&lt;br /&gt;
&lt;br /&gt;
The system first guesses a control voltage of all zeros.  This ideally results in no forcing from the speaker and a flat acceleration response.  After this initial guess, the program uses proportional control to match the desired acceleration waveform.  Mathematically, the error is computed by subtracting the Fast Fourier Transform (fft) of the measured acceleration from the fft of the desired acceleration.  This error is multiplied by the proportional gain k and by two discrete Bode plot values corresponding to the transfer function from voltage to acceleration at 10 and 20 Hz.  The resulting signal is the control signal u in the frequency domain.  From here, the control signal is converted back to the time domain via an inverse fft and sent to the PIC.  All of the math is computed in discrete time on waveforms 100 samples long.  A sketch of one learning iteration is given below, followed by a block diagram of the control system in standard unity-feedback form.&lt;br /&gt;
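&lt;br /&gt;
The sketch below (written in C rather than Matlab) shows one learning iteration as we understand it; the actual implementation is in learn.m above.  Because only the 10 and 20 Hz components matter, the DFT of the error is evaluated directly at bins 1 and 2 of the 100-sample, 1 ms-spaced frame.  The gain k and the Bode-derived factors B_re/B_im (roughly the inverse of the voltage-to-acceleration transfer function at 10 and 20 Hz) are illustrative names, not the names used in the project files, and the additive update to the previous control wave is our reading of how the iterations converge.&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;math.h&amp;gt;&lt;br /&gt;
 #define N 100&lt;br /&gt;
 &lt;br /&gt;
 /* One proportional learning step: u is the current control wave (updated in&lt;br /&gt;
    place), a_des and a_meas are the desired and measured acceleration waves. */&lt;br /&gt;
 void learn_step(double u[N], const double a_des[N], const double a_meas[N],&lt;br /&gt;
                 const double B_re[2], const double B_im[2], double k)&lt;br /&gt;
 {&lt;br /&gt;
    const double pi = 3.14159265358979;&lt;br /&gt;
    int bins[2] = {1, 2};                          // bin 1 = 10Hz, bin 2 = 20Hz&lt;br /&gt;
    int b, n;&lt;br /&gt;
    for (b = 0; b &amp;lt; 2; b++) {&lt;br /&gt;
       double w = 2.0*pi*bins[b]/N;&lt;br /&gt;
       double er = 0.0, ei = 0.0;&lt;br /&gt;
       for (n = 0; n &amp;lt; N; n++) {                  // DFT of the error at this bin&lt;br /&gt;
          double e = a_des[n] - a_meas[n];&lt;br /&gt;
          er += e*cos(w*n);&lt;br /&gt;
          ei -= e*sin(w*n);&lt;br /&gt;
       }&lt;br /&gt;
       // frequency-domain proportional update: du(f) = k * B(f) * error(f)&lt;br /&gt;
       double dur = k*(B_re[b]*er - B_im[b]*ei);&lt;br /&gt;
       double dui = k*(B_re[b]*ei + B_im[b]*er);&lt;br /&gt;
       for (n = 0; n &amp;lt; N; n++)                    // inverse DFT, added to the control wave&lt;br /&gt;
          u[n] += (2.0/N)*(dur*cos(w*n) - dui*sin(w*n));&lt;br /&gt;
    }&lt;br /&gt;
 }&lt;br /&gt;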
&lt;br /&gt;
[[Image:Oscillator_Control_System.jpg|center|933x200 px|Learning Control System]]&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
Below are three tests performed on various superpositions of sine waves at 10 and 20 Hz.  We adjusted the phase of the waves to produce different results.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery caption=&amp;quot;Matlab Plots: Desired vs. Experimental Acceleration&amp;quot;&amp;gt;&lt;br /&gt;
Image:Oscillator_test1.jpg|Wave 1&lt;br /&gt;
Image:Oscillator_test2.jpg|Wave 2&lt;br /&gt;
Image:Oscillator_test3.jpg|Wave 3&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each diagram shows one period of an acceleration profile.  The faint blue curve in the left subplot is the desired acceleration.  The red lines on top of the blue curve are the experimental accelerations as recorded by the PIC.  We can see that the red lines converge on top of the desired blue motion.  The right subplot (green) shows the learned voltage waveform that creates the desired acceleration profile.&lt;br /&gt;
&lt;br /&gt;
Each wave follows the equation below with the given parameters:&lt;br /&gt;
&lt;br /&gt;
acceleration_desired = amp1*sin(2*pi*base_freq*t+phi1) + amp2*sin(2*pi*(2*base_freq)*t+phi2)&lt;br /&gt;
&lt;br /&gt;
Wave 1: phi1 = 1, phi2 = 2&lt;br /&gt;
Wave 2: phi1 = 0, phi2 = 0&lt;br /&gt;
Wave 3: phi1 = 1, phi2 = 0&lt;br /&gt;
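&lt;br /&gt;
As a concrete illustration, the sketch below (in C) builds one 100-sample period of the desired acceleration from the equation above, with base_freq = 10 Hz and one sample per 1 ms.  The amplitudes amp1 and amp2 used for the three test waves are not listed above, so they are left as parameters.&lt;br /&gt;
&lt;br /&gt;
 #include &amp;lt;math.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 /* Build one 100-sample period of the desired acceleration (1ms per sample). */&lt;br /&gt;
 void desired_wave(double a_des[100], double amp1, double phi1,&lt;br /&gt;
                   double amp2, double phi2)&lt;br /&gt;
 {&lt;br /&gt;
    const double pi = 3.14159265358979;&lt;br /&gt;
    const double base_freq = 10.0;                 // Hz&lt;br /&gt;
    int n;&lt;br /&gt;
    for (n = 0; n &amp;lt; 100; n++) {&lt;br /&gt;
       double t = n*0.001;                         // time in seconds, 1ms spacing&lt;br /&gt;
       a_des[n] = amp1*sin(2*pi*base_freq*t + phi1)&lt;br /&gt;
                + amp2*sin(2*pi*(2*base_freq)*t + phi2);&lt;br /&gt;
    }&lt;br /&gt;
 }&lt;br /&gt;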
&lt;br /&gt;
== Potential Applications ==&lt;br /&gt;
&lt;br /&gt;
A common question regarding this project is its applications to the real world. In the Northwestern University LIMS lab, a similar type of undertaking is being researched, but on a much grander scale. This same type of oscillation control is being done for 6 dimensions (X, Y, Z, Roll, Pitch, Yaw). However, the microprocessors used in this type of control are extremely expensive, and this one dimensional test of a learning system provides a possibly cheaper solution. The six dimensional control system has possible real-world applications in product assembly.&lt;br /&gt;
&lt;br /&gt;
== Reflections ==&lt;br /&gt;
Great results were achieved with the learning algorithm. For engineers working on it in the future, here are some topics for further investigation to gain a more complete understanding of the control system:&lt;br /&gt;
&lt;br /&gt;
* In this project, the control signal was manually phase-shifted by pi radians before outputting it to the speaker&lt;br /&gt;
** This made the algorithm work perfectly&lt;br /&gt;
** It is not well-understood why this step was necessary&lt;br /&gt;
** The algorithm would not work otherwise&lt;br /&gt;
&lt;br /&gt;
* The transfer function used in the FFT-domain update in this project did not have an imaginary component&lt;br /&gt;
** The algorithm was robust enough that it still worked perfectly, shifting phase and &#039;learning&#039; as the program ran&lt;br /&gt;
** In future experiments, a transfer function including an imaginary term could be used, to fully utilize the capabilities of the algorithm&lt;br /&gt;
&lt;br /&gt;
* Faster learning&lt;br /&gt;
** The various constants in the algorithm equations can be tweaked for faster learning&lt;br /&gt;
** Currently, it takes about 10-20 iterations to hit the desired waveform&lt;br /&gt;
** More rapid learning would be a huge benefit in real-world applications&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* [http://electronics.howstuffworks.com/speaker5.htm How speakers work]&lt;br /&gt;
* [[Iterative Learning Control]]&lt;br /&gt;
* [http://lims.mech.northwestern.edu/~lynch/ Professor Kevin Lynch]&lt;br /&gt;
* [http://lims.mech.northwestern.edu/students/vose/ Tom Vose], author of the learning algorithm used in this project&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=File:ME333_learning_oscillator.jpg&amp;diff=7981</id>
		<title>File:ME333 learning oscillator.jpg</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=File:ME333_learning_oscillator.jpg&amp;diff=7981"/>
		<updated>2008-03-20T00:23:12Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: Project Picture&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Project Picture&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=ME_333_final_projects&amp;diff=7980</id>
		<title>ME 333 final projects</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=ME_333_final_projects&amp;diff=7980"/>
		<updated>2008-03-20T00:22:48Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Intelligent Oscillation Controller */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;[[ME 333 end of course schedule]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== ME 333 Final Projects 2008 ==&lt;br /&gt;
&lt;br /&gt;
=== [[IR Tracker]] ===&lt;br /&gt;
&lt;br /&gt;
[[Image:IR_Tracker_Main.jpg|right|thumb|200px]]&lt;br /&gt;
&lt;br /&gt;
The IR Tracker (aka &amp;quot;Spot&amp;quot;) is a device that follows a moving infrared light. It continuously detects the position of an infrared emitter in two axes, and then tracks the emitter with a laser.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [[Robot Snake]] ===&lt;br /&gt;
[[Image:HLSSnakeMain.jpg|right|thumb|200px]]&lt;br /&gt;
&lt;br /&gt;
A wirelessly controlled robotic snake which uses a traveling sine wave and servo motors to  mimic serpentine motion.  The snake is capable of going forward, left, right and in reverse.   &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [[Programmable Stiffness Joint]] === &lt;br /&gt;
&lt;br /&gt;
[[Image:SteelToePic.jpg|thumb|200px|The &#039;Steel Toe&#039; programmable stiffness joint|right]]&lt;br /&gt;
&lt;br /&gt;
The Programmable Stiffness Joint varies rotational stiffness as desired by the user.  It is the first step in modeling the mechanical impedance of the human ankle joint (both stiffness and damping) for the purpose of determining the respective breakdown of the two properties over the gait cycle.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [[Magnetic based sample purification]] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== [[Continuously Variable Transmission]] ===&lt;br /&gt;
&lt;br /&gt;
[[image:CVT_setup1.jpg|thumb|200px]]&lt;br /&gt;
&lt;br /&gt;
A continuously variable transmission is intended to provide a smooth transition from low to high gear ratios while keeping the engine input running at its most efficient speed. This is achieved by a system of variable-radius pulleys and a V-belt.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [[Granular Flow Rotating Sphere]] ===&lt;br /&gt;
&lt;br /&gt;
This device will be used to study the granular flow of particles within a rotating sphere. The sphere is filled with grains of varying size and then rotated about two different axes according to a series of position and angular velocity inputs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [[Vibratory Clock]] ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Vibratory_Clock.jpg|right|thumb|Vibratory Clock|200px]]&lt;br /&gt;
&lt;br /&gt;
The Vibratory Clock allows a small object to act as an hour &amp;quot;hand&amp;quot; on a horizontal circular platform that is actuated from underneath by three speakers.  The object slides around the circular platform, impelled by friction forces due to the vibration.  [http://www.youtube.com/watch?v=PV9utFL5J6w Check it out!]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [[WiiMouse]] ===&lt;br /&gt;
&lt;br /&gt;
[[Image:HPIM1027.jpg|right|thumb|200px]]&lt;br /&gt;
&lt;br /&gt;
The WiiMouse is a handheld remote that can be used to move a cursor on a Windows-based PC, via accelerometer input captured through device movement.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [[Intelligent Oscillation Controller]] ===&lt;br /&gt;
&lt;br /&gt;
[[image:ME333_learning_oscillator.jpg|thumb|200px]]&lt;br /&gt;
&lt;br /&gt;
This device &amp;quot;learns&amp;quot; a forcing function that is applied to a spring and mass system to match an arbitrary, periodic acceleration profile.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [[Baseball]] ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Baseball_Playfield.jpg|right|thumb|200px]]&lt;br /&gt;
&lt;br /&gt;
An interactive baseball game inspired by pinball, featuring pitching, batting, light up bases and a scoreboard to keep track of the game.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Interfacing_to_External_EEPROM&amp;diff=7979</id>
		<title>Interfacing to External EEPROM</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Interfacing_to_External_EEPROM&amp;diff=7979"/>
		<updated>2008-03-20T00:19:19Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Circuit */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
An external EEPROM can be useful for several different reasons.&lt;br /&gt;
External EEPROMs allow much more data to be stored than is available on the 18F4520.  In addition, EEPROM memory saves state when power is removed.&lt;br /&gt;
&lt;br /&gt;
In this project, we interfaced to a Microchip EEPROM in random read/write mode.  &amp;quot;Random&amp;quot; access means that the memory locations accessed need not be sequential.  Although this mode can also be used to access data sequentially in the EEPROM, there is a different protocol structure for sequential reads that increases throughput; we only developed the random read/write protocol here.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Circuit ==&lt;br /&gt;
&lt;br /&gt;
We used a Microchip 24FC515 external EEPROM.&lt;br /&gt;
(http://ww1.microchip.com/downloads/en/devicedoc/21673E.pdf)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To the right is the circuit diagram for interfacing to the 18F4520.&lt;br /&gt;
&lt;br /&gt;
[[Image:i2c_eeprom_circuit.jpg|500x400 px|right]]&lt;br /&gt;
&lt;br /&gt;
PIN1 and PIN2 (A0 and A1) are hardware addressing pins.  With both tied low, as in this circuit, the chip responds to the I2C control bytes 0xA0 (writing data) and 0xA1 (reading data).  The first 4 bits of the control byte are internally set to 1010 (A) and the second four correspond to the block of memory (0 or 1), the hardware address pins A0 and A1 (tied low in this circuit diagram), and read/(NOT write).&lt;br /&gt;
&lt;br /&gt;
PIN3, A2 must be connected to high for normal operation.&lt;br /&gt;
&lt;br /&gt;
PIN4, Vss is for grounding the chip.&lt;br /&gt;
&lt;br /&gt;
PIN5, SDA is the serial data line.  This is connected to RC4 (PIN23) on the PIC.&lt;br /&gt;
&lt;br /&gt;
PIN6, SCL is the serial clock line.  This is connected to RC3 (PIN18) on the PIC; this wire must be soldered on the left side of the board.&lt;br /&gt;
&lt;br /&gt;
PIN7, WP is the write protect line which has two states.  When connected to logic high, normal read/write operation can be performed.  When this pin is set low, only read operations are permitted.&lt;br /&gt;
&lt;br /&gt;
PIN8, Vcc is required to power the EEPROM.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:i2c_eeprom_circuit_picture.jpg|thumb|400x300 px|center|Picture of Circuit Layout]]&lt;br /&gt;
[[Image:i2c_eeprom_RC3_pullout.jpg|thumb|400x300 px|center|Picture of RC3 (PIN18) on PIC18F4520]]&lt;br /&gt;
&lt;br /&gt;
== Code ==&lt;br /&gt;
Below is the code generated to interface properly to this Microchip EEPROM.&lt;br /&gt;
It can also be downloaded [[Media:eeprom_rand_access.c|&#039;&#039;&#039;here&#039;&#039;&#039;]].&lt;br /&gt;
&lt;br /&gt;
 /* &lt;br /&gt;
    eeprom_rand_access.c Scott McLeod 03-05-2008&lt;br /&gt;
    This program shows how to interface to an external I2C Microchip EEPROM.&lt;br /&gt;
    This code is written for random reads and writes to the EEPROM&lt;br /&gt;
 */&lt;br /&gt;
 &lt;br /&gt;
 #include &amp;lt;18f4520.h&amp;gt;&lt;br /&gt;
 #fuses HS,NOLVP,NOWDT,NOPROTECT&lt;br /&gt;
 #use delay(clock=20000000)&lt;br /&gt;
 #use i2c(MASTER, FAST, SCL=PIN_C3, SDA=PIN_C4, FORCE_HW)    // use hardware i2c controller&lt;br /&gt;
 &lt;br /&gt;
 int8 EEPROM_WR = 0xA0;        // I2C Address. 1010 0000.  last bit = 0 =&amp;gt; write command&lt;br /&gt;
                               // First Nibble is fixed as 1010 (internal address);&lt;br /&gt;
                               // Second Nibble is ABCD, &lt;br /&gt;
                                  // A     =  Block Set&lt;br /&gt;
                                  // B, C  =  Hardware Addresses (Configured when you Wire the CHIP) &lt;br /&gt;
                                  // D     =  (0 = write, 1 = read)&lt;br /&gt;
 int8 EEPROM_RD = 0xA1;        // Same as EEPROM_WR except last bit = 1 for READ command  &lt;br /&gt;
 &lt;br /&gt;
 int read = 0;&lt;br /&gt;
 int data = 128;&lt;br /&gt;
 int16 loc;&lt;br /&gt;
 int temp = 0;&lt;br /&gt;
 &lt;br /&gt;
 void rand_write(int16 address, int data)&lt;br /&gt;
 {&lt;br /&gt;
    i2c_start();               // Claim I2C BUS&lt;br /&gt;
    i2c_write(EEPROM_WR);      // Tell all I2C devices you are talking to EEPROM in WRITE MODE&lt;br /&gt;
    i2c_write(address&amp;gt;&amp;gt;8);     // Address High Byte (upper 8 bits of the 16-bit address)&lt;br /&gt;
    i2c_write(address);        // Address Low Byte (lower 8 bits)&lt;br /&gt;
    i2c_write(data);           // Data to write&lt;br /&gt;
    i2c_stop();                // Release BUS&lt;br /&gt;
    delay_ms(5);               // Let EEPROM Write Data (5ms from data sheet)&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 int rand_read(int16 address)&lt;br /&gt;
 {&lt;br /&gt;
    i2c_start();               // Claim I2C BUS&lt;br /&gt;
    i2c_write(EEPROM_WR);      // Tell all I2C devices you are talking to EEPROM in WRITE MODE &lt;br /&gt;
                               // (you are first writing to the EEPROM to set the address.  Later you will read)&lt;br /&gt;
                               &lt;br /&gt;
    i2c_write(address&amp;gt;&amp;gt;8);     // Address High Byte (upper 8 bits of the 16-bit address)&lt;br /&gt;
    i2c_write(address);        // Address Low Byte (lower 8 bits) &lt;br /&gt;
 &lt;br /&gt;
    i2c_start();               // RESTART I2C BUS (necessary for the Microchip protocol)&lt;br /&gt;
    i2c_write(EEPROM_RD);      // Tell all I2C you are talking to EEPROM in READ MODE.&lt;br /&gt;
    read = i2c_read();         // Read in the data&lt;br /&gt;
    i2c_read(0);               // Read last byte with no ACK&lt;br /&gt;
    i2c_stop();                // release the bus&lt;br /&gt;
    return(read);&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 void main()&lt;br /&gt;
 {&lt;br /&gt;
    output_d(1);               // Output a 1 for reference to start&lt;br /&gt;
    delay_ms(1000);            // Not a necessary pause; just for debugging&lt;br /&gt;
   &lt;br /&gt;
    loc = 0x00;                // Random Location in memory&lt;br /&gt;
    temp = 15;                 // Byte to write&lt;br /&gt;
    rand_write(loc, temp);     // Random Write: rand_write(address, data) &lt;br /&gt;
 &lt;br /&gt;
    output_d(255);&lt;br /&gt;
    delay_ms(1000);            // Not a necessary pause; just for visual purposes&lt;br /&gt;
 &lt;br /&gt;
    temp = 0;                  // Reset temp&lt;br /&gt;
    temp = rand_read(loc);     // Random Read: rand_read(address);&lt;br /&gt;
    output_d(temp);            // Output temp&lt;br /&gt;
  &lt;br /&gt;
    while(true)&lt;br /&gt;
    {&lt;br /&gt;
    }&lt;br /&gt;
 }&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
	<entry>
		<id>https://hades.mech.northwestern.edu//index.php?title=Interfacing_to_External_EEPROM&amp;diff=7978</id>
		<title>Interfacing to External EEPROM</title>
		<link rel="alternate" type="text/html" href="https://hades.mech.northwestern.edu//index.php?title=Interfacing_to_External_EEPROM&amp;diff=7978"/>
		<updated>2008-03-20T00:18:52Z</updated>

		<summary type="html">&lt;p&gt;ScottMcLeod: /* Circuit */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
An external EEPROM can be useful for several different reasons.&lt;br /&gt;
External EEPROMs allow much more data to be stored than is available on the 18F4520.  In addition, EEPROM memory saves state when power is removed.&lt;br /&gt;
&lt;br /&gt;
In this project, we interfaced to a Microchip EEPROM in random read/write mode.  &amp;quot;Random&amp;quot; access means that the memory locations accessed need not be sequential.  Although this mode can also be used to access data sequentially in the EEPROM, there is a different protocol structure for sequential reads that increases throughput; we only developed the random read/write protocol here.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Circuit ==&lt;br /&gt;
&lt;br /&gt;
We used a Microchip 24FC515 external EEPROM.&lt;br /&gt;
(http://ww1.microchip.com/downloads/en/devicedoc/21673E.pdf)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To the right is the circuit diagram for interfacing to the 18F4520.&lt;br /&gt;
&lt;br /&gt;
[[Image:i2c_eeprom_circuit.jpg|500x400 px|right]]&lt;br /&gt;
&lt;br /&gt;
PIN1 and PIN2 (A0 and A1) are hardware addressing pins.  With both tied low, as in this circuit, the chip responds to the I2C control bytes 0xA0 (writing data) and 0xA1 (reading data).  The first 4 bits of the control byte are internally set to 1010 (A) and the second four correspond to the block of memory (0 or 1), the hardware address pins A0 and A1 (tied low in this circuit diagram), and read/(NOT write).&lt;br /&gt;
&lt;br /&gt;
PIN3, A2 must be connected to high for normal operation.&lt;br /&gt;
&lt;br /&gt;
PIN4, Vss is for grounding the chip.&lt;br /&gt;
&lt;br /&gt;
PIN5, SDA is the serial data line.  This is connected to RC4 (PIN23) on the PIC.&lt;br /&gt;
&lt;br /&gt;
PIN6, SCL is the serial clock line.  This is connected to RC3 (PIN18) on the PIC; this wire must be soldered on the left side of the board.&lt;br /&gt;
&lt;br /&gt;
PIN7, WP is the write protect line which has two states.  When connected to logic high, normal read/write operation can be performed.  When this pin is set low, only read operations are permitted.&lt;br /&gt;
&lt;br /&gt;
PIN8, Vcc is required to power the EEPROM.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:i2c_eeprom_circuit_picture.jpg|thumb|400x300 px|center]]&lt;br /&gt;
[[Image:i2c_eeprom_RC3_pullout.jpg|thumb|400x300 px|center]]&lt;br /&gt;
&lt;br /&gt;
== Code ==&lt;br /&gt;
Below is the code generated to interface properly to this Microchip EEPROM.&lt;br /&gt;
It can also be downloaded [[Media:eeprom_rand_access.c|&#039;&#039;&#039;here&#039;&#039;&#039;]].&lt;br /&gt;
&lt;br /&gt;
 /* &lt;br /&gt;
    eeprom_rand_access.c Scott McLeod 03-05-2008&lt;br /&gt;
    This program shows how to interface to an external I2C Microchip EEPROM.&lt;br /&gt;
    This code is written for random reads and writes to the EEPROM&lt;br /&gt;
 */&lt;br /&gt;
 &lt;br /&gt;
 #include &amp;lt;18f4520.h&amp;gt;&lt;br /&gt;
 #fuses HS,NOLVP,NOWDT,NOPROTECT&lt;br /&gt;
 #use delay(clock=20000000)&lt;br /&gt;
 #use i2c(MASTER, FAST, SCL=PIN_C3, SDA=PIN_C4, FORCE_HW)    // use hardware i2c controller&lt;br /&gt;
 &lt;br /&gt;
 int8 EEPROM_WR = 0xA0;        // I2C Address. 1010 0000.  last bit = 0 =&amp;gt; write command&lt;br /&gt;
                               // First Nibble is fixed as 1010 (internal address);&lt;br /&gt;
                               // Second Nibble is ABCD, &lt;br /&gt;
                                  // A     =  Block Set&lt;br /&gt;
                                  // B, C  =  Hardware Addresses (Configured when you Wire the CHIP) &lt;br /&gt;
                                  // D     =  (0 = write, 1 = read)&lt;br /&gt;
 int8 EEPROM_RD = 0xA1;        // Same as EEPROM_WR except last bit = 1 for READ command  &lt;br /&gt;
 &lt;br /&gt;
 int read = 0;&lt;br /&gt;
 int data = 128;&lt;br /&gt;
 int16 loc;&lt;br /&gt;
 int temp = 0;&lt;br /&gt;
 &lt;br /&gt;
 void rand_write(int16 address, int data)&lt;br /&gt;
 {&lt;br /&gt;
    i2c_start();               // Claim I2C BUS&lt;br /&gt;
    i2c_write(EEPROM_WR);      // Tell all I2C devices you are talking to EEPROM in WRITE MODE&lt;br /&gt;
    i2c_write(address&amp;gt;&amp;gt;8);     // Address High Byte (upper 8 bits of the 16-bit address)&lt;br /&gt;
    i2c_write(address);        // Address Low Byte (lower 8 bits)&lt;br /&gt;
    i2c_write(data);           // Data to write&lt;br /&gt;
    i2c_stop();                // Release BUS&lt;br /&gt;
    delay_ms(5);               // Let EEPROM Write Data (5ms from data sheet)&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 int rand_read(int16 address)&lt;br /&gt;
 {&lt;br /&gt;
    i2c_start();               // Claim I2C BUS&lt;br /&gt;
    i2c_write(EEPROM_WR);      // Tell all I2C devices you are talking to EEPROM in WRITE MODE &lt;br /&gt;
                               // (you are first writing to the EEPROM to set the address.  Later you will read)&lt;br /&gt;
                               &lt;br /&gt;
    i2c_write(address&amp;gt;&amp;gt;8);     // Address High Byte (upper 8 bits of the 16-bit address)&lt;br /&gt;
    i2c_write(address);        // Address Low Byte (lower 8 bits) &lt;br /&gt;
 &lt;br /&gt;
    i2c_start();               // RESTART I2C BUS (necessary for the Microchip protocol)&lt;br /&gt;
    i2c_write(EEPROM_RD);      // Tell all I2C you are talking to EEPROM in READ MODE.&lt;br /&gt;
    read = i2c_read();         // Read in the data&lt;br /&gt;
    i2c_read(0);               // Read last byte with no ACK&lt;br /&gt;
    i2c_stop();                // release the bus&lt;br /&gt;
    return(read);&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 void main()&lt;br /&gt;
 {&lt;br /&gt;
    output_d(1);               // Output a 1 for reference to start&lt;br /&gt;
    delay_ms(1000);            // Not a necessary pause; just for debugging&lt;br /&gt;
   &lt;br /&gt;
    loc = 0x00;                // Random Location in memory&lt;br /&gt;
    temp = 15;                 // Byte to write&lt;br /&gt;
    rand_write(loc, temp);     // Random Write: rand_write(address, data) &lt;br /&gt;
 &lt;br /&gt;
    output_d(255);&lt;br /&gt;
    delay_ms(1000);            // Not a necessary pause; just for visual purposes&lt;br /&gt;
 &lt;br /&gt;
    temp = 0;                  // Reset temp&lt;br /&gt;
    temp = rand_read(loc);     // Random Read: rand_read(address);&lt;br /&gt;
    output_d(temp);            // Output temp&lt;br /&gt;
  &lt;br /&gt;
    while(true)&lt;br /&gt;
    {&lt;br /&gt;
    }&lt;br /&gt;
 }&lt;/div&gt;</summary>
		<author><name>ScottMcLeod</name></author>
	</entry>
</feed>